Saturday, March 14, 2009

Arguing on the Internet: The Happiness Machine (part 1)

I recently read a post over on Daylight Atheism concerning the thought experiment of the happiness machine. I dig thought experiments, especially ones that primarily concern ethics, and the happiness machine is no exception. I do, however, have a few things to say about Ebonmuse's reasoning. A good deal of this has been covered in the comments, but there are a couple of key points that I did not see raised in my cursory read, so I want to go over those today.

First, I want to say that I don't disagree with anything he says - so far as his argument goes, I'm more or less with him. However, there are some issues that he doesn't address either in the main post or in his responses to comments, and those issues are the scope of this particular post. I'll be digressing a bit at the end as a lead-up to the following post, which will concern what empirical observation tells us when juxtaposed with our intuitions, and what that means for our "moral sense."

My only real beef, from which all of the following proceeds, is that thought experiments have an introspective diagnostic quality to them, which seems to have been ignored. Because a thought experiment is carried out by your mental apparatus, with your mental apparatus, on your mental apparatus, the "experiment" has nothing to do with the world itself, and everything to do with your mental apparatus. Thought experiments don't tell us things about the world, they help us to illuminate our own thought processes - provided that two key conditions are met:
  1. The subject is honest.
  2. The subject takes the thought experiment seriously.
Of course, there is an exception: when the thought experiment is done on multiple people, it can actually tell you about the world (specifically, about the people in it). But one person doing a thought experiment is really just introspecting fancily.

As one commenter (called "penn") points out, the happiness machine isn't a real machine in this world, but rather a perfect hypothetical machine in a hypothetical world which differs from ours only in that said machine exists. The question (as clarified by penn) is not, "Would you use a happiness machine if I offered you one," but rather, "Would you accept a life of unvaried but certain bliss, at the expense of your current life of varied but uncertain experience?" That is the crucial bit of the thought experiment, and that is what I would wager most people find repugnant about the happiness machine. People tend to crave variety and meaning in their lives, not mere happiness. Roger Williams, author of The Metamorphosis of Prime Intellect (a novella which addresses, in part, just such a happiness machine and its philosophical ramifications), put it this way: it's not about how many hedons you've accumulated at the end, it's about whether you have accomplished something (source).

Ebonmuse's answer to penn's question concludes with, "Sure, maybe using the Happiness Machine is the best course of action in that world. But I think that world is so utterly and completely different from our own that we shouldn't have any confidence that conclusions that are drawn there can be ported over to our world." This misses the point of the question, to my mind. The point is not whether the thought experiment, as laid out, is possible to carry out in (or carry over to) our world; the point is to draw out just how committed the subject of the thought experiment is to certain principles when those principles are pitted against each other. In other words, the question, "Would you accept a life of unvaried but certain bliss," is one member of a set of questions meant to determine the answer to the "larger" question, "Would you maximize happiness at the expense of everything else (except the necessary preconditions for life itself)?"

Some might say yes. But I imagine that a lot of people would say no, or would try to qualify a "yes" with things that they think might give their life more meaning than simply being locked in a vat hooked up to a brain-poker (basically, for the same reasons that people like me turn down heroin: I'm afraid that it's going to be too good). For instance, some might say they'd do it if it were more like being hooked up to the Matrix, with simulated experiences providing variable "background noise" to the unchanging state of contentment. Others might insist that they'd want it to be a neural implant, a pleasure-dispensing chip so that they can go through real life "unimpeded." But it doesn't matter how much we change the world to make the happiness machine possible if we're not altering the architecture of our own brains.

The problem is, the above qualifiers do not change the one fundamental element which has been shown experimentally to override all other concerns: the brain's reward pathways, once tapped in such a way, simply drown out everything else. It doesn't matter if it's a chip, a program, or a hermetically sealed pod - experiencing unconditional pleasure breaks brains, plain and simple. It doesn't matter whether you're able in principle to move around and interact with the world, because once you're wired in, every motivation will be crowded out except those that most efficiently keep the good times rolling. And when the happiness machine makes it so that you don't need to do anything to maintain the perfect high, well, in all likelihood you simply wouldn't do anything.

This is getting a bit lengthy, so I'm going to stop here for the moment and pick this up later. The point, so far, has been more or less that to most people, "happiness" in and of itself is not really all they want in the world - or so they think. In other words, most people think that they have more standards of value than the mere feeling of happiness (while research shows that if happiness is supplied in unlimited quantity, all other concerns take a back seat). Part of what's making this take so long (for me) is that I'm poking around the internet to find research on that "or so they think" part (I need to back up that last paragraph, after all), and it's leading me down some rabbit holes. I'll wrap this up in the next post, concerning the aforementioned research as well as what the thought experiment says about our concept of morality.

2 comments:

K.Greybe said...

I'm going to say I have to agree with you on this one: I had a similar response to the post myself.

The thing is, this is a common problem when arguing in general. I find a particular problem comes up when trying to argue about political principles such as my unashamed liberalism: the libertarian, for example, just says all politicians are corrupt, which of course ignores the question of whether or not my point is accurate or desirable in principle.

D said...

Hey, thanks for the comment! Yes, "talking past each other" happens a lot when arguing technical philosophy - at least, it does in my experience. As I said, I have no substantial disagreement with Ebonmuse; I just view the problem with a different focus, one he doesn't discuss because of his own. That difference in emphasis results in a rather radical departure in our thinking on the topic, though. Well, to me, anyway.