Tuesday, May 5, 2009

A Little Taste of Heaven: The Happiness Machine (part 2)

About a month and a half ago, I started writing something and then put it on hold to try and track down some research. I have been unable to find that research, but I still have some things to say without it. First, I want to briefly discuss that research I couldn't find (my Google-Fu must be weak, please forgive me), and then I'll get to my objections to the Happiness Machine.

There was a research study done on a mother chimpanzee who was separated from her child by a partition through which she could see. The mother was only able to interact with her child by pressing a button that would dispense a meal-sized portion of food to it; she also had access to another button that would inject a small amount of heroin into her own system. Within each interval - five minutes or so - the mother chimpanzee could press only one of the two buttons. For a while, the mother behaved responsibly enough, regularly feeding her child and getting a fix whenever she wanted; but over time, she pressed the heroin button more and more, eventually neglecting her child entirely. I'm not sure exactly what the interval was, nor how long it took the mother to succumb to addiction. I'm actually unclear on a lot of the details, as I heard about the study in conversation with another person, which is why I wanted to look it up. The point, though, was that the experience of an intense enough bliss will change the subject, and I meant to use the study to show that no matter how many safeguards we stipulate into the thought experiment, something about human psychology gets fucked up by such an experience. Can we separate the experience of bliss from the contingent facts of human psychology? I mean to show that we cannot, by way of a brief digression about Heaven.

In Ebonmuse's original post on the Happiness Machine, he cites a post by Lynet of Elliptica entitled Challenging the Paramountcy of Happiness, where she concludes that the Orgasmotron is functionally an end state like death - and Lynet doesn't want to die, even if she gets to go to Heaven. Ultimately, I suppose my objection boils down to the mirror image of Lynet's: I don't want to go to Heaven, even if I get to die at the end.

When I lost my faith, one of the toughest things for me to let go of was the idea of Heaven, the ultimate happy ending to life, the Universe, and everything. I was finally able to do so only by reasoning that Heaven is simply Church Forever, which I find boring - I simply don't want to go to Heaven. But since it's stipulated that we'll be happy in Heaven, the person I would be in Heaven would not find Church Forever nearly so boring. That future version of me is so fundamentally different from the current me that I know and love that I consider it an entirely different person, one whom I neither know nor care about. Similarly, for me to be OK with plugging into a Happiness Machine, I would have to be a much different person - so much so that I think I would no longer be recognizable to my current self as "me." What would be so different? Well, for one thing, the me that would be OK with a Happiness Machine would be, by stipulation, satisfied with unconditional pleasure. I am not satisfied with the idea of unconditional pleasure at present.

I want to be happy, sure, but I want to be happy because of things that I do, and I want my happiness to be based on a sense of accomplishment rather than simply something I experience "just because." I am aware that this means resigning myself to the very real possibility that I will not experience as much happiness as I possibly could, but I am comfortable with that risk for the same reason that I play video games on difficulties higher than "noob-sauce" - specifically, I don't merely want to win at the game, I want to overcome a challenge. I don't merely want to "win at happiness," I want to overcome challenges to obtain it. Sure, if happiness is good, then this may mean that I'm not the most moral person in the world - but morality is, for me, only one concern among many.

I am also aware that a Happiness Machine, by the very stipulations of the thought experiment, would overcome these objections and satisfy these other desires in me. In a way, that is the essence of my objection: I do not wish to come under such an overpowering influence. Maybe it's some irrational rebellious streak in me, but I don't want to be subjected to something to which I won't ever want to say "no." These two facts - that I could not be unhappy with the Happiness Machine by stipulation, and that I don't want to be unconditionally happy with the Happiness Machine - cannot be reconciled. Experiencing the effects of the Happiness Machine, at least for me, is not something that can be done independent of my existing psychology - I would have to be changed by the experience, and in a way that I consider to be undesirable.

I'm going to switch gears here and "ruin" the thought experiment with some principled realism. We know what "too much happiness" can do to a mind, by way of subverting or overriding existing drives - we have both clean and dirty evidence from which to draw what conclusions we will. But let's suppose, for a moment, that we are not talking about a "Happiness Machine," but instead a "Square Circle Machine," and the thought experiment goes, "Suppose a machine were invented that was capable of creating square circles." While a naive interpretation may hold that the laws of logic could somehow be changed to allow such a thing, anyone who "really understands" geometry will know that we have departed from the logically possible in a very real sense. Similarly, I would argue that penn's Happiness Machine 2.0 is in some way a Square Circle Machine - if we "really understand" how minds work, then we know that any mind presented with unmitigated happiness (however "pure" the form) will be unable to turn it down in the future. I maintain that not just our existing drives, but any drives at all, are incompatible with the combination of "the experience of pure bliss" and "the pursuit of other ends." If we experience pure bliss, we simply won't be able to care about other ends; if we stipulate ourselves into being entities which are capable of reconciling these influences, then we're not talking about us any longer and the thought experiment fails - we just aren't the droids we're looking for.

So there's my objection to the Happiness Machine. It's not a defeater, but it's not supposed to be; I'm simply saying that I don't want to plug in. Other people with different values may disagree, and that's OK.

I do want to finish with a note on ethics in general, though, because Lynet's post mentions Aristotle and I think that Aristotle himself would have a better objection than "the purpose of Man is to reason, the Happiness Machine prevents that purpose from being fulfilled, so the Happiness Machine is bad, The End." Both Kant's Categorical Imperative and Utilitarianism fail to provide any intrinsic resistance or principled objection to penn's Happiness Machine 2.0, and this is where Aristotle comes in. Nicomachean Ethics provides a principled objection, in short: that's not the sort of thing that good people do. At length, the Aristotelian mean dictates that the pursuit of pleasure should not be a maximizing thing, but should be appropriate to an otherwise full human life (in a way, this is a more robust version of Lynet's Aristotelian objection). In a nearly Buddhist fashion, a virtue ethicist would take the tack that happiness ought to be something obtained by conditional means, not reduced to a switch that one need only set to "on" and then receive in unlimited supply. Just as one may be too courageous, too righteous, too thoughtful, too emotional, too pragmatic, or too cunning, so one may also be too happy. I believe that this shows, at one stroke, both how our moral intuitions are outgrowths of emotion-based value judgments, and how virtue ethics can capture our moral intuitions in ways that deontology and consequentialism cannot.

It doesn't matter if the Happiness Machine is non-addictive, or even "resets" our brains against habituation - it's still an end state, a final destination, a stagnating "and they lived happily ever after" ending. It makes you as happy as possible for as long as possible, and if happiness is your ultimate goal, then what more could you possibly want in life? But some people don't want "just happiness"; some people want different things, like "meaning." But what's the meaning of "meaning," when stacked up against perpetual bliss? That depends on how we define "meaning" - and because this post is getting a bit lengthy as well, I'm going to cut to the chase and just say that "meaning" means "having an alternative available." Material wealth has meaning only in relation to an economy of scarcity - if everyone has all they want, then what's the point in having more than your neighbors? Physical fitness only has meaning when there are fat slobs lounging about - if everyone had perfect bodies, then a stellar physique would cease to distinguish one from one's peers as an attractive mate. Life has meaning only so long as death is an option - in Heaven, one's continued existence is an irrevocable given, and so the absence of the spectre of death removes all urgency from life. And if happiness can be obtained unconditionally, as penn's Happiness Machine 2.0 clearly makes possible, then the actions and objects that can grant us conditional happiness are robbed of all meaning.

Now, I can't stress enough here that I am a staunch proponent of freedom of conscience and individual self-determination, and thus I think it should be left to the individual to decide what experiences they wish to have - provided that they are adequately informed, of course. If one is aware of the risks of heroin, cocaine, PCP, mescaline, DMT, skydiving, bungee jumping, kayaking, drag racing, travelling abroad, mountain climbing, playing tennis, or any of a variety of things up to and including a fully-functional Happiness Machine 2.0, then that is, to my mind, their call to make. I would fight for Happiness Machine legalization, and I might even be willing to do research and development for it - but I wouldn't use it. My point with this second post is simply to show that it is possible to make a principled and rational objection to using the Happiness Machine upon oneself, by virtue of the fact that happiness is, for many people, simply one goal among many - and because, once used, the Happiness Machine likely cannot be turned down at any future point, the principled and rational choice for some will be to "just say no." But please understand that I fully accept that for many, the principled and rational choice will be an unqualified and enthusiastic "yes."
