Thursday, July 23, 2009

The Moral Platypus, or: Towards a Scientific Ethics, but Not Quite Yet

The platypus is a fantastic little critter for a wide variety of reasons. Its evolutionary significance alone is multi-faceted: it lays eggs, a feature that has been lost by the overwhelming majority of mammals; it nurses its young with milk but lacks nipples, two traits we might otherwise be tempted to think of as inseparable; its sense of electroreception is highly developed, a sense nearly absent among other mammals; and its venom is a striking example of convergent evolution, as the proteins it co-opted for the purpose are a different set from those in cobras, spiders, bees, etc.

The platypus is also of historical significance: its discovery was first thought to be a hoax, and then it precipitated a revision of our taxonomic system. Before the platypus, we had such a neat little system: fish and amphibians laid soft eggs in the water; birds and reptiles laid hard eggs on land; mammals had fur and bore live young, which they nursed. Then along comes the platypus: it has a leathery bill like a duck's; but it has fur like a mammal; but it lays eggs like a duck; but it nurses its young like a mammal; and it has a cloaca, and flippers, and venom, and electroreception, and how the hell do we classify this thing?

When a system encounters a situation in the real world that it can't accommodate, it can be said to have broken. The platypus broke our taxonomy: our system, as we had conceived it, could not accommodate a living creature which we thought "should" have been able to fit into one of our categories. And so we invented the monotremes so that we could stick the platypus into one of our boxes. A moral platypus is not a literal platypus out in the world; it is a metaphor for some morally charged situation that breaks one's system of ethics and forces one either to revise the system or to ignore the problem.

It's easy to find a moral platypus: take your ethical system and find its central source of normativity (happiness, duty, virtue, what-have-you). Now take this value and pit it against any other deeply-held value (survival, liberty, justice, etc.). Then try to hash out which value ought to take priority in the situation (while controlling for all other factors like a good thought experimenter, of course), and if you've done it right, you probably won't be able to arrive at a satisfactory decision. Note that a moral platypus is distinct from a tragic dilemma: while they may certainly overlap, the hallmark of a tragic dilemma is that there is no good outcome available, while the hallmark of a moral platypus is that outcomes which may be naively interpreted as "good" or "mandatory" are counterintuitively evaluated as unacceptable on a principled analysis.

Trolley problems are a great source of moral platypi, especially the first time around. For the uninitiated: the term "trolley problem" originates with a paper by Philippa Foot in which the following dilemma is considered: a trolley driver, about to run over five people, could take action and instead kill only one person. Should the action be taken? That depends on whether there's a moral difference between killing and letting die. Foot goes on to show that, no matter how one answers, trying to isolate the underlying moral principle and apply it in other situations leads to similar problems with unexpectedly contrary outcomes.

One important ground rule that often seems to get lost in the shuffle of trolley problems is that of stipulation. In thought experiments, certain conditions may be stipulated - they simply are the case, for the sake of argument, based on nothing more than a person's say-so. Logically impossible things may not be stipulated (you can't say, "Suppose you ran into a square-circle," because square-circles are impossible), but unknowable things can be stipulated - and refusal to accept logically possible stipulations is tantamount to not taking the experiment seriously. In the most basic version of the trolley problem (killing one versus letting five die), many respondents attempt to get around the problem by saying that they would try to stop the trolley, or redirect it such that nobody is killed, or any of a number of creative solutions which nevertheless do not abide by the stipulations of the problem: you may kill one by action, or let five die by inaction. Those are your choices. Given only those options, which is right?

Still, respondents will attempt to say that the blame lies elsewhere: on the mechanic responsible for making sure the brakes work, on the bystanders for not warning whoever is about to get run over, or even on the driver for failing to ensure that the trolley was in proper working order. The point of these dodgy attempts seems to be to fix blame on some causally antecedent action so that the driver may be concerned only with damage control (rather than the harder work of figuring out what ought to be done), since the moral heavy lifting has already been accomplished by some villain. Again, this is a refusal to take the example seriously: however the circumstances came about, they are by stipulation accidental. That is, the point of the problem is that no person is morally blameworthy for the situation as it stands, such that the trolley driver is the only active moral agent. Again, for clarity, the problem is whether it is morally better to kill one by action or to let five die by inaction, all other things being equal - using the particulars of the situation to get around the problem is simply ignoring the question.

For a trolley driver, the answer may be easy enough, but Foot doesn't stop there. Her next challenge is to put a surgeon into a similar dilemma, and while most would say that it is better (or at least permissible) for a trolley driver to kill one rather than let five die, almost nobody would seriously argue that it is better for a surgeon to kill an innocent bystander in order to save five dying patients. The details, again, are unimportant: it is stipulated that the only relevant consequences are that a healthy person is killed, and five otherwise doomed people are instead saved in the process. Why is it that most would allow a trolley driver to kill one so that five may live, but not a surgeon? It may have to do with notions of fairness, cultural norms, or intuitions about what the acceptable hazards are around trolley tracks versus hospitals (trolleys can at times be legitimate hazards; surgeons ought never to be).

Another moral platypus is the "utility monster," a hypothetical creature from the imagination of Robert Nozick which is vastly more effective at converting resources into utility than any other moral agent. If utility equals happiness, then the utility monster feels more intensely than any other agent (or than all other agents put together). The utility monster can only be pleased by methods which cause intense suffering to many people, but its pleasure is stipulated to outweigh the suffering of its victims. Without this pleasure, the utility monster feels an agony far greater than the sum of what would be experienced by any potential victims, and the pleasure they would feel at living their lives unimpeded is less than what the utility monster could feel. The dilemma is simple: feed the beast and maximize utility, or deny/slay it and be a bad utilitarian. What do you do?
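To see the arithmetic that drives the dilemma, here's a minimal sketch in Python; every number in it is invented for illustration (the thought experiment only stipulates the inequalities, not the magnitudes):

```python
# A toy utility calculus for the utility-monster dilemma.
# All figures are hypothetical "hedons"; nothing here is a real measurement.

victims = 1000
suffering_per_victim = -10          # hedons each victim loses if the monster feeds
ordinary_pleasure_per_victim = 5    # hedons each victim gains by living unimpeded
monster_pleasure = 50000            # stipulated to dwarf the victims' combined suffering
monster_agony = -100000             # stipulated agony if the monster is denied

feed = monster_pleasure + victims * suffering_per_victim          # 50000 - 10000 = 40000
deny = monster_agony + victims * ordinary_pleasure_per_victim     # -100000 + 5000 = -95000

# A utilitarian who simply maximizes the sum is committed to feeding:
print("feed the monster" if feed > deny else "deny the monster")  # -> feed the monster
```

However the hypothetical figures are tuned, so long as the stipulated inequalities hold, the sum-maximizing answer is always "feed."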

Some may attempt to say that this is "against the rules," that all moral agents are equivalent in value. If that is the case, then what makes a moral agent? It seems too anthropocentric to state that only humans are moral agents, not to mention that we'll have a serious taxonomic problem as we go back in time trying to say when humans started existing (i.e. when our ancestors first became moral agents). If the capacity to experience pleasure and suffering alone constitutes a moral agent, then we already treat ourselves as utility monsters in relation to other species. How many insects, game animals, wild predators, and others are killed every day for our sake? If moral agents are equivalent, then the pain of a single ant, termite, or bee counts as much in the utility calculus as our own. If all moral agents are not equivalent, then it must be the case that there is something about us that makes our moral capacities weigh heavier than those of bees, and the utility monster is simply able by stipulation to do that thing to an even greater extent than we can, robbing the objection of all force. Or we really are morally equivalent to other sensate beings, and we are all of us monsters of unconscionable magnitude. So, once more with feeling: do you feed the beast, or be a terrible person?

Deontology, as a system, tends to be the most honest (and the least interesting) about moral platypi. If duties are duties and that's that, then any intuitively problematic situation lends itself to only two interpretations: either we haven't figured out our duties properly, or we're simply evil for feeling bad about the right decision. Hell, these aren't even mutually exclusive! A moral platypus can't illustrate a conceptual shortcoming in a system of duties where nothing else counts; at least, not in the same way as the utility monster shows a conceptual shortcoming of the utility calculus. When it's stipulated that the world will end unless you tell a lie, die-hard Kantian hand-wringers will say that it doesn't matter what happens to the world so long as you retain your moral purity by sticking to the truth, consequences be damned. Nearly everyone else will say, "That's stupid," and agree that saving the world is worth getting your hands at least a little dirty. The only trick here is to find which unacceptable action will be tolerated in order to avoid which unacceptable outcome, and hey presto! You've broken another deontologist. Suppose I say that "Wacky God" is in charge and sends everyone to Hell for satisfying their duties - should everyone tell the truth and go to Hell to be tortured forever? Strict deontology says yes.

Virtue ethics, at the other end of the spectrum, is exceptionally difficult to pin down on this issue. Since virtue ethics takes its "oomph" from a robust cultural context rather than an explicit algorithm or ruleset, it's a bit more difficult to show systematic flaws. Instead, you have to start at a common point of agreement with whomever you are arguing, and then take it to absurd lengths. I'll try to argue with myself as an example. Yogendra Singh Yadav climbed a mountain while being shot at, and after taking bullets on the way to the top, ran into machine gun fire to throw a grenade into a bunker (killing the four men inside), then went into another bunker after taking several more bullets and killed everyone inside with his bare hands. Where on the spectrum of courage, between cowardice and recklessness, would such a set of actions fall? Cowardice is out, for sure. But it would be reckless of anyone to attempt such a thing - you'd almost certainly be killed! But Yadav survived, so was he just courageous enough? But how could he have known how courageous to be beforehand? And if he couldn't have, then shouldn't he have refrained from the attempt? And if he should have refrained, then how does it come out good that he did not? And if it's not good that he made the attempt, then why reward his success? Why reward anyone's success for defying the odds? And if the moral status of his actions hinges upon success or failure, then aren't the "virtuous" really just "lucky?" How, then, can a person affect their own virtue any more than they can affect their own luck? And so on and so forth.

Yadav is a moral platypus for virtue ethics, not only because it's difficult to see how other people should apply the lesson of his example to their own lives, but because it's difficult to figure out just what the lesson is in the first place. Virtue ethics, as a system, has trouble accommodating the life of Yogendra Singh Yadav. He does not fit. Most people, when faced with a moral platypus, will either ignore the problem and forget about it in a few hours or days, or else make a revision to their ethics which accommodates the platypus (sometimes causing problems elsewhere, sometimes not). The point, however, is not that any particular moral platypus exists; the point is that, in principle, a platypus may come up at any time. A moral system may be broken by simply asking, fancily, "But what happens when it breaks?" OK, fine, big whoop. Who cares? Well, if a moral platypus is able to break your system, it indicates that there is a flaw with your system - certainly, addressing and accommodating a platypus will make your ethics better than it was before, so does that not entail that your system was worse in that previous state?

Again, big whoop: all that means is that revising your ethics will be a never-ending project, right? I mean, Newtonian physics is way less accurate than quantum mechanics, but even NASA uses Newtonian physics to send robots to other planets - the point being that a system doesn't have to correspond completely to reality in order to achieve robustly satisfying results. But on the other hand, ethics might be more like biological classification than like physics, where the occasional platypus serves as a reminder that our classification system, while relying on objective criteria, is fundamentally arbitrary and not really any better than a similarly principled and coherent system of vastly different criteria and categories.

So which is it? Is ethics more like physics or more like taxonomy? Or, to paraphrase the comment that led me to write this post, is the good more like etiquette or more like a duck? Well, the chief difference I can think of between physics and biological classification, or etiquette and a duck, is quantification. Both systems involve observing phenomena, labelling things, and evaluating things; but in physics, all these phenomena, labels, and evaluations are accompanied by unambiguous numbers which bear experimentally verified relationships to one another. In biological classification, we just made up a bunch of categories after noticing that some animals were more like each other than they were like others (yeah, we've revised it since then, but still...). To be fair, molecular genetics has done a great deal to straighten out the general family tree, but where on that tree we choose to draw lines between all the potential categories is still just as arbitrary an endeavor as it ever was (more on this later). Ducks can be measured and meaningfully compared against one another in terms of height, weight, color, and so on, and all of these characteristics may be expressed with mathematical precision. Etiquette is an arbitrary social construct that picks some behaviors and calls them "rude" or "polite," and while it draws from an objective list of criteria (like biological classification), the particulars of the system itself are arbitrary (also like biological classification). Ethics, in evaluating events as "good" or "bad," still draws from a list of objective criteria (like biological classification and etiquette do), but what these criteria are, and the relationships among those criteria, are entirely arbitrary (at least, at present).

Getting back to the relationship between molecular genetics and biological classification, we could make taxonomy as objective as physics by basing it not on arbitrary categories but degrees of relatedness. Scientists in white lab coats who make the title of "Doctor" sound sexy are already working on this: it's called molecular phylogenetics. We'll have to do away with a lot of the terms we're accustomed to using, and probably invent quite a few more, but we could express all our phylogenetic ideas in terms of the molecular relatedness between organisms, if we only sat down and did the math. This is because molecular genetics provides us with a stuff of relatedness, a measurable phylogenetic substance that may be unambiguously compared between objects (well, the results are ambiguous insofar as our methods and equipment are imprecise, and there are other problems, but we can still do science to the phylogenetic tree!). There is no such moral substance, no stuff of goodness which we may measure with any degree of precision whatsoever. Now everybody stare off thoughtfully into the distance, stroke your chin, and say, "Unless..."
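As a toy illustration of what a measurable "stuff of relatedness" could look like, here's a minimal sketch in Python; the sequences are invented, and real molecular phylogenetics uses vastly longer sequences and far more sophisticated distance models:

```python
from itertools import combinations

# Hypothetical, pre-aligned DNA snippets; real data would be far longer.
sequences = {
    "platypus": "ACGTACGTAC",
    "echidna":  "ACGTACGAAC",
    "human":    "ACCTATGTTC",
}

def relatedness(a, b):
    """Fraction of aligned sites that match: a crude stand-in for
    the measurable 'stuff of relatedness' in molecular phylogenetics."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

for (name1, s1), (name2, s2) in combinations(sequences.items(), 2):
    print(f"{name1} vs {name2}: {relatedness(s1, s2):.2f}")

# The pair with the highest score (platypus/echidna here, by construction)
# would be grouped first; no arbitrary category boundaries are required.
```

The point is only that the scores are numbers: once relatedness is quantified, where we draw category lines becomes a question about thresholds on measurements rather than about which boxes we happen to have inherited.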

Under an ethics of happiness, there is in principle a way to find a "hedon," a unit of happiness, by observing brain states and figuring out a way to measurably compare them; but there are other morally charged terms such as justice, equality, mercy, and so on which are not nearly so easy to quantify (let alone compare). We could do away with these concepts, as in the example of a taxonomy of relatedness, so that we're only talking about happiness when we talk about the good. But then justice comes along looking for a fight, and as soon as you make happiness and justice fight each other, you're going to run into a problem: why should ours be an ethics of happiness, rather than one of justice, sex, prosperity, irony, or plushies? What standard of value should we use? Which system of ethics is best?

Uh oh. We have no principled way of comparing ethical systems on ethical grounds, because the ethical grounding required to do so necessarily relies on an existing ethical system. Newtonian physics and QM can be meaningfully compared in a whole lot of ways - chiefly, their relative degrees of accuracy and difficulty. QM is more accurate, but also more difficult, than Newtonian physics; but the extra accuracy afforded by QM is on scales below NASA's concern, while the computational demands of QM are of great concern. Newtonian physics affords accuracy that meets NASA's needs rather handily, while its computational demands are easily met by NASA's budget. For NASA's purposes (purposes like staying under budget, putting robots in space, and slingshotting probes out of the solar system), Newtonian physics is just better than QM. But when our concern is ethics, we cannot compare ethical systems on an ethical basis without descending into meaninglessness. If our purpose is to maximize human happiness, of course an ethics of happiness will suit our purposes; if our purpose is to get everyone following the rules, of course an ethics of duty will suit our purposes; if our purpose is to get everyone acting like role models, of course an ethics of virtue will suit our purposes; and if our purpose is to see plushies everywhere, then of course an ethics of plushies will show us the way.

But purpose is a chosen value, not a dictated or discoverable fact. And since we can only distinguish between ethical systems on the basis of how well they satisfy a chosen purpose, it follows that all imperatives are hypothetical: an ethical system can only generate "shoulds" in relation to a purpose, and since purpose is ipso facto arbitrary, any "shoulds" generated are therefore fundamentally arbitrary themselves, not objective. Are there good and bad purposes? Of course, but only in the context of certain ethical systems, and which one of those we choose will rest upon preexisting values, and so on and so forth and blah blah blah. My point with all this is that we don't just have a young, underdeveloped, or bad science of ethics "for now" - ethics, as it has been done from the word go, has been non-science insofar as it has attempted to get a "should" out of purposeless reality, or to find a way to decide which method of generating "shoulds" we "should" use, or which version of "the good" is "goodest."

So what do we do? Well, to be truthful to the point of boring, we keep on keeping on. We can decide (individually, of course) to go with an ethics of happiness, and there's nothing "wrong" with that. We can even get all scientific about our happiness, measuring hedons and right-ons and hate-ons and whatnot, but we have to keep in mind that it's happiness on which we are so enthusiastically doing science, not ethics. By choosing an ethics of happiness, we are doing ethics arbitrarily, and then we have the option of doing happiness scientifically. We should also be aware that, while happiness may be our main concern, we probably also want to do other things like justice, sex, prosperity, irony, and plushies.

This is getting long. More later.

With acknowledgment to Jack Phillips, the moral ironist.
