Tuesday, March 30, 2010

I don't think that argument suggests what you think it suggests.

Alvin Plantinga recently made an appearance roundabout my neck of the woods. My good friend came out from grad school to see him, so I kind of had to go as well. I mean, catching up with a buddy in grad school and picking on a 77-year-old theist? How could I miss out?

I pick on Alvin because I like him. He's a lively and affable guy in person, and really quite bright - even if he thinks that entities can be defined into existence. I just happen to think he's sadly misguided, and in an innocent enough way to justify paraphrasing The Princess Bride. Anyway, the good doctor's talk was on his evolutionary argument against naturalism, a pretty decent piece of reasoning that just so happens to cut against him in the particulars. If you don't feel like clicking through, here's the Fucking Short Version:
1. The probability that human cognitive faculties are reliable [1], given naturalism and evolution, is low [2].
2. If we accept both naturalism and evolution, and understand premise 1, then we have a defeater for the idea that our cognitive faculties are reliable.
3. If we have a defeater for the idea that our cognitive faculties are reliable, then we have a defeater for any belief we may come to hold with those cognitive faculties (including a belief that naturalism and evolution are true).
4. Therefore, we should not believe that naturalism and evolution are true, because they are self-defeating.
Conclusion: Therefore, fucking magic!

Notes:
[1] Weasel word!
[2] So what?! The Universe is a big place and rare things happen all the time.
OK, we've got some unpacking to do here. As for that first premise, Plantinga leans on a quote from Patricia Churchland on the utility of truth for evolved animals: "Boiled down to essentials, a nervous system enables the organism to succeed in the four Fs: feeding, fleeing, fighting, and reproducing. ... Truth, whatever that is, definitely takes the hindmost." Plantinga uses this to develop his point by saying that natural selection can only work on behavior, not on beliefs (never mind the fact that beliefs often inform behavior - seriously, don't mind this for now!), and therefore the truth value of this or that belief is irrelevant next to how adaptive or maladaptive the organism's behavior is.

This is, well, true. Natural selection simply can't act on what you believe independent of everything else. Plantinga goes to great pains to make this point. All other things being equal (which they never are, but still, we're ignoring that for now), the truth value of a belief is in fact irrelevant to the evolutionary process. To put his argument in more Cartesian terms,
"My evolved brain is not perfect, therefore it screws up. Because it's all I've got to work with, I might not be able to know whether it's screwing up at any given moment, because it might be screwing up right the fuck now. Therefore, I can never be 100% certain that I'm in my right mind."
I want to press Pause to let everybody know that I am with him so far. Really and truly (I just conclude that we must live with "less than 100% certainty"). Here's where things start to get real funny, though:
"Since I can never be 100% certain that I'm in my right mind, I might even be wrong about that. Because this uncertainty is self-effacing, I ought not to buy it. Therefore, I shouldn't buy the reasoning that brought me to this conclusion. Theists don't have this problem because we believe that our brains were purpose-built by God, so we can trust them."
Not joking! He says, "If my brain is evolved (i.e. not built by God), then I shouldn't trust it, even when it tells me that my brain is evolved; so I'll go with a belief system that ignores the problem of self-doubt instead." I mean, Alvin Plantinga wouldn't say that he's ignoring the problem, but that's what he does. Look, just because cars can break down doesn't mean that my car is broken down right now, and anyway you don't need to believe in Platonic Devices to avoid the uncomfortable possibility that your car might break down tomorrow. You can in fact believe something with 95% certainty and accept that one time in twenty, you'll be surprised. I mean, holy shit.

Enough cussing for the moment. I'm going to go right for premise 1 by granting it: given naturalism (i.e. not supernaturalism) and evolution (i.e. not IDiocy), the probability that my cognitive faculties are "reliable" in any strong sense is fairly low. Now, neither Plantinga nor I wish to speak of perfect reliability here; he even gives the example in his talk that if you ask five different witnesses about a car crash, you'll get five different stories (he says that their reliability is in the overwhelming agreement upon background beliefs that make their stories possible: the crash occurred on Earth and not Mars; the crash involved cars and not boats; cars drive on roads and not rainbows; blah blah blah). However, in order to achieve survivability, some reliability is necessary: bees need to apprehend reality to a degree in order to dance out directions to their cohorts, for example. So we're talking about a gradient of reliability here, really. He granted this during the Q&A session.

But then, that really gives it up, from his end. When we speak of the probability (P) of a degree of reliability (R) occurring in nature, given no supernatural input and a competitive environment, we no longer need to speak in terms of absolutes. Rather, we have freed ourselves to talk in terms of how reliable we need to be, how long we have to get there, and how lucky we're allowed to get along the way. Early on, the demand for R will be low, raising P (since lower Rs are more achievable). As time goes on, R can be refined, and in a competitive environment this will bring about ever-stricter requirements for higher Rs - so larger timescales also improve our P. Now I, for one, was born a piss-poor thinker, and I had to be educated rather heavily in the proper use of my gray matter - this education, in turn, was based on a long history of competitive educational systems in a world of scarce funding, and informed by a long tradition based on accumulating and revising knowledge.
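If you like toy models (and I do), here's a quick sketch of that P-and-R hand-waving, in Python. Every number in it is made up and it's nobody's argument but mine, but it shows the shape of the point: give a noisy, competitive selection process enough generations, and even a barely-reliable starting population creeps toward higher R.

    import random

    # Toy model (not Plantinga's, and not real biology): a population of critters,
    # each with a "reliability" R between 0 and 1. Behavior is informed by beliefs,
    # so higher R *tends* to mean better odds of reproducing - with plenty of noise,
    # because we're allowing for luck.

    POP, GENS, MUTATION = 200, 300, 0.02

    def fitness(r):
        # Mostly-reliable critters feed/flee/fight/reproduce a bit better, on average.
        return max(1.0 + 2.0 * r + random.gauss(0, 0.5), 0.01)

    population = [random.uniform(0.0, 0.2) for _ in range(POP)]  # start out barely reliable

    for _ in range(GENS):
        # Pick parents in proportion to (noisy) fitness, then mutate the offspring a little.
        weights = [fitness(r) for r in population]
        parents = random.choices(population, weights=weights, k=POP)
        population = [min(max(r + random.gauss(0, MUTATION), 0.0), 1.0) for r in parents]

    print(f"mean R after {GENS} generations: {sum(population) / POP:.2f}")
    # Typical run: mean R creeps up from around 0.1 toward something respectable. The
    # number doesn't matter; what matters is that modest R early on, plus competition,
    # plus time, gets you ever-stricter demands for higher R.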

So really, to get the R which I enjoy this very day, I only had to be born with enough R to be worked upon by something that was more or less evolved in its own right to do just that. And looking back before that in history, I mean, wow! People were nuts! They believed all sorts of crazy shit; Hell, we still do! Even within humanity, we see wild variance in R, from scientifically literate skeptics at one end to mental patients and uneducated children at the other. The reliability of our cognitive faculties is a fluid thing which can be honed or stunted; it can improve or atrophy over time.

Now comes the question, "How lucky are we allowed to get?" After all, Homo sapiens is but one species among many on this teensy little rock of ours. Would it not stand to reason that at least one organism, with enough time, would develop enough cognitive reliability to improve upon it more or less exactly as we humans have in fact done? Plantinga's response was to ask, "Well, who's to say that it would be us?" Argh. I pointed out that whichever species did that would be the one to have these sorts of conversations, and then Plantinga said, "Well, maybe it's dolphins and they have the good sense not to argue about such things." Thank you, doctor. By this point, I had pretty much asked three questions, and things needed to be moved along. Ah, well. It was fun while it lasted.

This post was featured in the 52nd Humanist Symposium.

Saturday, March 20, 2010

101 Interesting Things, part forty: Cnidocytes

The C is silent, which may ease pronunciation. Cnidocytes are basically nature's single-use, single-cell harpoon guns. Not joking. Here, just look at this diagram:
OK, so yeah, that hair-trigger thing is called a cnidocil, and what it triggers is a surge of calcium ions into the main body of the cell. Then, thanks to the magic of osmotic pressure, water rushes into the cell from outside, turning the folded-in tubule (not the whole cell) inside-out and propelling a tiny barb on a string into the offending organism at upwards of forty thousand Gs. As you can imagine, these things don't come cheap: they're something of a pain to make, and so cnidarians (which, whaddaya know, are the things that have cnidocytes) also use chemoreceptors to detect whether or not they're dealing with a prey organism and then fire their cnidocytes in batches.
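If you want the back-of-the-envelope version: take the forty-thousand-G figure above at face value, guess at the travel distance (the ten micrometers below is my own illustrative assumption, not a measured value), and basic kinematics gives you the barb's exit speed.

    import math

    G = 9.81                      # m/s^2
    acceleration = 40_000 * G     # the "forty thousand Gs" from above
    distance = 10e-6              # assume ~10 micrometers of travel (illustrative guess)

    # Constant-acceleration kinematics: v^2 = 2*a*d, and t = v/a.
    v = math.sqrt(2 * acceleration * distance)
    t = v / acceleration

    print(f"exit speed ~{v:.1f} m/s, reached in ~{t * 1e6:.0f} microseconds")
    # Roughly 2.8 m/s in about 7 microseconds, over a distance thinner than a human hair.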

This is rather like a forward reconnaissance group determining that, yes, the enemy is here, and then a battery of artillery guns firing on the enemy position. Except that the forward-recon-group-slash-chemoreceptors are tasting the target to determine that they're the enemy. And the artillery-guns-slash-cnidocytes are hucking threaded needles full of poison really, really fast at the targets. Oh, and the gunners die after firing one shot. So it's really not like it at all, but now you should be picturing guys in camo licking their enemies, then a bunch of other guys in camo behind them hucking poisonous needles at the other side as they explode in death and organs (this may or may not be what I think of the army). Behold the power of imagery, and how I wield that power with something that might be like panache!

The point is that while we Homo sapiens have refined the process a bit, jellyfish and anemones have been doing this shit for ages. Like, literally. Now I need to find something interesting that's not biology, before this turns into "Mother Nature Did It First: The Blog" while I'm not looking.

Sunday, March 14, 2010

101 Interesting Things, part thirty-nine: Taste

In philosophical circles, the "what it's like" of this or that experience is called its quale (plural: qualia), or if you prefer large and impressive words to small and difficult words, its phenomenology. Philosophers have rather admirably tackled this issue, with some of the famous (read: "familiar to me") examples including Nagel's What Is It Like to Be a Bat? and Jackson's Epiphenomenal Qualia. Much has been made of the importance of qualia in philosophy, case in point being Jackson's paper, where he uses the very subjectivity of experience to argue for epiphenomenalism. Thankfully, he seems to have come to his senses, though I'm not quite sure how he missed the problem with "epiphenomena exert no causal effects; qualia are epiphenomenal; Mary's quale of red causes her to say 'wow'." Whatever, I used to think that a man dying and coming back to life in three days was a noble sacrifice instead of a stunt; I can't really judge this guy for missing the glaringly obvious.

Speaking of the obvious, I think there's a rather easy example available to dispense with the weighty implications attributed to qualia - in terms of both what qualia might imply, and what we might disagree on when speaking of qualia. The Alert Reader will have guessed by now that this easy example is the sense of taste. And speaking of abrupt transitions, it's time for a biology lesson! We begin with the Wikipedia article, because it is clear and concise, and I doubt my ability to abstain from plagiarizing it anyhow:
Taste (or, more formally, gustation) is a form of direct chemoreception and is one of the traditional five senses. It refers to the ability to detect the flavor of substances such as food, certain minerals, and poisons. In humans and many other vertebrate animals the sense of taste partners with the less direct sense of smell, in the brain's perception of flavor.
I want to make it clear here that we are talking about chemoreception, the sensing of chemicals. Not flavors. Flavors are the product of your brain, and they do not inhere in chemicals, but are rather the result of complex interactions between your neurology and what you put in your face. The quick & dirty breakdown is that your taste buds detect the molecular shapes of the things you eat (such as sugars, proteins, or alkaloids) or the presence of dissolved ions (acids are sour, salts are salty); then, nerve impulses are transmitted to your brain to tell it what's going on; then, you react to whatever it is that just happened.
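To make the "shape detector" idea concrete, here's a cartoon version in code. The "receptors" and "molecules" below are wildly simplified stand-ins of my own devising - real gustation runs on families of receptor proteins and ion channels, not a lookup table - but the punchline is the same: your brain only ever learns which detector fired, never what the chemical "really is."

    # Cartoon of gustation: the tongue doesn't report what a molecule IS,
    # only which detectors it happened to fit. Everything here is illustrative.

    RECEPTORS = {
        "fits_sweet_receptor":  "sweet",    # T1R2/T1R3-style shape match
        "fits_bitter_receptor": "bitter",   # T2R-style shape match
        "fits_umami_receptor":  "umami",
        "free_H+":              "sour",     # dissolved hydrogen ions
        "free_Na+":             "salty",    # dissolved sodium ions
    }

    # Hypothetical "molecules", described only by which detectors they trip.
    MOLECULES = {
        "sucrose":           ["fits_sweet_receptor"],
        "saccharin":         ["fits_sweet_receptor"],   # not a sugar; right shape anyway
        "quinine":           ["fits_bitter_receptor"],
        "hydrochloric acid": ["free_H+"],
        "table salt":        ["free_Na+"],
    }

    def taste(molecule):
        # All the brain ever gets: a list of which detectors fired.
        return [RECEPTORS[hit] for hit in MOLECULES.get(molecule, [])]

    print("sucrose   ->", taste("sucrose"))     # ['sweet']
    print("saccharin ->", taste("saccharin"))   # ['sweet'] - same signal, different chemical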

This is what makes things like artificial sweeteners possible. You see, certain chemicals which do not occur commonly in nature (or at all) can trick your tongue into thinking that something is there which might not actually be there, just because they're shaped similarly in this or that way to the sort of thing that abounded in our evolutionary history. Case in point: aspartame. While the body still metabolizes it, aspartame is perceived as about two hundred times sweeter than sucrose (though of a somewhat different taste, depending on who you ask), making its caloric content negligible for about the same amount of sweetness. Models of how sweetness works have grown in sophistication over time, the current model describing some eight recognition sites on the sweetness receptor and culminating in the development of lugduname, which is estimated to be some two hundred thousand times sweeter than sucrose.
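Just to put those multipliers in gram terms, here's the arithmetic, taking the "two hundred times" and "two hundred thousand times" figures at face value and assuming the usual ~4 kcal per gram for both sucrose and aspartame (aspartame gets metabolized roughly like a protein, so that's about right; applying it to lugduname too is crude, but at these masses it hardly matters).

    # Sweetness-equivalence arithmetic, taking the multipliers above at face value.
    KCAL_PER_GRAM = 4.0                 # crude, but close enough for this comparison
    sweetness = {"sucrose": 1, "aspartame": 200, "lugduname": 200_000}

    target_grams_sucrose = 10.0         # say, the sugar in a seriously sweet cup of coffee

    for compound, multiplier in sweetness.items():
        grams_needed = target_grams_sucrose / multiplier
        kcal = grams_needed * KCAL_PER_GRAM
        print(f"{compound:10s}: {grams_needed * 1000:9.3f} mg for the same sweetness (~{kcal:.2f} kcal)")

    # sucrose   : 10000.000 mg for the same sweetness (~40.00 kcal)
    # aspartame :    50.000 mg for the same sweetness (~0.20 kcal)  <- hence "negligible"
    # lugduname :     0.050 mg for the same sweetness (~0.00 kcal)  <- a twentieth of a milligram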

Just in case we're not clear yet, let me say this in a different way: your taste buds are shape detectors and ion detectors which react with certain chemicals to send nervous impulses to your brain. The "taste" of this or that thing is the product of what your tongue tells your brain it is shaped like (or its pH value/alkali ion content). Seriously. Isn't that fucking weird?! And even weirder, there are substances called sweetness modifiers which temporarily but fundamentally alter the way sweetness is perceived, such as lactisole (which inhibits the taste of sweetness) and miraculin (which makes sour things taste sweet instead). I mean, whoah!

We were talking about qualia earlier, and now it's time to get back to that. I think that taste is perhaps the most vivid demonstration of the crowbar separation (thank you, Eddie Izzard) between reality and how we perceive it. Once again: shapes can be delicious, WTF. With touch, it's pretty easy to imagine sensations of pressure, wetness, or roughness being at least somewhat related to what's going on (with the possible exception of temperature - I mean, c'mon, rapidly vibrating molecules equals hot? Who comes up with this shit?). Same with perceptions of the visible wavelengths of light, or the audible wavelengths of sound. But taste? Give me a break! There just has to be some subjectivity going on here, because the "what it's like" is nothing like the "what it is"!

This also makes some of the debate about qualia seem rather silly. I'll briefly discuss two examples, first the analogical relationship between color-blindness and supertasting, then the inverted spectrum argument. I'll close with a quick Take That! directed at proponents of intelligent design, and a little treat for sticking it out through this interdisciplinary ramble.

First off, when we consider whether qualia are "the same" among different people, of course they're not. Nobody ever said they were, but when you consider that the colorblind see rainbows in a completely different way than most other people do, and that some people experience taste way more intensely than most other people do, then I think it becomes trivially easy to imagine that no two people experience the world in the same way. Even Epictetus was able to point out that two people can have wildly differing experiences of the same situation, and so it is not necessarily the world itself that causes us joy or suffering, but our attitudes toward it - what I would add is that a person may be just as powerless to change his or her attitudes as to change his or her taste buds. Sometimes this can happen for no apparent reason, as with my early disgust and later love for mushrooms. Sometimes it happens due to experience, as with my aversion to light beer. Sometimes it doesn't happen no matter how hard you try: I will never be able to enjoy a resin spurge sandwich, because resiniferatoxin is so much more potent than capsaicin, it's not even hot any more (you just go into anaphylaxis and die).

With that in mind, the inverted spectrum argument is even easier to de-fang. For those who didn't click through (and I throw potholes all over the place, so I really don't blame you), the argument goes as follows:
1. Metaphysical identity holds of necessity.
2. If something is possibly false, it is not necessary.
3. It is conceivable that qualia could have a different relationship to physical brain-states.
4. If it is conceivable, then it is possible.
5. Since it is possible for qualia to have a different relationship with physical brain-states, they cannot be identical to brain states (by 1).
6. Therefore, qualia are non-physical.
Now, is it just me, or is the question simply begged in premise three? I mean, if qualia could have a different relationship to one and the same physical brain-states, then of course they're not physical! Fuckin' duh! But this is exactly what is at issue. We see over and over again in the literature (Just look! Like, anywhere!) that when you change the sensory apparatus, you change the qualia. To a person who is relatively up on the neuroscience, while taste and sight vary greatly across the spectrum of human experience, the conceivability of two possible qualia X & Y for the same brain state Z is somewhere on the order of that for two possible diameters A & B for the same circle C. It's just that no two physiologies are perfectly identical (not even the same physiology at different points in time), so of course subjective experiences vary.

As for the IDiots, well, I'm just going to come out and say that ethylene glycol tastes sweet and is very toxic. So that's a poison detection failure right there. But let's not forget that to supertasters, a wide variety of totally healthy foods taste absolutely repulsive; and to non-tasters, a wide variety of potentially fatal substances taste perfectly fine. Oh, and how about the fact that the sense of taste is based primarily on superficial characteristics of molecular compounds, and is therefore rather easy to hack?

OK, so I'm done tying loose threads together for the night. Seriously, I got way too sidetracked while writing this post. I leave you all with the following cookie, as promised. This one tastes like tough decisions:
Click it!

Tuesday, March 9, 2010

You should read The Slumgullion!

My friend Jack works in Navy SIGINT, which is fucking impressive. He's also a great writer and a gamer who thinks a lot (or thinks a little, but very well). He combined those interests at his old blog, Philosophy of Games, which had been up on my sidebar for a while.

Well, he's got a new blog now, called The Slumgullion. If you enjoy wordplay and randomness, then you should head on over and check it out! I'm a fan of his ghastly little tales, in particular.

Saturday, March 6, 2010

Arguing on the Internet: Is there intelligent life on Earth? No, really - is there?

The following is an elaboration on a conversation I've been having with paradoctor over on Daylight Atheism. This whole mess started with a rather scholarly discussion of the conceptual failings of IDiocy, segued smoothly into talk of abandoned dinosaur cities on the Moon, and so far has come to paradoctor asking me if I think there's intelligent life on Earth (whether seriously or facetiously, I cannot tell). Then I thought of the following.
There are days when I would be inclined to say, "No, intelligent life does not exist on Earth." But of course this is absurd, and I am not typically interested in advocating for absurd positions. Although I do freely admit that reality is absurd, I think the only positions worth taking are non-absurd ones - I agree with absurdists on a lot of things, in other words; I'm just not one of them.

In all seriousness, in my experience, what many people mean by "intelligent" is something along the lines of "has first-person experiences". However, we can't test for this directly, by the same line of reasoning that I can never "really know" what it's like to be a bat because I am not a bat. So we instead come up with things like the Turing test, which of course works for determining whether our candidate is able to do the things an intelligent agent is also able to do, but leaves the rather glaring problem that a Turing-test-passing philosophical zombie is absolutely indistinguishable from its genuinely "first-person experience-having" counterpart. Then comes Daniel Dennett to the rescue, "We are the zimboes," and blah blah blah.

As Dennett himself explains the idea, "Zimboes thinkZ they are conscious, thinkZ they have qualia, thinkZ they suffer pains – they are just 'wrong' (according to this lamentable tradition), in ways that neither they nor we could ever discover!" His point is that there is no meaningful difference between thinking and thinkingZ, therefore zimboes are no different from persons as we ordinarily think of them - le sigh... or thinkZ of them - and if there's no difference between two things then they're the same. Of course, this leaves the problem of whether things we might not even consider to have first-person experiences are zimboes just like ourselves. We have no reason to think so, of course, but the whole point is what if we're wrong? Pascal's Wager of animism, in other words, with a weird line-drawing problem that seems to center around nervous systems or perhaps circuits (we'll get to this line-drawing later).

The other main camp I've encountered means by "intelligent", "can demonstrate learning", conveniently bypassing the p-zombie problem. Of course, on this analysis, dogs and mice are intelligent for being able to learn non-instinctive behaviors and solve problems - which is generally unproblematic, as they are cute and fuzzy to most people, most of the time. But even pigeons have been shown to form superstitious behaviors, which to me would indicate both an ability to learn and a high probability of first-person experience. The problem here is one of principled interpretation: we say that a mouse has "solved" a maze by reaching its end, having been motivated to do so by the conditions of the experiment; could we not then say that the elements in the Miller-Urey experiment "solved" their situation by forming amino acids, having been motivated to do so by the conditions of their experiment? Both cases involve watching the behavior of particular configurations of mass-energy in an artificial environment. While it might be unproblematic to attribute (admittedly limited) intelligence to insects, for they still have brains, even amoebas engage in what looks like hunting, and they're just single cells! In both experiments, the things being experimented upon merely behaved in whatever ways they could, and I see no principled reason to attribute one with intellect and not the other (given the smooth gradations that can be made at every step of the way in between). Could not we humans, for all our scientific endeavors, be seen as a mere means in reality's effort to understand itself? A sort of cosmic navel-gazing, if you will, in which introspection on things like "purpose" is itself our very "purpose"!

While such wildly irresponsible speculation may make philosophers feel good about themselves, the slippery slope cuts both ways here: we can show that many organisms which "demonstrate intelligence" are completely uncommunicative in ways that we are able to recognize, so who's to say that other things don't simply possess lesser and lesser degrees of intelligence, whether or not they have first-person experiences? I mean, sure, we can draw a line somewhere between fish and rocks, but precisely where we draw it will make a difference (I promise, we're getting there!). Cutting back, when we say that a thing "acts as if" it has a higher level of intelligence than can be reasonably attributed to it, what's to say that purportedly intelligent things almost certainly possessed of first-person experiences aren't merely "acting as if" they have intelligence? The mystical "I" at the helm of consciousness is, after all, a story that the brain tells itself, ret-conning and confabulating memories will ye, nill ye.

Dennett argues (very persuasively, I think) that consciousness is more like fame than like being on television, insofar as it happens distributively and by degrees rather than in a strict "on or off" sense. The degree to which we are aware of the calculations performed by our brains is, to my mind, rendered irrelevant - the parts of our brains that execute whatever steps they have learned to execute are no more intelligent than the components of a computer running an installed program (and no less!). Or as Edsger Dijkstra put it, "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." But at the same time, any matter could be said to be "running" whatever "program" would perfectly describe whatever it is that it does. In fact, a biophysicist named Gregory Engel has found evidence that photosynthetic organisms accomplish photosynthesis at their level of efficiency by exploiting quantum coherence. The Scientific American version actually credits them with performing the computations, but my point is, what's the meaningful difference? (I suppose the "daffyrance" would be that the SciAm folks have never heard of pareidolia.)

So in one sense, "intelligent" is a word and words are made-up, so of course there's no intelligent life on Earth. In a more pragmatic sense, of course there's intelligent life on Earth, but the interesting question is how much of it can we call intelligent? Between these two extremes - nihilism and treating the matter as a necessarily open question - lies just a mess that relies on presuppositions and other motivations to get a clear answer. In short, the only clear answer is a semantically bizarre yet unequivocal No; Yes answers are by their very nature unclear and open to debate (which is awesome!). Of course, this won't stop a bunch of really sexy heroes sporting labcoats and degrees from pointing out which arbitrarily-arranged distinctions have this or that pragmatic value, and this is perfectly fine. But I have neither a labcoat nor a degree, I am instead interested in understanding the whole problem, and what this understanding tells me is that on the one hand there's nothing, on the other hand there's everything, and in between is just a mess. Viva la mess!

OK, finally. This all matters because a computer-based Synthetic Intelligence (there's nothing artificial about the Turing-test-passing sort of intelligence we're talking about here) would seem to warrant all the rights and responsibilities of any other person. And why not? If you were to upload your complete brain state to a sufficiently advanced computer to "run" you past the expiration of your organic body, would you not then be one such computer-based SI? Would you not want the termination or erasure of your process to be regarded as a murder? Anyone would! Including an SI "born" on a hard drive with no biological precursor.

But how can you murder something that's not alive to begin with? And what about copies that modify themselves or blend with each other to the point of no longer being the same "person"? Do we need to count any form of intelligence as "alive"? Or can we simply regard the extinguishing of an intellect as "murder"? This would seem to handily resolve difficulties with people in persistent vegetative states, as well as the abortion of fetuses lacking brain tissues (at the very least). But eating meat would now well and truly be murder, unless we either genetically engineer brainless animals (which doesn't sound all that bad to me), or only eat animals that die of natural causes (which has problems of its own). We can push the problem back a bit by talking about legalistic "citizenship" rather than philosophical "personhood", but the issue remains that we want to make our in-group as wide as possible while still leaving the "stupid mass-energy" open to the sort of resource exploitation that we won't perform upon our persons/citizens/what-have-you. Double trouble if we assign intelligence to rocks, for "running" their "rock programs". (Then PETA changes their name to People for the Ethical Treatment of Mass-Energy, PETME, and comes up with new slogans: "Mining is murder!" "Minerals are people, too!" "Rocks are not ours to skip across the ocean waves or keep as pets in our homes!")

So to answer paradoctor's question, at long-winded last: there is intelligence on Earth in every way that counts; there is also life on Earth in every way that counts; but these do not always "go together". You can have unintelligent life, and intelligent non-life, depending on your definitions. In order to meaningfully answer the question, we need a more robust understanding of the problem with respect to which the question is being asked - it matters what else will be at stake. Or we could keep our answers provisional and revise them as new issues come up. I mean, it's not like we're codifying a religion here or anything.