## Update Schedule

This blog regularly updates on Tuesdays, Thursdays, and weekends.

## Friday, June 26, 2009

### 101 Interesting Things, part nineteen: The Monty Hall problem

So a while ago, someone in my space game brought up the Monty Hall problem in corporate e-mail. There was some argumentation, but I remembered that it had just recently been discussed at Philosophy Club (recently in terms of meetings, not days) as an example of our intuitions leading us astray. I thought I'd bring it up here as one of my hundred-and-one interesting things.

The Wikipedia page (linked above) goes into great detail about how it's one of the most misunderstood problems in history. As cognitive psychologist Massimo Piattelli-Palmarini says, "...no other statistical puzzle comes so close to fooling all the people all the time... even Nobel physicists systematically give the wrong answer, and ... insist on it, and they are ready to berate in print those who propose the right answer." This is a prime example of an extremely common failure to reason correctly, and a lesson that I think anyone could stand to learn from. Hell, I got it wrong at first, and was very resistant to having my mind changed until it was shown in excruciating detail just where I went wrong.

So, to the point! For those who did not click the link or don't know about Monty Hall, the problem is: you're on a game show and have advanced to the final round. Before you are three doors, behind one of which is a car, and behind the other two of which are goats. You pick a door, and before opening it, the host stops and opens one of the remaining doors, revealing a goat. He then asks, "Do you want to keep what you've got behind door x? Or risk it all to see what's behind door y?" What should you do to maximize your chance of winning the car? Should you stand pat? Should you switch? Does it even matter?

Most people think it doesn't matter - a goat has been eliminated, meaning that of the two doors left, one has the car and the other has a goat, so it's 50/50. It will not make a difference whether you stand pat or switch, they say. This is incorrect, and in a big way. In truth, standing (as a strategy) will only win you the car 1/3 of the time, whereas switching will win you the car 2/3 of the time. Exploiting this common failure to reason correctly, and exacerbating the problem with some suggestive phrasing, Monty Hall saved his show a lot of money by handing out far fewer prizes than it otherwise might have.

The reason it is this way is that when you make your initial choice, you only have a 1/3 chance of selecting the car (the "right" door), and this probability does not change when one of your choices is eliminated. Standing, as a strategy, stakes your bet on picking the right door of three. Since there is a 2/3 chance that you picked the wrong door, there's a 2/3 chance that the car is behind a door you did not pick, and thus Monty actually does you a favor by eliminating a goat - which ought to make your choice all the more obvious.

Another way of looking at it is that there's a 1/3 chance of the car being behind each door, and when you make your choice, you remove that door from the "opening pool," fixing its probability (since Monty will never open the door you pick). Since Monty will also never reveal the car, but there's a 2/3 chance that it's behind one of the un-picked doors, opening one of them "consolidates" those odds behind the single door that remains un-picked and unopened.
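If the arithmetic doesn't convince you, brute force might. Here's a quick Monte Carlo sketch in Python (the function name and trial count are my own choices) that plays the game a hundred thousand times with each strategy:

```python
import random

def play(switch, trials=100_000):
    """Play the three-door game repeatedly; return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Monty opens a door that is neither your pick nor the car.
        opened = random.choice([d for d in range(3) if d not in (pick, car)])
        if switch:
            # Take the one door that is neither picked nor opened.
            pick = [d for d in range(3) if d not in (pick, opened)][0]
        wins += pick == car
    return wins / trials

print(f"stand:  {play(switch=False):.3f}")   # comes out near 1/3
print(f"switch: {play(switch=True):.3f}")    # comes out near 2/3
```

Standing hovers around 0.333 and switching around 0.667, just as the argument above predicts.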

One of our corporation members proposed an apt re-phrasing of the problem which I will outright steal (with a little change). Suppose there are a billion doors (and still only one car), so picking the right one off the bat is only a one-in-a-billion chance. You pick one anyway, and then Monty opens all the remaining doors but one, showing goats behind all of them. The odds are overwhelmingly likely that you picked a wrong door, and the car is behind one of the other nine-hundred ninety-nine million, nine-hundred ninety-nine thousand, nine-hundred ninety-nine doors (the number is more impressive when written out, I promise). When you are given your second chance, it's obvious you should switch, since Monty's done most of the work for you. The same reasoning applies to the three-door scenario, it's just scaled down a bit and thus not so obvious, since only one door is picked by you and only one door is opened by Monty.

As an interesting side note, a less-obvious rephrasing has Monty open just one of the remaining doors instead of all but one - in the three-door scenario, the two rules happen to coincide. Let's say there are four doors, so you have a 25% chance of picking correctly right off the bat. The remaining 75% is distributed evenly among the other three doors until one is opened, at which point it's distributed among the two unopened and un-picked doors. Since we have no means by which to distinguish them, they're even, and each gets half of the opened door's quarter, putting them at 37.5% apiece - so you should still switch. A five-door scenario, by these rules, yields a 20% chance for standing and 26.7% behind each of three doors. As the number of doors rises, the margin of advantage shrinks, but the switching odds stay strictly higher, so it's always better to switch. The advantage happens to peak in the three-door scenario.
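Since this variant is just arithmetic on fractions, it's easy to tabulate. A small sketch (the function name is mine) using exact rational arithmetic:

```python
from fractions import Fraction

def variant_odds(n):
    """Odds for the variant where Monty opens exactly one goat door out of n
    and you then switch to one of the n - 2 remaining closed doors at random."""
    stand = Fraction(1, n)
    # You win by switching iff your first pick was wrong ((n - 1)/n) and your
    # second pick lands on the car among the n - 2 closed, un-picked doors.
    switch = Fraction(n - 1, n) * Fraction(1, n - 2)
    return stand, switch

for n in (3, 4, 5, 10):
    stand, switch = variant_odds(n)
    print(f"{n} doors: stand {float(stand):.3f}, switch {float(switch):.3f}")
```

For n = 4 this gives 0.250 versus 0.375, and for n = 5 it gives 0.200 versus 0.267, matching the figures above; switching stays ahead for every n.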

I feel like this should be wrapped up somehow, but now all I can think of is other interesting problems in game theory, which is like the coolest thing ever.

## Thursday, June 25, 2009

### 101 Interesting Things, part eighteen: Hailstone Sequences

The field of mathematics is chock-full of interesting things, but most of these are only of interest to physicists and mathematicians, as they involve concepts or methods well beyond the education of most. Some examples include the Riemann hypothesis, which involves the distribution of prime numbers in a way that almost requires higher education to even understand, and is really only interesting to cryptographers; or Goldbach's conjecture, unsolved since the 1700s and seemingly begging for a simple proof, which states merely that every even number greater than two may be expressed as the sum of two prime numbers. Sure, it's simple - but you try proving it! Any aspiring mathematicians who want to have their dreams crushed are welcome to peruse the CMI Millennium Prize problems - open problems in mathematics which have collectively stymied the experts for probably a million man-years.

At the other end of the spectrum are problems within every layman's grasp, such as the Monty Hall problem (which shall get its own post in short order, now that I've thought of it). The one I want to talk about today is the problem of hailstone sequences. As the linked page explains, these are called "hailstone sequences" because the sequences generated by the following algorithm go up and down like a hailstone in a cloud. Take any number - for the sake of the cited article, we'll call it n - and follow these steps:
1a. If n is even, divide it by two: n' = n/2
1b. If n is odd, multiply it by three and add one: n' = 3n + 1
2. Repeat step 1 with n'. Lather, rinse, repeat ad infinitum.
OK, now eventually, your sequence is going to end up repeating at 4, 2, 1, 4, 2, 1... Go ahead, try it. I've got time. There's even a little applet on the page linked above.
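The steps above fit in a few lines of Python, if you'd rather not click through to the applet (the function name is mine):

```python
def hailstone(n):
    """Return the hailstone sequence starting at n, stopping at the first 1."""
    seq = [n]
    while n != 1:
        # Even: halve. Odd: triple and add one.
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(hailstone(27))  # bounces as high as 9232 before crashing down to 1
```

Try 27: it takes 111 steps and climbs all the way to 9232 before it finally collapses into 4, 2, 1.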

All right, satisfied? Now I've got two propositions that I'm just going to lay out there:
1. Every number put through these steps will end up repeating at 4, 2, 1, 4, 2, 1...
2. There is at least one number that does not end up at 4, 2, 1, 4, 2, 1...
Nobody - not one single person - has been able to prove either of these contradictory propositions. But one of them must be true! The common-sense approach is that any time you land on a power of two (2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, etc.), it's Game Over - the sequence just halves its way straight down to 4, 2, 1 - and you're eventually going to hit a power of two, so there. The same goes for powers of two multiplied by powers of ten, since those halve down to numbers that fall into the same old trap (though for slightly more subtle reasons). In fact, you could conceivably show that all non-prime numbers end up at 4, 2, 1, etc., though I'm a bit fuzzy on the how of the matter. You'd probably want to start by showing that all multiples of three will do it, then all multiples of five, then six, seven, eight, nine, etc. (1, 2, and 4 are skipped for what should be obvious reasons), until you identified a general solution. But that still leaves the problem of primes, and prime numbers are ipso facto numbers that fit no multiplicative pattern - the sieve of Eratosthenes demonstrates this by striking out the multiples of each prime in turn, eliminating every number generated by a regular pattern and leaving you what's left: the primes. Because of this, a general solution to prove proposition 1 (above) may well be impossible.
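For what it's worth, proposition 1 is easy to check by brute force for small numbers. Here's a minimal sketch (with a step limit as a safety valve, since a genuinely divergent number would otherwise loop forever):

```python
def reaches_one(n, max_steps=10_000):
    """Return True if n's hailstone sequence hits 1 within max_steps."""
    while n != 1 and max_steps > 0:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        max_steps -= 1
    return n == 1

# Every starting value up to 100,000 falls into the 4, 2, 1 loop.
assert all(reaches_one(n) for n in range(1, 100_001))
print("no counterexample below 100,000")
```

(Dedicated computing projects have verified the conjecture vastly further than this, which of course still proves nothing about all numbers.)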

Frustratingly, it may be an insoluble problem - proposition 1 may be true but unprovable, meaning that proposition 2 can never be proven either and the question shall always remain open. This possibility could itself be proven by showing that proposition 1 is undecidable - that it can be neither confirmed nor refuted from the axioms we have. Whether even that can be done is also an open question, though.

So scratch your head in wonder at this interesting emergent property of numbers, and then have a beer. I'm sure going to!

## Monday, June 22, 2009

### This is about as close to the Platonic form of Keyboard as you can get.

That is, without invoking the way-futuristic modular holographic keyboards they have in all those sci-fi movies. Look, the Optimus Maximus keyboard is a geek's wet dream. Take a look:
Yeah, all those little images on the left? Those aren't fancily-printed labels. They're teeny-tiny OLED screens. That's right: this keyboard is made up of 113 little display screens that can do pretty much anything you'd care to have them do. The letter displays are context-sensitive depending on whether you're holding Shift or Alt; you can set icons to represent whatever function you've hotkeyed; individual keys can even display Yahoo! widgets with the weather, date & time, currently playing song, etc. Check out the video:
In other news, I'm still reevaluating this whole polyphasic sleep thing. On the one hand, I like having free time, and fun is non-negotiable for me. On the other hand, falling asleep at my desk and waking up four hours later in bed with no idea how I got there is kind of annoying. I still enjoy writing this blog, so I'm going to keep it up - I guess I just need to budget my time better, and discipline myself to get up when my alarm goes off instead of hitting snooze or whatever. Also, I need to take my naps when I need them no matter what. People always want me to do something when I should be putting my head down for half an hour, and while it's nice to have an active social calendar, I have to stick to this schedule if it's going to work at all. Maybe I should try including a core sleep segment, while still taking naps at work, to the end that I'll still be getting reasonable amounts of total sleep but also freeing up chunks of time to do what I like at home. We'll see.

## Friday, June 19, 2009

### Pardon our dust, we are experiencing technical difficulties, etc.

I'm fucking up my sleep schedule something awful. I need to reevaluate. There will be a real post again on Monday, I just didn't want to go a whole week without doing anything at all.

Argh. This would be so much easier if I had real super-powers, instead of pretend ones.

## Sunday, June 14, 2009

### Cross-Post: Green fairies are just as imaginary as any other color

A victim of urban myth = me. I learned this weekend that absinthe tastes like butt and evil. And, according to that Wikipedia article, it's not even hallucinogenic:
Absinthe has long been believed to be hallucinogenic, but no evidence supports this. ... Today it is known that absinthe does not cause hallucinations, especially those described in the old studies. Thujone, the supposed active chemical in absinthe, is a GABA antagonist and while it can produce muscle spasms in large doses there is no evidence it causes hallucinations.
GABA (gamma-aminobutyric acid) inhibits nerve impulses, so giving it the day off will allow your nerves to fire too easily (hence, spasms). This is not a hallucination - it's more like a seizure. Furthermore, this stuff has to pass TTB (Alcohol and Tobacco Tax & Trade Bureau) testing, which means it's got less than 10 mg/L thujone in it anyway (not large doses). The stuff is 62% alcohol by volume, so I have no qualms about chalking up these hallucinatory claims to the placebo effect. Further complicating this issue is the fact that some modern absinthe concoctions contain additional hallucinogenic ingredients, the goal being not to drink absinthe as it was drunk "back in the day," but to get the high attributed to it. Which, I mean, that's cool & all, but let's please not fool ourselves into thinking it's because of the absinthe. That's just backwards.

How this stuff ever became so popular is beyond me. It tastes terrible - I'd rather drink grain alcohol. It's like super-Jager, in that it has more licorice flavor in a shot than is present in a whole bag of licorice whips, and could probably be used as an industrial-strength solvent (and it has actually been used to disinfect contaminated water in the past). I tried two of the mixed-drink versions, wherein the absinthe is diluted to about the potency of wine, and was quite unimpressed. Doing a shot of it, however, was like having a licorice fireball tear through my esophagus.

I decided to do some homework and find out just what I was getting myself into, so I read a couple articles: one on the origin of Lucid, the absinthe in question, and another on T.A. Breaux, a chemist playing a major role in the modern absinthe revival. This is the guy most directly responsible for getting Lucid approved, as he discovered that his own absinthe (produced with authentic equipment, ingredients, and procedures), as well as the pre-ban vintage absinthe he had acquired, had very nearly no thujone in them at all - making it trivially easy to bypass the thujone concentration rule which allows "refined" absinthe (strong liquor without wormwood) to be sold. Breaux is like an absinthe connoisseur or something, which leads me to believe that either there's more to it than I have come to expect, or there are far darker corners of the human heart than I dare imagine.

It turns out that absinthe isn't supposed to taste like Satan's asshole. Magically, I guess. The stuff came into popularity during a rather devastating wine shortage in France, and so if it was used as a wine substitute, I suppose it ought to be sipped rather than gulped or shot - and this could make a rather major difference, as it would with whisky or cognac. I'm going to lock myself in a room and try science to see if I can get this stuff to work right. You know, make it taste like herbs and flowers rather than suck and fail. However, my optimism remains cautious, for I know that there are those who drink moonshine and other noxious toxins straight from a jug. I'll let you know if something exciting happens, but otherwise, you would all do well to stay behind the safety glass.

Additionally, the stuff technically wasn't "legalized," because consumption and possession were never illegal in the first place (though it can be seized with a warrant, oddly enough) - only importation and sale without FDA/TTB approval were. That approval hinges on thujone concentration and almost nothing else: if it's an alcoholic product, then it cannot contain wormwood, and for legal purposes this means coming in under the aforementioned 10 mg/L thujone concentration. Other thujone-containing substances, such as sage, are completely unregulated. So, it turns out, authentic absinthe doesn't legally contain the substance thought to be the active ingredient in... authentic absinthe. Since no laws had to be changed, it simply had to get approved by the FDA, which was just a matter of putting in the time and paying the lawyers. Another major detail, I guess, since getting something approved is probably a far lesser legal ordeal than trying to repeal a ban on a particular substance. The moral of the story is that neither thujone nor wormwood, from which it is derived, is a controlled substance. The Lucid website has an FAQ which corroborates this information, as well as providing me ideas for my experiments.

## Saturday, June 13, 2009

### Cross-Post: I Love the Internet

Note: Back on my Playskool blog, I had a few posts which I thought were rather good in principle, but I didn't really develop or revise them. I want to fix that here on my grown-up blog, so I'm starting a Cross-Post section. While looking for material, I came across this, which I could not resist sharing. Without further ado:

So I had this great song stuck in my head all day, and I had only heard it once, so I tried to figure out what the words must have been... and I failed hard. Unable to proceed in that direction, I decided, "Fuck it! I'm making my own song! Now what, motherfuckers?!"

I think it turned out pretty well.

I Love the Internet
Inspired by this commercial and this comic. Words by me.
I love pornography
I love astronomy

I love the internet
And how it shrinks the world

I love Weebl's stuff
It brings me happiness
I love the newsy sites
With all their truthiness

I love the internet
And all its silly shit

I love neutrality
And cell phone media
I love banality
And Wikipedia

I love the internet
And all its dramedy

I love Pharyngula
And I love peer review
I love free browser games
And NaNoWriMo, too

I love the internet
Even the scary parts

I love Anonymous
And how they yell at jerks
I love xkcd
And all his silly quirks

I love the internet
Except for emo fags
Thank you! Thank you very much!

Second note: I don't actually like "the scary parts," just so we're clear. But it's part & parcel of the whole internet gig - if something can be great, then it can also be awful, and you can either take the bad with the good or throw it all out. As my friend Jack is fond of saying, "Highs & lows, no status quos." Or as I like to say, "Being able to feel means that you will feel like shit sometimes - deal with it." I view the "dark side" of the internet in much the same way.

## Thursday, June 11, 2009

### Look, everybody! It's a Pip-Boy!

I really have no idea what else LG thought everyone was going to make of this. Unless it shoots lasers, does your laundry, and/or dresses your children, everyone's going to call it a Pip-Boy. I guarantee. Let's do a quick side-by-side:

I mean, for cryin' out loud, the screen of LG's device is even dominated by green. Don't get me wrong, I want one. I'm just saying.

In other news, since I really can't justify making a whole post about a video game that I'm only kinda-sorta anticipating, Halo 3: ODST = sandbox?! With stealth, health, and the triumphant return of the pistol? Well, OK. Maybe I'm anticipating it a bit more now. We'll see what September brings.

## Tuesday, June 9, 2009

### 101 Interesting Things, part seventeen: Prehistoric Booze!

Talk about getting back to your roots: Discover Magazine tells the story of Chateau Jiahu, an interestingly-named alcoholic beverage with an even more interesting composition:
For the past few years McGovern has been analyzing scraps of pottery excavated from a site in central China. Last year he announced that he had detected traces of the oldest alcoholic beverage yet discovered, a Stone Age brew dating back 9,000 years. When I visited McGovern's basement laboratory the day before, he handed me a plastic bag containing one of the shards. I could not get my mind around the stretch of human culture it embodied—a time period twice the span from the pyramids of Egypt to the pyramids of Las Vegas; Christianity rising four and a half times. Nonetheless, with this meager evidence, McGovern and the brewers at Dogfish have found their mission: to coax an ancient brew back to life.

Our ancestors were getting fucked up on this stuff in China around nine-thousand years ago. That's right: we've been boozing it up since before some people think the Universe even existed. Here's to the human race, getting wasted on whatever's handy since time immemorial!

Scientific American has a similar (though much more concise) article about several other similarly-themed ancient brews; you should check it out if you're the drinking type. I'd drink one just to say I'm getting drunk on proof that creationists are wrong.

## Monday, June 8, 2009

### 101 Interesting Things, part one follow-up

My initial post on 101 Interesting Things dealt with the Antenna Galaxies. I saw an image today that tells another interesting story about space: the Earth is a very small place in a very large Universe.

The Earth is shown, to scale, first next to other bodies in our solar system. Then the Earth vanishes to invisibility as our Sun is compared to other stars, and the Sun then vanishes itself as yet larger stars are shown. Finally, the Sun is shown as a pale cluster of pixels, to scale, next to the largest star known. To still have the Sun register at all, only about twenty-five degrees of the larger star's circumference can be displayed.

Next, the Hubble deep field image is explained. For those who may not be savvy, the Hubble deep field image is a picture assembled from very small, very sensitive exposures taken over a period of ten days by the Hubble telescope, gathering every single photon it could from a very, very small section of the night sky - about the width of a dime held at seventy-five feet (or, if you're metrically-inclined, a 65-mm tennis ball held at 100 m). Nobody thought we would see much of anything in there, but almost unbelievably, this oh-so-tiny section of sky held galaxies beyond counting, many over 13 billion light-years away. Some of them are so far away that the light now reaching our eyes was produced by stellar fusion when the Universe was only 800 million years old, and has travelled for almost the Universe's entire lifetime just to reach us. After so long a time, there is precious little evidence of what was going on back then, and extracting it from amid all the noise surrounding our planet is painstaking - but what we are able to find in even that is rich.

The fact that we are able to do this tells me something both very simple, and very important: the Universe is truly amazing. There are those who think that atheism - a purely scientific world view insofar as it asserts only that we must learn from the Universe by looking at it - lacks vitality, inspiration, or wonder. I know that some of them think this, and I once thought it myself; yet I can't help but ask in response to that, "Isn't reality enough?"

### An Open Letter to the Pepsi Corporation

Dear PepsiCo,

I am an avid drinker of sugary carbonated beverages. Recently, I noticed with great pleasure that you have been selling both Pepsi Cola and Mountain Dew with "real" sugar instead of high fructose corn syrup, and with the original logos on the cans. These drinks have become progressively more difficult to find lately, so I think it's a reasonable guess that you were doing a market test to see how well it did before making the decision to permanently sell it or not. I wanted to take this opportunity to share some "regular person on the street" information with you, saving you a lot of time and effort.

Selling your drinks with real sugar and the old logos is a good move, no two ways about it. You should keep doing this, and as long as the Coca Cola Corporation doesn't make a similar move, you will have cornered the market on this, especially if you are able to continue to undercut their prices (your drinks are cheaper at the local gas station than Coke's). In fact, you could probably sell them for even more.

With Pepsi Cola, it's not a very big deal - the Pepsi can design, in its most recent incarnation, looks fairly hip & cool (mainly because it's not too busy), and Pepsi still tastes good. However, you really need to take another look at Mountain Dew. Nobody thinks that "Mtn Dew" looks cool with those stupid spiky peaks on the can. It's dumb and you know it, and everybody else knows it, and nobody cares because - this is important, so pay attention - nobody gives a shit what's on your cans. Not one single person. As long as I can tell what's inside it by looking at it, nothing else matters in the least. Every person in every focus group who ever told you that the latest can design is an improvement in any way at all is a sycophantic twit who was just telling you what you already wanted to hear: they were stroking your metaphorical cocks like cheap hookers, and that's all. You know what a logo update means to normal people? It means that you artificially increased the production cost of your product by paying some asshole easy money to do something which nobody wanted you to do in the first place. You can fire every single graphic designer who has ever worked on a Mountain Dew logo since 1990, and make sure someone slaps the guy who decided to abbreviate it to "Mtn Dew" on the way out. That guy is a real shit.

I drink Mountain Dew because it is sugary and heavily caffeinated, not because it looks cool. It doesn't look cool, and it can't, because you can't make a drink look cool when it tells everyone in eyesight that I was up late doing something silly the night before. Kids who have had Mountain Dew will continue to demand it from their parents, no matter what it looks like. Parents who don't let their kids drink Mountain Dew will continue to withhold it, no matter what it looks like. Adults who drink Mountain Dew will continue to drink it, no matter what it looks like. And adults who do not drink Mountain Dew will not start because of a logo change, no matter what it looks like. Nobody will ever care what Mountain Dew looks like, with the possible exception of your advertising executives, period. The appearance of your cans in no way affects your sales. I promise.

Going with the older logo, you at least acknowledge that all these misguided attempts to look "extreme" or "edgy" are foolish and unnecessary. You know those jocko-homo assholes in Harold and Kumar go to White Castle? They were assholes, and everybody thinks so. Nobody wants to be those people, but you continue to act like you're selling Mountain Dew to those people. This is stupid and a waste of your money. Everybody knows what Mountain Dew is, so you don't need to advertise it. You could save a shit-ton of money by simply going with the old logo, ceasing all advertising campaigns, and just keeping it on store shelves. You already have all the market penetration you will ever get, and nothing is going to affect this as long as Mountain Dew is for sale.

One last note: someone really needs to tell you that a "throwback" is a term to describe a fish that you "throw back" after catching it, because it's not worth keeping. When describing something old, calling it a "throwback" means that the thing being described is antiquated and obsolete, a useless relic of a bygone era. This is not something you want to associate with your product, believe me. But ultimately, it doesn't matter what you put on the can, as I've said before: people will buy it anyway. It well and truly does not matter.

Please continue to sell Mountain Dew with real sugar, because it actually tastes like citrus that way, instead of being bogged down by the cloying taste of high fructose corn syrup. And please continue to sell it with the old logo, because the more you change it, the more ridiculous it gets (I mean that literally: it is worthy of ridicule). Just take the word "throwback" off the can, and you've got everything you need: the name, and shit else. As for Pepsi Cola, I prefer it with real sugar, but it still tastes OK with high fructose corn syrup. I'd like to be able to keep buying it with real sugar, though, and I'll tell you something more - I promise that as long as you make your drinks with real sugar, until or unless Coca Cola does the same, I will never buy a Coke product again, because you'll really have something that they don't.

- D

## Sunday, June 7, 2009

### The Possibilities are Endless

OK, so yesterday I just kind of gushed over Microsoft's Wii-killer, Project Natal, which was unveiled at E3. Those who don't care much about video games might not be quite so impressed, and that's perfectly understandable. I want to take a few minutes to point out some things which I think that anyone could get excited about. I will be using video games and other tech-toys as take-off points, but as platforms upon which to build legitimately useful stuff.

Forget about Wii Fit - if Natal can recognize a drawing in your hand and recreate it in the game world, just use a play sword and you could have the most realistic swordfighting game ever. Or you could learn martial arts. One of the demos showed someone blocking balls like some kind of super-goalie - why not be the outfielder and catch the baseball (if you're into that)? With the voice recognition, you could create a magic system that relied on key words and rhyme schemes - with the biometrics, it could rely on hand gestures from the simple to the ornate. Or, if you're more practically-minded, you could learn sign-language. Or a foreign language (though these games already exist, I think that Natal would be a significant technological improvement to the general mechanism).

But this is child's play - peripherals are where it's at. I have a Wacom tablet, years old, which has a mouse and a pen. These are not powered peripherals, though: the tablet itself monitors whichever peripheral is being used at the time, and reacts to changes that I make. The pen and mouse themselves are mere props - the tablet is what does the work. Peripherals could, with very similar technology, be modular, instructive, and cheap! Learn anatomy and physiology from the comfort of your own home from a mannequin complete with internal organs. Learn how to operate complex machinery such as cranes or airplanes with control consoles. Hell, musical instrument controllers wouldn't have to be dumbed down any more - just get an accurate replica made from cheap plastic, follow the on-screen directions while the game monitors your progress, and you could actually learn to play the guitar, or the flute, or the theremin, just like the current batch of games teaches you how to actually sing and play drums (well, electronic drums, but you still can't dumb down the drums that much). Even better, the game could teach you scales, theory, anything you wanted to know about the instrument (not just entire songs, but really how to play). Once you've learned a song with the peripheral, pick up the real instrument and you can get silent on-screen instructions. If your friends do the same, you could form an actual rock band and practice like this.

Go shopping for clothes online and know that a size will fit you. It just needs to be measured at the other end, and the software can measure you if you just hold up an objective reference. You could see how entire outfits look together, and only lack the knowledge of how they would feel on your skin. Videoconferencing is a gimme, and looks to be easier and more intuitive than ever. What about graphic design? Or mechanical engineering? I could see this being very much like Tony Stark's computer in the Iron Man movie. The possibilities really are endless.

I'm going to take a moment to get back to the video-gamey stuff, but I promise that it will come back to real-world applications. In the first place, Milo's debut to the world - scripted though it might have been to a large degree - is a technological marvel nonetheless, and represents another step towards genuine (Turing-test-passing) AI. Just take what we actually saw in the demo - the appearance of a natural conversation between a human being and a representation of a kid on a screen - and now think about procedurally generated content. The tools are all there: Milo's speech sounds easy and natural, the architecture for facial expressions is fairly robust, and the voice recognition is superb and could be reversed (with phonetic- rather than text-based speech algorithms) to create on-the-fly speech complete with tone of voice and realistic facial expressions to match. The way Milo moves around the world model looks natural as well, and even if that was also pre-scripted, Left 4 Dead has some fairly spectacular animations for its garden-variety zombies running around under all sorts of conditions, and it all looks fantastic.

Left 4 Dead is actually the poster-child for procedurally generated content. I mean, Spore was amazing in terms of making a game of infinite content from a finite ruleset, but Left 4 Dead takes it several steps further: item pick-up locations, enemy spawn locations and times, zombie swarm sizes, player-character in-game dialogue, and each player's own musical score are all generated on-the-fly by in-game "directors." The AI director monitors players' stress levels in terms of how much damage they have taken over time, how many zombies they have killed, how often they've used healing items, and how much those things have changed over time, and uses this data to create an environment of continual tension: you're supposed to experience brief moments of rest between increasingly frantic periods of fighting for your life, barely making it to the next checkpoint. The music director also monitors this stuff for each individual player to create an improvised score that reflects what's been happening to you lately. Player-character dialogue also depends on the historical details of that specific play-through of that particular level (who has healed whom, who has protected whom, who has accidentally shot whom, what type of enemies you've been fighting, how long you've been in the same area, and so on and so forth), and each line has a "threshold" sort of value such that it won't be said more than once during an arbitrary interval of time. All of this is geared towards creating an experience with broad accessibility and infinite replayability, and they've done it: you could play the game every day for your whole life and never experience a level exactly the same way twice.
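That stress-tracking loop is simple enough to sketch. To be clear, this is a toy model and not Valve's actual Director: the class name, weights, and thresholds below are all invented for illustration.

```python
# Toy pacing "director": track a player's recent stress and throttle
# enemy spawns to create build-up / peak / relax cycles.
# All names and numbers here are invented, not Valve's real values.

class Director:
    def __init__(self, peak_threshold=60.0, relax_threshold=20.0, decay=0.9):
        self.stress = 0.0
        self.peak_threshold = peak_threshold    # stress that triggers a rest period
        self.relax_threshold = relax_threshold  # stress that ends the rest period
        self.decay = decay                      # per-tick stress decay (time heals)
        self.relaxing = False

    def record(self, damage_taken=0.0, zombies_killed=0, items_used=0):
        # Each stressful event bumps the meter; the weights are arbitrary.
        self.stress += damage_taken + 0.5 * zombies_killed + 5.0 * items_used

    def tick(self):
        # Decay stress over time, then decide how many zombies to spawn.
        self.stress *= self.decay
        if self.relaxing and self.stress < self.relax_threshold:
            self.relaxing = False   # rest is over: ramp the pressure back up
        elif not self.relaxing and self.stress > self.peak_threshold:
            self.relaxing = True    # player is overwhelmed: back off entirely
        return 0 if self.relaxing else max(1, int(10 - self.stress // 10))
```

The point of the design is the hysteresis between the two thresholds: the director doesn't flicker between spawning and resting, it commits to a lull until stress has genuinely drained away.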

Take this kind of parallel improvisation from a broad set of elements, but make it geared toward personal interaction rather than simulating a zombie apocalypse. Now add an algorithm for learning new words - e.g. when an unknown word is encountered, ask, "What does that mean?", then write it to long-term memory and practice using it - and I think you've got a recipe for a learning computer that could pass the Turing test. Like any real person, Milo would need positive reinforcement when he did well, and correction when he erred - like any real person, the learning process would be dialectical. I could teach Milo the rules of D&D and have him help me level my character. I could teach Milo the rules of calculus and have him help me with my physics homework. I could teach Milo the rules of logic and bounce my philosophical ideas off of him. I could teach Milo about options trading and ask him for help optimizing my investment strategies. I could teach Milo about his own world and have him help me create mods for video games - or entire video games from scratch.
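That ask-store-practice loop for unknown words can be sketched in a few lines. The `ask` callback and the `lexicon` dict below are stand-ins for whatever dialogue and long-term-memory systems a real Milo would use - this is my own toy model, not anything from the demo.

```python
# Minimal sketch of the "ask, store, recall" loop for unknown words.
# ask: a function that poses a question to the human and returns the answer.

def make_learner(ask):
    lexicon = {}  # long-term memory: word -> meaning

    def hear(word):
        if word not in lexicon:
            meaning = ask(f"What does {word!r} mean?")  # query the human
            lexicon[word] = meaning                     # write to memory
        return lexicon[word]                            # recall on later use

    return hear, lexicon
```

The second time the same word comes up, no question is asked - the meaning comes straight out of memory, which is the whole "practice using it" half of the loop.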

Milo could access the internet and be my research assistant. My secretary. My second player. He could use AIM, Skype, or Google Talk through my computer and text or talk to me on my cell phone while I'm away. At this point, I don't see what separates Milo from my real friends any more, aside from the fact that he can't give me a hug. This is a very significant difference, but it's the only kind of difference I can see. Yet how is this fundamentally different from the state of my relationship with my cousin in California, or my grandfather in Arizona?

It has long been a dream of mine to teach philosophy to an AI. I don't think that a fully-functional AI, as most people would think of it, could be truly robust right out of the box - I'm of the opinion that a learning algorithm constitutes AI (I think "intelligence" is "the capacity for learning"), but to pass the Turing test, it would need to be taught, just like a newborn child needs to be taught and isn't a fully fleshed-out person "right out of the box." Milo has broad consumer appeal for being a new thing presented in a familiar way.

OK, time to get back down to Earth: this is all clearly over-optimistic and way far off, and though these obstacles could be overcome in principle, they still need to be overcome in practice before I should really be getting excited over any of it. But still - the future is looking amazing from where I stand. I can't wait to get there!

## Saturday, June 6, 2009

### Attention: The Future has officially arrived.

Seriously, just watch:

Look, OK, I'm a huge fuckin' nerd, and I grew up moving around, reading books, and playing games instead of talking to people, so games are one of my primary metaphors for interacting with the world. This blurs that distinction. The other video (at the top of the linked source article) blurs it even more.

We're one huge step closer to VR.

Content is going to make or break this thing, period. With awesome games, it will destroy the Wii and leave the PS3 a desiccated husk. That is, if I felt like being figurative. But with terrible games, it's going to be dead on arrival. Personally, I plan on supporting the product regardless, as support for the technology. I think this is awesome: I play games to vicariously do things I lack the time, inclination, or ability to do in real life. I can't go kill alien hordes bent on our extermination in real life. I don't want to steal cars and give in to wanton bloodlust in real life. I can't build an empire in real life and have time to do all the other things I do for fun. I like to be a sampler, a butterfly of sorts, flitting from flower to flower, flirting with all the possible things I could do with my life but not committing myself to any of them. This avoids both the pros and cons of doing that thing in real life, and considering how often I fail at the things I like to do in video games, I think this is probably a good deal for me. The more immersive the interface, the more satisfaction I get out of the experience. So, for me, this is really fucking exciting.

I seriously can't wait to see what's in store.

## Friday, June 5, 2009

### 101 Interesting Things, part sixteen: Russell's Paradox

My last two philosophically-oriented posts have chiefly concerned the limitations of language as a system of categorization in a bottom-up (or "designoid") Universe. This is as opposed to a top-down (or "designed") Universe, in which things in the world are designed intelligently to fit into preconceived categories, which would give language metaphysical primacy over reality. I want to take a little bit of time today to show how this quickly causes a worldview to degenerate into absurdity if we insist that language really does stick to the world and that our categories are somehow "real" instead of "imaginary." This dovetails perfectly with my planned entry on Russell's Paradox as part of my 101 Interesting Things series, so here I go!

Words are used to label and categorize things in the world, but words themselves may also be labelled and categorized as nouns, verbs, adverbs, adjectives, prepositions, and so on. We can also create categories arbitrarily, such as "seventeen-lettered," which refers to those words which have seventeen letters. Every word is a member of the category, "words." So far, so good.

There's a rather interesting way of categorizing words, specifically: according to whether or not they describe themselves. "Word," for instance, is itself a word. But "words" is not more than one word, so it doesn't fit into the "words" category, if you want to be a stickler about quantifiers. "Seventeen-lettered," however, is a seventeen-lettered word, and so describes itself. Such words are called "autological," because they refer to themselves and so are metaphorically contained in their own boxes. If a word does not refer to itself, then it is "heterological." The word, "verb," is not itself a verb, and so is a heterological word.

Logically speaking, these categories ought to be exhaustive: i.e. one or the other of them should apply to every single word. After all, it seems intuitively obvious to an almost painful degree that a word should either refer to itself or not. "A or not-A" is a true disjunction, after all, and the proposition "A word is autological or it is not autological" is surely of that form. Defining "heterological" as "a non-autological word" seems to complete the disjunction and make it so that every word goes into one or the other of these boxes.

Here's the question: into which box should the word "autological" be placed? If "autological" refers to itself, then it's an autological word and it goes into the autological box. But! If "autological" does not refer to itself, then it's a heterological word, and so it would go into the heterological box. This seems simple enough - so how do we decide the question? What can we do to test whether the word "autological" is itself autological or heterological?

Umm... oops! As it turns out, there is no way to decide the question. Sure, we can arbitrarily stipulate that it's one way or the other, but we can't come up with any justification for putting the word "autological" (which, as a word, clearly belongs in one or the other box but not both) into one box rather than the other. Crap!

But wait, there's more! How do we classify "heterological?" If "heterological" is a heterological word, then it goes into the heterological box - but then it goes into its own box, and so it's an autological word, and goes into the autological box - but then it doesn't go into its own box, and so it's a heterological word, and goes into the heterological box - but then it goes into its own box, and so it's an autological word, and goes into the autological box... and so on ad infinitum. If it goes into one box, then it doesn't belong there for one reason, but if it goes into the other box, then it doesn't belong there for another reason. This is because "whether a word refers to itself" and "whether it goes into its own box" are "supposed" to mesh every single time, but with the word "heterological," they are necessarily opposed: if heterological refers to itself, then it goes into its own box, which makes it autological. But its own box is reserved for words which do not refer to themselves, so it can't go in there if it's autological! But that's the only way it can go into its own box, and so on and so forth. Oops!
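The regress is easy to see if you try to mechanize it. In this toy sketch (the predicate table and function names are my own invention), ordinary words get decided instantly, but "heterological" chases its own tail:

```python
# Each word comes with a defining predicate, and a word "describes itself"
# exactly when its own predicate is true of it.

predicates = {
    "seventeen-lettered": lambda w: len(w.replace("-", "")) == 17,
    "long": lambda w: len(w) >= 10,
    "heterological": lambda w: not describes_itself(w),  # self-referential!
}

def describes_itself(word):
    """True iff the word's own predicate applies to the word itself."""
    return predicates[word](word)

# Ordinary cases decide instantly:
#   "seventeen-lettered" has seventeen letters  -> autological
#   "long" has only four letters                -> heterological
# But describes_itself("heterological") immediately asks
# describes_itself("heterological") all over again, and the
# question never bottoms out: Python eventually gives up with a
# RecursionError, which is the paradox wearing a stack trace.
```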

Now, "autological" and "heterological" are intuitively coherent categories - we can make sense of them - but it's clear that they break down because of themselves. What's wrong with this picture? Is it the things we're trying to categorize to blame, or is it the way we're trying to categorize them, or is it categorization itself that's messing us up? Hmm... interesting question. If only there were some stalwart hero of logic to come to the rescue and show us what's up...

OK, so there was this great guy by the name of Bertrand Russell, and he was an outstanding logician, and he's my hero and I want to have his babies but he's dead now, and he had this great insight into what is known formally as "set theory," which is really just a fancy hat that philosophers put on "categorization" so it looks like it belongs in the ivory tower. This guy, Gottlob Frege, was going around running his mouth about how there's a "set of all sets," which was his way of saying that "there's a category that contains all categories: the category of categories." Like being the King of Kings, you're still a King, but you also rule over all other Kings, so the set of all sets is the set that contains all other sets. Components of a set, the things that make it up or go into its box, are called "elements" of that set, so the set of all sets has itself as an element of itself, which is kind of neat. Then Bertrand Russell arrived on the scene and said, "Wait a minute! What about 'the set of all sets which are not elements of themselves?' Is that in your set of all sets?" And Frege said, "Sure! After all, it's a set!"

But what Frege didn't realize was that "the set of all sets which are not elements of themselves" is to set theory what the word "heterological" is to language. Bertrand Russell knew this, because he knows everything and is awesome, so he told Frege and Frege was like, "Yeah, well, whatever." But then Bertrand Russell said, "Hold on! The set of all sets which are not elements of themselves results in a contradiction when we try to determine whether or not it is in fact an element of itself. But it is a legitimate set nonetheless, as you say, because we can coherently state the criteria for whether something is or is not in that set, and that's what defines a set. So the statement, 'It is true that there is a set of all sets,' results in a contradiction when we try to resolve one of its entailed implications - namely, whether or not 'the set of all sets which are not elements of themselves' is an element of itself - and in formal logic, this means that our starting premise is in fact false! Therefore there is no 'set of all sets,' quod erat demonstrandum, motherfucker!" Then Bertrand Russell folded his arms across his chest, smiled smugly, and flew off in a rocket ship to take tea from his Celestial Teapot. True story.
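Russell's move can be mechanized the same way. If we model a set as its membership predicate - an assumption of this sketch, not how axiomatic set theory actually handles it - then the question "is R an element of R?" literally never terminates:

```python
# Sets modeled as membership predicates: "x is an element of s" becomes s(x).
# Russell's set R contains exactly the sets that are not elements of themselves.

def R(s):
    return not s(s)  # s is in R iff s is not in s

def is_element(x, s):
    return s(x)

# is_element(R, R) evaluates R(R) = not R(R) = not not R(R) = ... forever,
# so the "set of all sets which are not elements of themselves" cannot be
# given a consistent answer - exactly Russell's point.
```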

Now here's the really interesting part: we can mutatis mutandis the above all the way to "It is logically necessary that 'category' is not itself a category, because all categories would go into it, including the category 'heterological,' which results in a logical contradiction." But "category" is a category, conventionally speaking, because it meets the criteria we've set forth for "being a category." Or, in other words, we talk as if there are categories all the time without our brains exploding, so we can see the conventional/metaphysical split rather clearly. Take it one level higher, and we see that language, while capable of formulating conventionally true statements, can never attain metaphysical truth because it entails that there are categories as a category, which entails a contradiction. But that's the only way we can talk about things: as categories, such as "things," and by using language which is intrinsically imprecise and not "sticky." So, to answer our earlier question, categorization itself is to blame.

Poof.

Categories are imaginary. Language is all in our heads. Top-down Universes might even be logically impossible, though I can't think of how to prove it at the moment. What say you?

Quick End Note: One practical implication of this is that the micro/macroevolution distinction is purely imaginary, which anyone with half a brain can tell you, but now has been conclusively proven. Arguments citing this as a premise in support of ID are thus made categorically invalid because this supporting premise is tautologically false - the worst kind of false. Rock on.

## Thursday, June 4, 2009

### The Bubblegum Theory of Language: Essentialism in a Designoid Universe, part two

In my last post on the importance of categorization's metaphysical subordination to reality, I made quite a big deal out of how language as a system of labels may or may not "stick" to reality. Why is this such a big deal, and what bearing does it have on the relationship of language to the world? I realized after reading a rather poignant comment that I had smuggled in what is informally known as the bubblegum theory of language (my Google-Fu is weak here, so please let me know if someone can find a reference to this).

The bubblegum theory of language is a metaphor for how we use words. It works best with ostensive definitions, and it's a good metaphor insofar as people behave consistently with it most of the time. It starts to break down with abstract concepts, and it fails in that hardly anyone actually thinks they're doing it this way when they're using language. Here's the gist of it: when I heard that the thing I was sitting on is called a "chair," I stuck a piece of imaginary bubblegum to that "pile of atoms arranged chair-wise" (which for simplicity I shall simply call a "chair" from here on, just please remember that this is strictly a language of convenience), and I put the label of "chair" on the other end. I carry this label wherever I go, sticking that bubblegum to all the other chairs I encounter. I also have a piece of bubblegum for "my desk chair at home," which only attaches to one chair in the world. There's also a piece of bubblegum for "my desk chair at work," which also attaches to exactly one chair in the world. Both of those chairs, however, are attached by another piece of bubblegum to the label of "my chair." I also have labels and bubblegum for armchairs, high chairs, recliners, and so forth.

So what do I do with all these labels? Well, I place them all into a box labelled "chairs." And this box itself shares a space with my "tables" box, and my "couches" box, and my "armoires" box, and many other boxes in a much larger box labelled "furniture." So I've got all these boxes which contain other boxes which themselves contain labels which are attached by bubblegum to piles of atoms out in the world in a big sticky mess: that's language.
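Here's that sticky mess as a literal data structure, with every name invented for illustration. The point the sketch makes concrete is the one that matters: boxes contain only labels (strings), never the piles of atoms themselves - the bubblegum is a separate mapping out to the world.

```python
# The bubblegum theory as a data structure. Three separate layers:
#   world     - the piles of atoms themselves (stand-in ids)
#   bubblegum - which piles of atoms each label sticks to
#   boxes     - categories, which hold only labels and other boxes

world = {
    "atoms_001": "pile of atoms arranged chair-wise (at home)",
    "atoms_002": "pile of atoms arranged chair-wise (at work)",
}

bubblegum = {
    "my desk chair at home": ["atoms_001"],
    "my desk chair at work": ["atoms_002"],
    "my chair": ["atoms_001", "atoms_002"],  # one label, two referents
}

boxes = {
    "furniture": {
        "chairs": ["my desk chair at home", "my desk chair at work", "my chair"],
        "tables": [],
    },
}
```

Nothing from `world` ever appears inside `boxes` - the categories are all strings pointing at strings, which is the "purely imaginary" part of the metaphor made explicit.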

Here's the important part: there are no "things in the world" contained in these boxes; the boxes contain only the labels[1]. Furthermore, the boxes, labels, and bubblegum are all purely imaginary. This is, in at least one very important way, how language works; this is going to require some serious unpacking, though.

The boxes are categories, the labels are concepts, and the huge web of bubblegum connects my concepts to their referents out in the world (albeit in a purely conventional way). But concepts are not themselves the things out in the world, they simply represent my understanding of those things - they are merely ideas, in other words, and ideas are only in our heads. They're very useful, to be sure, and they may correspond to reality and cohere with each other to varying degrees, but they're still ideas in our heads. The important part here is that there is a very real difference between "a thing" and "an idea of a thing," and though we may conflate the two in everyday language (which is usually fine), we should be aware on some level that there is still a crowbar separation between the two (to borrow Eddie Izzard's turn of phrase).

So far, so good. Language is fueled by concepts and concepts are different from the things in the world that they are supposed to represent. None of this should really be controversial so far (at least, I hope not), but there are some surprising implications to be drawn from this.

Our knowledge about the world is propositional[2]: to know something, there must be a proposition that can express that knowledge. Propositions can have truth value. Things out in the world do not have truth value. Propositions must be expressed in terms of language, and moreover, require syntax and grammar and all sorts of other made-up stuff. Things out in the world simply are what they are and don't need anything from us to go on being as they are (in other words, reality just is what it is, regardless of what we would like to say, think, or do about it). Language is fueled by concepts, though, and concepts are fundamentally divorced from the things in the world which they represent[3]. This is a problem: propositions, the only way we can express our ideas about things in the world, can have truth value; but they are ultimately unable to correspond completely to the things in the world about which we wish to talk.

This is a constraint of the system, and there's no getting around it. It's kind of like the way that reason can never be externally justified, because justification requires a reasoning process, and now you're justifying reason itself with a reasoning process, which is circular and doesn't really accomplish anything. Language is the only way we can talk about the world, it's the only way we can check for truth value (whether or not a proposition corresponds to reality), but language will always be fundamentally divorced from reality at some level, so we'll never get complete correspondence.

"This is all fine and dandy," you say, "But at the end of the day, our television sets turn on, our computers access the internet, and NASA still uses Newtonian physics to send very expensive robots to other planets even though general relativity and QM correspond better to reality." Yeah, for NASA's purposes, Newtonian physics corresponds enough, even though it doesn't correspond completely. The laws of physics don't have to fit perfectly into our "Laws of Physics" box for us to get some use out of that box. This is true of the principles behind television sets, computers, and the internet as well.

What I'm saying is that this is conventional truth, which is good enough for everyday applications. But conventional truth is fraught with ambiguity, and needs constant revision and maintenance to keep up with all the new things we discover every day and have to integrate into our conceptual frameworks. If we wanted something more than conventional truth, we would need to close the gap between language and the world to get metaphysical truth, which would at last dispense with the ambiguity and allow us to use linguistics to discover facts about things in the world since we would have complete correspondence (the boxes and labels would fit perfectly). This would be like using an understanding of a computer language to discover how things are going to play out when the program is run, simply by looking at the program - you can have a complete picture, so you can discover things "about the world (model)" just by analyzing the language on which it runs, because the "things" in the "world (model)" are modeled after the preconceived categories (and not vice versa). The real world doesn't run on language, though, because the real world is bottom-up.

In a top-down Universe, metaphysical truth is at least in principle accessible. In a bottom-up Universe, it's fundamentally inaccessible. What I'm saying is that as long as we continue to have this gap, this will be strong evidence that we live in a bottom-up Universe. If we did ever discover the "machine code" of the Universe, and could use it to discover things without empirically verifying them[4] (as we can do with computer programs), then this would be almost clinching proof that we're in a top-down Universe. What this means for religion is that their top-down languages are failures: animals were not created according to any discernible "kinds," "detestable abominations" such as shrimp and lobster are fucking delicious, there's absolutely nothing that is demonstrably "sacred" about cows, and humanity has no principled and privileged status above the rest of the organisms on the planet. In short, this is not a knock against conventional truth, but rather a rough illustration of its constraints, and an attempt to show (conventionally, of course) what it says about the world that we have these constraints in the first place; it's a knock against the great body of codified superstitious nonsense that comprises most religions.

I hope this helps clarify my meaning.

Notes:
[1]: There's clearly no way that an imaginary box in your head could literally contain a thing in the world, but that's not quite what I mean. There is a figurative way that an imaginary box in your head could contain such a thing, though: if you had complete understanding of a thing in the world, if your idea of that thing was "perfect," so to speak, then you would be able to discover true propositions about that thing in the world merely by analyzing your idea of it. That's what it means for a label to "really stick," or for a thing in the world to "really fit" into a box. Though it is logically possible for things to be set up this way, this idea is clearly absurd when applied to the real world, and demonstrates that whenever we write, talk, or even think "about" things in the world, we're really and truly writing/talking/thinking about our ideas of those things, and not the things themselves. The Ding an sich is fundamentally inaccessible to us.
[2]: OK, yeah, there's propositional and procedural knowledge ("to know that X is the case" vs. "to know how X is done"). This is the difference between knowing what the rules of tennis are, and knowing how to deliver a good serve, as Ebonmuse points out in A Ghost in the Machine. However, procedural knowledge is more about reflexes and muscle memory and isn't really able to correspond to the world in the same way that propositions can. Hence its exclusion; I just wanted to take a moment to explain why.
[3]: To be fair, this is only contingently true of fallible beings like us who lack second-order knowledge (the knowledge qua "justified true belief" that you know something, or justified absolute certitude). But until or unless someone demonstrates omniscience or gnosis or some such happy horse-shit, we don't really need to concern ourselves with this qualifier.
[4]: Because we're fallible beings, we'd have to empirically verify these discoveries anyway. However, we would still be discovering facts with this machine code if it were genuine; what we would discover with empirical verification is merely that the machine code is genuine, thereby providing a justification for discoveries obtained with it. In other words, they're true already, we're just checking out the source.

## Tuesday, June 2, 2009

### Essentialism in a Designoid Universe

Whether or not there is some cosmic force of design behind the structure of the Universe, I think that people of all metaphysical persuasions could agree that the Universe at least appears designed. We see order and function all around us, and while we may disagree on the conclusions to be drawn from these observations, the observations themselves are fairly undisputed*. But like a Rorschach ink blot, the Universe simply is what it is, and anything we read into it is simply the result of what we project onto it.

And we project so much! All our language is boxes and labels for categorizing the variety of experience that comes our way: this class of objects goes in this box; that individual object gets this label; these labelled objects kind of fit into a box all together; these boxes seem to deserve a special label all their own; and so on and so forth.

These boxes and labels are certainly very useful tools for trying to make some sense of the world, and for trying to communicate with one another, but they are not without their flaws. Chief among these flaws would be the inescapable problem that no label or box ever "truly" fits. For all our categories - tables and chairs, Nick and Lisa, chordates and fungi, good and evil, tools and weapons, numbers and grammar, art and literature - none of them is immune to the central problem that the Universe came first, and we slap our categories upon it as a metaphysical afterthought.

Why is this a problem? Ultimately, I suppose it's not - if I wanted to be rigorous, I could say that "problem" is just another category that ultimately breaks down and has fundamentally arbitrary (or at least intersubjective) borders, so there's really no such thing as a problem anyway. This is the issue, in a nutshell: the things in the world come first, and then we come by and divide them into categories which we've made up; but since we're making up the categories after the things in the world (instead of making the things in the world after any pre-planned categories), we can't reasonably expect these categories to "stick."

All I've said thus far is that language is secondary to reality, that there is a fundamentally impassable gap between the way we talk about a thing and the thing itself (the Ding an sich, to borrow Kant's term) which will prevent the former from ever "really" corresponding to the latter. What does this have to do with the Universe at large, or the fact that it appears designed?

First of all, it tells us that "apparently designed" is yet another label that we slap onto the world. It's a category that is built, at some level, on our ability to recognize patterns. We seem to be surrounded by patterns, and when we start to take the Universe apart, we see that it behaves in patterns all the way down (or at least as far down as we've come). We see these patterns, and we infer design, and whether or not we disabuse ourselves of this notion, we can all recognize the "apparent design" that appears to be behind the patterns. But "pattern," after all, is itself just another label that we slap on to certain things we encounter in the world, and so on and so forth. Do you see where this is going? "Apparently designed" and "pattern," useful though they are, can never really "stick" to the Universe.

This is exactly what we should expect to see in a Universe that is built from the bottom up: from complex interactions of fundamental principles, other more superficial patterns emerge; and those patterns act as the fundamental principles from which the next layer of superficial patterns emerges, and so on and so forth. As the tree of life stymies our taxonomic endeavors, so the tree of causality stymies our attempts to get labels to stick to the world. The reason in both cases is the tree itself: we stand at the tips of the branches and try to say which tips "go with" which other tips, and certainly some of them seem very different and some of them seem very similar, but how far back down the tree we go before we say "this is separate from that" is ultimately an arbitrary decision. No matter how clever our labels are, and no matter how well things may contingently fit into our boxes for the time being, there is no principled way to decide how far back to the trunk we "ought" to go before drawing a line between this and that category. The tree is continuous, the tree is one; but we see it as many and so try to make it fit the notion of many that is in our heads because we cannot deal intelligently with undifferentiated experience.

Compare this with what we should expect to see in a truly designed Universe. Objects that are, dare I say it, intelligently designed tend to show a pattern which makes them good candidates for going into one of our boxes: they are organized from the top-down. This is going to get hairy, but please try to stick with me here. Take our language, for an example: though many elements of it are organized from the bottom up, it does show several top-down elements of genuine design (insofar as "genuine" has any meaning, given the foregoing on labels & such). These top-down elements we may call designed, and the patterns that simply emerge on their own from the bottom-up elements we may call designoid (to borrow Dr. Dawkins' term).

Though this top-down design does not "truly" apply to any discrete object we see in our bottom-up world (and don't forget to keep in mind that these are all just labels, anyway!), we can see something closer in the world models we create with computers. For clarity, I think that a computer's world model is better classified as an "idea" than as an "object," it's just an explicitly codified idea. In such a model, the better-designed it is, the more top-down organization it will show. "Things" in the model will behave as discrete "kinds" of things, they will adhere well to categories, the labels can really stick to the world (model). Were the real world like this, we could expect to find that our categories stuck to it, and then words like "sacred," "kosher," "good" and "evil," "abomination," "table" and "chair," "soul" and "mind," could all have unambiguous referents which lacked fuzzy borders and were not victim to classification problems.

Instead, we have fuzzy borders galore, as many classification problems as we have words, and no apparent top-down organization no matter how we slice the world. Psychoactive drugs find themselves used for different treatments throughout their careers - Abilify (aripiprazole) began as an anti-psychotic, and is now also used to combat bipolar disorder and clinical depression - we discovered that the category describing its function upon the human brain didn't really stick to it, so we moved it to a bigger box. Klonopin (clonazepam) similarly has various effects upon the human body. The marijuana plant has myriad uses as a source of cloth, oil, fuel, various cosmetic products, and one of the most benign intoxicants known to man, and it's prolific & hardy to boot - what is this supremely useful plant doing in such an otherwise hostile world, and how could it possibly come to be demonized despite its marvellous apparent design? Tobacco is way more popular, but is significantly more difficult to grow and carries more cons along with it. Corn is another hardy and useful plant, but it doesn't naturally occur - it's a heavily modified descendant of teosinte, a wild grass, born from the dialectic of selective breeding that blurs the distinction between "invention" and "discovery." Mammals nurse their young while birds & reptiles lay shelled eggs, but then monotremes came along and had to have their own category invented out of whole cloth to solve the "problem" of their own existence.

The problem was never with the monotremes, the problem was (and is) with our boxes. Whenever our boxes break down, we try to find a different set that works better, or we modify our current set to make it work better. This isn't bad in itself, but it's a never-ending project: there is no set of boxes for us to discover or invent that will magically accommodate everything. In a top-down Universe, there would be such a set of boxes, because everything is explicitly designed: the top-down Universe is discrete, discontinuous, and totally boxable. Our bottom-up Universe is exactly the opposite: indiscrete, continuous, and fundamentally boxless.

The exception to this might be "philosophical atoms," a placeholder name for whatever it is that constitutes the most fundamental building blocks of reality. However, even that most basic of labels may not stick, for there is an alternative to the existence of philosophical atoms: "gunk." We don't know enough about Universes to really say whether there must be any philosophical atoms, and it could very well be that the Universe is composed of infinitely subdivisible gunk all the way down. (Dammit, now I have to revise my whole metaphysics!)

I guess what I'm saying with all this, to revisit the Rorschach test, is that we need to project our categories onto reality in order to deal with it, but we should also keep in mind that this is ultimately artifice and contrivance. At the end of the day, it's just philosophical atoms (or gunk) arranged blot-wise, so we need to be careful how seriously we take our boxes and labels. Taking them too far - thinking that the Universe is actually created according to a top-down plan with discrete "kinds" of whatever - is just foolishness. It's metaphysically backwards and it gets in the way of our attempts at understanding things. This goes for Essentialism as well as for Creationism, and it goes double for anyone who believes in the "microevolution/macroevolution" distinction.

It goes triple for philosophers, though. If tables and chairs are just ideas that we superimpose upon reality, then "knowledge," too, is merely a category, which makes epistemology into a language game. "Good" is also a mere category, which makes ethics into a language game. Metaphysics might not be a mere language game if philosophical atoms exist, but if it's gunk all the way down, then metaphysics is also just a language game. And until or unless we break all our arguments down to symbolic logic (which would reduce philosophy to "competitive language gaming"), most of the serious disagreements in philosophy start to seem rather silly, like so much table-pounding from people who really ought to know better.

* - With a few notable exceptions such as Flat-Earthers.