Tuesday, March 24, 2009

Questions for Physicists, part two: Quantum Uncertainty vs. God, History, etc.

Let's start this one off with a little bit of background. And by "a little bit of background," I mean "a crash course in quantum physics." First, I think everyone's heard of the Heisenberg uncertainty principle: certain physical quantities, like position and momentum, cannot both have precise values at the same time (taken from the Wikipedia page). For instance, if you precisely measure an electron's position, then you can't know its momentum; and mutatis mutandis for other paired quantities you could care to know about it: measuring one thing makes its partner uncertain, in other words. This applies mainly to really tiny bits of matter that we want to look at one bit at a time, not the macroscopic objects we typically encounter in everyday life. The reason behind the uncertainty principle is that tiny bits of matter appear to behave as particles at some times, and as waves at others, as demonstrated in double-slit experiments. For those who aren't savvy, here's a well-done video that explains it in roughly the order physicists discovered these principles themselves:
OK, so observing things has an effect on them, fine and dandy. Well, the early debates over the uncertainty principle tended to frame it as a practical problem: "Oh, your equipment just sucks at these tiny levels," or, "Hey, you're just using too clumsy a method to make your observation." In other words, the fuzziness got blamed on methods and equipment, and it was tempting to think we just needed to invent more precise equipment, or discover more precise methods - until Einstein weighed in. In 1935, the physicists Einstein, Podolsky, and Rosen wrote a paper outlining what is now called the EPR paradox. This paper said, in a nutshell, "No, you're doing it wrong!" The upshot of it was that the uncertainty was not a matter of shoddy equipment or anything of the kind; rather, if you take quantum mechanics seriously as a complete description of reality, entangled particles lead you to some very uncomfortable conclusions - which Einstein took as a sign that there must be a deeper, more complete description underneath (the "hidden variables" we'll meet below). The paper laid out a thought experiment involving quantum entanglement that could, in principle, settle the matter empirically - but nobody at the time had any idea as to how such an experiment could actually be conducted.

Enter John Bell, who in 1964 worked out how the question could actually be put to the test. This class of experiment became known as "Bell test experiments," and has shown some interesting and confusing things about the world we live in. To make an already-long story medium-length, I'll skip the technical stuff and get right to brass tacks. Basically, we take two quantum-entangled particles, let's call them A & B, and we measure the same kind of property on each - say, spin - along detector settings that we vary from run to run. Each individual outcome looks random, but when you compare A's results with B's results over many runs, the correlations between them turn out to be stronger than any pre-arranged "answer sheet" the particles could have carried with them from the start. What this means is that, in a state of quantum uncertainty, things aren't just too subtle to measure without changing - they are actually probabilistic (more technical and more surprising, quantum entanglement is an observable phenomenon!). The research strongly suggests that there is actually an element of irreducible randomness to reality.
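If you'd like to see in numbers what "stronger than any answer sheet allows" means, here's a minimal sketch of the standard CHSH flavor of Bell's bookkeeping. It's my own illustration, not part of the experiments themselves - the angles and function names are just the textbook choices:

```python
import math

def E(a_deg, b_deg):
    # Textbook quantum correlation for a spin-1/2 singlet pair measured
    # along directions a and b: E = -cos(a - b).
    return -math.cos(math.radians(a_deg - b_deg))

# The usual four CHSH measurement settings (in degrees).
a, a_alt, b, b_alt = 0.0, 90.0, 45.0, 135.0

S = E(a, b) - E(a, b_alt) + E(a_alt, b) + E(a_alt, b_alt)

print(f"|S| = {abs(S):.3f}")  # ~2.828, i.e. 2*sqrt(2)
# Any local hidden-variable ("answer sheet") model caps |S| at 2.
```

Real Bell tests come in comfortably above 2, which is the empirical teeth behind all the philosophizing that follows.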

There are, of course, various interpretations for these results. Any finite set of data has an infinite set of possible interpretations, after all. These interpretations fall into three main lines: the "hidden variables" interpretation (endorsed by Einstein, but losing ground), the "many worlds" interpretation (which has a variety of metaphysical & epistemological convolutions all its own), and the "Copenhagen interpretation" (which posits metaphysical chance). The many worlds and Copenhagen interpretations are epistemologically indistinguishable, as I shall soon demonstrate, and also seem to be "fallback positions" should the hidden variables interpretation be soundly defeated at some point. Since the hidden variables interpretation is the simplest, I shall explain it first: these apparently probabilistic events are in reality governed entirely by some as-yet undiscovered set of principles which, upon their discovery and elaboration, shall show quantum uncertainty to be an illusion. It leaves a lot to be desired, sure, but what it's got going for it is that it's really close to the default position of science (as opposed to, say, fucking magic). But this view is losing ground, as our ever-closer and ever-more-careful observations of the world have been tending to lead us away from a fully deterministic world model.

The Copenhagen interpretation (there are actually several, the term is fraught with ambiguity) posits metaphysical chance as the explanation for these observations. Reality, on this view, contains some irreducibly random component that is fundamentally incompatible with a fully deterministic worldview. "Chance," it would seem, is not merely a way of talking about the limitations of our knowledge, but a fundamental part of the Universe. Let us use the metaphor of a deck of cards. In the everyday world, every deck of cards has a certain order which is determined entirely by the processes of its previous arrangements (whether by shuffling or otherwise). If one knew the initial conditions of the deck (or the conditions of the deck at any time T) and was able to precisely observe and track the deck's arrangement through time up until the present, then one could know with certainty the arrangement of the deck at present. It so happens that our inability to observe and track such minute events as the shuffling of a deck of cards prevents us from knowing its current arrangement, despite the fact that we know how the last hand went. In opposition to the hidden variables interpretation, the Copenhagen interpretation is the idea that not only are we simply incapable of tracking the shuffling of our quantum deck, all the cards are blank until we look at them. In a state of quantum uncertainty, prior to any measurement, there is no fact of the matter. Period. This is what is meant by metaphysical chance.

But it goes deeper than that. As the Bell experiments show, a "fact of the matter" emerges when an observation is made. As a young, naive physicist, I once postulated that matter behaved as waves until some sort of interaction behaved as a "call function," saying more or less, "Hey, you! Collapse your waveform, already! You're supposed to be particulate for this!" The problem with this position is that all matter in the Universe is always interacting with other matter in the Universe. It doesn't matter how much we isolate an electron from other charges, or the atmosphere, or cosmic background radiation - we have found no way to isolate an environment from ambient gravity, which means that it's not mere interactions with other matter that make the difference. No, what seems to be the "special" thing about the observer effect is that the observer is conscious. These are the interactions that matter, and this is friggin' strange. But enough on the Copenhagen interpretation for now.

Straddling the fence between the two is the fully deterministic and fully probabilistic mind-bender known as the many worlds hypothesis. In short, this is going to be weird. In "less short," the many worlds hypothesis is the view that all possible futures actually happen "on top of" each other in the Universe; or, put differently, that any "world-state" can be captured by an internally consistent mathematical model, and the "real world" is composed of all such models. This interpretation, I think, is best explained by a diagram:
This is the world!
The above image represents an extremely simplified (but complete) "many worlds" world. Also, I'm really sorry1 that sloppy MS Paint drawings are so much fun for me. OK, so T represents some initial condition, like the beginning of the universe or something, and the vertical axis represents time - time goes down here, because for something to "go down" means for it to "occur." Or give head. Your choice. Anyway, each horizontal line represents the smallest meaningful interval of time, which is the time between quantum events. Now, here's what's crazy: an observer at A can look back and see a single path going back to T, but looking forward sees all possible futures, a couple of which end abruptly, some of which go on for a while to become radically different from each other. But at A, our observer has but a single choice open: go to A' or go to A'' (for the benefit of the uninitiated, these are pronounced "A prime" and "A double-prime," respectively). Which path does our observer travel down? Both. In fact, our observer at A travels to all endpoints in the world (assuming that death and destruction are scrupulously avoided along the way, of course).

Now, let us posit our observer at A'. Our observer still sees but a single path going back to T, but can only look forward to all paths proceeding from A', and none of the paths proceeding from A''. Vice versa for an observer posited at A''. So which is the "real" observer from A, just one quantum event earlier? Well, this is just the paradox of the ship of Theseus in fancy dress: our observer at A becomes both our observer at A' and our observer at A'', and neither of the derivative observers has any privileged claim to being "the" observer from A. In fact, if we want to play rigid designator with our identities, they're all the "same" observer. Time for a pop quiz! Suppose that you, personally, are at A', going to one of the two points designated B: even with perfect knowledge of the possible states of the world, as well as the laws of causality and probability, can you predict which point B you will go to?

The clever reader will note that this is a trick question: you go down both. It is only by conceit of hindsight at either of the points B that you can describe a singular "you" as having come from A'. Similarly, the "you" at A travels to all points B and C, and only by conceit of hindsight at any of these may you say that "you" travelled down any particular path, because the "whole you" goes down all of them. Now here's the upshot of the diagram: since the "whole thing" is the complete world, every quantum event goes every way it can, and the act of observing quantum states (i.e. collapsing the waveforms out of quantum uncertainty) serves only to tell us what path we have just come down. Under the many worlds hypothesis, all the quantum states that can possibly happen, do actually happen2. This means that an "un-simplified" world diagram would be a nigh-infinite-dimensional idea space describing all logically possible states of the Universe, all of which are equally real, all of which truly exist as much as any other. Capisce?

I told you it was weird.

As promised, my penultimate note will be on the epistemological indistinguishability of the Copenhagen interpretation and the many worlds hypothesis. If you've stuck with me this far, but didn't understand that last sentence... well, you know what? I'm willing to say that that's entirely my fault - I'm trying to write this for the layman, after all. What I mean is that, if either the Copenhagen interpretation or the many worlds hypothesis were true, we could not tell the difference between them. The reason for this is that, under the many worlds hypothesis, we can only describe the future in terms of a probabilistic idea space (this is in fact what the many worlds hypothesis says reality is like). However, in hindsight, we will see only a series of quantum events that played out probabilistically. But under the Copenhagen interpretation, we can only describe the future in terms of probabilities, and if we were to fully describe all possible futures, this would be exactly the same as the many worlds idea space. And looking back, we would also see a series of quantum events that played out probabilistically - not due to some conceit of hindsight, but because this is what the world is actually like. To strain the language of convenience between two mutually exclusive possibilities, "we" would "experience" only one "actual path" down the idea space, but could only describe this path by looking backward, which looks the same under either model. Or, in other words, if you're standing at A', how could you possibly know there's actually an observer at A'' who diverged from you at A, as opposed to there being only you at A' based on a cosmic roll of the die?

Pro tip: you could do no such thing.

OK, so: here are my questions for physicists, at long last! First, how could we really give the hidden variables interpretation a sound thrashing? And supposing we did manage to falsify it, how could we then distinguish between the Copenhagen interpretation and the many worlds hypothesis? Furthermore, if the many worlds hypothesis is true, then how are we to reconcile this with the laws of thermodynamics (specifically, the second one)? Moreover, if the many worlds hypothesis is true, then that gives every logically consistent counterfactual a concrete referent, so why do I want to punch David Lewis in the face? (Sorry, I couldn't resist the esoteric philosophy joke.) And if the Copenhagen interpretation is true, does this mean that there simply was no fact of the matter regarding the Universe as a whole prior to conscious observation? (And if so, how did conscious observers, in fact, come about?) Finally, doesn't the Copenhagen interpretation necessarily exclude an omniscient deity? I mean, I'm pretty sure it does. Just sayin'.

I know it's been a long post, and my deepest thanks to those who stuck it out with me. I hope you can see, however, that the last paragraph would have made exactly zero sense to non-physicists without nearly all of the foregoing. Anyway, enjoy involuntarily reevaluating your conception of reality! Catch you next time around!

Notes:
1. At this point, I started drawing my MS Paint diagram, and I should really point out that MS Paint is like playing with crayons for me - it makes me giddy. As schoolgirls are giddy. Point is, I couldn't resist the silliness in the following paragraph. Hell, I still can't. It's got that whimsical MS Paint diagram above it, playin' at quantum physics! That's just too cute!
2. Full disclosure: the green, blue, pink, purple, and brown lines were all drawn just for fun. I actually drew out the whole diagram before I had even planned what to write (I'm a bad writer, I know), and I drew most of it because I felt like it. I'm telling you: like playing with crayons.

Sunday, March 22, 2009

Questions for Physicists, part one: The Lorentz Factor vs. Relativity

As I alluded to in my previous post, I used to be a physicist. I never got a PhD, or even a bachelor's degree, but as a freshman physics major intending to teach physics to high school students (and going to a public university with a modest scholarship and zero parental assistance), I needed money. Because I had a very high ACT score, I was offered a position in a research unit doing work for the university in theoretical physics. I even made a little discovery, in that I saw all the numbers we were putting into a computer and recognized the relationships between those numbers, so I was able to make a couple semi-educated guesses as to how the numbers could be juggled to yield interesting results, and one of those guesses panned out. So because I was doing physics, I feel justified in saying that I was a physicist.

I ended up going in a different direction, but the point of all this is that I still like science, so from time to time I come up with some interesting questions that involve putting one esoteric bit of knowledge together with another esoteric bit of knowledge. One such question arose while writing about the Lorentz factor, and the other one that I'm going to discuss later arose while discussing the Bell experiment with some friends. To be sure, these questions probably have answers, but some cursory Google searches haven't yielded anything promising, so I'm going to be approaching some old contacts in the physics department to find out what's up.

OK, first question: does the Lorentz factor imply that absolute velocity exists? My last post explains the Lorentz factor (γ, the Greek letter gamma) in some depth, so I won't re-tread that old ground. What I will take a little time to explain is relativity: in particular, one of the upshots of relativity is that time and motion are relative, in that you can't speak of an object's velocity without respect to some observer (or "frame of reference"). On Earth, we speak of velocity relative to the Earth; in the Solar System, we speak of velocity relative to the Sun; in the Milky Way, we speak of velocity relative to the galactic core. But each of these scenarios suggests an obvious frame of reference, specifically, the body around which we are travelling. Let's take an example where a frame of reference does not so easily present itself to see what I'm talking about.

Suppose that a spacecraft is travelling from planet A in galaxy X to planet B in galaxy Y. Planets A & B, as well as galaxies X & Y, all provide us with equally valid frames of reference, and roughly equivalent ones to boot (for the sake of argument, if nothing else). In the darkness between the galaxies, our spacecraft begins accelerating towards c to such a point that γ starts to appreciably show itself in the forces experienced by the passengers relative to their actual acceleration with respect to A, B, X, and Y (let's just lump them all together as coordinate system Z). For the sake of argument, let's say that the spacecraft is travelling at 0.5c, half the speed of light. Here's my question: suppose the spacecraft then separates into two pieces, P and Q. Immediately after the separation, the two parts are travelling at exactly the same velocity, so with respect to Q, P is not moving and has a velocity of zero. But with respect to Z, P is moving at 0.5c, which is substantially more than zero! Now suppose part P begins to accelerate further, while Q just coasts - i.e. Q does not accelerate. Time to plug and chug! Forgive the MS Paint equations, but I don't feel like busting out the tablet right now. Let's just solve for γ to see how acceleration is affected:
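(For anyone who'd rather check the plug-and-chug in code than squint at MS Paint, here's a quick sketch of the same arithmetic. The thrust and mass figures are placeholders of my own, and it uses the simplified F = γma bookkeeping from above rather than the full relativistic treatment.)

```python
import math

c = 299_792_458.0      # speed of light, m/s
v = 0.5 * c            # the coasting velocity from the example

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(f"gamma at 0.5c = {gamma:.3f}")          # ~1.155

# Under the F = gamma*m*a bookkeeping, a thrust F on mass m gives
# a = F / (gamma * m) with respect to Z, versus a = F / m with respect to Q.
F = 1.0e4              # hypothetical thrust, newtons
m = 1.0e3              # hypothetical mass of part P, kilograms
print(f"a w.r.t. Z: {F / (gamma * m):.3f} m/s^2")   # ~13% less than the Q figure
print(f"a w.r.t. Q: {F / m:.3f} m/s^2")
```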
So at .5c, γ=1.155, so F=1.155ma; we can juggle this to yield a=.866F/m. The point of this is that, for a given force F exerted by P (casting off mass to change velocity), P will experience about 13% less acceleration with respect to Z than it will with respect to Q! Now, this is only initially, as P's mass will change, as will its velocity as it continues to accelerate - but for small masses lost and small accelerations, v won't change by much with respect to c and m won't change much at all, which means we should be able to see a measurably different a for the same F, depending on which frame of reference we use.

Now, this is weird, because one or the other frame of reference should be "chosen" in that P will experience some acceleration (a), and that acceleration will bear a relationship to the force (F) exerted, so we can calculate which frame of reference is being preferred - but relativity states that there is nothing special about any particular frame of reference. I'm probably fucking up somewhere, likely having to do with a Lorentz contraction that I'm not taking into account (since γ falls straight out of the same relativistic framework, I suspect this may be relevant): P and Q will be experiencing Lorentz contractions which may cause the discrepancies to disappear under actual observation, in which case I am erroneously assuming some absolute frame of reference by not taking all the relevant relativistic effects into account. Still, that's just my best guess for where I may be wrong if I am, and I'm not quite sure about that.

OK, this is getting long, so I'm cutting it in two. In about three hours, I'll be starting a work week that has me going pretty much around the clock, so I probably won't be able to finish my bit on the happiness machine until next weekend. I haven't forgotten, though - that research supports the whole thrust of my argument, so I hope I get it soon!

Saturday, March 21, 2009

101 Interesting Things, part eleven: The Lorentz Factor

Like any young physicist, I spent my high-school years desperately attempting to break the very laws I was studying: perpetual motion machines, FTL drives, doomsday devices that swallowed universes, cold fusion reactors, the works. Because I failed to break any of the laws I learned, I imagine this would be like a DEA agent trying desperately to score some pot, but being totally unable to do so.  Somehow.  I don't know, maybe he was wearing his badge or something.  Every scientific principle I learned was either a catapult to becoming a Nobel laureate, or the roadblock I would inevitably hear about from my physics teachers later that day.

One of the earliest of these endeavors involved faster-than-light (FTL) travel.  You see, back in the Pleistocene, before the internet had infected everyone's house, when I had to go halfway across town to an actual library to look something up (holy crap, do I feel old), I would take my designs to the physics department and ask the teachers why they wouldn't work.  I actually succeeded in stumping a few of them once or twice, but I would inevitably be shot down by some bit of trivia like Lenz's law (which is basically electromagnetic friction).  Right after we learned about vectors, we jumped into the application of actual forces (which was followed throughout the semester by figuring out those forces, roughly in order of ascending complexity and/or descending magnitude), where I learned the mighty F=ma.  For the uninitiated, or those who simply don't remember high school science (don't feel bad, I don't remember anything else), F is force, m is mass, and a is acceleration.

What this means is that if you know the mass of an object, as well as its current acceleration, you can calculate the force acting upon it.  Conversely, if you have some mass and want it to accelerate at a certain rate, then you can calculate the force that must be applied to do so.  Actually, as long as you know any two of those terms, you can calculate what the third one must be.  I reasoned that as long as we kept applying force, we could accelerate to any velocity we could choose.

So one day I asked my teacher why we couldn't go faster than the speed of light.  I was told that as we accelerated towards the speed of light, the forces involved in our acceleration would crush us.  Obviously, the more a mass accelerates, the greater the force acting upon it - but what if we accelerated really, really slowly?  It would take some time, but it should work, shouldn't it?  And, to jump ahead just a bit, subjective time "slows down" as one approaches the speed of light, so shouldn't the passengers experience the trip as a relatively quick one?

There were two things that I hadn't taken into account.  First, that modern spacecraft accelerate by casting off mass:  by throwing something in the opposite direction that you want to go, you move yourself in the direction you want to go, because for every action there is an equal and opposite reaction.  Spacecraft don't need fuel for a trip like cars do, because cars need fuel to travel a certain distance, but spacecraft only need the fuel required to accelerate to the desired travel speed; then they wait, and then maybe decelerate when they're about to arrive.  But the speed of light is really, really fast, and accelerating anything to such a speed would require a lot of mass to be cast off, which would in turn require a very massive craft which would in turn require a lot of fuel to take off in the first place.  I think you can see where this is going.
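To put a rough number on "I think you can see where this is going," here's a sketch using the classical rocket equation. It ignores relativity entirely, and the exhaust velocity is just a generously good chemical-engine figure I'm assuming - the point is only how absurd the fuel-to-payload ratio gets:

```python
import math

v_exhaust = 4_500.0              # m/s, roughly a very good chemical rocket engine
c = 299_792_458.0

# Tsiolkovsky rocket equation: delta_v = v_exhaust * ln(m_start / m_end),
# so the required mass ratio is exp(delta_v / v_exhaust).  We report it as a
# power of ten, because exp() overflows long before we get anywhere near c.
for frac in (0.001, 0.01, 0.1):
    delta_v = frac * c
    log10_ratio = (delta_v / v_exhaust) / math.log(10)
    print(f"to reach {frac:.1%} of c: ~10^{log10_ratio:,.0f} kg of rocket per kg delivered")
```

By one percent of c the exponent is already in the hundreds, which is more mass than exists in the observable universe by a preposterous margin.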

But that was merely a technical limitation, a problem for the engineers to work out.  The second thing I had not taken into account was the Lorentz factor, which actually makes accelerating to the speed of light impossible - for any living thing, anyway.  You see, as it turns out, F=ma isn't the whole story.  The whole story is F=γma, where γ (gamma) represents the Lorentz factor.  γ itself is defined as the inverse of the square root of the quantity one minus the square of the ratio of velocity to the speed of light.  Huh?  Yeah, I'm not very good with words, either.  Here it is mathematically (v is velocity, c is the speed of light):
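γ = 1 / √(1 − (v/c)²)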
OK, so there are a couple of important things with this new insight.  The above is γ, and since F=γma, we can do some juggling to yield γ=F/(ma).  Since F is essentially equal to m*a in most Earthly applications, it seems reasonable to assume that γ=1 most of the time.  Fortunately, this is true!  Looking at the above equation, v/c is really tiny when velocity isn't close to the speed of light, because c is huge and v is not huge.  This means that (v/c)² is going to be even tinier, so that 1-(v/c)² is going to be really close to 1 (but just a tiny bit less).  The square root of this will also be close to 1, and 1 divided by something really close to 1 is also really, really, really close to 1.  This means that γ doesn't make much of a difference in our day-to-day lives, because we don't go anywhere near the speed of light.

But what if we did?  If v were equal to c, then v/c would equal 1, and 1-1² is zero, and... OK, now we're dividing by zero, which isn't allowed.  OK, let's say that v is really close to c instead.  With v being close to c (but not equal to it), v/c is going to be close to one, but just a little bit less.  Squaring something a little less than one makes it a little smaller still, and so 1-(v/c)² is going to be a tiny, tiny number.  The square root of a tiny, tiny number (that is less than one but ever so slightly more than zero) is also a tiny, tiny number - it's only a bit bigger.  And one divided by a tiny, tiny number is a really, really big number.
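If you'd rather see numbers than prose, here's a quick sketch that plugs a few speeds into the formula above (the sample speeds are arbitrary picks of mine):

```python
import math

def gamma(v_over_c):
    # Lorentz factor as a function of v/c.
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

c = 299_792_458.0
for label, frac in [("airliner (~250 m/s)", 250.0 / c),
                    ("1% of c", 0.01),
                    ("50% of c", 0.50),
                    ("99% of c", 0.99),
                    ("99.99% of c", 0.9999)]:
    print(f"{label:>20}: gamma = {gamma(frac):.6f}")
```

γ sits indistinguishably close to 1 for anything we actually do, then takes off like a shot in the last few percent of the way to c.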

Moral of the story:  as v approaches c, γ becomes huge.  And when γ is huge enough, even a tiny m & a will make for a huge F.  What this means for our travelers is that, as they cast off a little bit of mass at a time, for them to accelerate to the speed of light is going to require increasing amounts of force.  Furthermore, the opposite side of that coin is that the travelers will also experience this force, and tiny bits of acceleration will result in huge amounts of force, to the point where just the margins of error in even our most precisely controlled accelerations could be deadly.  So, no, we mortals will probably never get appreciably close to the speed of light.

Friday, March 20, 2009

101 Interesting Things, part ten: Other Moons, Other Worlds

I'm waiting on a response to an e-mail before I can finish my follow-up to my last post; I'm trying to track down a specific research study.  It's about the effects of heroin on chimpanzees.  Basically, a mother chimpanzee was put into a room with her child on the other side of a glass partition.  The mother had two buttons:  one would dispense a meal-sized portion of food to her child, the other would dispense a small amount of heroin into the mother's system.  Here's the catch:  she could only press one of the two buttons at five-minute intervals (or something like that).  I'm guessing you can tell where I'm going with this.

At any rate, if anyone's heard of the study, or has any information that could help me track it down (my Google-fu is weak here), I would greatly appreciate it.  In the meantime, I should probably post some more content, eh?

Which brings me to the next installment in 101 Interesting Things.  I want to mention up front that nearly all the following information (with the exception of some details on Enceladus, and a source for ocean-floor vents as a possibility for the ultimate origin of Earthly life) can be gotten from the video series below - it's actually really well-done, so if you've got an hour to watch it, you should do that.  If you have less time than that, though, then this post is for you!

Neptune's Triton is at -400° F (a mere 33 K), making it the coldest place in the Solar System (aside from empty space in the shadow of a planet, of course).  But Triton's internal heat is enough to melt ice into water, which creates ice volcanoes and nitrogen geysers.  The water volcanoes are neat enough - the environment is so extremely cold that water is able to perform the geological functions of magma - but the nitrogen geysers are truly amazing.  Ten times faster than Earthly geysers, Triton's geysers can create plumes taller than Mount Everest!  Next door, Uranus' Miranda is home to cliffs nearly eight miles high, or eight times the depth of the Grand Canyon.

Jupiter's Io is, quite simply, Hell - fire Hell and ice Hell.  Sporting three-hundred some-odd volcanoes, Io produces enough lava to cover every continent on Earth every year.  How?  Jupiter's tidal forces literally tear it apart on the inside, and all that grinding and shifting creates the internal heat to fuel such spectacular volcanism.  But away from the continuously flowing lava, temperatures drop precipitously, resulting in multicolored sulfurous snow.  The temperature shear where these extremes meet sends plumes of gas up to 250 miles into space.  Even if you managed to get a space-suit that could stand both temperature extremes, you'd still need to worry about the constant radiation bath from Jupiter - radiation strong enough to turn the SO2 gas vented by the volcanoes into giant neon lights, an aurora to rival our own on Earth.  Europa, an ice moon with a liquid interior (also brought to you by Jupiter's tidal forces), is probably the most likely place for us to find extraterrestrial life.  The same tidal forces that liquefy Europa's core also create massive fissures in the surface, some over a thousand miles long, which can propagate at up to three miles an hour - about human walking speed.  The inside of Europa is the largest ocean we know of, more voluminous than all Earthly oceans combined, and so represents one of our most likely shots for finding life off of Earth.  After all, recent research suggests that life on Earth may have started at the bottom of the ocean, clustered around hydrothermal vents, and we have also found living bacteria in ice 20° F below zero.

Phobos, one of Mars' two moons, would be a likely waystation between Earth and the Red Planet - a natural space station for at least the next 11 million years.  It's only about the size of Houston, and shaped vaguely like a potato - it's so irregularly shaped and so close to Mars, that surface gravities vary over 450%.  Humans on Phobos would weigh about as much as mice on Earth, and so could perform some pretty extreme stunts:  pitchers could throw baseballs into orbit, power-lifters could move masses equivalent to a fully-loaded jumbo jet, a gymnast could perform 3,000 revolutions during a 25-minute jump, and a high-jumper could achieve escape velocity if he wasn't too careful (or jump into orbit around Mars, if he was extremely careful).  But be careful not to kick up the three-foot dust layer when you land!  And don't forget to visit the Stickney crater, a seven-mile pock-mark that nearly destroyed Phobos in its formation.
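As a sanity check on those jumping-and-throwing claims, here's a back-of-the-envelope sketch - the mass and mean-radius figures for Phobos are approximate values I'm supplying, since they're not in the post:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1.07e16          # approximate mass of Phobos, kg
r = 11_100.0         # approximate mean radius of Phobos, m (it's lumpy)

v_escape = math.sqrt(2 * G * M / r)
print(f"escape velocity ~ {v_escape:.1f} m/s")   # on the order of 11 m/s

# Orbital velocity at the surface is escape velocity / sqrt(2), roughly 8 m/s.
# A thrown baseball (40+ m/s) clears that easily, and a sprinting, leaping
# human is at least in the same ballpark as the ~11 m/s needed to leave entirely.
```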

Our moon is also discussed, covering much of the information I discussed in my post on Theia.  It's also incredibly fun to watch astronauts at play, giddy with excitement on the moon.  Gene Cernan himself also makes an appearance for the program, which is pretty damn cool all by itself.

Enceladus shown "next to" Titan - Titan is actually some 1.2 million km (746,000 mi) farther than its vastly smaller sibling satellite. (source)

Tiny Enceladus, with a surface area of only 800,000 km² (around 309,000 mi², or 15% larger than Texas), is Saturn's second-smallest spherical satellite - all satellites smaller than it, with the lone exception of Mimas, are irregular in shape.  Also home to ice volcanoes, Enceladus is probably the best site for snowboarding within 7 billion miles of us.  The largest of Saturn's moons, Titan, is home to lakes of liquid natural gas. At one-seventh the gravity of Earth, humans could swim like dolphins, leaping bodily out of the water.  Titan is the second-largest moon in the solar system, out-massed only by Jupiter's Ganymede, and larger than the planet Mercury.  It is massive enough to sport an oxygen-free atmosphere so dense that humans could fly through it aided only by strap-on wings - just like in the cartoons.


So check it out!  Enjoy!

Saturday, March 14, 2009

Arguing on the Internet: The Happiness Machine (part 1)

I recently read a post over on Daylight Atheism concerning the thought experiment of the happiness machine. I dig thought experiments, especially ones that primarily concern ethics, and the happiness machine is no exception. I do, however, have a few things to say about Ebonmuse's reasoning. A good deal of this has been covered in the comments, but there are a couple key points that I did not see raised in my cursory read, so I want to go over those today.

First, I want to say that I don't disagree with anything he says - so far as his argument goes, I'm more or less with him. However, there are some issues that he doesn't address either in the main post or in his responses to comments; this is the scope I intend to establish for this particular post. I'll be digressing a bit at the end for a lead-up to the following post, which will concern what empirical observation tells us when juxtaposed with our intuitions, and what that means for our "moral sense."

My only real beef, from which all of the following proceeds, is that thought experiments have an introspective diagnostic quality to them, which seems to have been ignored. Because a thought experiment is carried out by your mental apparatus, with your mental apparatus, on your mental apparatus, the "experiment" has nothing to do with the world itself, and everything to do with your mental apparatus. Thought experiments don't tell us things about the world, they help us to illuminate our own thought processes - provided that two key conditions are met:
  1. The subject is honest.
  2. The subject takes the thought experiment seriously.
Of course, there is an exception: when the thought experiment is done on multiple people, it can actually tell you about the world (specifically, about the people in it). But one person doing a thought experiment is really just introspecting fancily.

As one commenter (called "penn") points out, the happiness machine isn't a real machine in this world, but rather a perfect hypothetical machine in a hypothetical world which only differs from ours in that said machine exists. The question (as clarified by penn) is not, "Would you use a happiness machine if I offered you one," but rather, "Would you accept a life of unvaried but certain bliss, at the expense of your current life of varied but uncertain experience?" That is the crucial bit of the thought experiment, and that is what I would wager most people find repugnant about the happiness machine. People tend to crave variety and meaning in their lives, not mere happiness. To paraphrase Roger Williams, author of The Metamorphosis of Prime Intellect, a novella which addresses (in part) just such a happiness machine and its philosophical ramifications, it's not about how many hedons you've accumulated at the end, it's about whether you have accomplished something (source).

Ebonmuse's answer to penn's question concludes with, "Sure, maybe using the Happiness Machine is the best course of action in that world. But I think that world is so utterly and completely different from our own that we shouldn't have any confidence that conclusions that are drawn there can be ported over to our world." This misses the point of the question, to my mind. The point is not whether the thought experiment, as laid out, is possible to carry out in (or carry over to) our world; the point is to draw out just how committed the subject of the thought experiment is to certain principles when those principles are pitted against each other. In other words, the question, "Would you accept a life of unvaried but certain bliss," is one member of a set of questions meant to determine the answer to the "larger" question, "Would you maximize happiness at the expense of everything else (except the necessary preconditions for life itself)?"

Some might say yes. But I imagine that a lot of people would say no, or would try to qualify a "yes" with things that they think might give their life more meaning than simply being locked in a vat hooked up to a brain-poker (basically, for the same reasons that people like me turn down heroin: I'm afraid that it's going to be too good). For instance, some might say they'd do it if it were more like being hooked up to the Matrix, with simulated experiences providing variable "background noise" to the unchanging state of contentment. Others might insist that they'd want it to be a neural implant, a pleasure-dispensing chip so that they can go through real life "unimpeded." But it doesn't matter how much we change the world to make the happiness machine possible if we're not altering the architecture of our own brains.

The problem is, the above qualifiers do not change the fundamental part which has been shown experimentally to override all other concerns: the brain's reward pathways, once tapped in such a way, simply drown out everything else. It doesn't matter if it's a chip, a program, or a hermetically sealed pod - experiencing unconditional pleasure breaks brains, plain and simple. It doesn't matter whether you're able in principle to move around and interact with the world or not, because once you're wired in, all other motivations will be drowned out except those that most efficiently keep the good times rolling. And when the happiness machine makes it so that you don't need to do anything to maintain the perfect high, well, in all likelihood you simply wouldn't do anything.

This is getting a bit lengthy, so I'm going to stop here for the moment and pick this up later. The point, so far, has been more or less that to most people, "happiness" in and of itself is not really all they want in the world - or so they think. In other words, most people think that they have more standards of value than the mere feeling of happiness (while research shows that if happiness is supplied in unlimited quantity, all other concerns take a back seat). Part of what's making this take so long (for me) is that I'm poking around the internet to find research on that "or so they think" part (I need to back up that last paragraph, after all), and it's leading me down some rabbit holes. I'll wrap this up in the next post, concerning the aforementioned research as well as what the thought experiment says about our concept of morality.

Thursday, March 12, 2009

101 Interesting Things, part nine: Agricultural Insects

Ecclesiastes 1:9 states, "What has been will be again, what has been done will be done again; there is nothing new under the Sun."  Ununoctium notwithstanding, this holds true in a very important sense for just about any bit of human existence you could care to point at.  By and large, our customs and beliefs are handed to us from our predecessors, who in turn got them from their predecessors, and so on.  Though there has been bitter competition in the meme pool, the historical record gives us a fairly good trail to follow when it comes to finding out where ideas come from - not all the way back, of course, but a good ways back in a lot of cases.  Tracking the history and development of ideas (or "memetic pedigree," if you will) can be a very interesting enterprise.

One may experience similar fascination by seeing a human invention anticipated by nature.  The taming of fire (and all the developments obtained thereby, such as metal, and all the neat things we need metal to do, like build particle accelerators) is, so far as I know, the only respect in which humanity stands out from the rest of the animals.  Two other biggies would seem to be the wheel and the domestication of other species.  But the wheel finds its answer in the bacterial flagellar motor, which anchors the rotating flagellum to the cell wall in a housing that incorporates a free-spinning disc.  As for domestication, that is what this post is about.

Our main example for today is the leaf cutter ant, which has domesticated a fungus.  As Richard Dawkins writes in The Ancestor's Tale:
A single nest of leaf cutter ants, Atta, can exceed the population of Greater London.  It is a complicated underground chamber, up to 6 metres deep and 20 metres in circumference, surmounted by a somewhat smaller dome above ground.  This huge ant city, divided into hundreds or even thousands of separate chambers connected by networks of tunnels, is sustained ultimately by leaves cut into manageable pieces and carried home by workers in broad, rustling rivers of green.  But the leaves are not eaten directly, either by the ants themselves (though they do suck some of the sap) or by the larvae.  Instead they are painstakingly mulched as compost for underground fungus gardens.
The ants treat their fungus crops just the same as we treat our own garden veggies:  planting them, tending them, keeping them free from pests, and finally eating them.  Termites do largely the same thing, as it turns out - we even run up & steal their stuff 'cuz it's tasty.  And some ants go one farther, herding aphids like cattle.  So much for domestication.

So it looks like we're down to fire as our last defining difference.  Once some other species gets a hold of that, it's a hop, skip, and a jump to smelting alloys and making large hadron colliders.  Geologically speaking, of course.  I mean, we got from there to here in a measly million years or two.

Tuesday, March 10, 2009

101 Interesting Things, part eight: Gabriel's Horn

Gabriel's Horn is one of those interesting mathematical paradoxes which is fun to think about, but really doesn't mean anything in the "real world," kind of like Zeno's paradoxes.  No matter how much you talk about Achilles never passing a tortoise, Achilles would in fact pass the tortoise if such a race were ever run.  This is why it's funny to define "zenophobia" as "the fear of convergent sums."

Gabriel's Horn is formed simply by graphing out y=1/x, for x>1 (because you can't divide by zero, and this removes some asymptotic nonsense).  The idea is that you rotate the curve about the x axis to make a shape looking like one of those old-timey horns, with the mouthpiece being at the end of infinity.  Observe!
What's interesting about this particular shape, though, is that each successive slice of volume as you go out along the x axis is smaller than the last but never shrinks to zero, and the running total converges on π.  In other words, the cumulative volume of the horn as you approach the mouthpiece is always getting closer to π ("π units of volume," to be precise), but never quite reaches it at any finite point.  Only if you could add it up all the way to infinity would you get exactly π units of volume (pro tip:  physically adding up infinitely many slices is impossible; mathematically, the limit works out to exactly π).

OK, so what's so interesting about this?  Well, if you checked out the Wikipedia page linked above, then you might have seen the painter's paradox.  Because the curve goes on to infinity but the volume it expresses is convergent on π, you could fill such an object with π volume of paint - gallons, liters, whatever system of measure you're working with (I don't feel like doing math right now, just talking about it).  However, if you were to try to paint the outside of the horn, it would take an infinite amount of paint.  But... wait a minute:  it has finite volume, but infinite surface area?  Yep, that's the paradox.
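Here's a minimal sketch of the two running totals behind the paradox.  The partial volume comes straight from integrating π(1/x)², and the surface-area line is only a lower bound, since the true area integrand is even bigger than 2π/x:

```python
import math

def volume_out_to(x):
    # Volume of the horn from 1 out to x: pi * integral of (1/t)^2 dt = pi * (1 - 1/x).
    return math.pi * (1.0 - 1.0 / x)

def area_lower_bound_out_to(x):
    # Surface area out to x exceeds 2*pi * integral of (1/t) dt = 2*pi*ln(x).
    return 2.0 * math.pi * math.log(x)

for x in (10, 1_000, 1_000_000):
    print(f"out to x = {x:>9,}: volume = {volume_out_to(x):.6f} "
          f"(pi = {math.pi:.6f}), surface area > {area_lower_bound_out_to(x):.1f}")
```

The volume column crowds up against π no matter how far out you go, while the surface-area column just keeps climbing.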

Of course, such an object could never actually be constructed, let alone painted.  Actual horns are made up of matter, which has mass and takes up space, so once you get to the point where the horn needs to be less than a hydrogen atom wide, you're kind of boned because there's nothing for you to build it with.  Even if you found Euclidean horn-stuff, which I just invented (it isn't made up of discrete components and is infinitely divisible), the constraints of real-world paint present you with a distracting pseudo-solution to the paradox.  Paint is made of molecules, and no matter how infinitesimally small the horn gets, you'll always need a minimum number of molecules to "cover" the horn.  Because the horn goes on forever, the cumulative volume of paint required to cover the outside to a length x diverges to infinity as x approaches infinity.  Filling the inside with real paint would still require a finite amount because eventually, the horn would get so narrow that not even one molecule of the paint could squeeze in (even if we ignore surface tension), and you could never actually fill it all the way to infinity.  But this is irrelevant to the point that the volume of the horn is a convergent sum while its surface area diverges - even with Euclidean paint, the inside of the horn would still be filled by a finite 3D volume, though the 2D surface area is still infinite.

Ultimately, the paradox arises because it is counterintuitive to most people that something could be finite in terms of volume, but infinite in terms of surface area.  By ignoring the limitations of the real world, we can show with math that this is actually possible (possible to express without contradiction, that is), but it still grates against our intuitions and causes lots of people to scratch their heads and go, "What?"  Neat trick, isn't it?

Thursday, March 5, 2009

A Review in Brief: "The Ancestor's Tale: A Pilgrimage to the Dawn of Evolution"

I just finished reading The Ancestor's Tale:  A Pilgrimage to the Dawn of Evolution, by Richard Dawkins.  It's a fantastic read, and I highly recommend it to anyone who is interested in just about anything.

Two features stand out most about this book, even more so when taken together:  its breadth, and its accessibility.  Throughout the book, Dawkins relates a multitude of facts about various organisms, as well as scientific principles in general and discoveries in specific.  All of this is done in layman's terms, making the book a very easy read.  I read most of it either during fifteen-minute breaks at work, or while undergoing plasmapheresis at the local plasma clinic.  Without specifically saying so, The Ancestor's Tale touches on a wide variety of scientific disciplines, all of which lend something of value to the confirmation of the theory of evolution.

Biology, obviously, is steeped in evolutionary theory.  Evolution is the basis of modern biology, after all, so it is unsurprising that there should be discussions of molecular genetics, taxonomy, ethology, and population genetics.  But in discussing fossils, Dawkins also brings in bits from geology; bits of physics lend to his discussion of radiometric dating; chemistry is brought to bear in his explanation of RNA transcription and protein synthesis; statistical analysis plays a part in his relation of how molecular data is arranged and analyzed to determine degrees of relatedness between organisms and species; even general philosophy makes a few appearances, most notably (to me, anyway) in the Orangutan's tale, which includes a lesson in parsimony as applied to the attempt to determine how many trips were taken by ancestral apes between Africa and Asia.

I confess that I find it slightly jarring to use the term "gripping" to describe a popular science text, but this book is a real page-turner!  Not only has The Ancestor's Tale provided a great deal of fuel for my 101 Interesting Things series, it's also enhanced my appreciation for the power of the scientific method and the wealth of opportunity available to us in the information age.  This book contains more information than Aristotle probably had access to during his entire life.  All in all, I think Dawkins has rightly earned his place as Professor for the Public Understanding of Science with this book alone.  If you ever have the time and the opportunity, I strongly urge you to work up the inclination to read this book.