Saturday, July 21, 2012

On Cognition

Last spring, I took a class in the Interdisciplinary department on cognitive science - we covered philosophy of mind, computer science, and neuroscience, and it was awesome.  I gained a reputation in the class for hedging my answers very carefully, admitting when I didn't know things, but knowing quite a lot, and refusing to engage in wild speculation (unless I was explicitly asked to do so).  I was also the only philosophy major in the class, and I think I represented the department rather well.  But then again, I also had at least five years on each of the other dozen or so students, and I had spent those years educating myself and arguing on the internet in my free time, so you could say I was a bit over-prepared.

The prompt for our final was, "What is the nature of human cognition?" and we were given six double-spaced pages to answer.  I think I did a rad job, and the material I brought together (almost all of which had been covered in class) was tremendously interesting, so I thought I'd share it with you, Dear Reader.  Enjoy!

Is Consciousness This Way or That Way?  A Robust Yes and No.

What is the nature of human cognition?  At the very least, we may say that it is complicated.  My own conjecture is that the first-person nature of consciousness is essentially a “story” that the brain “tells” to itself, after a fashion; but we humans have been using our minds to ponder our minds for millennia, and I probably stand very little chance of successfully advancing a new theory of mind in a six-page paper for an introductory course in cognitive science.  Three broad avenues of approach have been quite productive, however:  studying brains, attempting to manufacture consciousness “from scratch” (as it were), and inquiring as to what precisely we might mean when we talk about our minds.  Respectively, these may be called the empirical approach, the engineering approach, and the philosophical approach.  Each of these approaches has something to say on the three issues of brain brittleness, mental models, and the “directionality” of consciousness.  What I intend to show is that while each of the various answers proposed to the question, “What is consciousness?” contains at least a kernel of truth, no present account delivers the whole story, and so we ought not to endorse any one of them as “essentially right.”

Brittleness is the antonym of flexibility, and by cognitive brittleness I mean a loose and fuzzy sort of “mismatch” between the context in which a mental operation would function well and the context in which it actually occurs and operates.  In other words, a brain is brittle if its normal operations can lead to strange results in any class of problem cases (such that an isolated mistake does not constitute brittleness).  On this analysis, change-blindness is a species of brittleness.  Change-blindness in humans is twofold:  while attending to specific features in the visual field, we are blinded to what would otherwise be clear and obvious changes in that visual field; and while attending to a whole scene, we are blinded to sufficiently slow and gradual changes.  It may be objected that failures in human attention are not failures of cognition in general, but there are more examples.  Male turkeys will initiate their sexual response when presented with a head on a stick; male hawks in captivity have their semen collected by humans wearing certain hats (the hat is the receptacle); and male humans will deliberately and repeatedly initiate sexual response after seeking out mere images of members of the attracting sex, rather than the members themselves.  None of these behaviors in any way contributes to the individual’s inclusive fitness, and such behaviors can in fact detract from the individual’s chances at projecting its genes into the next generation.  How could such regressive behaviors have evolved?  The obvious answer is that they are accidents of imperfect machinery:  minds, as Hume argued, are more akin to association machines than to rationality machines.

Visual processing experiments also illuminate the shortcomings of the human perceptual apparatus.  When presented with angles (with or without lines, but without any closed figures) for a time interval too brief to consciously register, but lengthy enough to register somewhat in the brain, subjects report not seeing anything.  But when a field of angles with lines is presented with at least one closed figure for the same time interval, subjects report seeing a triangle.  It is as if the ocular receptors are a committee, and some say, “We’ve got angles!” while others say, “We’ve got closure!” and still others (or perhaps another area of the brain) put them together and say, “There must be a triangle about!”  Clearly, the empirical approach reveals that there must be something funny going on beneath the surface, for we know that our brains confabulate to give a constant and continuous picture of the world – indeed, they must, for it is the only way to get such an impression when probing the details makes things go all squiffy.

Source:  Anne Treisman, “Features and Objects in Visual Processing,” Scientific American, 1986.
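To make the committee metaphor concrete, here is a toy sketch in Python (entirely my own construction, not a model from Treisman's paper):  separate detectors report only their own primitive features, and a later stage infers an object from their joint report.

```python
# A toy "committee" reading of the result above:  feature detectors fire
# independently, and a later binding stage infers an object from their
# joint report.  (My own illustration; not a model from the paper.)
def committee_verdict(features):
    has_angles = "angles" in features
    has_closure = "closure" in features
    if has_angles and has_closure:
        return "There must be a triangle about!"
    if has_angles or has_closure:
        return "Something registered, but nothing reportable."
    return "Nothing seen."

print(committee_verdict({"angles"}))             # features alone: no object
print(committee_verdict({"angles", "closure"}))  # conjunction: triangle inferred
```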

What is it, then?  If our brains are not “perfect awareness” machines, what is it that characterizes all the minding which our brains so dutifully perform for us day in and day out?  Here the empirical approach must merge with the engineering approach, and both may be inspired and evaluated in turn from the philosophical approach.  I would say that the mind may be most fairly and robustly characterized as a dynamical system (ctrl+F for “cognition as a dynamical system” to get to the interesting part - D):  after all, our brains evolved from primitive brains, and we can see a rather fine-grained spectrum of “braininess” in nature.  Some of the most primitive of these (e.g. earthworms) are of dubious braininess in the first place, but a great many (e.g. wasps) display prominently the characteristics that we expect dynamical systems to have.  A broad swathe of human behavior could fairly be so categorized (basic instinctual responses such as hunger, easily manipulable operations such as conditioning, and so on) – yet this does not tell the whole story.  While there are clear and obvious elements of our existence that directly and unambiguously match elements of dynamical systems, if human cognition is fundamentally a dynamical system, then it is one that has connectionist networks enmeshed within it.  Connectionist networks are the basis of the pattern recognition that our species does all too well (pareidolia, the extraction of a pseudo-signal from pure noise, could be seen as connectionist brittleness), and can also easily explain feedback loops such as successive approximation in learning.  The engineering approach shines here, for the GNNV (I call it “Genevieve”) is a proof-of-concept that connectionist networks – i.e. networks where the interesting operations have to do with connections between processing nodes, not the individual nodes themselves – can in fact learn.  The highly-qualified sense in which Genevieve learns is a philosophical matter, but what is important is that it is learning, if only after a fashion.
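I can't reproduce Genevieve here, but a minimal perceptron - about the simplest connectionist network there is - makes the point in a few lines:  all of the learning lives in the connection weights between nodes, and the network comes to answer correctly without any node ever being handed the rule.  (The code and the toy task are my own illustration, not the GNNV.)

```python
import random

def step(x):
    return 1 if x > 0 else 0

# One output "node" whose behavior is entirely fixed by its input connections:
# two input weights plus a bias.  The node itself is dumb; the connections learn.
weights = [random.uniform(-1, 1) for _ in range(3)]

def predict(a, b):
    return step(weights[0] * a + weights[1] * b + weights[2])

# Teach it the OR pattern by nudging connection strengths after each error
# (the classic perceptron learning rule).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
for _ in range(100):
    for (a, b), target in examples:
        error = target - predict(a, b)
        weights[0] += 0.1 * error * a
        weights[1] += 0.1 * error * b
        weights[2] += 0.1 * error

print([predict(a, b) for (a, b), _ in examples])  # -> [0, 1, 1, 1]
```

The highly-qualified sense in which this counts as "learning" is exactly the philosophical question raised above - but learn it does, after a fashion.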

But now that we see that dynamical systems may subsume multiple connectionist networks, and connectionist networks are the sort of thing that can recognize patterns – may we take a step further?  Might another kind of behavior emerge from the underlying parts?  I say yes, and it is precisely this next step which I believe is responsible for the confusion that led us to regard cognition as computational in the first place.  A computation is, after all, a sort of pattern; once we can identify patterns and think of them in the abstract, it is easy to see that the simple computation “add two” is a pattern when applied to more than one thing.  It is our pattern recognition abilities, I think, that cause us to be able to compute; it is not the case, I maintain, that our ability to compute reveals an essentially computational nature of our minds.  While I cannot establish this with experimental certainty (…yet), I can argue that it is eminently plausible because it explains both the fact that humans seem to learn mathematics best by learning patterns between numbers first, and the fact that humans err so frequently and egregiously when performing such computations (especially in disjunctive reasoning).  Another way of stating the claim:  that computations must be learned by humans is explained by the fact that computation is not the nature of our mental processes.  So while we certainly “think/behave” as computers whenever we do in fact compute, this in no way shows that computation is a part of us – no more than the clothing we wear, at any rate.
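To make that concrete, here is a deliberately dumb sketch (my own) in which "add two" is never written down as a rule inside the learner:  it is extracted as a pattern from examples, and, once extracted, it generalizes to new inputs.

```python
# Learn "add two" purely as a pattern over examples:  fit y = w*x + b
# by stochastic gradient descent on (x, x + 2) pairs.  The rule "+2"
# appears nowhere in the learner; it falls out of the pattern in the data.
examples = [(x, x + 2) for x in range(10)]

w, b = 0.0, 0.0
lr = 0.01
for _ in range(5000):
    for x, y in examples:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

print(f"w = {w:.3f}, b = {b:.3f}")   # approaches w = 1, b = 2: the pattern, recovered
print(f"17 + 2 = {w * 17 + b:.2f}")  # generalizes beyond the training examples
```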

This is further borne out in the research on decision-making.  The best recent research strongly suggests that decisions are handled almost entirely at a pre-conscious level, while the logical justifications we provide for our decisions (the rational “why” according to “us”) are an after-the-fact, ad hoc confabulation.  Bizarre self-reports of neurological dysfunction, such as alien hand syndrome, clearly show that there are conditions under which subjects simply cannot apply their learned inferences to the context at hand.

The next question comes naturally enough:  if the mind is brittle, and this brittleness proceeds from its ad hoc nature as a dynamical system subsuming many interconnected connectionist networks, then does the mind control the body, or does the body determine the mind?  This is one of those philosophical questions where what exactly we mean by each term will directly affect the outcome in a clear and obvious way that can make the question seem silly.  It is still a substantial question, as we shall see; it is simply not the profound sort of question we may have thought it to be at first blush.  First, the mind simply does, in some form or another, control the body:  when I play League of Legends (to use the first preposterously easy example to pop into my head), I deliberately pause after deaths in what is otherwise a fairly mindless click-fest to analyze what killed me, and I then adapt my strategy to counter that particular cause of death.  To borrow some computational terms (while limiting their scope to a language of convenience for the sake of this particular argument), I deliberately maintain a sort of “monitor program” in the front of my mind to identify when things have gone wrong (i.e. “I died”), identify a cause (did I take mostly physical or magical damage?  Did I charge in like an idiot, or did my idiot team fail to back me up?), and then alter my behavior as a result of this mindful processing (get more armor/magic resistance, or encourage better teamwork in chat).  My mindful activities here steer my bodily operations, altering them on the fly to pursue a different (and hopefully more successful) course of action.
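To put the same borrowed terms in code:  the "monitor program" is just a feedback loop that classifies a failure and adjusts the strategy accordingly.  A toy sketch, where every name and number is made up for illustration:

```python
# A toy "monitor program":  after each failure, diagnose a cause and
# adjust the strategy.  Names and numbers are invented for illustration.
def diagnose(death):
    if death["physical_damage"] > death["magic_damage"]:
        return "buy armor"
    if death["magic_damage"] > death["physical_damage"]:
        return "buy magic resistance"
    return "encourage better teamwork in chat"

strategy = []
deaths = [
    {"physical_damage": 600, "magic_damage": 100},
    {"physical_damage": 150, "magic_damage": 900},
]
for death in deaths:
    fix = diagnose(death)
    strategy.append(fix)  # alter future behavior, rather than merely react
    print(f"Died.  Diagnosis -> {fix}")

print("Adapted strategy:", strategy)
```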

But as we have discussed, this inner monologue is a confabulation, a story I tell myself for the sake of believing that I am rational; what is more likely to be “genuinely” the case is that I have associated certain pre-death factors with explicit causes, associated those causes with specific remedies, and I act out of habit when I stop to take stock because I have associated some degree of short-term thoughtfulness with long-term success.  The bits of my meat then do all the necessary processing in parallel, and I construct a convenient fiction to give the impression that “I” (whatever that means) was at the helm all along – but any success on this analysis is not due to me, but rather to the sum of the experiences I have had, since different experiences would lead to different associations and thus different courses of action.  This is already a mostly bottom-up model, and I could construct a more thoroughly bottom-up model for the same example with some difficulty.  The point, at any rate, is that we clearly are bottom-up in other respects:  the onset of puberty and all the behavioral modifications it introduces should be sufficient evidence to anyone who disputes the point.  The reflexive mind arises from the body’s construction, but there is a feedback loop at work and so we may deliberate – and while the neural processes behind such deliberation may be hidden and weird and antecedent to our conscious afterthoughts, this does not change the fact that deliberation occurs in the first place.  And what is deliberation, if not the mind deciding how to control the body?  The question now is not whether we are top-down or bottom-up, but to what extent we are each.

So we see that cognition is top-down in some ways, bottom-up in others.  Which counts as the “essential” feature will depend on context (emphasis, perspective, whether grant money is at stake, etc.), and can even depend on how we analyze a situation within one and the same context.  While the sometimes-top-down/sometimes-bottom-up mind does perform computations after a fashion, it is plausible that these arise only as particularly rigid patterns recognized by our robust connectionist pattern-recognition “software.”  These connectionist networks are themselves subsumed in what may be seen as a giant and complex dynamical system, albeit one so giant and complex as to be virtually unrecognizable next to Rodney Brooks’ insectobots (seriously, if you don’t know who that guy is, click that link - D).  The ad hoc and convoluted construction of the mind can reveal its seams to the right probes, and thereby assist us in our analysis of it; but the story we find here is far from complete, and far from simple.  Indeed, to quibble over which is the “essential” feature of consciousness, or the “best” way of conceiving of the mind, is tantamount to arguing over the “true” length of a coastline.  Coastlines have interesting features at all levels of detail, and those features change across all timescales.  There definitely is a coastline; there’s just no One True Privileged Way of measuring it.  Coastlines, like minds, are complicated – and to understand this complexity, it is necessary to view it from many angles.  Such a robust understanding brings with it a multiplicity of perspectives, however, and which perspective is “best” in a given case will shift with context.
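The coastline point is not just rhetorical; it is Lewis Fry Richardson's old empirical observation that the measured length of a coast grows as the measuring ruler shrinks, without bound for a sufficiently wiggly coast.  A quick sketch of the effect, using a Koch curve as a stand-in coastline (my own illustration):

```python
import math

def koch(points, depth):
    """Recursively replace each segment with the four-segment Koch motif."""
    if depth == 0:
        return points
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
        a = (x0 + dx, y0 + dy)          # first trisection point
        b = (x0 + 2 * dx, y0 + 2 * dy)  # second trisection point
        peak = (x0 + 1.5 * dx - math.sqrt(3) / 2 * dy,   # apex of the
                y0 + 1.5 * dy + math.sqrt(3) / 2 * dx)   # equilateral bump
        out.extend([a, peak, b, (x1, y1)])
    return koch(out, depth - 1)

def divider_length(points, ruler):
    """Richardson's divider method:  walk the curve in steps of one ruler."""
    length, anchor = 0.0, points[0]
    for p in points[1:]:
        if math.dist(anchor, p) >= ruler:
            length += ruler
            anchor = p
    return length

coast = koch([(0.0, 0.0), (1.0, 0.0)], depth=6)
for ruler in (0.3, 0.1, 0.03, 0.01):
    print(f"ruler {ruler}: measured length {divider_length(coast, ruler):.2f}")
```

The shorter the ruler, the longer the coast:  no single measurement is wrong, and none is privileged.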

And since I have room for an oversimplified concrete answer, I shall close by echoing Richard Dawkins’ suspicion from The Extended Phenotype.  Dawkins conjectures that the most primitive brains (qua “collections of nerve cells of sufficient size,” never mind what “sufficient” is supposed to mean) model the world in the most necessary rudiments:  food, obstacle, etc.  As brains grow more sophisticated, so too do their models, until they model other organisms (e.g. “Can I kick this guy’s ass for a mate?”  “Will he stab me if I invade his territory?”).  And at the highest levels of complexity that we have observed, a brain’s model of the world will become so complete that it models itself.  Dawkins sees this as the origin of first-person awareness; I think it’s a definition of self-awareness.

For further reading, check out "I Am John's Brain."
