
This is the second part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.

Chapter 2: Evolution and the Fragmented Brain. Braitenberg's Vehicles are thought experiments that use Matchbox car-like vehicles. A simple one might have a sensor that made the car drive away from heat. A more complex one has four sensors: one for light, one for temperature, one for organic material, and one for oxygen. This can already cause some complex behaviors: ”It dislikes high temperature, turns away from hot places, and at the same time seems to dislike light bulbs with even greater passion, since it turns toward them and destroys them.” Adding simple modules specialized for different tasks, such as avoiding high temperatures, can make the overall behavior increasingly complex as the modules' influences interact.
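To make the flavor of this concrete, here is a minimal Python sketch (the sensor names, weights, and numbers are my own inventions, not Braitenberg's or Kurzban's): each ”module” is a tiny rule mapping one sensor reading to a drive signal, and the vehicle's behavior is whatever falls out of combining them.

```python
# A toy Braitenberg-style vehicle. All names and constants here are purely
# illustrative: each "module" reads one sensor and votes on the wheels.

def flee_heat(reading):
    return -1.0 * reading          # hotter -> drive backwards harder

def chase_light(reading):
    return +2.0 * reading          # brighter -> drive forwards harder

def vehicle_speed(sensors, modules):
    # No module knows about any other; the wheels just get the sum of the votes.
    return sum(rule(sensors[name]) for name, rule in modules.items())

simple_vehicle = {"temperature": flee_heat}
fancier_vehicle = {"temperature": flee_heat, "light": chase_light}

environment = {"temperature": 3.0, "light": 4.0}
print(vehicle_speed(environment, simple_vehicle))    # -3.0: backs away from the heat
print(vehicle_speed(environment, fancier_vehicle))   #  5.0: charges the hot bulb anyway
```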

A ”module”, in the context of the book, is an information-processing mechanism specialized for some function. It's comparable to a subroutine in a computer program, operating relatively independently of other parts of the code. There's a strong reason to believe that human brains are composed of a large number of modules, for specialization yields efficiency.

Consider a hammer or screwdriver. Both tools have very specific shapes, for they've been designed to manipulate objects of a certain shape in a specific way. If they were of a different shape, they'd work worse for the purpose they were intended for. Workers will do better if they have both hammers and screwdrivers in their toolbox, instead of one ”general” tool meant to perform both functions. Likewise, a toaster is specialized for toasting bread, with slots just large enough for the bread to fit in, but small enough to efficiently deliver the heat to both sides of the bread. You could toast bread with a butane torch, but it would be hard to toast it evenly – assuming you didn't just immolate the bread. The toaster ”assumes” many things about the problem it has to solve – the shape of the bread, the amount of time the toast needs to be heated, that the socket it's plugged into will deliver the right kind of power, and so on. You could use the toaster as a paperweight or a weapon, but not being specialized for those tasks, it would do poorly at them.

To the extent that there is a problem with regularities, an efficient solution to the problem will embody those regularities. This is true for both physical objects and computational ones. Microsoft Word is worse for writing code than a dedicated programming environment, which has all kinds of specialized tools for the task of writing, running and debugging code.

Computer scientists know that the way to write code is by breaking it down into smaller, more narrowly defined problems, which are then solved by their own subroutines. The more one can assume about the problem to be solved, such as the format it's represented in, the easier it is to write a subroutine for it.
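As a small illustration (my own example, not the book's), consider summarizing exam results: the overall task decomposes into narrow subroutines, and each one is easy to write precisely because it can assume a lot about its input.

```python
# Illustrative decomposition: each subroutine solves one narrow subproblem
# and can therefore assume a specific input format.

def parse_record(line: str) -> tuple[str, int]:
    # Assumes exactly "name,score" -- the narrower the assumption, the simpler the code.
    name, score = line.strip().split(",")
    return name, int(score)

def average(scores: list[int]) -> float:
    return sum(scores) / len(scores)

def summarize(lines: list[str]) -> str:
    records = [parse_record(line) for line in lines]
    return f"{len(records)} students, average score {average([s for _, s in records]):.1f}"

print(summarize(["Alice,90", "Bob,80"]))  # "2 students, average score 85.0"
```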

The idea that specialization produces efficiency is uncontroversial in many fields. Spiders are born with specialized behavioral programs for building all kinds of different webs. Virginia opossums know how to play dead in order to make predators lose interest. Human hearts are specialized for pumping blood, while livers are specialized for filtering it; neither would do well at the opposite task. Nerve cells process information well, while fat cells store energy well. In economics, the principle of comparative advantage says it's better for a country to specialize in the products it's best at producing. Vision researchers have found many specialized components in human vision, such as ones tasked with detecting edges in the visual field at particular orientations.

The virtues of specialization are uncontroversial within cell physiology, animal physiology, animal behavior, human physiology, economics and computer science – but less so within psychology. Yet even for human behavior, evolution is expected to favor whatever mechanisms do best at the tasks the organism faces, and specialized mechanisms do best.

A few words are in order about ”general-purpose” objects. Kurzban has been collecting various ”general-purpose” objects, with his current favorite being the bubble sheet given to students for their exams. At the top of the form is written ”General Purpose”.

I love this because it's 'general purpose' as long as your 'general' purpose is to record the answers of students on a multiple choice exam to be read by a special machine that generates a computer file of their answers and the number they answered correctly...

There also exist ”general purpose” cleansers, scanners, screwdrivers, calculators, filters, flour, prepaid credit cards, lenses, fertilizers, light bulbs... all of which have relatively narrow functions, though that doesn't mean they couldn't do a great deal. Google has a specific function – searching for text – but it can do so on roughly the whole Internet.

People defending the view that the mind has general rather than specialized devices tend to focus on things like learning, and say things like ”The immune system ... contains a broad learning system ... An alternative would be to have specialized immune modules for different diseases...” But this confuses specialization for things with specialization for function. Even though the immune system is capable of learning, it is still specialized for defending the body against harmful pathogens. In AI, even a ”general-purpose” inference engine, capable of learning rules and regularities in statements of predicate logic, would still have a specialized function: finding patterns in statements that were presented to it in the form of sentences in predicate logic.
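The point can be sketched with a toy rule engine (entirely my own illustration, with an invented fact format): the engine will apply whatever rules it is given, yet it is still specialized, because it only operates on facts expressed in one narrow format.

```python
# A toy "general-purpose" rule engine. It can learn and apply arbitrary rules,
# but only over facts written in this exact tuple format -- which is itself a
# narrow, specialized assumption about its input.

def forward_chain(facts, rules):
    """rules: list of (premises, conclusion) pairs; derive all consequences."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = {("human", "socrates")}
rules = [([("human", "socrates")], ("mortal", "socrates"))]
print(forward_chain(facts, rules))
# {('human', 'socrates'), ('mortal', 'socrates')}
```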

There are no general-function artifacts, organs, or circuits in the brain, because the concept makes no sense. If someone told you to manufacture a tool to ”do useful things,” or write a subroutine to ”do something useful with information,” you would have to narrow down the problem considerably before you could get started. In the same way, natural selection can't build brains that ”learn stuff and compute useful information”; it has to get considerably more specific.

Having established that the brain is likely composed of a number of modules, let's discuss a related issue: that any specialized computational mechanism – any module – may or may not be connected up to any other module.

Going back to Braitenberg's Vehicles, suppose a heat sensor tells the Vehicle to drive backwards, while a light sensor tells it to drive forwards. You could solve the issue by letting the sensors affect the wheels by a varying amount, depending on how close to something the Vehicle was. If the heat sensor said ”speed 2 backwards” and the light sensor said ”speed 5 forwards”, the Vehicle would go forward with speed 3 (five minus two). Alternatively, you could make a connection between the two sensors, so that whenever the light sensor was active, it would temporarily shut down the heat sensor. But then whenever you added a new sensor, you'd have to add connections to all the already existing ones, which would quickly get out of hand. Clearly, for complicated organisms, modules should only be directly connected if there's a clear need for it. For biological organisms, if there isn't a clear selection pressure to build a connection, then we shouldn't expect one to exist.
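The two wiring strategies can be sketched like so (the -2/+5 numbers echo the example above; the code itself is only an illustrative sketch, not anything from the book):

```python
# Two ways to resolve a conflict between modules.

def summed_control(votes):
    """Every module votes for a signed speed; the wheels get the sum."""
    return sum(votes.values())

def inhibitory_control(votes, inhibitions):
    """Dedicated wiring: an active module can shut another module down."""
    silenced = {loser for winner, loser in inhibitions if votes.get(winner)}
    return sum(speed for name, speed in votes.items() if name not in silenced)

votes = {"heat_sensor": -2, "light_sensor": +5}
print(summed_control(votes))                                         # 3: net forward
print(inhibitory_control(votes, [("light_sensor", "heat_sensor")]))  # 5: heat ignored

# The catch with dedicated inhibitory wiring: with n modules you may need on the
# order of n*(n-1) pairwise connections, which quickly gets out of hand.
```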

And not every module in humans seems to be connected with all the others, either. Yvain just recently gave us a list of many failures of introspection, one of which is discussed in the book: people shown four identical pairs of panty hose consistently chose the one all the way to the right. Asked why they chose that one in particular, they gave explanations such as the color or texture of the panty hose, even though they were all identical.

Split-brain patients, whose cerebral hemispheres have been surgically disconnected, show the same pattern even more starkly: the verbal hemisphere confidently explains actions that were actually initiated by the other, disconnected hemisphere. The claim is that the unnatural separation in split-brain patients is exactly analogous to natural separations in normal brains. The modules explaining the decision have little or no access to the modules that generated the decision.

More fundamentally, if the brain consists of a large number of specialized modules, then information in any one of them might or might not be transmitted to any other module. This crucial insight is the origin of the claim that your brain can represent mutually inconsistent things at the same time. As long as information is ”walled off”, many, many contradictions can be maintained within one head.
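As a crude sketch of what ”walled off” means computationally (my own framing, not Kurzban's), imagine each module keeping a private store that no other module can read; nothing then forces their contents to agree:

```python
# Each module keeps a private store, so one head can hold contradictory
# representations without anything ever forcing them to be reconciled.

class Module:
    def __init__(self, name):
        self.name = name
        self._beliefs = {}                 # private; no other module reads this

    def learn(self, claim, value):
        self._beliefs[claim] = value

    def report(self, claim):
        return self._beliefs.get(claim, "no opinion")

chooser = Module("decision maker")
explainer = Module("press secretary")

chooser.learn("why the rightmost panty hose", "position bias")
explainer.learn("why the rightmost panty hose", "nicer texture")

print(chooser.report("why the rightmost panty hose"))    # position bias
print(explainer.report("why the rightmost panty hose"))  # nicer texture
```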

Chapter 3: Who is ”I”? ”Cranium Command” is a former attraction at Walt Disney World. The premise is that inside each human brain is a command center, led by a Cranium Commando. In the attraction, you take the role of Buzzy, a Cranium Commando in the head of Bobby, a twelve-year-old boy. Buzzy is surrounded by large screens and readout displays. He gets information from various parts of the brain and different organs, represented by various characters. Buzzy sees and hears what Bobby sees and hears, as well as getting reports from all of Bobby's organs. In response, Buzzy gives various commands, and scripts the words that Bobby will speak.

Cranium Command does get some things right, in that it divides the brain into different functional parts. But this is obviously not how real brains work. For one, if they worked this way, it'd mean there was another tiny commando inside Buzzy's brain, and another inside that one, and so on. A part of a brain can't be a whole brain.

Buzzy is reminiscent of what Daniel Dennett calls the Cartesian Theater. It's the intuition that there's someone – a ”me” – inside the brain, watching what the eyes see and hearing what the ears hear. Although many people understand on one level that this is false, the intuition of a special observer keeps reasserting itself in various guises. As the philosopher Jerry Fodor writes: ”If... there is a community of computers living in my head, there had also better be somebody who is in charge; and, by God, it had better be me.”

One intuition says that it is the conscious modules that are ”us”. The interpretations of the work of Benjamin Libet provide a good example of this. Libet measured the brain activity of his test subjects, and told them to perform a simple wrist movement at a moment of their choosing. Libet found that brain activity preceded the subjects' reports of their wish to move their wrist. These results, and their later replications, got a lot of publicity. Libet wrote, ”in the traditional view of conscious will and free will, one would expect conscious will to appear before, or at the onset, of [brain activity]”. A 2008 headline in Wired, discussing a study similar to Libet's, read: ”Brain Scanners Can See Your Decisions Before You Make Them.”

Now one might ask – why is this surprising? Consider the act of reading. While you read these words, several processes take place before the content of the text reaches your conscious awareness.

For example, you don't consciously know how you identify the letters on the page; this job is done by ”low-level” modules, and you don't have any experience of how they work. You can think of vision as a modular cascade, with many different systems interacting with one another, building up the percept that is experienced. We have awareness of only the last step in this complex process. Most of the modules in vision are nonconscious, giving rise, eventually, to the conscious experience of seeing.

So, when you're going to move your hand, there are a number of modules involved, and some module has to make the initial decision in this cascade. It seems to me that there are really only two possibilities. One possibility is that the very first computation in the very first module that starts the string is one of the operations that's conscious. In this case, the conscious experience of the decision and the brain activity will be at the same time. The only other possibility is that in the long string of operations that occur, from the initiation of the decision to move the wrist to the eventual movement of the wrist, some operation other than the very first one is associated with consciousness.
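To see why this makes the timing result unsurprising, here is a deliberately crude toy model (the stages and delays are invented by me; this is not Libet's data or method): unless the stage associated with conscious report happens to be the very first one in the cascade, the report necessarily lags the first measurable activity.

```python
# A toy cascade of modules. The stage names and delays are made up purely
# to illustrate why the reported decision lags the onset of activity.

CASCADE = ["initiate_decision", "plan_movement", "conscious_awareness", "move_wrist"]
STEP_MS = 150   # invented per-stage delay

def onset_and_report(cascade, reportable="conscious_awareness"):
    times = {stage: i * STEP_MS for i, stage in enumerate(cascade)}
    return times[cascade[0]], times[reportable]

onset, report = onset_and_report(CASCADE)
print(onset, report)   # 0 300 -- the report lags the onset unless it comes first
```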

Libet says that in ”the traditional view of conscious will”, conscious will would appear at the onset or before brain activity. But "before" is impossible. The module that's making the decision to move the wrist is a part of the brain, and it has to have some physical existence. There's just no way that the conscious decision could come before the brain activity.

Neither should it be surprising that our conscious decision comes after the initial brain activity. It would, in principle, be possible that the very first little module that initiated the decision-making process would be one of the few modules associated with conscious awareness. But if conscious modules are just one type of module among many, then there is nothing particularly surprising in the finding that a non-conscious module is the one initiating the process. Neither, for that matter, is it surprising that the first module to initiate the flick of the wrist doesn't happen to be one of the ones associated with vision, or with regulating our heartbeat. Why should it be?

So there are many modules in your brain, some of them conscious, some of them not. Many of the nonconscious ones are very important, processing information about the sensory world, making decisions about action, and so on.

If that's right, it seems funny to refer to any particular module or set of modules as more ”you” than any other set. Modules have functions, and they do their jobs, and they interact with other modules in your head. There's no Buzzy in there, no little brain running the show, just different bits with different roles to play.

What I take from this – and I know that not everyone will agree – is that talking about the ”self” is problematic. Which bits, which modules, get to be called ”me?” Why some but not others? Should we take the conscious ones to be special in some way? If so, why? [...]

There's no doubt that parts of your brain cause your muscles to move, including the very important muscles that push air out of your lungs past your vocal cords, lips, and tongue to make the noises that we call language. Some part of the brain does that. Sure.

But let's be clear. Whatever is doing that is some part of your brain, and it seems reasonable to ask if there's anything special about it. Those modules, the ones that make noises with your lungs, might be ”in charge” in some sense, but, then again, maybe they're not. It's easy to get stuck on the notion that we should think about these conscious systems as being special in some way. In the end, if it's true that your brain consists of many, many little modules with various functions, and if only a small number of them are conscious, then there might not be any particular reason to consider some of them to be ”you” or ”really you” or your ”self” or maybe anything else particularly special.

27 comments

Not having read the book from which this mini-sequence stems, I raise here three points hoping they won't overlap with some future post.

The first one pertains to this quote:

As long as information is ”walled off”, many, many contradictions can be maintained within one head.

Strictly, this is not true. That is, having separate modules for different pieces of information is surely a sufficient condition for the brain to be able to hold many contradictory pieces of information, but it's not necessary: a trivial counter-example is a database holding different statements about some fact in different rows. A more pointed proof is callosotomy, which shows that two modules continue to exist even when there are no longer any connections between them. However, the presence of contradictory information is by itself evidence of modularity only under the unlikely assumption that every module tries to achieve internal consistency.

The second one regards the connectivity of the whole "brain graph" (if the modules are mapped to vertices and accessibility relationships to edges): while complete connectedness seems highly unlikely, it is appealing to think of the brain as a strongly connected graph, i.e. a graph in which there's a path from every node to every other node.
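(A quick sketch of that notion, with invented module names and edges: a directed "brain graph" is strongly connected when every module can reach every other module along some path.)

```python
# Check strong connectedness of a toy directed "brain graph".
# Module names and edges are invented for illustration.

def reachable(graph, start):
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

def strongly_connected(graph):
    nodes = set(graph) | {v for targets in graph.values() for v in targets}
    return all(reachable(graph, n) == nodes for n in nodes)

brain = {"vision": ["language"], "language": ["motor"], "motor": ["vision"]}
print(strongly_connected(brain))   # True: a path exists between every pair
```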

Third, we must not forget that these modules are a 'software' or 'cognitive' reduction of the brain. Evidence from neurofeedback, or simply the possibility of controlling blinking frequency, points to the creation/destruction of non-innate connections between separate modules. It would be fun if you could learn, through neurofeedback, to send false anticipatory brain activity for the wrist movement.

So there are many modules in your brain, some of them conscious, some of them not.

This to me seems plain wrong. I would say that none of them is conscious, otherwise you have just moved and fractioned the problem. But maybe I'm misinterpreting here, and you really meant "some of them produce consciousness and others don't".

But maybe I'm misinterpreting here, and you really meant "some of them produce consciousness and others don't".

Yes, that's what I meant.

So there are many modules in your brain, some of them conscious, some of them not.

This to me seems plain wrong. I would say that none of them is conscious, otherwise you have just moved and fractioned the problem. But maybe I'm misinterpreting here, and you really meant "some of them produce consciousness and others don't".

I assume by "conscious modules" Kaj Sotala means those modules whose activity one is conscious of.

I assume by "conscious modules" Kaj Sotala means those modules whose activity one is conscious of.

This formulation seems problematic also. If the brain is really so many agents (and I think there's no reason to think the contrary), there's no "one" who can be "conscious" of the activity of some module, unless consciousness is explained with "it's when this very special module accesses the activities of other modules". But then you have to explain why that special agent has consciousness and why others don't. You have just moved the problem. If consciousness has any hope of being explained through modularity, it (in my opinion) ought to be by deconstructing it into the shared activity of such and such modules, none of them being effectively describable as conscious.

If problematic, it points to a problem with the theory, rather than the formulation. Presuming wildly that your mental experience is similar to mine, then there is a very distinct notion of being conscious of some activities (performed by modules) and not others. I am, for example, quite conscious of writing this letter, but nearly oblivious to the beating of my heart. There is distinctly "one" that is "conscious" of this activity. Letting that go temporarily in order to better investigate some cognitive theory may be productive, but eventually you have to come back to it. Trying to explain it away via theory is like trying to find a theory of gravity that states dropped apples don't really hit the ground. It may be wonderfully constructed, but doesn't describe the world we exist in.

"Although many people understand on one level that this is false, the intuition of a special observer keeps reasserting itself in various guises. As the philosopher Jerry Fodor writes: ”If... there is a community of computers living in my head, there had also better be somebody who is in charge; and, by God, it had better be me.” "

I have built a business. It has customers, employees, office space, tax returns, vendors.... It has a separate name and existence from me. It operates according to purposes, opportunities and choices that are compatible with, but distinct from, mine. And, if I accomplish my goal, it will operate even more independently in the future than it does now, and may at some point exist entirely separate from me. But, I'm not tempted to attribute an "I" to it.

When a system achieves sufficient complexity, we have a tendency to reify it. I don't know what that bias is called.

Thanks for the article.

I have produced an offspring. It has friends, teachers, its own room, allowance, personal tastes.... It has a separate name and existence from me. It operates according to purposes, opportunities and choices that are compatible with, but distinct from, mine. And, if I accomplish my goal, it will operate even more independently in the future than it does now, and may at some point exist entirely separate from me. But, I'm not tempted to attribute an "I" to it.

When a system achieves sufficient complexity, we have a tendency to reify it. That bias might be called "parenthood". But in the spirit of not giving preference to biological human people, why would you consider your business to be less alive than a turkey? I agree that there's a point where we start to call it a conscious agent, and if left to itself, your business would act irrationally, and possibly die, but it -would- act. This just means that you are not finished programming yet. If there's no possibility of ever calling your business conscious, given mind-bogglingly clever planning, then we can all give up and go back to bed now. My point is that I think we have a tendency to attribute agency to things because it is useful to treat as if they had agency, even if they really don't. If you can predict accurately using the wrong model, you may be wrong but at least you're predicting accurately.

Does your offspring know that you refer to it as "it"?

Du-dun, tsh! That was in reference to my general, hypothetical offspring. I usually use personal pronouns for my own three. Although they would probably get the humor if I did--the replication/improvement is going smoothly. They are already used to my speeches about psychology, artificial intelligence, evolution, and quantum physics, even though they are eight, six, and two. They frequently accuse me of being (or possibly actually believe that I am) an alien, a robot, or both. I take that as a compliment, seeing as I am the one who planted that idea.

When a system achieves sufficient complexity, we have a tendency to reify it. I don't know what that bias is called.

Me neither, but the fundamental attribution bias is (I think) related to it.

That is, I suspect that the same mechanisms that leave me predisposed to treat an observed event as evidence of a hypothesized attribute of an entity (even when it's much more strongly evidence of a distributed function of a system) also leave me predisposed to treat the event as evidence of a hypothesized entity.

Labels aside, it's not a surprising property: when it came to identifying entities in our ancestral environment, I suspect that false negatives exerted more negative selection pressure than false positives.

I think the tendency to treat events as evidence of entities more than is warranted is called "agency bias," or "delusions of agency" when it's unusually strong.

I just realized something ironic: from a certain perspective, the cognitive module that most resembles a naive view of consciousness might be one that often does not reside in the brain at all: the To Do list.

People defending the view that the mind has general rather than specialized devices tend to focus on things like learning, and say things like ”The immune system ... contains a broad learning system ... An alternative would be to have specialized immune modules for different diseases...” But this confuses specialization for things with specialization for function. Even though the immune system is capable of learning, it is still specialized for defending the body against harmful pathogens.

In addition, the immune system does have specialized modules for different diseases: antibodies.

In addition, the immune system does have specialized modules for different diseases: antibodies.

But it's still general in the sense that it can build antibodies it has never built before, against pathogens it has never before encountered. So in some sense it's a general module for building specialized modules.

the principle of comparative advantage says it's better for a country to specialize in the products it's best at producing.

This is not quite right, just as saying "evolution changes organisms so that they act to preserve the species" is not quite right.

The reason it is called comparative advantage rather than absolute advantage is that a country could be better at everything and still benefit from trade. The classic example is a high-powered executive who is better at typing than his* secretary. It may still be better for him to employ a secretary than to do his own typing. He could lose more by spending time typing letters - time that could be better spent making lucrative deals - than the cost of the secretary.

*An old example, reproduced in original form.

Then the goal of LessWrong (in this framework) seems to be to make the brain act as if it contained a command-and-control center which corrects for errors caused by other parts of the brain. And the list of errors includes the idea that the brain contains a command-and-control center. Sophisticated.

Hm, yes. The brain is like an egalitarian cooperative, some of whose members are literate. We want the cooperative to write down goals and policies in a guiding document (or several documents, in several languages), which the literate members can consult and use to guide their behavior and the behavior of their peers.

Careful, you're merging two different metaphors from the article. As you point out, the brain does not have a central module that is in control of all the others. But the brain does have a large collection of semi-distinct modules, many of which appear to have significant control over various other modules.

So yeah, to become more rational, you're adjusting some parts of your brain's modules to compensate for and/or override some of the lousy data coming out of other modules. But that doesn't make the adjusted modules take command over the non-adjusted ones; a sense of irrational fear of spiders might come from your hindbrain and be adjusted by your forebrain, but that doesn't mean that your forebrain is also taking over or overriding the hindbrain's job of noticing when you've stubbed your toe.

This post is out of order with the previous one?

The previous post was promoted, which apparently changes the "posted" date to the time when it got promoted.

Regarding:

Libet says that in ”the traditional view of conscious will”, conscious will would appear at the onset or before brain activity. But "before" is impossible. The module that's making the decision to move the wrist is a part of the brain, and it has to have some physical existence. There's just no way that the conscious decision could come before the brain activity.

What if the accepted intuition regarding the relationships of our minds and bodies is wrong? What if our minds act through our brains to control our bodies, but are really independent of any particular physical body?

If it is true that there are many alternate-reality universes, then perhaps there are multiple instances of our specific, personal DNA sequences; the physical organisms encoded by our DNA sequences in different universes may be collectivized by groups of minds sharing similar senses of identity, including similar physical traits, distributed across a spectrum of alternate realities.

To our own ways of seeing things, we each have many bodies sharing many minds distributed across many universes. It appears (to ourselves) as if our minds resemble energetic fields 'attuned' to our specific physical organisms, but capable of read/write/command operations across a spectrum of other organisms, such that those organisms most resembling 'our own' organisms are the easiest for us to operate.

Then, as we see it, it may be possible to have the will to do something before we can locate a brain/body able to act in response to our will.

Enjoy!

Here's something that might be special about the brain modules that have access to the vocal cords: they have a voice. I, for one, consider that pretty special.

Unless you're using voice-recognition technology, why should the module in charge of your fingers be so worked up about the importance of the voice?

They are pretty much the same modules - the linguistic modules and their close associates, including much that is associated with rationality.