The origins of this article are in my partial transcript of the live June 2011 debate between Robin Hanson and Eliezer Yudkowsky. While I still feel like I don't entirely understand Eliezer's arguments, a few of his comments about neuroscience made me strongly go, "no, that's not right."
Furthermore, I've noticed that while LessWrong in general seems to be very strong on the psychological or "black box" side of cognitive science, there isn't as much discussion of neuroscience here. This is somewhat understandable. Our current understanding of neuroscience is frustratingly incomplete, and too much journalism on neuroscience is sensationalistic nonsense. However, I think what we do know is worth knowing. (And part of what makes much neuroscience journalism annoying is that it makes a big deal out of things that are totally unsurprising, given what we already know.)
My qualifications to do this: while my degrees are in philosophy, for a while in undergrad I was a neuroscience major, and I ended up taking quite a bit of neuroscience coursework as a result. This means I can assure you that most of what I say here is standard neuroscience which could be found in an introductory textbook like Nichols, Martin, Wallace, & Fuchs' From Neuron to Brain (one of the textbooks I used as an undergraduate). The only things that might not be totally standard are the conjecture I make about how complex currently-poorly-understood areas of the brain are likely to be, and some of the points I make in criticism of Eliezer at the end (though I believe these are not a very big jump from current textbook neuroscience).
One of the main themes of this article will be specialization within the brain. In particular, we know that the brain is divided into specialized areas at the macro level, and we understand some (though not very much) of the micro-level wiring that supports this specialization. It seems likely that each region of the brain has its own micro-level wiring to support its specialized function, and in some regions the wiring is likely to be quite complex.
1. Specialization of brain regions
One of the best-established facts about the brain is that specific regions handle specific functions. And it isn’t just that in each individual, specific brain regions handle specific functions. It’s also that which regions handle which functions is consistent across individuals. This is an extremely well-established finding, but it’s worth briefly summarizing some of the evidence for it.
One kind of evidence comes from experiments involving direct electrical stimulation of the brain. This cannot ethically be done on humans without a sound medical reason, but it is done with epileptic patients in order to locate the source of their seizures, which is necessary in order to treat epilepsy surgically.
In epileptic patients, stimulating certain regions of the brain (known as the primary sensory areas) causes the patient to report sensations: sights, sounds, feelings, smells, and tastes. Which sensations are caused by stimulating which regions of the brain is consistent across patients. This is the source of the “Penfield homunculus,” a map of brain regions which, when stimulated, result in touch sensations which patients describe as feeling like they come from particular parts of the body. Stimulating one region, for example, might consistently lead to a patient reporting a feeling in his left foot.
Regions of the brain associated with sensations are known as sensory areas or sensory cortex. Other regions of the brain, when stimulated, lead to involuntary muscle movements. Those areas are known as motor areas or motor cortex, and again, which areas correspond to which muscles is consistent across patients. The consistency of the mapping of brain regions across patients is important, because it’s evidence of an innate structure to the brain.
An even more significant kind of evidence comes from studies of patients with brain damage. Brain damage can produce very specific ability losses, and patients with damage to the same areas will typically have similar ability losses. For example, the rearmost part of the human cerebral cortex is the primary visual cortex, and damage to it results in a phenomenon known as cortical blindness. That is to say, the patient is blind in spite of having perfectly good eyes. Their other mental abilities may be unaffected.
That much is not surprising, given what we know from studies involving electrical stimulation, but ability losses from brain damage can be strangely specific. For example, neuroscientists now believe that one function of the temporal lobe is to recognize objects and faces. A key line of evidence for this is that patients with damage to certain parts of the temporal lobe will be unable to identify objects and faces by sight, even though they may be able to describe them in great detail. Here is neurologist Oliver Sacks’ description of an interaction with one such patient, the titular patient in Sacks’ book The Man Who Mistook His Wife For a Hat:
‘What is this?’ I asked, holding up a glove.
‘May I examine it?’ he asked, and, taking it from me, he proceeded to examine it as he had examined the geometrical shapes.
‘A continuous surface,’ he announced at last, ‘infolded on itself. It appears to have’—he hesitated—’five outpouchings, if this is the word.’
‘Yes,’ I said cautiously. ‘You have given me a description. Now tell me what it is.’
‘A container of some sort?’
‘Yes,’ I said, ‘and what would it contain?’
‘It would contain its contents!’ said Dr P., with a laugh. ‘There are many possibilities. It could be a change purse, for example, for coins of five sizes. It could ...’
I interrupted the barmy flow. ‘Does it not look familiar? Do you think it might contain, might fit, a part of your body?’
No light of recognition dawned on his face. (Later, by accident, he got it on, and exclaimed, ‘My God, it’s a glove!’)
The fact that damage to certain parts of the temporal lobe results in an inability to recognize objects contains an extremely important lesson. For most of us, recognizing objects requires no effort or thought as long as we can see the object clearly. Because it’s easy for us, it might be tempting to think it’s an inherently easy task, one that should hardly require any brain matter to perform. Certainly it never occurred to me before I studied neuroscience that object recognition might require a special brain region. But it turns out that tasks that seem easy to us can in fact require such a specialized region.
Another example of this fact comes from two brain regions involved in language, Broca’s area and Wernicke's area. Damage to each area leads to a distinct type of difficulty with language, known as Broca’s aphasia and Wernicke’s aphasia, respectively. Both are strange conditions, and my description of them may not give a full sense of what they are like. Readers might consider searching online for videos of interviews with Broca’s aphasia and Wernicke’s aphasia patients to get a better idea of what the conditions entail.
Broca’s aphasia is a loss of ability to produce language. In one extreme case, one of the original cases studied by Paul Broca, a patient was only able to say the word “tan.” Other patients may have a less limited vocabulary, but still struggle to come up with words for what they want to say. And even when they can come up with individual words, they may be unable to put them into sentences. However, patients with Broca’s aphasia appear to have no difficulty understanding speech, and show awareness of their disability.
Wernicke’s aphasia is even stranger. It is often described as an inability to understand language while still being able to produce language. However, while patients with Wernicke's aphasia may have little difficulty producing complete, grammatically correct sentences, the sentences tend to be nonsensical. And Wernicke’s patients often act as if they are completely unaware of their condition. A former professor of mine once described a Wernicke’s patient as sounding “like a politician,” and from watching a video of an interview with the patient, I agreed: I was impressed by his ability to confidently utter nonsense.
The existence of these two forms of aphasia suggests that Broca’s area and Wernicke’s area have two very important and distinct roles in our ability to produce and understand language. And I find this fact strange to write about. Like object recognition, language comes naturally to us. As I write this, my intuitive feeling is that the work I am doing comes mainly in the ideas, plus making a few subtle stylistic decisions. I know from neuroscience that I would be unable to write this if I had significant damage to either region. Yet I am totally unconscious of the work those regions are doing for me.
2. Complex, specialized wiring within regions
“Wiring” is a hard metaphor to avoid when talking about the brain, but it is also a potentially misleading one. People often talk about “electrical signals” in the brain, but unlike electrical signals in human technology, which involve the movement of electrons through the atoms of a conductor, signals in the human brain involve the movement of ions and small molecules across cell membranes and between cells.
Furthermore, the first thing most people who know a little bit about neuroscience will think of when they hear the word “wiring” is axons and dendrites, the long skinny projections along which signals are transmitted from neuron to neuron. But it isn’t just the layout of axons and dendrites that matters. Ion channels, and the structures that transport neurotransmitters across cell membranes, are also important.
These can vary a lot at the synapse, the place where two neurons touch. For example, synapses vary in strength, that is to say, in the strength of one neuron’s effect on the other. Synapses can also be excitatory (activity in one neuron leads to increased activity in the other) or inhibitory (activity in one leads to decreased activity in the other). And these are just a couple of the ways synapses can vary; the details can be somewhat complicated, and I’ll give one example of that later.
I say all this just to make clear what I mean when I talk about the brain’s “wiring.” By “wiring,” I mean all the features of the physical structures that connect neurons to each other and which are relevant for understanding how the brain works. That includes all the things I’ve mentioned above, and anything I may have omitted. It’s important to have a word for this, because what (admittedly little) we understand about how the brain works, we understand in terms of this wiring.
For example, the nervous system actually first begins processing visual information in the retina (the part of the eye at the back where our light receptors are). This is done by what’s known as the center-surround system: a patch of light receptors, when activated, excites one neuron, but nearby patches of light receptors, when activated, inhibit that same neuron (sometimes, the excitatory roles and inhibitory roles are reversed).
The effect of this is that what the neurons are sensitive to is not light itself, but contrast. They’re contrast detectors. And what allows them to detect contrast isn’t anything magical, it’s just the wiring, the way the neurons are connected together.
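To make that concrete, here is a toy sketch of a center-surround contrast detector in Python. The function name, weights, and numbers are all invented for illustration; real retinal circuitry is, of course, far messier.

```python
# Toy center-surround "contrast detector" (illustrative numbers only).

def center_surround_response(center_light, surround_lights):
    """Excitatory input from the center patch, inhibitory input from the surround."""
    excitation = 1.0 * center_light            # light on the center excites the neuron
    inhibition = 0.25 * sum(surround_lights)   # light on each surround patch inhibits it
    return excitation - inhibition

# Uniform illumination: center and surround cancel out, so the neuron stays quiet.
print(center_surround_response(1.0, [1.0, 1.0, 1.0, 1.0]))   # 0.0

# Light on the center only: a strong response, i.e. contrast has been detected.
print(center_surround_response(1.0, [0.0, 0.0, 0.0, 0.0]))   # 1.0
```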
This technique of getting neurons to serve specific functions based on how they are wired together shows up in more complicated ways in the brain itself. There’s one line of evidence for specialization of brain regions that I saved for this section, because it also tells us about the details of how the brain is wired. That line of evidence is recordings from the brain using electrodes.
For example, during the 1950s David Hubel and Torsten Wiesel did experiments in which they paralyzed the eye muscles of animals, stuck electrodes into the primary visual areas of the animals’ brains, and then showed the animals various images to see which images would cause electrical signals in those areas. It turned out that the main thing that causes electrical signals in the primary visual areas is lines.
In particular, a given cell in the primary visual area will have a particular orientation of line which it responds to. It appears that the way these line-orientation detecting cells work is that they receive input from several contrast detecting cells which, themselves, correspond to regions of the retina that are themselves all in a line. A line in the right position and orientation will activate all of the contrast-detecting cells, which in turn activates the line-orientation detecting cell. A line in the right position but wrong orientation will activate only one or a few contrast-detecting cells, not enough to activate the line-orientation detecting cell.
[If this is unclear, a diagram like the one on Wikipedia may be helpful, though Wikipedia's diagram may not be the best.]
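In the same toy style, an orientation-selective cell can be sketched as a unit that sums the outputs of a few contrast detectors whose retinal patches lie along a single line, and fires only when most of them are active at once. Again, every number here is made up for illustration.

```python
# Toy orientation-selective cell built on top of contrast detectors.

def line_orientation_cell(contrast_outputs, threshold=2.5):
    """Fires only if most of its input contrast detectors are active at the same time."""
    return sum(contrast_outputs) > threshold

# A line at the right position and orientation activates all three contrast detectors...
print(line_orientation_cell([1.0, 1.0, 1.0]))   # True: the cell fires

# ...but a line at the wrong orientation crosses only one of them.
print(line_orientation_cell([1.0, 0.0, 0.0]))   # False: not enough input to fire
```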
Another example of a trick the brain does with neural wiring is locating sounds using what are called “interaural time differences.” The idea is this: there is a group of neurons that receives input from both ears and responds specifically to simultaneous input from the two. However, the axons running from the ears to the neurons in this group vary in length, and therefore in how long it takes a signal from each ear to reach them.
This means that which cells in this group respond to a sound depends on whether the sound reaches the ears at the same time or at different times, and (if at different times) on how big the time difference is. If there’s no difference, the sound came from directly ahead or behind (or above or below). A big difference, with the sound reaching the left ear first, indicates the sound came from the left. A big difference, with the sound reaching the right ear first, indicates the sound came from the right. Small differences indicate something in between.
[A diagram might be helpful here too, but I'm not sure where to find a good one online.]
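In lieu of a diagram, here is a toy sketch of the delay-line idea (often called the Jeffress model). The delays are arbitrary "time steps"; in a real brain the relevant differences are on the order of microseconds, and the circuitry is more elaborate than this.

```python
# Toy sketch of coincidence detection with delay lines (all numbers invented).

# Each coincidence-detector cell receives the left-ear signal delayed by one amount
# and the right-ear signal delayed by another, depending on the lengths of the axons.
cells = [
    {"left_delay": 0, "right_delay": 2},   # coincides when sound hits the right ear 2 steps early
    {"left_delay": 1, "right_delay": 1},   # coincides when sound hits both ears at once
    {"left_delay": 2, "right_delay": 0},   # coincides when sound hits the left ear 2 steps early
]

def responding_cell(left_arrival_time, right_arrival_time):
    """Return the index of the cell whose delays make the two signals arrive together."""
    for i, cell in enumerate(cells):
        if left_arrival_time + cell["left_delay"] == right_arrival_time + cell["right_delay"]:
            return i
    return None

print(responding_cell(0, 0))   # 1: simultaneous arrival, sound from straight ahead (or behind)
print(responding_cell(0, 2))   # 2: left ear first, sound from the left
```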
I’ve made a point to mention these bits of wiring because they’re cases where neuroscientists have a clear understanding of how it is that a particular neuron is able to fire only in response to a particular kind of stimulus. Unfortunately, cases like this are relatively rare. In other cases, however, we at least know that particular neurons respond specifically to more complex stimuli, even though we don’t know why. In rats, for example, there are cells in the hippocampus that activate only when the rat is in a particular location; apparently their purpose is to keep track of the rat’s location.
The visual system gives us some especially interesting cases of this sort. We know that the primary visual cortex sends information to other parts of the brain in two broadly-defined pathways, the dorsal pathway and the ventral pathway. The dorsal pathway appears to be responsible for processing information related to position and movement. Some cells in the dorsal pathway, for example, fire only when an animal sees an object moving in a particular direction.
The most interesting cells of this sort that neuroscientists have found so far, though, are probably some of the cells in the medial temporal lobe, which is part of the ventral pathway. In one study (Quiroga et al. 2005), researchers took epileptic patients who had had electrodes implanted in their brains in order to locate the source of their epilepsy and showed them pictures of various people, objects, and landmarks. What the researchers found is that the neuron or small group of neurons a given electrode was reading from typically only responded to pictures of one person or thing.
Furthermore, a particular electrode often got readings from very different pictures of a single person or thing, but not from similar pictures of different people or things. In one famous example, the researchers found a neuron that they could only get to respond to pictures of the actress Halle Berry or the text “Halle Berry.” This included drawings of the actress, as well as pictures of her dressed as Catwoman (a role she had recently played at the time of the study), but not other drawings or other pictures of Catwoman.
What’s going on here? Based on what we know about the wiring of contrast detectors and orientation detectors, the following conjecture seems highly likely: if we were to map out the brain completely and then follow the path along which visual information is transmitted, we would find that neurons gradually come to be wired together in more and more complex ways, allowing them to become specific to more and more complex features of visual images. This, I think, is an extremely important inference.
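To show the shape of that conjecture, here is a deliberately cartoonish sketch of detectors feeding into detectors. Every function, threshold, and input in it is invented; it is not a model of real cortical wiring, just an illustration of how stacking simple detectors can yield responses to progressively more complex features.

```python
# Cartoon hierarchy: each stage responds to more complex features than the last.

def contrast_stage(pixels):
    # Stage 1: respond to local differences in brightness (as in the retina).
    return [abs(a - b) for a, b in zip(pixels, pixels[1:])]

def line_stage(contrasts, threshold=2.0):
    # Stage 2: respond when several contrast detectors along a line are active.
    return sum(contrasts) > threshold

def object_stage(line_cells_active, threshold=3):
    # Stage N (pure invention): respond to a particular combination of higher-level
    # features. A "Halle Berry neuron" would sit many more stages up, with its
    # inputs shaped by learning rather than by fixed wiring.
    return sum(line_cells_active) >= threshold

pixels = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
line_cells = [line_stage(contrast_stage(pixels)) for _ in range(3)]
print(object_stage(line_cells))   # True for this toy input
```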
We know that experience can impact the way the brain is wired. In fact, some aspects of the brain’s wiring seem to have evolved specifically to be able to change in response to experience (the main wiring of that sort we know about is called Hebbian synapses, but the details aren’t important here). And it is actually somewhat difficult to draw a clear line between features of the brain that are innate and features of the brain that are the product of learning, because some fairly basic features of the brain depend on outside cues in order to develop.
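(For the curious, the core Hebbian idea, often summarized as "cells that fire together wire together," can be sketched in a few lines of toy code. The learning rate and activity values below are arbitrary, and this is not meant as a model of any particular synapse.)

```python
# Toy Hebbian plasticity: a synapse strengthens when both neurons are active together.

weight = 0.1           # strength of the synapse from neuron A onto neuron B
learning_rate = 0.5

for trial in range(3):
    pre_activity = 1.0     # neuron A fires...
    post_activity = 1.0    # ...and neuron B fires at the same time,
    weight += learning_rate * pre_activity * post_activity   # so the synapse gets stronger.

print(weight)   # 1.6: repeated co-activation has strengthened the connection
```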
Here, though, I’ll use the word “innate” to refer to features of the brain that will develop under the overwhelming majority of the conditions that animals of a given species actually develop under. Under that definition, a “Halle Berry neuron” is highly unlikely to be innate, because there isn’t enough room in the brain to have a neuron specific to every person someone might possibly learn about. Such neural wiring is almost certainly the result of learning.
But importantly, the underlying structure that makes such learning possible is probably at least somewhat complicated, and also specialized for that particular kind of learning. Because such person-specific and object-specific neurons are not found in all regions of the brain, there must be something special about the medial temporal lobe that allows this learning to happen there.
Similar reasoning applies to regions of the brain that we know even less about. For example, it seems likely that Broca’s area and Wernicke’s area both contain specialized wiring for handling language, though we have little idea how that wiring might perform its function. Given that humans seem to have a considerable innate knack for learning language (Pinker 2007), it again seems likely that the wiring is somewhat complicated.
3. On some problematic comments by Eliezer
I agree with the Singularity Institute’s positions on a great deal. After all, I recently made my first donation to the Singularity Institute. But here, I want to point out some problematic neuroscience-related comments from Eliezer’s debate with Robin Hanson:
If you actually look at the genome, we’ve got about 30,000 genes in here. Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it. And the brain is simply not a very complicated artifact by comparison to, say, Windows Vista. Now the complexity that it does have it uses a lot more effectively than Windows Vista does. It probably contains a number of design principles which Microsoft knows not. (And I’m not saying it’s that small because it’s 750 megabytes, I’m saying it’s gotta be that small because at least 90% of the 750 megabytes is junk and there’s only 30,000 genes for the whole body, never mind the brain.)
That something that simple can be this powerful, and this hard to understand, is a shock. But if you look at the brain design, it’s got 52 major areas on each side of the cerebral cortex, distinguishable by the local pattern, the tiles and so on, it just doesn’t really look all that complicated. It’s very powerful. It’s very mysterious. What we can say about it is that it probably involves 1,000 different deep, major, mathematical insights into the nature of intelligence that we need to comprehend before we can build it.
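For context, the 750 megabyte figure appears to come from simple arithmetic on the size of the genome. Here is my rough reconstruction of it (my numbers, not Eliezer's):

```python
# Back-of-the-envelope check of the quoted figures (my reconstruction, not Eliezer's).

base_pairs = 3.2e9                 # approximate size of the human genome
bits = base_pairs * 2              # each base pair carries at most 2 bits
megabytes = bits / 8 / 1e6
print(round(megabytes))            # ~800 MB, in the ballpark of the quoted 750 MB

# On the assumption that at least 90% of it is junk, the functional part would be:
print(round(megabytes * 0.1))      # ~80 MB, for the whole body's design
```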
Though this is not explicit, there appears to be an inference here: in order for something so simple to be so powerful, it must incorporate many deep insights into intelligence, even though we don’t know what most of them are. There are several problems with this argument.
References:
Pinker, S. 2007. The Language Instinct. Harper Perennial Modern Classics.
Quiroga, R. Q. et al. 2005. Invariant visual representation by single neurons in the human brain. Nature, 435, 1102-1107.