The origins of this article are in my partial transcript of the live June 2011 debate between Robin Hanson and Eliezer Yudkowsky. While I still feel like I don't entirely understand his arguments, a few of his comments about neuroscience made me strongly go, "no, that's not right."
The "his" in this paragraph is very ambiguous. Which of the two does the lack of neuroscience knowledge apply to?
[…] they set about looking for ways to make an operating system do most of what the current version of Windows does while being more [compressed].
We have an actual example of this here (also, the last progress report). The punchline is "Personal computing in one book" (400 pages × 50 lines per page means 20K lines of code). It is meant to do basically the work of Windows + Office + IE + Outlook. And the compilers are included in those 20 thousand lines.
They end up doing a lot of things that are only applicable to their situation, and couldn’t be used to make a much more powerful operating system. For example, they might look for ways to recycle pieces of code, and make particular pieces of code do as many different things in the program as possible.
Well, no.
They do look for ways to maximize code recycling. However, the result is not less power. On the contrary, they achieve unmatched flexibility. Two examples:
Upvoted.
I agree with Singularity Institute positions on a great deal. After all, I recently made my first donation to the Singularity Institute.
Be careful about the direction of causation here!
I agree with Singularity Institute positions on a great deal. After all, I recently made my first donation to the Singularity Institute.
Be careful about the direction of causation here!
The direction of causality: Forward in time!
Sure, commitment effects and identity considerations are going to tend to increase agreement. However this can't weaken the extent to which a donation having been made is evidence of agreement at that time. Agreement may have increased since then but they clearly had some other reason at the time. (I don't think acausal trade with their future post-commitment influenced selves really comes into it!)
Another reason given for why human intelligence must be simple, is that we've only had time for a few complex evolutionary adaptations since we split off from other primates. Chimps clearly aren't particularly adapted to, say, doing math, so our ability to do math must come from a combination of some kind of General Intelligence, which can be applied to all kinds of tasks (what Eliezer called "the master trick"), and maybe a few specific adaptations.
But it recently occurred to me that, even if the human brain hasn't had time to gain a lot of complex functions since splitting from chimps, it's entirely possible that the chimp brain has lost a lot of complex functions. My guess would be that our ancestors started becoming anomalously intelligent a long time ago, and only the human line has continued to get smarter, while all our relatives have "reverted to the mean", so to speak.
Could anyone with more knowledge on the subject tell me whether this is reasonable? Even if it's pure conjecture, it seems like the mere possibility would nullify that particular argument for human intelligence being simple / general.
Another video from the related links that's definitely worth watching. Some of the results in this video have been mentioned before on LW, but seeing them in action is incredible.
If you actually look at the genome, we’ve got about 30,000 genes in here. Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it.
As a life sciences person, I'm surprised you didn't address this. "Junk" DNA is almost certainly not junk. It exercises very fine-tuned control over the expression of genes. Enhancers can be located hundreds of thousands of bases away from a gene and still affect expression of that gene. Its effect surely isn't anywhere near that of the TATA box, but these sections of "junk DNA" surely affect things. We're only beginning to understand microRNAs and all the ways in which they act, but I would be astonished if humans (and most animals, since we conserve a lot) had been lugging around 90% of their genome for no apparent reason. That claim is so staggering as to require a great deal of evidence to start with, and I would argue most of the evidence points the other way.
I would be astonished if humans had been lugging around 90% of their genome for no apparent reason.
Perhaps you will be astonished by parasitic DNA. It is pretty astonishing.
EDIT: ah, right: the fun video that introduced me to this.
Actually, we can guess that a piece of DNA is nonfunctional if it seems to have undergone neutral evolution (roughly, accumulation of functionally equivalent mutations) at a rate which implies that it was not subject to any noticeable selection pressure over evolutionary time. Leaving aside transposons, repetition, and so on, that's a main part of how we know that large amounts of junk DNA really are junk.
There are pieces of DNA that preserve function, but undergo neutral evolution. A recent Nature article found a non-protein-coding piece of DNA that is necessary for development (by being transcribed into RNA), that had undergone close to neutral evolution from zebrafish to human, but maintained functional conservation. That is, taking the human transcript and inserting it into zebrafish spares it from death, indicating that (almost) completely different DNA performs the same function, and that by simply looking for sequence conservation (non-neutral evolution) we probably can't detect it.
I read that and thought: how much is that?
ATP may release roughly 14 kcal/mol; the actual amount varies with local conditions (temperature, pressure) and chemical concentrations. An adult human body contains very roughly 50 trillion cells. However, different cells divide at very different rates. I tried to find data estimating total divisions in the body; this Wikipedia article says 10,000 trillion divisions per human lifetime. (Let one lifetime = 80 years ≈ 2.52e9 seconds.)
Now, what is a trillion? I shall assume the short scale, trillion = 1e12, and weep at the state of popular scientific literature that counts in "thousands of trillions" instead of actual numbers. This means 10,000 × 1e12 = 1e16 cell divisions per lifetime.
We then get 14e3 / 6.022e23 (Avogadro's constant) = 2.325e-20 calories per extra base pair replication; and 1e16 / 2.52e9 = 3.968e6 cell divisions per second on average. So an extra base pair in all of your cells costs 9.23e-14 calories per second. Note those are actual calories, not the kilocalories sometimes called "calories" marked on food. Over your lifetime an extra base pair would cost 2.325e-4 calories. That's about 0.00000023 kilocalories in ...
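In Python, the same arithmetic as a sanity check (same assumptions as above: one ATP per extra base pair per division, 1e16 divisions per ~80-year lifetime; the constants are the ones used in this comment, not authoritative values):

```python
# Sanity check of the arithmetic above (one ATP per extra base pair per division).
AVOGADRO = 6.022e23                            # molecules per mole
ATP_CAL_PER_MOL = 14e3                         # ~14 kcal/mol = 14,000 cal/mol, as assumed above
LIFETIME_SECONDS = 80 * 365.25 * 24 * 3600     # ~2.52e9 s
DIVISIONS_PER_LIFETIME = 1e16                  # 10,000 trillion cell divisions

cal_per_division = ATP_CAL_PER_MOL / AVOGADRO                     # ~2.3e-20 cal per extra base pair
divisions_per_second = DIVISIONS_PER_LIFETIME / LIFETIME_SECONDS  # ~4.0e6 divisions/s

print(f"{cal_per_division:.3e} cal per division")                           # 2.325e-20
print(f"{cal_per_division * divisions_per_second:.3e} cal/s")               # ~9.2e-14
print(f"{cal_per_division * DIVISIONS_PER_LIFETIME:.3e} cal per lifetime")  # ~2.3e-04
print(f"{cal_per_division * DIVISIONS_PER_LIFETIME / 1000:.3e} kcal per lifetime")  # ~2.3e-07
```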
I enjoyed this post and would appreciate more like it, in particular, more like parts 2 and 1.
The argument from the small size of the genome is more plausible, especially if Eliezer is thinking in terms of Kolmogorov complexity, which is based on the size of the smallest computer program needed to build something. However, it does not follow that if the genome is not very complex, the brain must not be very complex, because the brain may be built not just based on the genome, but also based on information from the outside environment.
I think the question in the original debate could be formulated as something like: how large a solution, in terms of the amount of program code we would need to write, do we need to find in order to implement a self-improving artificial intelligence that, given sensory input and opportunities to interact with its environment comparable to those of a human growing up, would grow up to human-level cognition?
I don't see how the other sources of information needed for brain development are a counterargument here. Once you have a machine learning system that will bootstrap itself to sentience given a few years of video feed, you've done pretty well indeed.
I don't also see how the compressibility argument is supposed to work without further qualifiers....
You seem to have forgotten to mention, with due emphasis, that the childhood brain damage gets compensated for, with entirely different regions of the brain taking over functions normally done by other regions. That may prompt the reader to draw the fallacious conclusion that the functionality of the brain is to a larger extent stored genetically than re-generated during early learning, with the same regions being used - by the learner - for the same tasks due to their proximity to inputs and outputs and the long-range wiring, but with little other s...
it turns out that tasks that seem easy to us can in fact require such a specialized region
The causation is reversed here - I'm sure you know this but I think it's worth pointing out explicitly.
It's because we have a specialized region for some tasks that they seem so easy to us. (Things seem hard when we need to concentrate on them, and when we don't know how to do them.) And we have a specialized region for these tasks because they need it: we can't do them well using our "general-purpose thinking" even if we do concentrate. (People with Broca's or Wernicke's aphasia can't compensate well using conscious thought.)
A big difference, with the sound reaching the left ear first, indicates the sound came from the left. A big difference, with the sound reaching the right ear first, indicates the sound came from the left.
I think it should be "came from the right" in the second sentence.
However, it does not follow that if the genome is not very complex, the brain must not be very complex, because the brain may be built not just based on the genome, but also based on information from the outside environment.
This is a good point, and can be taken even further...
Do you know Prof. Gazzaniga? He gave a Gifford lecture about the brain at the University of Edinburgh.
The following are two simple questions regarding two of the split brain experiments that are still puzzling me.
I'm referring to the 3rd video in the series (the one about the Interpreter). Immediately after the "snow scene & chicken claw" split brain experiment there are another two (the video is already at the correct mark):
http://www.youtube.com/watch?feature=player_detailpage&v=mJKloz2vwlc#t=1108s
J.W. (split brain) sees two words: bell(
However, it does not follow that if the genome is not very complex, the brain must not be very complex, because the brain may be built not just based on the genome, but also based on information from the outside environment.
I think this is relevant if the topic is uploading, but I think it misses the point if the topic is "how hard is it to produce a self-improving intelligence". The brain of a newborn has not yet received much input from the outside world, yet has the ability to learn. This places a (rather large, IMO) upper bound on how much complexity is necessary to produce an intelligent system.
I'm having trouble following your criticism. When you say that the human brain does not necessarily use any deep insights into intelligence does that mean that it only uses processes for which we already have functionally equivalent algorithms and it's merely a problem of scale to interpret all the functions the brain implements? Or do you disagree with the definition of deep? I have no doubt that given enough time we could create an algorithm to functionally emulate a human brain; but would we understand the overall algorithm beyond "this part run...
They end up doing a lot of things that are only applicable to their situation, and couldn’t be used to make a much more powerful operating system. For example, they might look for ways to recycle pieces of code, and make particular pieces of code do as many different things in the program as possible.
It seems to me that finding out how to recycle code and making particular pieces of code do many different things is exactly how to build a more powerful (and general) operating system.
But it turns out that tasks that seem easy to us can in fact require such a specialized region.
In a way, this really shouldn't be surprising at all. Any common mental task which has its own specialised region will of course seem easy to us, because it doesn't make use of the parts of the brain we are consciously aware of.
It's not clear to me how badly EY erred. It seems that he was comparing the size of code designed by humans to the size of code "designed" by evolution, which would seem to be his primary mistake. I also concur that he shouldn't get from complexity of the evolved brain to "number of insights needed to create AI" (charitably: he doesn't claim to know the exact conversion ratio, but in principle there should be one).
I agree with your "information in genes+environment" although the example of needing light (and other inputs) for ...
The citations in this comment are new science, so please take them with at least a cellar of salt:
There are recent studies, especially into Wernicke's area, which seem to implicate alternate areas for linguistic processing: http://explore.georgetown.edu/news/?ID=61864&PageTemplateID=295 (they don't cite the actual study, but I think it might be here: http://www.pnas.org/content/109/8/E505.full#xref-ref-48-1); and this study (http://brain.oxfordjournals.org/content/124/1/83.full) is also interesting.
Terrence Deacon's 'The Symbolic Species' also argumes...
"Because it’s easy for us, it might be tempting to think it’s an inherently easy task, one that shouldn’t require hardly any brain matter to perform."
This is a very general lesson, the depth and applicability of which can scarcely be overstated. A few thoughts:
1) In its more banal form it plagues us as the Curse of Knowledge. I'm an English teacher in South Korea, and despite six months on the job I have to constantly remind myself that just because it's easy for me to say "rollerskating lollipops" doesn't mean it's inherently easy. ...
I just wanted to add that despite ChrisHallquist's background in philosophy, the things he details in this article are very much up to date regarding our current knowledge of the brain.
I'm currently studying at Germany's #1 or #2 university with respect to the quality of their scientific education in psychology and I can vouch that I couldn't find a single mistake in his article while being quite familiar with everything he detailed.
The only thing I would emphasize or add is that there is indeed very good evidence that the brain can only develop correctly ...
...The fact that damage to certain parts of the temporal lobe results in an inability to recognize objects contains an extremely important lesson. For most of us, recognizing objects requires no effort or thought as long as we can see the object clearly. Because it’s easy for us, it might be tempting to think it’s an inherently easy task, one that shouldn’t require hardly any brain matter to perform. Certainly it never occurred to me before I studied neuroscience that object recognition might require a special brain region. But it turns out that tasks that s
Taking the Solomonoff Induction route, making a highly compressible version of Windows is EXACTLY what it would mean to have "discovered deep insights into how to build powerful operating systems". Similarly, getting a lot of power out of simple designs is exactly what it means to have insights, at least in the context of Solomonoff Induction (and from there to science in general).
That said, great article. The contribution of complexity from the environment is a major issue, even as early as the womb, and that was definitely an important oversight.
You make a good point that the genome does not completely determine how the brain is set up. Environment is hugely influential in how things develop. I recently read that the expression of our genes can be influenced by things called transcription factors, as well as by processes called splicing and transposition. Each of these things is affected by the environment. For example, if you're a small rat pup and your Mom licks you, then this will trigger a cascade of hormones that will end up changing your DNA and your amygdala so that you release less stress hor...
What would be the consequence for someone suffering damage to the interpreter module of the brain?
I think I had read the argument that the complexity of the human genome is an upper bound on the "innate" part of the complexity of the human brain before (either somewhere on Language Log or in Motion Mountain by Christoph Schiller, IIRC).
(Of course, this assumes a narrower definition of innate than yours, because "the overwhelming majority of the conditions animals of a given species actually develop under" share lots of complexity. In particular, according to that definition linguistic universals are innate by definition, whether or n...
The origins of this article are in my partial transcript of the live June 2011 debate between Robin Hanson and Eliezer Yudkowsky. While I still feel like I don't entirely understand Eliezer's arguments, a few of his comments about neuroscience made me strongly go, "no, that's not right."
Furthermore, I've noticed that while LessWrong in general seems to be very strong on the psychological or "black box" side of cognitive science, there isn't as much discussion of neuroscience here. This is somewhat understandable. Our current understanding of neuroscience is frustratingly incomplete, and too much journalism on neuroscience is sensationalistic nonsense. However, I think what we do know is worth knowing. (And part of what makes much neuroscience journalism annoying is that it makes a big deal out of things that are totally unsurprising, given what we already know.)
My qualifications to do this: while my degrees are in philosophy, for a while in undergrad I was a neuroscience major, and I ended up taking quite a bit of neuroscience as a result. This means I can assure you that most of what I say here is standard neuroscience which could be found in an introductory textbook like Nichols, Martin, Wallace, & Fuchs' From Neuron to Brain (one of the textbooks I used as an undergraduate). The only things that might not be totally standard are the conjecture I make about how complex currently-poorly-understood areas of the brain are likely to be, and also some of the points I make in criticism of Eliezer at the end (though I believe these are not a very big jump from current textbook neuroscience).
One of the main themes of this article will be specialization within the brain. In particular, we know that the brain is divided into specialized areas at the macro level, and we understand some (though not very much) of the micro-level wiring that supports this specialization. It seems likely that each region of the brain has its own micro-level wiring to support its specialized function, and in some regions the wiring is likely to be quite complex.
1. Specialization of brain regions
One of the best-established facts about the brain is that specific regions handle specific functions. And it isn’t just that in each individual, specific brain regions handle specific functions. It’s also that which regions handle which functions is consistent across individuals. This is an extremely well-established finding, but it’s worth briefly summarizing some of the evidence for it.
One kind of evidence comes from experiments involving direct electrical stimulation of the brain. This cannot ethically be done on humans without a sound medical reason, but it is used with epileptic patients in order to determine the source of the problem, which is necessary in order to treat epilepsy surgically.
In epileptic patients, stimulating certain regions of the brain (known as the primary sensory areas) causes the patient to report sensations: sights, sounds, feelings, smells, and tastes. Which sensations are caused by stimulating which regions of the brain is consistent across patients. This is the source of the “Penfield homunculus,” a map of brain regions which, when stimulated, result in touch sensations which patients describe as feeling like they come from particular parts of the body. Stimulating one region, for example, might consistently lead to a patient reporting a feeling in his left foot.
Regions of the brain associated with sensations are known as sensory areas or sensory cortex. Other regions of the brain, when stimulated, lead to involuntary muscle movements. Those areas are known as motor areas or motor cortex, and again, which areas correspond to which muscles is consistent across patients. The consistency of the mapping of brain regions across patients is important, because it’s evidence of an innate structure to the brain.
An even more significant kind of evidence comes from studies of patients with brain damage. Brain damage can produce very specific ability losses, and patients with damage to the same areas will typically have similar ability losses. For example, the rearmost part of the human cerebral cortex is the primary visual cortex, and damage to it results in a phenomenon known as cortical blindness. That is to say, the patient is blind in spite of having perfectly good eyes. Their other mental abilities may be unaffected.
That much is not surprising, given what we know from studies involving electrical stimulation, but ability losses from brain damage can be strangely specific. For example, neuroscientists now believe that one function of the temporal lobe is to recognize objects and faces. A key line of evidence for this is that patients with damage to certain parts of the temporal lobe will be unable to identify those things by sight, even though they may be able to describe the objects in great detail. Neurologist Oliver Sacks memorably describes an interaction with one such patient, the titular patient of his book The Man Who Mistook His Wife For a Hat.
The fact that damage to certain parts of the temporal lobe results in an inability to recognize objects contains an extremely important lesson. For most of us, recognizing objects requires no effort or thought as long as we can see the object clearly. Because it’s easy for us, it might be tempting to think it’s an inherently easy task, one that should require hardly any brain matter to perform. Certainly it never occurred to me before I studied neuroscience that object recognition might require a special brain region. But it turns out that tasks that seem easy to us can in fact require such a specialized region.
Another example of this fact comes from two brain regions involved in language, Broca’s area and Wernicke’s area. Damage to each area leads to distinct types of difficulties with language, known as Broca’s aphasia and Wernicke’s aphasia, respectively. Both are strange conditions, and my description of them may not give a full sense of what they are like. Readers might consider searching online for videos of interviews with Broca’s aphasia and Wernicke’s aphasia patients to get a better idea of what the conditions entail.
Broca’s aphasia is a loss of ability to produce language. In one extreme case, one of the original cases studied by Paul Broca, a patient was only able to say the word “tan.” Other patients may have a less limited vocabulary, but still struggle to come up with words for what they want to say. And even when they can come up with individual words, they may be unable to put them into sentences. However, patients with Broca’s aphasia appear to have no difficulty understanding speech, and show awareness of their disability.
Wernicke’s aphasia is even stranger. It is often described as an inability to understand language while still being able to produce language. However, while patients with Wernicke's aphasia may have little difficulty producing complete, grammatically correct sentences, the sentences tend to be nonsensical. And Wernicke’s patients often act as if they are completely unaware of their condition. A former professor of mine once described a Wernicke’s patient as sounding “like a politician,” and from watching a video of an interview with the patient, I agreed: I was impressed by his ability to confidently utter nonsense.
The existence of these two forms of aphasia suggests that Broca’s area and Wernicke’s area have two very important and distinct roles in our ability to produce and understand language. And I find this fact strange to write about. Like object recognition, language comes naturally to us. As I write this, my intuitive feeling is that the work I am doing comes mainly in the ideas, plus making a few subtle stylistic decisions. I know from neuroscience that I would be unable to write this if I had significant damage to either region. Yet I am totally unconscious of the work they are doing for me.
2. Complex, specialized wiring within regions
“Wiring” is a hard metaphor to avoid when talking about the brain, but it is also a potentially misleading one. People often talk about “electrical signals” in the brain, but unlike electrical signals in human technology, which involve movement of electrons between the atoms of the conductor, signals in the human brain involve movement of ions and small molecules across cell membranes and between cells.
Furthermore, the first thing most people who know a little bit about neuroscience will think of when they hear the word “wiring” is axons and dendrites, the long skinny projections along which signals are transmitted from neuron to neuron. But it isn’t just the layout of axons and dendrites that matters. Ion channels, and the structures that transport neurotransmitters across cell membranes, are also important.
These can vary a lot at the synapse, the place where two neurons touch. For example, synapses vary in strength, that is to say, the strength of one neuron’s effect on the other. Synapses can also be excitatory (activity in one leads to increased activity in the other) or inhibitory (activity in one leads to decreased activity in the other). And these are just a couple of the ways synapses can vary; the details can be somewhat complicated, and I’ll give one example of how the details can be complicated later.
I say all this just to make clear what I mean when I talk about the brain’s “wiring.” By “wiring,” I mean all the features of the physical structures that connect neurons to each other and which are relevant for understanding how the brain works. I mean all the things I’ve mentioned above, and anything I may have omitted. It’s important to have a word to talk about this wiring, because what (admittedly little) we understand about how the brain works we understand in terms of this wiring.
For example, the nervous system actually first begins processing visual information in the retina (the part of the eye at the back where our light receptors are). This is done by what’s known as the center-surround system: a patch of light receptors, when activated, excites one neuron, but nearby patches of light receptors, when activated, inhibit that same neuron (sometimes, the excitatory roles and inhibitory roles are reversed).
The effect of this is that what the neurons are sensitive to is not light itself, but contrast. They’re contrast detectors. And what allows them to detect contrast isn’t anything magical, it’s just the wiring, the way the neurons are connected together.
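To make the idea concrete, here is a toy sketch in Python (an illustration of the wiring principle, not real retinal physiology; the weights and values are invented): the center receptor connects with an excitatory weight and the surrounding receptors with inhibitory weights, so the unit responds to contrast but stays quiet under uniform illumination.

```python
def center_surround_response(center, surround):
    """Toy center-surround unit: excitatory center patch, inhibitory surround patches.
    (In the retina the roles are sometimes reversed, as noted above.)"""
    excitation = center                         # positive (excitatory) weight on the center
    inhibition = sum(surround) / len(surround)  # negative (inhibitory) weight spread over the surround
    return excitation - inhibition

# Uniform illumination: center and surround cancel, so the unit stays quiet.
print(center_surround_response(1.0, [1.0] * 6))  # 0.0
# A bright center with a dark surround (an edge or spot): the unit responds strongly.
print(center_surround_response(1.0, [0.0] * 6))  # 1.0
```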
This technique of getting neurons to serve specific functions based on how they are wired together shows up in more complicated ways in the brain itself. There’s one line of evidence for specialization of brain regions that I saved for this section, because it also tells us about the details of how the brain is wired. That line of evidence is recordings from the brain using electrodes.
For example, during the ’50s David Hubel and Torsten Wiesel did experiments where they paralyzed the eye muscles of animals, stuck electrodes in primary visual areas of the animals’ brains, and then showed the animals various images to see which images would cause electrical signals in the animals’ primary visual areas. It turned out that the main thing that causes electrical signals in the primary visual areas is lines.
In particular, a given cell in the primary visual area will have a particular orientation of line which it responds to. It appears that the way these line-orientation detecting cells work is that they receive input from several contrast detecting cells which, themselves, correspond to regions of the retina that are themselves all in a line. A line in the right position and orientation will activate all of the contrast-detecting cells, which in turn activates the line-orientation detecting cell. A line in the right position but wrong orientation will activate only one or a few contrast-detecting cells, not enough to activate the line-orientation detecting cell.
[If this is unclear, a diagram like the one on Wikipedia may be helpful, though Wikipedia's diagram may not be the best.]
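Continuing the toy sketch from the previous section (again, an illustration of the wiring logic, not real anatomy; the threshold is arbitrary), a line-orientation detector can be modeled as a unit that fires only when enough of its aligned contrast detectors are active at once:

```python
def orientation_cell(contrast_detector_outputs, threshold=2.5):
    """Toy line-orientation cell: fires only when enough of the contrast
    detectors lying along its preferred line are active at the same time."""
    return sum(contrast_detector_outputs) >= threshold

# A line in the right position and orientation drives all three aligned detectors.
print(orientation_cell([1.0, 1.0, 1.0]))  # True
# A line at the wrong orientation crosses only one of them: below threshold, no response.
print(orientation_cell([1.0, 0.0, 0.0]))  # False
```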
Another example of a trick the brain does with neural wiring is locating sounds using what are called “interaural time differences.” The idea is this: there is a group of neurons that receives input from both ears, and specifically responds to simultaneous input from both ears. However, the axons running from the ears to the neurons in this group of cells vary in length, and therefore they vary in how long it takes them to get a signal from each ear.
This means that which cells in this group respond to a sound depends on whether the sound reaches the ears at the same time or at different times, and (if at different times) on how big the time difference is. If there’s no difference, that means the sound came from directly ahead or behind (or above or below). A big difference, with the sound reaching the left ear first, indicates the sound came from the left. A big difference, with the sound reaching the right ear first, indicates the sound came from the right. Small differences indicate something in between.
[A diagram might be helpful here too, but I'm not sure where to find a good one online.]
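In lieu of a diagram, here is a minimal sketch of the delay-line idea (essentially a simplified version of the classic Jeffress model; the delay values are invented for illustration): each coincidence-detector neuron receives the two ears’ signals through axons with different delays, and the neuron whose delays exactly cancel the interaural time difference is the one that gets both signals at once.

```python
# Toy delay-line ("Jeffress-style") model. Arrival times and axonal delays are in
# milliseconds and are invented for illustration.
detectors = [
    {"name": "sound from the left",  "left_delay": 0.5, "right_delay": 0.1},
    {"name": "sound straight ahead", "left_delay": 0.3, "right_delay": 0.3},
    {"name": "sound from the right", "left_delay": 0.1, "right_delay": 0.5},
]

def responding_detector(left_ear_arrival, right_ear_arrival, tolerance=0.05):
    """Return the detector whose two delayed inputs coincide (arrive together)."""
    for d in detectors:
        via_left = left_ear_arrival + d["left_delay"]
        via_right = right_ear_arrival + d["right_delay"]
        if abs(via_left - via_right) < tolerance:
            return d["name"]
    return None

print(responding_detector(0.0, 0.0))  # no time difference -> "sound straight ahead"
print(responding_detector(0.0, 0.4))  # reaches the left ear 0.4 ms first -> "sound from the left"
print(responding_detector(0.4, 0.0))  # reaches the right ear 0.4 ms first -> "sound from the right"
```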
I’ve made a point to mention these bits of wiring because they’re cases where neuroscientists have a clear understanding of how it is that a particular neuron is able to fire only in response to a particular kind of stimulus. Unfortunately, cases like this are relatively rare. In other cases, however, we at least know that particular neurons respond specifically to more complex stimuli, even though we don’t know why. In rats, for example, there are cells in the hippocampus that activate only when the rat is in a particular location; apparently their purpose is to keep track of the rat’s location.
The visual system gives us some especially interesting cases of this sort. We know that the primary visual cortex sends information to other parts of the brain in two broadly-defined pathways, the dorsal pathway and the ventral pathway. The dorsal pathway appears to be responsible for processing information related to position and movement. Some cells in the dorsal pathway, for example, fire only when an animal sees an object moving in a particular direction.
The most interesting cells of this sort that neuroscientists have found so far, though, are probably some of the cells in the medial temporal lobe, which is part of the ventral pathway. In one study (Quiroga et al. 2005), researchers took epileptic patients who had had electrodes implanted in their brains in order to locate the source of their epilepsy and showed them pictures of various people, objects, and landmarks. What the researchers found is that the neuron or small group of neurons a given electrode was reading from typically only responded to pictures of one person or thing.
Furthermore, a particular electrode often got readings from very different pictures of a single person or thing, but not similar pictures of different people or things. In one notorious example, they found a neuron that they could only get to respond to either pictures of actress Halle Berry or the text “Halle Berry.” This included drawings of the actress, as well as pictures of her dressed as Catwoman (a role she had recently played when the study was conducted), but not other drawings or other pictures of Catwoman.
What’s going on here? Based on what we know about the wiring of contrast-detectors and orientation-detectors the following conjecture seems highly likely: if we were to map out the brain completely and then follow the path along which visual information is transmitted, we would find that neurons gradually come to be wired together in more and more complex ways, to allow them to gradually become specific to more and more complex features of visual images. This, I think, is an extremely important inference.
We know that experience can impact the way the brain is wired. In fact, some aspects of the brain’s wiring seem to have evolved specifically to be able to change in response to experience (the main wiring of that sort we know about is called Hebbian synapses, but the details aren’t important here). And it is actually somewhat difficult to draw a clear line between features of the brain that are innate and features of the brain that are the product of learning, because some fairly basic features of the brain depend on outside cues in order to develop.
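As an aside, the textbook caricature of a Hebbian synapse (“cells that fire together wire together”) is just a weight update proportional to the product of pre- and post-synaptic activity. A minimal sketch, with arbitrary illustrative numbers rather than physiological values:

```python
def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    """Strengthen the synapse in proportion to correlated pre/post activity."""
    return weight + learning_rate * pre_activity * post_activity

w = 0.2
# Two trials where both cells fire together, one where only the post-synaptic cell fires.
for pre, post in [(1.0, 1.0), (1.0, 1.0), (0.0, 1.0)]:
    w = hebbian_update(w, pre, post)
print(round(w, 2))  # 0.4 -- only the correlated trials strengthened the synapse
```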
Here, though, I’ll use the word “innate” to refer to features of the brain that will develop given the overwhelming majority of the conditions animals of a given species actually develop under. Under that definition, a “Halle Berry neuron” is highly unlikely to be innate, because there isn’t enough room in the brain to have a neuron specific to every person a person might possibly learn about. Such neural wiring is almost certainly the result of learning.
But importantly, the underlying structure that makes such learning possible is probably at least somewhat complicated, and also specialized for that particular kind of learning. Because such person-specific and object-specific neurons are not found in all regions of the brain, there must be something special about the medial temporal lobe that allows such learning to happen there.
Similar reasoning applies to regions of the brain that we know even less about. For example, it seems likely that Broca’s area and Wernicke’s area both contain specialized wiring for handling language, though we have little idea how that wiring might perform its function. Given that humans seem to have a considerable innate knack for learning language (Pinker 2007), it again seems likely that the wiring is somewhat complicated.
3. On some problematic comments by Eliezer
I agree with Singularity Institute positions on a great deal. After all, I recently made my first donation to the Singularity Institute. But here, I want to point out some problematic neuroscience-related comments in Eliezer's debate with Robin Hanson:
Though this is not explicit, there appears to be an inference here that, in order for something so simple to be so powerful, it must incorporate many deep insights into intelligence, even though we don’t know what most of them are. There are several problems with this argument.
References:
Pinker, S. 2007. The Language Instinct. Harper Perennial Modern Classics.
Quiroga, R. Q. et al. 2005. Invariant visual representation by single neurons in the human brain. Nature, 435, 1102-1107.