The origins of this article are in my partial transcript of the live June 2011 debate between Robin Hanson and Eliezer Yudkowsky. While I still feel like I don't entirely understand Eliezer's arguments, a few of his comments about neuroscience made me strongly go, "no, that's not right."

Furthermore, I've noticed that while LessWrong in general seems to be very strong on the psychological or "black box" side of cognitive science, there isn't as much discussion of neuroscience here. This is somewhat understandable. Our current understanding of neuroscience is frustratingly incomplete, and too much journalism on neuroscience is sensationalistic nonsense. However, I think what we do know is worth knowing. (And part of what makes much neuroscience journalism annoying is that it makes a big deal out of things that are totally unsurprising, given what we already know.)

My qualifications to do this: while my degrees are in philosophy, for a while in undergrad I was a neuroscience major, and I ended up taking quite a bit of neuroscience as a result. This means I can assure you that most of what I say here is standard neuroscience which could be found in an introductory textbook like Nichols, Martin, Wallace, & Fuchs' From Neuron to Brain (one of the textbooks I used as an undergraduate). The only things that might not be totally standard are the conjecture I make about how complex currently-poorly-understood areas of the brain are likely to be, and some of the points I make in criticism of Eliezer at the end (though I believe these are not a very big jump from current textbook neuroscience).

One of the main themes of this article will be specialization within the brain. In particular, we know that the brain is divided into specialized areas at the macro level, and we understand some (though not very much) of the micro-level wiring that supports this specialization. It seems likely that each region of the brain has its own micro-level wiring to support its specialized function, and in some regions the wiring is likely to be quite complex.

1. Specialization of brain regions

One of the best-established facts about the brain is that specific regions handle specific functions. And it isn’t just that in each individual, specific brain regions handle specific functions. It’s also that which regions handle which functions is consistent across individuals. This is an extremely well-established finding, but it’s worth briefly summarizing some of the evidence for it.

One kind of evidence comes from experiments involving direct electrical stimulation of the brain. This cannot ethically be done on humans without a sound medical reason, but it is done with epileptic patients in order to locate the source of their seizures, which is necessary in order to treat epilepsy surgically.

In epileptic patients, stimulating certain regions of the brain (known as the primary sensory areas) causes the patient to report sensations: sights, sounds, feelings, smells, and tastes. Which sensations are caused by stimulating which regions of the brain is consistent across patients. This is the source of the “Penfield homunculus,” a map of brain regions which, when stimulated, result in touch sensations which patients describe as feeling like they come from particular parts of the body. Stimulating one region, for example, might consistently lead to a patient reporting a feeling in his left foot.

Regions of the brain associated with sensations are known as sensory areas or sensory cortex. Other regions of the brain, when stimulated, lead to involuntary muscle movements. Those areas are known as motor areas or motor cortex, and again, which areas correspond to which muscles is consistent across patients. The consistency of the mapping of brain regions across patients is important, because it’s evidence of an innate structure to the brain.

An even more significant kind of evidence comes from studies of patients with brain damage. Brain damage can produce very specific ability losses, and patients with damage to the same areas will typically have similar ability losses. For example, the rearmost part of the human cerebral cortex is the primary visual cortex, and damage to it results in a phenomenon known as cortical blindness. That is to say, the patient is blind in spite of having perfectly good eyes; their other mental abilities may be unaffected.

That much is not surprising, given what we know from studies involving electrical stimulation, but ability losses from brain damage can be strangely specific. For example, neuroscientists now believe that one function of the temporal lobe is to recognize objects and faces. A key line of evidence for this is that patients with damage to certain parts of the temporal lobe will be unable to identify objects by sight, even though they may be able to describe those objects in great detail. Here is neurologist Oliver Sacks’ description of an interaction with one such patient, the titular patient in Sacks’ book The Man Who Mistook His Wife For a Hat:

‘What is this?’ I asked, holding up a glove.

‘May I examine it?’ he asked, and, taking it from me, he proceeded to examine it as he had examined the geometrical shapes.

‘A continuous surface,’ he announced at last, ‘infolded on itself. It appears to have’—he hesitated—‘five outpouchings, if this is the word.’

‘Yes,’ I said cautiously. ‘You have given me a description. Now tell me what it is.’

‘A container of some sort?’

‘Yes,’ I said, ‘and what would it contain?’

‘It would contain its contents!’ said Dr P., with a laugh. ‘There are many possibilities. It could be a change purse, for example, for coins of five sizes. It could ...’

I interrupted the barmy flow. ‘Does it not look familiar? Do you think it might contain, might fit, a part of your body?’

No light of recognition dawned on his face. (Later, by accident, he got it on, and exclaimed, ‘My God, it’s a glove!’)

The fact that damage to certain parts of the temporal lobe results in an inability to recognize objects contains an extremely important lesson. For most of us, recognizing objects requires no effort or thought as long as we can see the object clearly. Because it’s easy for us, it might be tempting to think it’s an inherently easy task, one that should hardly require any brain matter to perform. Certainly it never occurred to me before I studied neuroscience that object recognition might require a special brain region. But it turns out that tasks that seem easy to us can in fact require specialized brain regions.

Another example of this fact comes from two brain regions involved in language, Broca’s area and Wernicke's area. Damage to each area leads to a distinct type of difficulty with language, known as Broca’s aphasia and Wernicke’s aphasia, respectively. Both are strange conditions, and my description of them may not give a full sense of what they are like. Readers might consider searching online for videos of interviews with Broca’s aphasia and Wernicke’s aphasia patients to get a better idea of what the conditions entail.

Broca’s aphasia is a loss of ability to produce language. In one extreme case, one of the original cases studied by Paul Broca, a patient was only able to say the word “tan.” Other patients may have a less limited vocabulary, but still struggle to come up with words for what they want to say. And even when they can come up with individual words, they may be unable to put them into sentences. However, patients with Broca’s aphasia appear to have no difficulty understanding speech, and show awareness of their disability.

Wernicke’s aphasia is even stranger. It is often described as an inability to understand language while still being able to produce language. However, while patients with Wernicke's aphasia may have little difficulty producing complete, grammatically correct sentences, the sentences tend to be nonsensical. And Wernicke’s patients often act as if they are completely unaware of their condition. A former professor of mine once described a Wernicke’s patient as sounding “like a politician,” and from watching a video of an interview with the patient, I agreed: I was impressed by his ability to confidently utter nonsense.

The existence of these two forms of aphasia suggests that Broca’s area and Wernicke’s area play two very important and distinct roles in our ability to produce and understand language. And I find this fact strange to write about. Like object recognition, language comes naturally to us. As I write this, my intuitive feeling is that the work I am doing comes mainly in the ideas, plus making a few subtle stylistic decisions. I know from neuroscience that I would be unable to write this if I had significant damage to either region. Yet I am totally unconscious of the work they are doing for me.

2. Complex, specialized wiring within regions

“Wiring” is a hard metaphor to avoid when talking about the brain, but it is also a potentially misleading one. People often talk about “electrical signals” in the brain, but unlike electrical signals in human technology, which involve movement of electrons between the atoms of the conductor, signals in the human brain involve movement of ions and small molecules across cell membranes and between cells.

Furthermore, the first thing most people who know a little bit about neuroscience will think of when they hear the word “wiring” is axons and dendrites, the long skinny projections along which signals are transmitted from neuron to neuron. But it isn’t just the layout of axons and dendrites that matters. Ion channels, and the structures that transport neurotransmitters across cell membranes, are also important. 

These can vary a lot at the synapse, the place where two neurons touch. For example, synapses vary in strength, that is to say, in the strength of one neuron’s effect on the other. Synapses can also be excitatory (activity in one leads to increased activity in the other) or inhibitory (activity in one leads to decreased activity in the other). And these are just a couple of the ways synapses can vary; the details can be somewhat complicated, and I’ll give one example of how the details can be complicated later.

I say all this just to make clear what I mean when I talk about the brain’s “wiring.” By “wiring,” I mean all the features of the physical structures that connect neurons to each other and which are relevant for understanding how the brain works. That includes all the things I’ve mentioned above, and anything I may have omitted. It’s important to have a word for this wiring, because what (admittedly little) we understand about how the brain works, we understand in terms of this wiring.

For example, the nervous system actually first begins processing visual information in the retina (the part of the eye at the back where our light receptors are). This is done by what’s known as the center-surround system: a patch of light receptors, when activated, excites one neuron, but nearby patches of light receptors, when activated, inhibit that same neuron (sometimes, the excitatory roles and inhibitory roles are reversed).

The effect of this is that what the neurons are sensitive to is not light itself, but contrast. They’re contrast detectors. And what allows them to detect contrast isn’t anything magical, it’s just the wiring, the way the neurons are connected together.
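To make the wiring idea concrete, here is a toy sketch in Python (my own illustration, with made-up numbers, not a model of real retinal circuitry): a unit that is excited by a central patch of receptors and inhibited by the surrounding patch, so that uniform light cancels out and only contrast produces a response.

```python
# Toy center-surround "contrast detector" (illustrative only): the center patch
# excites the unit, the surround inhibits it, so uniform illumination sums to zero.

def center_surround_response(center, surround, w_center=1.0, w_surround=1.0):
    """Excitation from the center patch minus inhibition from the surround."""
    excitation = w_center * sum(center) / len(center)
    inhibition = w_surround * sum(surround) / len(surround)
    return excitation - inhibition

uniform_light = center_surround_response(center=[1.0] * 4, surround=[1.0] * 8)
edge_of_light = center_surround_response(center=[1.0] * 4, surround=[0.0] * 8)

print(uniform_light)  # 0.0: bright but uniform, so no response
print(edge_of_light)  # 1.0: a bright center on a dark surround drives the unit
```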

This technique of getting neurons to serve specific functions based on how they are wired together shows up in more complicated ways in the brain itself. There’s one line of evidence for specialization of brain regions that I saved for this section, because it also tells us about the details of how the brain is wired. That line of evidence is recordings from the brain using electrodes.

For example, during the late 1950s David Hubel and Torsten Wiesel did experiments in which they paralyzed the eye muscles of animals, stuck electrodes in the primary visual areas of the animals’ brains, and then showed the animals various images to see which images would cause electrical signals in those areas. It turned out that the main thing that causes electrical signals in the primary visual areas is lines.

In particular, a given cell in the primary visual area will have a particular orientation of line which it responds to. It appears that the way these line-orientation-detecting cells work is that they receive input from several contrast-detecting cells which, themselves, correspond to regions of the retina that are all in a line. A line in the right position and orientation will activate all of the contrast-detecting cells, which in turn activates the line-orientation-detecting cell. A line in the right position but wrong orientation will activate only one or a few contrast-detecting cells, not enough to activate the line-orientation-detecting cell.

[If this is unclear, a diagram like the one on Wikipedia may be helpful, though Wikipedia's diagram may not be the best.]
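In the same toy style, here is a sketch of the arrangement Hubel and Wiesel inferred (again, an illustration with made-up numbers, not a claim about actual cortical wiring): a unit that fires only when several contrast detectors whose receptive fields lie along a line of its preferred orientation are active at once.

```python
# Cartoon of an orientation-selective cell: it pools contrast detectors whose
# receptive fields lie along a vertical line, and fires only when enough of
# them are active at the same time.

def line_orientation_response(contrast_detector_outputs, threshold=3):
    """Fire (return True) if enough aligned contrast detectors are active."""
    return sum(contrast_detector_outputs) >= threshold

vertical_line = [1, 1, 1, 1]  # preferred orientation: activates all four detectors
oblique_line = [0, 1, 1, 0]   # wrong orientation: only the detectors it crosses fire

print(line_orientation_response(vertical_line))  # True
print(line_orientation_response(oblique_line))   # False
```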

Another example of a trick the brain does with neural wiring is locating sounds using what are called “interaural time differences.” The idea is this: there is a group of neurons that receives input from both ears, and specifically responds to simultaneous input from both ears. However, the axons running from the ears to the neurons in this group of cells vary in length, and therefore they vary in how long it takes them to get a signal from each ear. 

This means that which cells in this group respond to a sound depends on whether the sound reaches the ears at the same time or at different times, and (if at different times) on how big the time difference is. If there’s no difference, that means the sound came from directly ahead or behind (or above or below). A big difference, with the sound reaching the left ear first, indicates the sound came from the left. A big difference, with the sound reaching the right ear first, indicates the sound came from the right. Small differences indicate something in between.

[A diagram might be helpful here too, but I'm not sure where to find a good one online.]
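Here is the delay-line idea as another toy sketch (the delays and numbers are invented for illustration): each coincidence detector receives the left-ear signal through one axonal delay and the right-ear signal through another, and the detector whose built-in delays exactly cancel the arrival-time difference is the one that responds.

```python
# Toy sketch of sound localization by interaural time difference: a detector
# responds best when (left arrival + left axonal delay) coincides with
# (right arrival + right axonal delay). Delays are made-up values in milliseconds.

def best_tuned_detector(arrival_left_ms, arrival_right_ms, delays_ms=(0.0, 0.2, 0.4, 0.6)):
    """Return the (left_delay, right_delay) pair whose detector sees the closest coincidence."""
    def mismatch(left_delay, right_delay):
        return abs((arrival_left_ms + left_delay) - (arrival_right_ms + right_delay))
    pairs = [(dl, dr) for dl in delays_ms for dr in delays_ms]
    return min(pairs, key=lambda p: mismatch(*p))

# Sound from straight ahead reaches both ears at the same time: equal delays win.
print(best_tuned_detector(0.0, 0.0))  # (0.0, 0.0)
# Sound from the left reaches the left ear 0.4 ms earlier, so the detector that
# delays the left signal by an extra 0.4 ms is the one that sees a coincidence.
print(best_tuned_detector(0.0, 0.4))  # (0.4, 0.0)
```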

I’ve made a point to mention these bits of wiring because they’re cases where neuroscientists have a clear understanding of how it is that a particular neuron is able to fire only in response to a particular kind of stimulus. Unfortunately, cases like this are relatively rare. In other cases, however, we at least know that particular neurons respond specifically to more complex stimuli, even though we don’t know why. In rats, for example, there are cells in the hippocampus that activate only when the rat is in a particular location; apparently their purpose is to keep track of the rat’s location.

The visual system gives us some especially interesting cases of this sort. We know that the primary visual cortex sends information to other parts of the brain in two broadly-defined pathways, the dorsal pathway and the ventral pathway. The dorsal pathway appears to be responsible for processing information related to position and movement. Some cells in the dorsal pathway, for example, fire only when an animal sees an object moving in a particular direction. 

The most interesting cells of this sort that neuroscientists have found so far, though, are probably some of the cells in the medial temporal lobe, which is part of the ventral pathway. In one study (Quiroga et al. 2005), researchers took epileptic patients who had had electrodes implanted in their brains in order to locate the source of their epilepsy and showed them pictures of various people, objects, and landmarks. What the researchers found is that the neuron or small group of neurons a given electrode was reading from typically only responded to pictures of one person or thing. 

Furthermore, a particular electrode often got readings from very different pictures of a single person or thing, but not from similar pictures of different people or things. In one famous example, they found a neuron that they could only get to respond to pictures of actress Halle Berry or the text “Halle Berry.” This included drawings of the actress, as well as pictures of her dressed as Catwoman (a role she had recently played when the study was conducted), but not other drawings or other pictures of Catwoman.

What’s going on here? Based on what we know about the wiring of contrast detectors and orientation detectors, the following conjecture seems highly likely: if we were to map out the brain completely and then follow the path along which visual information is transmitted, we would find that neurons gradually come to be wired together in more and more complex ways, allowing them to become specific to more and more complex features of visual images. This, I think, is an extremely important inference.
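To illustrate the conjecture in the same toy style (this is a cartoon of the idea, not a claim about actual anatomy): each stage fires on a particular conjunction of units in the stage below, so selectivity compounds as information moves up.

```python
# Toy illustration of feature detectors feeding feature detectors:
# contrast detectors -> line detectors -> a crude "corner" detector.

def detector(inputs, required):
    """Fire if every required lower-level unit is active."""
    return all(inputs[name] for name in required)

contrast = {"c1": True, "c2": True, "c3": True, "c4": True, "c5": False, "c6": False}
lines = {
    "vertical_line":   detector(contrast, ["c1", "c2", "c3"]),
    "horizontal_line": detector(contrast, ["c3", "c5", "c6"]),
}
corner = detector(lines, ["vertical_line", "horizontal_line"])

print(lines)   # only the vertical line's contrast detectors are all active
print(corner)  # False: a "corner" would need both lines at once
```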

We know that experience can impact the way the brain is wired. In fact, some aspects of the brain’s wiring seem to have evolved specifically to be able to change in response to experience (the main wiring of that sort we know about involves Hebbian synapses, but the details aren’t important here). And it is actually somewhat difficult to draw a clear line between features of the brain that are innate and features of the brain that are the product of learning, because some fairly basic features of the brain depend on outside cues in order to develop.
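(As an aside, the “fire together, wire together” idea behind Hebbian synapses can be written down in a couple of lines. This is a textbook-style toy rule with an arbitrary learning rate, not a model of any real synapse.)

```python
# Toy Hebbian update: a synapse strengthens only when the neurons on both sides
# are active at the same time. Starting weight and learning rate are arbitrary.

def hebbian_update(weight, pre_active, post_active, learning_rate=0.25):
    """Strengthen the synapse only when pre- and postsynaptic activity coincide."""
    if pre_active and post_active:
        weight += learning_rate
    return weight

w = 0.5
for pre, post in [(1, 1), (1, 0), (0, 1), (1, 1)]:
    w = hebbian_update(w, pre, post)
print(w)  # 1.0: strengthened twice, on the two coincident trials
```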

Here, though, I’ll use the word “innate” to refer to features of the brain that will develop given the overwhelming majority of the conditions animals of a given species actually develop under. Under that definition, a “Halle Berry neuron” is highly unlikely to be innate, because there isn’t enough room in the brain to have a neuron specific to every person a person might possibly learn about. Such neural wiring is almost certainly the result of learning.

But importantly, the underlying structure that makes such learning possible is probably at least somewhat complicated, and also specialized for that particular kind of learning. Because such person-specific and object-specific neurons are not found in all regions of the brain, there must be something special about the medial temporal lobe that allows such learning to happen there.

Similar reasoning applies to regions of the brain that we know even less about. For example, it seems likely that Broca’s area and Wernicke’s area both contain specialized wiring for handling language, though we have little idea how that wiring might perform its function. Given that humans seem to have a considerable innate knack for learning language (Pinker 2007), it again seems likely that the wiring is somewhat complicated.

3. On some problematic comments by Eliezer

I agree with the Singularity Institute's positions on a great deal. After all, I recently made my first donation to the Singularity Institute. But here, I want to point out some problematic neuroscience-related comments in Eliezer's debate with Robin Hanson:

If you actually look at the genome, we’ve got about 30,000 genes in here. Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it. And the brain is simply not a very complicated artifact by comparison to, say, Windows Vista. Now the complexity that it does have it uses a lot more effectively than Windows Vista does. It probably contains a number of design principles which Microsoft knows not. (And I’m not saying it’s that small because it’s 750 megabytes, I’m saying it’s gotta be that small because at least 90% of the 750 megabytes is junk and there’s only 30,000 genes for the whole body, never mind the brain.)

That something that simple can be this powerful, and this hard to understand, is a shock. But if you look at the brain design, it’s got 52 major areas on each side of the cerebral cortex, distinguishable by the local pattern, the tiles and so on, it just doesn’t really look all that complicated. It’s very powerful. It’s very mysterious. What [we] can say about it is that it probably involves 1,000 different deep, major, mathematical insights into the nature of intelligence that we need to comprehend before we can build it.

Though this is not explicit, there appears to be an inference here that, in order for something so simple to be so powerful, it must incorporate many deep insights into intelligence, though we don’t know what most of them are. There are several problems with this argument.

First of all, the fact that the brain is divided into only 52 major areas per hemisphere is not evidence that it is not very complex, because knowing about the complexity of its macroscopic organization tells us nothing about the complexity of its microscopic wiring. The brain consists of tens of billions of neurons, and a single neuron can make hundreds of synapses with other neurons. The details of how synapses are set up vary greatly. The fact is that under a microscope, the brain at least looks very complex.
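For a sense of scale, here is a back-of-the-envelope count using common round figures (roughly 86 billion neurons, and a round 1,000 synapses per neuron, which is if anything conservative):

```python
# Back-of-the-envelope scale of the brain's wiring, using common round figures.
neurons = 8.6e10            # roughly 86 billion neurons
synapses_per_neuron = 1e3   # a round figure; estimates run from hundreds to thousands
genome_bytes = 750e6        # the genome size quoted in the debate excerpt

synapses = neurons * synapses_per_neuron
print(synapses)                 # ~8.6e13 synapses
print(synapses / genome_bytes)  # ~1e5 synapses per byte of genome
```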

The argument from the small size of the genome is more plausible, especially if Eliezer is thinking in terms of Kolmogorov complexity, which is based on the size of the smallest computer program needed to build something. However, it does not follow that if the genome is not very complex, the brain must not be very complex, because the brain may be built not just based on the genome, but also based on information from the outside environment. We have good reason to think this is how the brain is actually set up, not just in cases we would normally associate with learning and memory, but also for some of the most basic and near-universal features of the brain. For example, in normal mammals, the neurons in the visual cortex are organized into “ocular dominance columns,” but these fail to form if the animal is raised in darkness.
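A toy way of seeing why a short “genome” doesn’t bound the complexity of what it builds (everything here is invented for illustration): the program below is only a few lines long, but the wiring it grows is as complex as the stream of “environmental” input it consumes.

```python
# Toy "development" program: a short rule plus a long stream of environmental
# input yields wiring far more complex than the rule itself.
import random

def grow_wiring(environment_stream):
    """Tiny 'genome': connect any two units that the environment activates together."""
    connections = set()
    for unit_a, unit_b in environment_stream:
        connections.add((unit_a, unit_b))
    return connections

random.seed(0)
# Stand-in for years of sensory input: pairs of co-activated units.
environment = [(random.randrange(1000), random.randrange(1000)) for _ in range(100_000)]
wiring = grow_wiring(environment)
print(len(wiring))  # ~95,000 distinct connections, specified by the input, not by the rule
```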

More importantly, there is no reason to think getting a lot of power out of a relatively simple design requires insights into the nature of intelligence itself. To use Eliezer’s own example of Windows Vista: imagine if, for some reason, Microsoft decided that it was very important for the next generation of its operating system to be highly compressible. Microsoft tells this to its programmers, and they set about looking for ways to make an operating system do most of what the current version of Windows does while being more compressible. They end up doing a lot of things that are only applicable to their situation, and couldn’t be used to make a much more powerful operating system. For example, they might look for ways to recycle pieces of code, and make particular pieces of code do as many different things in the program as possible.

In this case, would we say that they had discovered deep insights into how to build powerful operating systems? Well, no. And there’s reason to think that life on Earth uses similar tricks to get a lot of apparent complexity out of relatively simple genetic codes. Genes code for proteins. In a phenomenon known as “alternative splicing,” there may be several ways to combine the parts of a gene, allowing one gene to code for several proteins. And even a single, specific protein may perform several roles within an organism. A receptor protein, for example, may be plugged into different signaling cascades in different parts of an organism.
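The compressibility point can also be illustrated crudely in code, with zlib standing in (very loosely) for Kolmogorov complexity: a program built by reusing one piece many times compresses far better than an equally long program with no reuse, and the reused piece need not embody any deep insight.

```python
# Crude illustration of the compressibility argument, with zlib standing in
# (very loosely) for Kolmogorov complexity.
import os
import zlib

reused_piece = b"do_one_thing_in_many_places();"  # 30 bytes, recycled everywhere
reusing_program = reused_piece * 1000              # 30,000 bytes built by reuse
distinct_program = os.urandom(30_000)              # 30,000 bytes with no reuse at all

print(len(zlib.compress(reusing_program)))   # on the order of 100 bytes
print(len(zlib.compress(distinct_program)))  # roughly 30,000 bytes: nothing to exploit
```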

Eliezer's comments about the complexity of the brain are only a small part of his arguments in the debate, but I worry that comments like these by people concerned with the future of Artificial Intelligence are harmful insofar as they may lead some people (particularly neuroscientists) to conclude that AI-related futurism is a bunch of confusions based in ignorance. I don't think it is, but a neuroscientist taking the Hanson-Yudkowsky debate as an introduction to the issues could easily conclude that.

Of course, that's not the most important reason for people with an interest in AI to understand the basics of neuroscience. The most important reason is that understanding some neuroscience will help clarify your thinking about the rest of cognitive science.

References:

Pinker, S. 2007. The Language Instinct. Harper Perennial Modern Classics.

Quiroga, R. Q. et al. 2005. Invariant visual representation by single neurons in the human brain. Nature, 435, 1102-1107.


The origins of this article are in my partial transcript of the live June 2011 debate between Robin Hanson and Eliezer Yudkowsky. While I still feel like I don't entirely understand his arguments, a few of his comments about neuroscience made me strongly go, "no, that's not right."

The "his" in this paragraph is very ambiguous. Which of the two does the lack of neuroscience knowledge apply to?

[…] they set about looking for ways to make an operating system do most of what the current version of Windows does while being more [compressed].

We have an actual example of this here (also, the last progress report). The punchline is "Personal computing in one book" (400 pages × 50 lines per page means 20K lines of code). It is meant to do basically the work of Windows + Office + IE + Outlook. And the compilers are included in those 20 thousand lines.

They end up doing a lot of things that are only applicable to their situation, and couldn’t be used to make a much more powerful operating system. For example, they might look for ways to recycle pieces of code, and make particular pieces of code do as many different things in the program as possible.

Well, no.

They do look for ways to maximize code recycling. However, the result is not less power. On the contrary, they achieve unmatched flexibility. Two examples:

  • Their graphic stack draws everything, from characters on a page to the very windowing system. As a result, if you suddenly want to rotate a window (and its content) by any angle, you just need to write 2 lines of code to add the feature.
  • Their language stack ...
FeepingCreature:
Compression is actually a very important skill for programmers that tends to correlate with experience. More compressed code -> less redundancy -> less space for inconsistencies to arise on modification.
Armok_GoB:
Now this is really interesting! If we take this and extrapolate it the same way as we did our previous misconception, it seems like having so little complexity to work with is an important factor in causing the generality! Predictions from this:

  • Species with a lower mutation rate and more selection pressure, while they should be much better off at first glance, would have to advance much further before reaching similar amounts of generality. (Makes for great scifi!)
  • Approaches to AI that are very minimal, accessible on a low level from within, and entangled with every other function on the actual physical computer may be a better idea than one would otherwise expect. (Which, depending on what you'd expect, might still not be much.)
loup-vaillant:
Probably. My favourite example here is first class functions in programming languages. There is talk about "currying", "anonymous functions", "closures"… that needlessly complicates the issue. They look like additional features which complicate the language and make people wonder why they would ever need that.

On the other hand, you can turn this reasoning on its head if you think of functions as mere mathematical objects, like integers. Now the things you can do with integers you can't do with functions (besides arithmetic) are restrictions. Lifting those restrictions would make your programming language both simpler and more powerful.

Now there's a catch: all complexity does not lie in arbitrary quirks or restrictions. You need a minimum amount to do something useful. So I'm not sure to what extent the "simplify as much as you can" can generalize. It sure is very helpful when writing programs.

----

There's a catch however: the complexity I remove here was completely destructive. Here using the general formulae for edge cases merely lifted restrictions! I'm not sure that's always the case. You do need a minimum amount of complexity to do anything. For instance, Windows could fit in a book if Microsoft cared about that, so maybe that's why it (mostly) doesn't crash down in flames. On the other hand, something that really cannot fit in less than 10 thousand books is probably beyond our comprehension. Hopefully a seed FAI will not need more than 10 books. But we still don't know everything about morality and intelligence.
FeepingCreature:
Intuitively, the complexity of the program would have to match the complexity of the problem domain. If it's less, you get lack of features and customizability. If it's more, you get bloat.
Armok_GoB:
What about 10 thousand cat videos? :p But yea, upvoted.

Upvoted.

I agree with Singularity Institute positions on a great deal. After all, I recently made my first donation to the Singularity Institute.

Be careful about the direction of causation here!

The direction of causality: Forward in time!

Sure, commitment effects and identity considerations are going to tend to increase agreement. However this can't weaken the extent to which a donation having been made is evidence of agreement at that time. Agreement may have increased since then but they clearly had some other reason at the time. (I don't think acausal trade with their future post-commitment influenced selves really comes into it!)

FeepingCreature:
Technically: having made a donation can skew perception of past agreement. Memory is part recollection, part reconstruction.
wedrifid:
Please see grandparent.
FeepingCreature:
My apology.
ViEtArmis:
Really, it can go either way, since saying things without being forced increases your belief in them (I imagine donating to charity does, as well).

Another reason given for why human intelligence must be simple, is that we've only had time for a few complex evolutionary adaptations since we split off from other primates. Chimps clearly aren't particularly adapted to, say, doing math, so our ability to do math must come from a combination of some kind of General Intelligence, which can be applied to all kinds of tasks (what Eliezer called "the master trick"), and maybe a few specific adaptations.

But it recently occurred to me that, even if the human brain hasn't had time to gain a lot of complex functions since splitting from chimps, it's entirely possible that the chimp brain has lost a lot of complex functions. My guess would be that our ancestors started becoming anomalously intelligent a long time ago, and only the human line has continued to get smarter, while all our relatives have "reverted to the mean", so to speak.

Could anyone with more knowledge on the subject tell me whether this is reasonable? Even if it's pure conjecture, it seems like the mere possibility would nullify that particular argument for human intelligence being simple / general.

knb:
I don't think we should assume that the vast difference between human and primate achievement is caused by a vast difference in human and primate general intelligence. There are vast differences in achievement between human groups, but only fairly modest intelligence differences. Some ape experts estimated the IQ of chimps as above 75. Chimps have been known to use some surprisingly advanced technologies, almost comparable to the more primitive human groups (like Tasmanian Aborigines). Sometimes, chimps notice other chimps doing these things and copy them, but they don't teach these new techniques to each other in any comprehensive way. This strikes me as the main advantage humans have, rather than raw mental firepower. It seems likely to me that the main biological adaptation that made humans so much more successful at learning was not general intelligence, but rather a more advanced theory of mind and communication skills that followed from it. I'm sure general intelligence improvements played a role, but my guess is g was secondary to social learning.
[comment deleted]
Pentashagon:
It looks like there is a new/emerging field called primate archaeology aimed at studying more than just the hominins. If other primates show more advanced prehistoric tool use than they have now it would be evidence for your hypothesis. http://www.sciencedaily.com/releases/2009/07/090715131437.htm
aaronde:
Cool. Looks like there's no evidence yet one way or another. Isn't there a place lesswrongers post predictions like this to check their calibration? I'd be willing to go on record with, say, 75% confidence that the most recent common ancestor of humans and chimps had more advanced tool use than modern chimps.
gwern:
Yes, PredictionBook.com, but your prediction would be difficult since it has no clear due date, and no clear judging - it'd be hard to date most recent common ancestor, date tools, or prove that the former was using the latter.
[anonymous]:
Shouldn't we consider the fact that no such evidence has yet emerged as strong evidence against this hypothesis? Advanced prehistoric tool use would be a huge discovery - and, one would expect, confer an evolutionary advantage resulting in widespread evidence across space and time. We observe neither of these things.

Another video from the related links that's definitely worth watching. Some of the results in this video have been mentioned before on LW, but seeing them in action is incredible.

roland:
One thing in the video that I don't understand: http://www.youtube.com/watch?feature=player_detailpage&v=lfGwsAdS9Dc#t=338s (The video is already at the relevant time offset). The patient sees two words: bell(right brain) and music(left brain) and afterwards when shown several pictures he points to the bell with the right hand. Since the right hand is controlled by the left hemisphere I would expect it to point to the musical icons. In this case the right hand seems to be controlled by the right hemisphere and goes to bell. I emailed Prof. Gazzaniga and he replied that either hemisphere can control either hand. Still puzzled because I read in some papers that the hands are in fact controlled by the opposite hemisphere. See also the following comment of mine: http://lesswrong.com/lw/d27/neuroscience_basics_for_lesswrongians/73gz
Kawoomba:
Good catch! There are several potential explanations:

1) That particular experiment was repeated several times for TV recording purposes, until they got a good take. JW (the patient) may have pointed with the left hand at first, but knowing what was happening would switch to his right hand since he is, in fact, right handed.

1b) Similarly, that particular experiment has generally been done before, priming the patient. Note how in the subsequent experiment they say "this is being done for the first time", implying that the previous experiments are long established repeat experiments for the camera.

2) A severed corpus callosum does not mean that all connections between the two hemispheres are cut. Here's a paper saying that for eight adult humans born with complete agenesis of the corpus callosum, fMRIs still show that both hemispheres seem equally synchronized despite having no corpus callosum (!). Never underestimate the neuroplasticity of the human brain, even when the damage happened as an adult, especially in a case like JW's in which a significant amount of time has passed since the surgery. In other words, it's more of a trend of the hemispheres working separately without a corpus callosum, not an absolute rule.

3) In so far as we know anything about brain lateralisation, we know that the left motor cortex does in fact only control the right side of the body (not counting cranial nerves). Decades of stroke victims have made that case. However, there are many steps involved in "controlling your right hand". The motor axons that go down to your right hand do in fact all originate in your left hemisphere, but they just execute the command, they don't come up with it. The planning, intention, high level computing that goes on that eventually yields that command is not solely located in that hemisphere. Simplified, you can have an emotional reaction in your right hemisphere that eventually translates to your left motor cortex pulling the trigger and depola

If you actually look at the genome, we’ve got about 30,000 genes in here. Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it.

As a life sciences person, I'm surprised you didn't address this. "Junk" DNA is almost certainly not junk. It exercises very fine tuned control over the expression of genes. Enhancers can be located hundreds of thousands of bases away from a gene and still affect expression of that gene. Their effect surely isn't anywhere near that of the TATA box, but these sections of "junk DNA" surely affect things. We're only beginning to understand micro RNAs and all the ways in which they act, but I would be astonished if humans (and most animals, since we conserve a lot) had been lugging around 90% of their genome for no apparent reason. That claim is so staggering as to require a great deal of evidence to start with, and I would argue most of the evidence points the other way.

I would be astonished if humans had been lugging around 90% of their genome for no apparent reason.

Perhaps you will be astonished by parasitic DNA. It is pretty astonishing.

EDIT: ah, right: the fun video that introduced me to this.

zslastman:
The proposition that DNA can be parasitic is fairly bulletproof, but it's essentially impossible to prove any given piece of DNA nonfunctional - you can't test it over an evolutionary timescale under all relevant conditions. Selfish DNA elements very frequently get incorporated into regulatory networks, and in fact are a major driving force behind evolution, particularly in animals, where the important differences are mostly in regulatory DNA.

Actually, we can guess that a piece of DNA is nonfunctional if it seems to have undergone neutral evolution (roughly, accumulation of functionally equivalent mutations) at a rate which implies that it was not subject to any noticeable positive selection pressure over evolutionary time. Leaving aside transposons, repetition, and so on, that's a main part of how we know that large amounts of junk DNA really are junk.

There are pieces of DNA that preserve function but undergo neutral evolution. A recent Nature article found a non-protein-coding piece of DNA that is necessary for development (by being transcribed into RNA), that had undergone close to neutral evolution from zebrafish to human, but maintained functional conservation. That is, taking the human transcript and inserting it into zebrafish spares it from death, indicating that (almost) completely different DNA performs the same function, and that using simple conservation of non-neutral evolution we probably can't detect it.

philh:
I'm having trouble working out the experimental conditions here. I take it they replaced a sequence of zebrafish DNA with its human equivalent, which seemed to have been undergoing nearly neutral selection, and didn't observe developmental defects. But what was the condition where they did observe defects? If they just removed that section of DNA, that could suggest that some sequence is needed there but its contents are irrelevant. If they replaced it with a completely different section of DNA that seems like it would be a lot more surprising.
zslastman:
You are correct - given the information above it is possible (though unlikely) that the DNA was just there as a spacer between two other things and its content was irrelevant. However the study controlled for this - they also mutated the zebrafish DNA in specific places and were able to induce identical defects as with the deletion. What's happening here is that the DNA is transcribed into non-protein-coding RNA. This RNA's function and behavior will be determined by, but impossible to predict from, its sequence - you're dealing not only with the physical process of molecular folding, which is intractable, but with its interactions with everything else in the cell, which is intractability squared. So there is content there but it's unreadable to us and thus appears unconstrained. If we had a very large quantum computer we could perhaps find the 3d structure "encoded" by it and its interaction partners, and would see the conservation of this 3d structure from fish to human.
philh:
That's interesting. I guess my next question is, how confident are we that this sequence has been undergoing close-to-neutral selection? I ask because if it has been undergoing close-to-neutral selection, that implies that almost all possible mutations in that region are fitness-neutral. (Which is why my thoughts turned to "something is necessary, but it doesn't matter what". When you call that unlikely, is that because there's no known mechanism for it, or you just don't think there was sufficient evidence for the hypothesis, or something else?) But... according to this study they're not, which leaves me very confused. This doesn't even feel like I just don't know enough, it feels like something I think I know is wrong.
dekelron:
There is no "neutral" evolution, as all DNA sequences are subject to several constraints, such as maintaining GC content and preventing promoters) from popping out needlessly. There is also large variability of mutation rates along different DNA regions. Together, this results in high variance of "neutral" mutation rate, and because of huge genome, making it (probably) impossible to detect even regions having quarter of neutral mutation rate. I think this is the case here. This extends what zslastsman written regarding structure.
zslastman:
We can't be totally confident. I'd guess that if you did a sensitive test of fitness (you'd need a big fish tank and a lot of time) you'd find the human sequence didn't rescue the deletion perfectly. They've done this recently in C. elegans - looking at long term survival at the population level - and they find a huge number of apparently harmless mutations are very detrimental at the population level.

The reason I'd say it was unlikely is just that spacers of that kind aren't common (I don't know of any that aren't inside genes). If there were two sequences on either side that needed to bend around to each other to make contact, it could be plausible, but since they selected by epigenetic marks, rather than sequence conservation, it would be odd and novel if they'd managed to perfectly delete such a spacer (actually it would be very interesting in itself).

I think you are being confused by two things:

1) The mutation I said they made was deliberately targeted to a splice site, and splice sites are constrained (though you can't use them to identify sequences because they are very small, and so occur randomly outside functional sequence all the time).

2) You are thinking too simplistically about sequence constraint. RNA folds by wrapping up and forming helices with itself, so the effect of a mutation is dependent on the rest of the sequence. Each mutation releases constraint on other base pairs, and introduces it to others. So as this sequence wanders through sequence space it does so in a way that preserves relationships, not absolute sequence. From its current position in sequence space, many mutations would be detrimental. But those residues may get the chance to mutate later on, when other residues have relieved them. This applies to proteins as well, by the way. Proteins are far more conserved in 3d shape than in 2d sequence.
dekelron:
The DNA in the zebrafish was deleted, and the human version was inserted later, without affecting the main DNA (probably using a "plasmid"). Without the human DNA "insert", there was a developmental defect. With either the human DNA insert or the original zebrafish DNA (as an insert), there was no developmental defect, leading to the conclusion that the human version is functionally equivalent to the zebrafish version.
A1987dM:
How do we know whether, by replacing the insert with a random sequence of base pairs the same length, there would be no developmental defect either?
dekelron:
There are several complications addressed in the article, which I did not describe. Anyway, using a "control vector" is considered trivial, and I believe they checked this.
zslastman:
That's true of protein coding sequence, but things are a little bit more difficult for regulatory DNA because:

1) Regulatory DNA is under MUCH less sequence constraint - the relevant binding proteins are not individually fussy about their binding sites.

2) Regulatory networks have a lot of redundancy.

3) Regulatory mutations can be much more easily compensated for by other mutations - because we're dealing with analog networks, rather than strings of amino acids.

Regulatory evolution is an immature field but it seems that an awful lot of change can occur in a short time. The literature is full of sequences that have an experimentally provable activity (put them on a plasmid with a reporter gene and off it goes) and yet show no conservation between species. There's probably a lot more functional sequence that won't just work on its own on a plasmid, or show a noticeable effect from knockouts. It may be that regulatory networks are composed of a continuous distribution from a few constrained elements with strong effects down to lots of unconstrained weak ones. The latter will be very, very difficult to distinguish from junk DNA.
Strilanc:
Data with lots of redundancy does, in a certain sense, contain a lot of junk. Junk that, although it helps reliably transmit the data, doesn't change the meaning of the data (or doesn't change it by much).
A1987dM:
Yeah. What's relevant to this discussion is complexity, not number of base pairs.
RobertLumley:
This actually isn't necessarily true. If there is a section of the genome A that needs to act on another section of the genome C with section B in between, and A needs to act on C with a precise (or relatively so) genomic distance between them, B can neutrally evolve, even though it's still necessary for the action of A on C, since it provides the spacing.
Baughn:
Thus, serving a purely structural function. In that case the complexity in bits of B, for length N, becomes log2(N) instead of 2*N. It's not quite 0, but it's a lot closer.
Mitchell_Porter:
The only definitively nonfunctional DNA is that which has been deleted. "Nonfunctional DNA" is temporarily inactive legacy code which may at any time be restored to an active role.
philh:
In the context "how complicated is a human brain?", DNA which is currently inactive does not count towards the answer. That said (by which I mean "what follows doesn't seem relevant now that I've realised the above, but I already wrote it"), Is inactive DNA more likely to be restored to an active role than to get deleted? I'm not sure it makes sense to consider it functional just because it might start doing something again. When you delete a file from your hard disk, it could theoretically be restored until the disk space is actually repurposed; but if you actually wanted the file around, you just wouldn't have deleted it. That's not a great analogy, but... My gut says that any large section of inactive DNA is more likely to become corrupted than to become reactivated. A corruption is pretty much any mutation in that section, whereas I imagine reactivating it would require one of a small number of specific mutations. Counterpoint: a corruption has only a small probability of becoming fixed in the population; if reactivation is helpful, that still only has a small probability of becoming fixed, but it's a much higher small probability. Counter-counterpoint: no particular corruption would need to be fixed in the whole population. If there are several corruptions at independent 10% penetration each, a reactivating mutation will have a hard time becoming fixed.
Mitchell_Porter:
Here's the concept I wanted: evolutionary capacitance.
RobertLumley:
Yeah, the most common protein (fragment) in the human genome is reverse transcriptase, which is only used by viruses (as far as we know). It just gets in there from old (generationally speaking) virus infections. But I'd still be surprised if we haven't figured out some way to use those fragments left in there.
Manfred:
Why should we? It's like postulating a human evolutionary use for acne - the zits don't have to be useful for us, they're already plenty useful for the bacterium that makes them happen. Do you mean in the sense that we've adapted to having all this junk in our DNA, and would now get sick if it was all removed? That's possible (though onions seem to be fine with more/less of it).
RobertLumley:
Well it takes something like 8 ATP per base pair to replicate DNA, so that's a pretty hefty metabolic load. Which means, on average, it needs to compensate for that selection pressure somehow. The viruses in our lab will splice out a gene that doesn't give benefit in maybe around 5 generations? Humans are much better at accurate replication, but I'd still think it would lose it fairly quickly.

I read that and thought: how much is that?

ATP may release roughly 14 kcal/mol; the actual amount varies with local conditions (heat, temperature, pressure) and chemical concentrations. An adult human body contains very roughly 50 trillion cells. However, different cells divide at very different rates. I tried to find data estimating total divisions in the body; this Wikipedia article says 10,000 trillion divisions per human lifetime. (Let one lifetime = 80 years ~~ 2.52e9 seconds).

Now, what is a trillion? I shall assume the short scale, trillion = 1e12, and weep at the state of popular scientific literature that counts in "thousands of trillions" instead of actual numbers. This means 10000e12=1e16 cell divisions per lifetime.

We then get 14e3 / 6.022e23 (Avogadro's constant) = 2.325e-20 calories per extra base pair replication; and 1e16 / 2.52e9 = 3.968e6 cell divisions per second on average. So an extra base pair in all of your cells costs 9.23e-14 calories per second. Note those are actual calories, not the kilocalories sometimes called "calories" marked on food. Over your lifetime an extra base pair would cost 2.325e-4 calories. That's 0.00000235 kilocalories in ...
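For anyone who wants to re-run the arithmetic, here is the same estimate as a short script, using the rough figures quoted in this thread and carrying the parent comment's 8 ATP per base pair through (which makes the per-base-pair numbers about eight times the per-ATP figures above; either way, a single base pair is energetically negligible):

```python
# Rough re-run of the estimate above, using the figures quoted in this thread.
AVOGADRO = 6.022e23
ATP_CAL_PER_MOL = 14e3           # ~14 kcal/mol of ATP, as quoted above
ATP_PER_BASE_PAIR = 8            # figure from the parent comment
DIVISIONS_PER_LIFETIME = 1e16
LIFETIME_SECONDS = 80 * 365.25 * 24 * 3600   # ~2.52e9 s

cal_per_bp_per_division = ATP_PER_BASE_PAIR * ATP_CAL_PER_MOL / AVOGADRO
divisions_per_second = DIVISIONS_PER_LIFETIME / LIFETIME_SECONDS
print(cal_per_bp_per_division)                           # ~1.9e-19 cal per base pair per division
print(cal_per_bp_per_division * divisions_per_second)    # ~7.4e-13 cal per second, body-wide
print(cal_per_bp_per_division * DIVISIONS_PER_LIFETIME)  # ~1.9e-3 cal (not kcal) per lifetime
```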

RobertLumley:
Well first off, I'm going entirely on memory with the 8 ATP number. I'm 90% certain it is at least that much, but 16 is also sticking in my head as a number. The best reference I can give you is probably that you get ~30 ATP per glucose molecule that you digest. (Edit: that's for aerobic metabolism, not anaerobic. Anaerobic is more like 2 ATP per glucose molecule.) The other thing to consider is that typically, your cell divisions are going to be concentrated in the first 1/6th of your life or so. So averaging it over 80 years may be a little disingenuous. Cells certainly still grow later in life, but they slow down a lot. I agree splicing out a single base is not likely to generate any measurable fitness advantage. But if you have 90% of your genome that is "junk", that's 0.9*25.5 kcal/day, which is about 1% of a modern daily diet, and probably a much larger portion of the diet in the ancestral environment. Requiring eating 1% more food over the course of one's lifetime seems to me like it would be significant, or at least approaching it. But what do I know, I'm just guessing, really. Thanks for the math though, that was interesting.
DanArmak:
Using 16 ATP instead of 8, and 80/6=13.33 years, won't change the result significantly. It seems off by many orders of magnitude (to claim natural selection based on energy expenditure). 1% of diet is a selectable-sized difference, certainly. But the selection pressure applies to individual base pair mutations, which are conserved or lost independently of one another (ignoring locality effects etc). The total genome size, or total "junk" size, can't generate selection pressure unless different humans have significantly different genome size. But it looks like that's not the case.
RobertLumley:
I am confused why you believe this. Evolution need not splice out bases one base at a time. You can easily have replication errors that could splice out tens of thousands of bases at a time.
Douglas_Knight:
No, replication is more robust than that. I have never heard of large insertion or deletion in replication, except in highly repetitive regions (and there only dozens of bases, I think). However, meiotic crossover is sloppy, providing the necessary variation. Speaking of meiotic crossover, non-coding DNA provides space between coding regions, reducing the likelihood of crossover breaking them.
RobertLumley:
Meiotic crossover is what I meant, actually. Generally the polymerase itself wouldn't skip unless the region is highly repetitive, you're right.
DanArmak:
...I seem to have assumed the number of BP changed by small or point mutations would make up the majority of all BP changed by mutations. (I was probably primed because you started out by talking about the energy cost per BP.) Now that you've pointed that out, I have no good reason for that belief. I should look for quantified sources of information. OK, so now we need to know 1) what metabolic energy order of magnitude is big enough for selection to work, and 2) the distribution of mutation sizes. I don't feel like looking for this info right now, maybe later. It does seem plausible that for the right values of these two variables, the metabolic costs would be big enough for selection to act against random nonfunctional mutations. But apparently there is a large amount of nonfunctional DNA, and also I've read that some nonfunctional mutations are fixated by drift (i.e. selection is zero on net). That's some evidence for my guess that some (many?) nonfunctional mutations, maybe only small ones, are too small for selection pressure due to metabolic costs to have much effect.
RobertLumley:
Yeah, I will definitely concede small ones have negligible costs. And I'm not sure the answer to 1) is known, and I doubt 2) is well quantified. A good rule of thumb for 2) though is that "if you're asking whether or not it's possible, it probably is". At least that's the rule of thumb I've developed from asking questions in classes.
gwern:
Cool calculation, but just off the top of my head, you would also need energy for DNA repair processes, which my naive guess would be O(n) in DNA length and is constantly ongoing.
DanArmak:
Good point. And there may well be other ways that "junk" genes are metabolically expensive. For instance real genes probably aren't perfectly nonfunctional. Maybe they make the transcription or expression of other genes more (or less) costly, or they use up energy and materials being occasionally transcribed into nonfunctional bits of RNA or protein, or bind some factors, or who knows what else. And then selection can act on that. But the scale just seems too small for any of that matter in most cases - because it has to matter at the scale of a single base pair, because that's the size of a mutation and point mutations can be conserved or lost independently of one another. What is the metabolic cost (per cell per second) scale or order of magnitude where natural selection begins to operate?

I enjoyed this post and would appreciate more like it, in particular, more like parts 2 and 1.

The argument from the small size of the genome is more plausible, especially if Eliezer is thinking in terms of Kolmogorov complexity, which is based on the size of the smallest computer program needed to build something. However, it does not follow that if the genome is not very complex, the brain must not be very complex, because the brain may be built not just based on the genome, but also based on information from the outside environment.

I think the question in the original debate could be formulated as something like: How big a solution, in the amount of program code we need to write, do we need to find to be able to implement a self-improving artificial intelligence that will, given an amount of sensory input and opportunities to interact with its environment comparable to that of a human growing up, grow up to human-level cognition.

I don't see how the other sources of information needed for brain development are a counterargument here. Once you have a machine learning system that will bootstrap itself to sentience given a few years of video feed, you've done pretty well indeed.

I also don't see how the compressibility argument is supposed to work without further qualifiers.... (read more)

0Decius12y
Of course, to load the data from the floppy you need a bare minimum of firmware. The disk by itself doesn't do anything. By the same token, the human genome doesn't do anything on its own. It requires a human(?*) egg to develop anything significant. *I'm not aware of any experiments where an animal cell was cloned with the DNA from a different species - what happens if you put a sheep nucleus into a horse egg and implant it in a horse? Is it like trying to load ProDOS directly onto modern hardware?
0buybuydandavis12y
There was/is a plan to resurrect the Woolly Mammoth by putting mammoth DNA into an elephant egg. The Wikipedia page on Interspecific Pregnancy links to an example of a giant panda genome put into a rabbit egg and brought to term in a cat. http://en.wikipedia.org/wiki/Interspecific_pregnancy http://www.ncbi.nlm.nih.gov/pubmed/12135908 Also, I think Venter enucleates the cell of something to serve as the host for his synthetic DNA.
0Decius12y
It looks like transfer of embryos between species has been successful, but not clones. I wouldn't call the panda/rabbit clone in a cat "brought to term", more like "had promising results".
0[anonymous]12y
One of those two animals seems far easier to bring to term in a cat. The other I am imagining bursting out of the stomach "Aliens" style because there just isn't any room left!
0Risto_Saarelma12y
Does this have any relevance for estimating things at an order-of-magnitude level? To run Windows Vista on some arbitrary future hardware, you'd need an x86 emulator, but an x86 emulator should take much less work and far fewer lines of code than Windows Vista itself, so you'd still want to eyeball the amount of complexity mostly by the amount of stuff in the Vista part.
1Decius12y
An x86 emulator needs all of the same code as the x86 hardware has, plus some. It needs some amount of firmware to load it as well. It's not hard to emulate hardware, given a robust operating system, but to emulate x86 hardware and run Vista you need the hardware of the future, the firmware of that hardware, an operating system of the future, the emulator for x86 hardware, and Vista. I'm saying that all of that is what needs to be included in the 360kB of data for the data to be deterministic. I can easily create a series of bits that will produce wildly different outputs when used as the input to different machines, or an infinite series of machines that take the same bits and produce every possible output. And if you are estimating things at an order-of-magnitude level, how many bits is a Golgi apparatus worth? What about mitochondria? How much is a molecule worth if it isn't coded for at all in the included DNA (vitamin B12, for example)?
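The point about the same bits producing different outputs on different machines is easy to make concrete. The two toy "machines" below are invented purely for illustration: both read the same byte string, but one interprets it as numbers to sum and the other as text, so the meaning of the data lives entirely in the interpreter.

    # Toy illustration: one byte string, two made-up "machines", two unrelated outputs.
    # Both interpreters are invented for this example.

    program = bytes([3, 7, 1, 4, 2, 9])

    def machine_a(code: bytes) -> int:
        """Treat each byte as a number to accumulate and return the total."""
        return sum(code)

    def machine_b(code: bytes) -> str:
        """Treat each byte as an offset from 'a' and return the resulting text."""
        return "".join(chr(ord("a") + b) for b in code)

    print(machine_a(program))  # 26
    print(machine_b(program))  # 'dhbecj'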
0Risto_Saarelma12y
I'm not a biologist or a chemist, you tell me. I'd start by asking at what point in the evolutionary history of life on Earth those things first showed up, for a rough estimate of the evolutionary search-work needed to come up with something similar. Also, I'm still talking about estimating the amount of implementation work here, not the full stack of semantics in the general case down to atoms. Yes, you do need to know the computer type, and yes, I was implicitly assuming that the 360kB floppy was written to be run on some specific hardware much like a barebones modern PC (one that can stay running for an extremely long time and has a mass memory with extremely large capacity). The point of the future computing hardware being arbitrary was important for the x86 estimation question. Barring the odd hardware with a RUN_WINDOWS_VISTA processor opcode, if I had to plan on being dropped in front of an alien computer, having to learn how to use it from a manual, and then implementing an x86 emulator and a Vista-equivalent OS on it, I'd plan for some time to learn how to use the thing, a bit longer to code the x86 emulator now that I have an idea how to do it, and so much more time (RUN_WINDOWS_VISTA opcodes having too low a Solomonoff prior to be worth factoring into plans) doing the Vista-alike that I might as well not even include the first two parts in my estimation. (Again, the argument as I interpret it is specifically about eyeballing the implementation complexity of "software" on one specific platform and using that to estimate the implementation complexity of software of similar complexity for another specific platform. The semantics of bitstrings for the case where the evaluation context can be any general thing whatsoever shouldn't be an issue here.)
0Decius12y
I think that they are features of eukaryotic cells, and I can't find an example of eukaryotic life that doesn't have them. Animals, plants, and fungi all have both, while bacteria and archaea have neither. In general, mitochondria are required for metabolism in cellular creatures, while the Golgi apparatus is required to create many complex organic structures. Forced to create a comp sci comparison, I would compare them to the clock and the bus driver. If you had to plan on being dropped in front of an alien computer that didn't use a clock, execute one instruction at a time, or provide an interrupt mechanism for input devices or peripherals, could you still create an emulator that simulated those hardware pieces? We'll set a moderate goal: your task is to run DX7 well enough for it to detect an emulated video card (that being one of the few things which are specifically Windows, rather than generally 'OS'). For fun, we can assume that the alien computer accepts inputs and then returns, in constant time, either the output expected of a particular Turing machine or a 'non-halting' code. From this alone, it can be proven that it is not a Turing machine; however, you don't have access to what it uses instead of source code, so you cannot feed it a program which loops if it exits and exits if it loops, but you can program any Turing machine you can describe, along with any data you can provide.
0Risto_Saarelma12y
I'm not sure where you're going with this. I'm not seeing anything here that would argue that the complexity of the actual Vista implementation would increase ten or a hundred-fold, just some added constant difficulties in the beginning. Hypercomputation devices might mess up the nice and simple theory, but I'm waiting for one to show up in the real world before worrying much about those. I'm also still pretty confident that human cells can't act as Turing oracles, even if they might squeeze a bit of extra computation juice from weird quantum mechanical tricks. Mechanisms that showed up so early in evolution that all eukaryotes have them took a lot less evolutionary search than the features of human general intelligence, so I wouldn't rank them anywhere close to the difficulty of human general intelligence in design discoverability.
0Decius12y
Organelles might not be Turing oracles, but they can compute folding problems in constant time or less. And I was trying to point out that you can't implement Vista and an x86 emulator on any other hardware for less than you can implement Vista directly on x86 hardware. EDIT: Considering that the evolutionary search took longer to find mitochondria from the beginning of life than it took to find intelligence from mitochondria, I think that mitochondria are harder to make from primordial soup than intelligence is to make from phytoplankton.
0JohnEPaton12y
I think he's saying that the brain is not just the genome. What you see as an adult brain also reflects a host of environmental factors. Since these environmental factors are complex, so too is the brain. Yes, you could probably use some machine learning algorithm to build a brain with the input of a video feed. But this says relatively little about how the brain actually develops in nature.
1Risto_Saarelma12y
That's just the thing. It makes a big difference whether we're talking about a (not necessarily human) brain in general, or a specific, particular brain. Artificial intelligence research is concerned about being able to find any brain design it can understand and work with, while neuroscience is concerned with the particulars of human brain anatomy and often the specific brains of specific people. Also, I'd be kinda hesitant to dismiss anything that involves being able to build a brain as "saying relatively little" about anything brain-related.
2JohnEPaton12y
Thanks for the clarification. You're right that artificial intelligence and neuroscience are two different fields.

You seem to have forgotten to mention, with due emphasis, that childhood brain damage gets compensated for, with entirely different regions of the brain taking over functions normally done by other regions. That may prompt the reader to draw the fallacious conclusion that the functionality of the brain is to a larger extent stored genetically than re-generated during early learning, with the same regions being used - by the learner - for the same tasks due to their proximity to inputs and outputs and the long-range wiring, but with little other s... (read more)

it turns out that tasks that seem easy to us can in fact require such a specialized region

The causation is reversed here - I'm sure you know this but I think it's worth pointing out explicitly.

It's because we have a specialized region for some tasks that they seem so easy to us. (Things seem hard when we need to concentrate on them, and when we don't know how to do them.) And we have a specialized region for these tasks because they need it: we can't do them well using our "general-purpose thinking" even if we do concentrate. (People with Broca's or Wernicke's aphasia can't compensate well using conscious thought.)

Do you know where to find a good Neuroscience forum?

2scaphandre12y
Not quite a community forum, but I find that comments at http://neuroskeptic.blogspot.co.uk can be interesting. There are also reddit.com/r/neuro and reddit.com/r/cogsci - but they are both fledgling and susceptible to popsci.

A big difference, with the sound reaching the left ear first, indicates the sound came from the left. A big difference, with the sound reaching the right ear first, indicates the sound came from the left.

I think it should be "came from the right" in the second sentence.
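For readers who want the quoted mechanism in concrete form, here is a minimal sketch of the geometry behind interaural time differences. It uses the simple far-field approximation ITD ≈ d·sin(θ)/c; the head width and speed of sound are round illustrative numbers, not measured values.

    import math

    # Far-field approximation of the interaural time difference (ITD):
    # a source at azimuth theta reaches the nearer ear earlier by about d*sin(theta)/c.
    HEAD_WIDTH_M = 0.20       # rough ear-to-ear distance, assumed
    SPEED_OF_SOUND_M_S = 343  # speed of sound in air at roughly 20 degrees C

    def itd_seconds(azimuth_deg: float) -> float:
        """Positive result: the sound reaches the right ear first (source on the right)."""
        return HEAD_WIDTH_M * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND_M_S

    for angle in (0, 30, 90, -90):
        delay_us = itd_seconds(angle) * 1e6
        side = "right" if delay_us > 0 else "left" if delay_us < 0 else "straight ahead"
        print(f"azimuth {angle:+4d} deg -> ITD {delay_us:+7.1f} microseconds ({side})")

With these numbers the largest possible delay is a bit under 600 microseconds, which gives a sense of how fine the timing differences are that the auditory system has to resolve.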

However, it does not follow that if the genome is not very complex, the brain must not be very complex, because the brain may be built not just based on the genome, but also based on information from the outside environment.

This is a good point, and can be taken even further... (read more)

Do you know Prof. Gazzaniga? He gave a Gifford lecture about the brain at the University of Edinburgh.

The following are two simple questions regarding two of the split brain experiments that are still puzzling me.

I'm referring to the 3rd video in the series (the one about the Interpreter). Immediately after the "snow scene & chicken claw" split-brain experiment there are another two (the video is already at the correct mark):

http://www.youtube.com/watch?feature=player_detailpage&v=mJKloz2vwlc#t=1108s

  1. J.W.(split brain) sees two words: bell(

... (read more)

However, it does not follow that if the genome is not very complex, the brain must not be very complex, because the brain may be built not just based on the genome, but also based on information from the outside environment.

I think this is relevant if the topic is uploading, but I think it misses the point if the topic is "how hard is it to produce a self-improving intelligence". The brain of a newborn has not yet received much input from the outside world, yet has the ability to learn. This places a (rather large, IMO) upper bound on how much complexity is necessary to produce an intelligent system.

I'm having trouble following your criticism. When you say that the human brain does not necessarily use any deep insights into intelligence, does that mean that it only uses processes for which we already have functionally equivalent algorithms, and that it's merely a problem of scale to interpret all the functions the brain implements? Or do you disagree with the definition of deep? I have no doubt that given enough time we could create an algorithm to functionally emulate a human brain; but would we understand the overall algorithm beyond "this part run... (read more)

They end up doing a lot of things that are only applicable to their situation, and couldn’t be used to make a much more powerful operating system. For example, they might look for ways to recycle pieces of code, and make particular pieces of code do as many different things in the program as possible.

It seems to me that finding out how to recycle code and making particular pieces of code do many different things is exactly how to build a more powerful (and general) operating system.

But it turns out that tasks that seem easy to us can in fact require such a specialized region.

In a way, this really shouldn't be surprising at all. Any common mental task which has its own specialised region will of course seem easy to us, because it doesn't make use of the parts of the brain we are consciously aware of.

It's not clear to me how badly EY erred. It seems that he was comparing the size of code designed by humans to the size of code "designed" by evolution, which would seem to be his primary mistake. I also concur that he shouldn't go from the complexity of the evolved brain to the "number of insights needed to create AI" (charitably: he doesn't claim to know the exact conversion ratio, but in principle there should be one).

I agree with your "information in genes+environment" although the example of needing light (and other inputs) for ... (read more)

2torekp12y
I think the post needlessly interprets EY as bounding the complexity of the brain to at most that of the genome. Of course the brain's complexity reflects the learning environment - but, important as that is, in this context it doesn't seem very relevant. It's not that hard to "raise" an AI in an environment much like those humans are raised in. (Maybe that's not a good way to create Friendly AI - or maybe it is - but I take it EY's argument was about AI in general.)
1Jonathan_Graehl12y
I agree. More charitably, even, he could be counterarguing: (not a quote and possibly not a faithful paraphrase)
0thomblake12y
Or "make it highly compressed" perhaps.

LessWrongers maybe? Instead of LessWrongians?

I think the title would be stronger without any mention of LessWrong-readers at all; we can presume ourselves to be the intended audience by mere virtue of its being posted here.

Better than "rationalists".

The citations in this comment are new science, so please take them with at least a cellar of salt:

There are recent studies, especially into Wernicke's area, which seem to implicate alternate areas for linguistic processing : http://explore.georgetown.edu/news/?ID=61864&PageTemplateID=295 (they don't cite the actual study, but I think it might be here http://www.pnas.org/content/109/8/E505.full#xref-ref-48-1); and this study (http://brain.oxfordjournals.org/content/124/1/83.full) is also interesting.

Terrence Deacon's 'The Symbolic Species' also argues... (read more)

"Because it’s easy for us, it might be tempting to think it’s an inherently easy task, one that shouldn’t require hardly any brain matter to perform."

This is a very general lesson, the depth and applicability of which can scarcely be overstated. A few thoughts:

1) In its more banal form it plagues us as the Curse of Knowledge. I'm an English teacher in South Korea, and despite six months on the job I have to constantly remind myself that just because it's easy for me to say "rollerskating lollipops" doesn't mean it's inherently easy. ... (read more)

0A1987dM12y
Succeeded at the first attempt. :-) (Now, “red lorry, yellow lorry” -- that's hard.)
2prase12y
You have an easy job compared to Koreans whose language doesn't have distinct phonemes for /r/ and /l/.
0A1987dM12y
Yeah, but my native language's /r/ is not quite the same as English /r/. (I have accidentally used the former when saying “where is” and been misunderstood by native speakers as saying “what is” as a result.)

I just wanted to add that despite ChrisHallquist's background in philosophy, the things he details in this article are very much up to date regarding our current knowledge of the brain.

I'm currently studying at Germany's #1 or #2 university with respect to the quality of its scientific education in psychology, and I can vouch that I couldn't find a single mistake in his article, while being quite familiar with everything he detailed.

The only thing I would emphasize or add is that there is indeed very good evidence that the brain can only develop correctly ... (read more)

The fact that damage to certain parts of the temporal lobe results in an inability to recognize objects contains an extremely important lesson. For most of us, recognizing objects requires no effort or thought as long as we can see the object clearly. Because it’s easy for us, it might be tempting to think it’s an inherently easy task, one that shouldn’t require hardly any brain matter to perform. Certainly it never occurred to me before I studied neuroscience that object recognition might require a special brain region. But it turns out that tasks that s

... (read more)

Taking the Solomonoff Induction route, making a highly compressible version of Windows is EXACTLY what it would mean to have "discovered deep insights into how to build powerful operating systems". Similarly, getting a lot of power out of simple designs is exactly what it means to have insights, at least in the context of Solomonoff Induction (and from there to science in general).

That said, great article. The contribution of complexity from the environment is a major issue, even as early as the womb, and that was definitely an important oversight.
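One toy way to see the compressibility point: compressed size under an ordinary compressor is a crude, computable stand-in for Kolmogorov complexity, and highly regular data needs far fewer bits to describe than incompressible data of the same length. The data below is invented purely for illustration.

    import os
    import zlib

    # Crude stand-in for Kolmogorov complexity: length after zlib compression.
    # Highly regular data compresses enormously; random data barely compresses at all.
    def compressed_size(data: bytes) -> int:
        return len(zlib.compress(data, level=9))

    n = 100_000
    regular = b"abcdefgh" * (n // 8)  # highly structured: a short description generates it
    random_ = os.urandom(n)           # effectively incompressible: no shorter description

    print(f"regular: {n} bytes -> {compressed_size(regular)} bytes compressed")
    print(f"random : {n} bytes -> {compressed_size(random_)} bytes compressed")

In that sense, "a highly compressed Windows" and "deep insight into how to build operating systems" really are two descriptions of the same thing: a short generating program.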

You make a good point that the genome does not completely determine how the brain is set up. Environment is hugely influential in how things develop. I recently read that the expression of our genes can be influenced by things called transcription factors, as well as by processes called splicing and transposition. Each of these things is affected by the environment. For example, if you're a small rat pup and your mom licks you, this will trigger a cascade of hormones that will end up changing the expression of your DNA and your amygdala so that you release less stress hor... (read more)

0MaoShan12y
As was pointed out in the article, however, many parts of the brain's larger structure, finer wiring, and even the mechanisms for encoding gene expression, including epigenetics (which we are only beginning to explore), are nearly identical between individuals of a species. Neuroscience and medical practice in general would be in sad shape if they didn't take advantage of the knowledge gained by these "erroneous" attempts. Knowing everything about one particular brain would only benefit the owner of that brain, while knowing a lot about the general workings can benefit many, including potential AI programmers.

What would be the consequence for someone suffering damage to the interpreter module of the brain?

I think I had read before the argument that the complexity of the human genome is an upper bound on the "innate" part of the complexity of the human brain (either somewhere on Language Log or in Motion Mountain by Christoph Schiller, IIRC).

(Of course, this assumes a narrower definition of innate than yours, because "the overwhelming majority of the conditions animals of a given species actually develop under" share lots of complexity. In particular, according to that definition linguistic universals are innate by definition, whether or n... (read more)

oops, wrong thread

[This comment is no longer endorsed by its author]