Linky. Quotes:

Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.

[...]

  • We should expect the difficulty of reverse-engineering any human skill to be roughly proportional to the amount of time that skill has been evolving in animals.
  • The oldest human skills are largely unconscious and so appear to us to be effortless.
  • Therefore, we should expect skills that appear effortless to be difficult to reverse-engineer, but skills that require effort may not necessarily be difficult to engineer at all.

... low-level sensorimotor skills require enormous computational resources.

I think this fails to make a necessary distinction between tasks that are algorithmically neat (but computationally intensive) and tasks that are algorithmically scruffy (but modest in their use of computational resources).

We should expect the difficulty of reverse-engineering any human skill to be roughly proportional to the amount of time that skill has been evolving in animals.

My intuitions are different (or perhaps I am just interpreting differently). The core neural algorithms of vertebrate locomotion are pretty much the same in snakes, lizards, ostriches, eagles, bats, dolphins, elephants, cats, kangaroos, and triathletes. The fact that these algorithms and this architecture have been so successfully retargeted to such a variety of gaits and grasping appendages suggests to me that there is an elegant core organization. And that once you understand how it works in one of these beasts, it should be pretty easy to figure out how it works in another.

The fact that these algorithms and architecture have been so successfully retargeted to such a variety of gaits and grasping appendages suggests to me that there is an elegant core organization.

I don't think the rest of biology supports the thesis that success in varied environments implies a simple and elegant core. And it doesn't seem to follow from standard Occam's razor either. Are the shortest algorithms the most efficient or the most retargetable? This doesn't sound right at all. But then again, I don't know much about biology; please set me right if I'm wrong.

"Elegant" was probably the wrong word. "Modular" is better. I think that biology does support the idea that reuse in biology is most successful when what is reused is a "module" with some of the same virtues that make software modules reusable and retargetable - they are naturally parameterized, they are strongly coherent, and they have loose coupling to other subsystems.

Examples in biology are the reuse of the core genetic machinery of translation, transcription, and replication, as well as the core metabolism of biochemistry. And in development, we have the HOX genes and the rest of the evo-devo toolkit discussed by, for example, Kirschner and Gerhart. These systems are not exactly 'simple', but neither are they needlessly baroque.

I'm guessing that these same principles of system reuse apply to animal locomotion, though I admit that I know a lot less about anatomy and neuroscience than I do about biochemistry and molecular biology.

The fact that it's retargetable and evolved rather than designed seems to suggest elegance. I can imagine evolution producing a moderately complex narrow adaptation, or a simple one that as a side effect of its simplicity happens to generalize, but I don't see how evolution could produce a complex generalizing adaptation.

The core neural algorithms of vertebrate locomotion

Would you mind tabooing algorithm?

Would you mind tabooing algorithm?

A reasonable request. The best substitute word I can come up with is "architecture". Or perhaps "technology". I have in mind a grab-bag of tricks for synchronization, short-loop feedback, redundancy, phase locking, etc., which I think that Nature has reused over and over.
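
As a toy illustration of one trick in that grab-bag, here is a sketch of phase locking among coupled oscillators, in the spirit of Kuramoto-style models sometimes used to describe central pattern generators. All the numbers are illustrative:

```python
import numpy as np

n = 4                                           # e.g. four limb oscillators
coupling = 2.0                                  # illustrative coupling strength
natural_freq = np.array([1.0, 1.1, 0.9, 1.05])  # rad/s, slightly detuned
phases = np.random.default_rng(0).uniform(0, 2 * np.pi, n)
dt = 0.01

for _ in range(5000):
    # each oscillator is nudged toward the phases of the others
    diffs = phases[None, :] - phases[:, None]   # diffs[i, j] = theta_j - theta_i
    phases = phases + dt * (natural_freq + (coupling / n) * np.sin(diffs).sum(axis=1))

# once locked near a common phase, sin of each pairwise offset is small
print(np.round(np.sin(phases[:, None] - phases[None, :]), 3))
```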

I think "building blocks" would have got your point across clearly to me.

Looking at some of your examples, I'm not sure that having common building blocks makes it easy to reverse-engineer and replicate in another substrate. Consider making a genetic system based on phosphorus and nitrogen rather than carbon. Despite understanding the modular nature of genes and how they are stored, it would be distinctly non-trivial to find P/N-based analogues of DNA and the various proteins used in the transcription of that analogue.

My biochemical examples were intended to make a point about evolution, rather than a point about reverse-engineering. I would assume that if we decide to reverse-engineer biological models in neuroscience, the new artificial neurons will not be built out of genetically encoded polymers. Though I might be wrong.

My point was that while there is an elegant core organization to protein production, there is the confounding factor of protein folding making it hard to reverse-engineer.

So there might be similar confounding factors in animal locomotion. Confounding factors are those that take lots of frequently updated information from various sources. For example, an atom's movement depends on the positions of the other atoms around it. Neurons have the potential for these sorts of factors in electric field generation, chemical gradients, and the many connections from other neurons.

How about “control system”? (Still atomic, but more specific.)

But it is not a control system in the sense that, say, a thermostat is a control system. I can't put my brain into a whale and expect to be able to swim as it does immediately; I might be able to learn over time. I can, however, swap one thermostat for a pretty similar one without any difference in operation.

So in what sense are my brain's and the whale brain's approaches to locomotion the same? I do grant there is a similarity, but saying that they have the same algorithm or control system seems to ignore lots of detail.

Control theory defines a control system in a way that covers thermostats and whale-controllers alike. One of the more interesting research topics is figuring out how to make controllers determine the properties of the system they're controlling.

You see this especially in control systems for durable robotics: there are some robots which are designed to determine experimentally what the effects of moving their appendages around will be, and to derive from that a gait that will allow them to walk. If their model of the system later starts failing because they lost a leg or turned into a whale or something, they'll update their idea of what they're controlling. Control systems are surprisingly deep, and damn interesting.
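
The robots described above presumably use something far richer, but the core idea of estimating the plant you are controlling from your own actions fits in a few lines. A toy sketch, assuming a plant that is just an unknown gain (the update rule and all numbers are illustrative stand-ins, not any specific robot's algorithm):

```python
def run(plant_gain, theta, steps=500, lr=0.1, setpoint=1.0):
    """Act using the estimated gain, observe, and refine the estimate."""
    for _ in range(steps):
        u = setpoint / max(theta, 1e-3)    # command based on current belief
        y = plant_gain * u                 # true (unknown) plant response
        theta += lr * u * (y - theta * u)  # gradient step on prediction error
    return theta

theta = run(plant_gain=2.0, theta=0.5)    # learn the original "body"
print(round(theta, 2))                    # ~2.0: model matches the plant
theta = run(plant_gain=0.7, theta=theta)  # "lost a leg": the plant changed
print(round(theta, 2))                    # ~0.7: the controller re-adapts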

(This is getting way off topic, but in general, loads of things are surprisingly deep if you look at them hard enough. I once found a book on the shelf of a library called "The grain supply of the Roman Empire", and my jaw dropped. Surely, I thought, this must be the most boring book ever written. Obviously, I had to take a look at it. As it turned out, that was a pretty interesting subject. It touched on everything from preventing mold to the tricks used to abuse an ancient welfare system. Go figure.)

Interesting, most of my exposure to control systems has been through PID controllers and the like. Wikipedia provides a good caricature of this view. Any good terms to google for the more cutting edge stuff?
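
For readers who haven't met one: a textbook PID controller is only a few lines, which is part of why it is the standard first exposure. A minimal sketch; the gains and the trivial integrator plant below are arbitrary choices, not from any particular system:

```python
class PID:
    """Textbook PID controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# drive a trivial integrator plant (x' = u) toward a setpoint of 1.0
pid, x, dt = PID(kp=2.0, ki=0.5, kd=0.1), 0.0, 0.01
for _ in range(2000):
    x += dt * pid.update(1.0, x, dt)
print(round(x, 3))  # settles near 1.0
```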


The fact that these algorithms and architecture have been so successfully retargeted to such a variety of gaits and grasping appendages suggests to me that there is an elegant core organization.

Or it could simply be that evolution uses what was already there. Better examples for your claim would be traits in unrelated species which demonstrate convergent evolution.

I suspect you misunderstand my argument, and I definitely fail to understand yours. How would convergent evolution support the claim you quoted?

... it could simply be that evolution uses what was already there.

I was assuming that. But I was also assuming something that may not be true. I was assuming that (back in the Cambrian) there were a variety of neural architectures for locomotion "already there". And that beasties using each of those architectures were offered the opportunity to adaptively radiate into new niches requiring new forms of locomotion. And that some succeeded in adapting and some didn't. And that the ones that succeeded did so because they had an 'elegant' or 'modular' neural architecture.

In some sense, this is selection for ability to evolve. This is something of a controversial idea in evolutionary theory, but in the form I have presented it, it shouldn't be very controversial.


I suspect you misunderstand my argument

Were you claiming that those neural algorithms are thus algorithmically efficient/optimal?

I was assuming that (back in the Cambrian) there were a variety of neural architectures for locomotion "already there".

Ah yes, I didn't know you were assuming that. Unfortunately, that doesn't seem very plausible to me. It seems more likely that the locomotion algorithm we see in many species today was a once-in-natural-history luck-out on the part of evolution. Even assuming rival algorithms appeared on the scene, I suspect they did so rarely enough that the first to appear had such a head start that those which came later could not compete - by simple natural selection, rather than via evolvability. It is also likely that once the first algorithm appeared, it was immediately built upon, such that later algorithms messed other stuff up and cost too much fitness relative to their improvements.

And that the ones that succeeded did so because they had an 'elegant' or 'modular' neural architecture.

Perhaps I did misunderstand you, and you were claiming modularity, but not necessarily internal optimality? But then how would modularity relate to the OP? I guess it is one aspect of being efficient. But weren't we comparing various modules for 'difficulty of replication in computers'?

I have no problem with evolvability in general. In fact it's a favorite of mine :)

So Artificial Hypocrisy should be pretty easy then ;-)

Wouldn't that be an unconscious sort of skill?

I had an AI professor whose explanation for this phenomenon was that the "low-level skills" are the ones that come "pre-installed" on human brains, and as a result no one bothered figuring out explicitly how they worked until computers came along; whereas the "high-level skills" are the ones we've spent the last millennium or more working out explicitly.

At first glance this seemed intuitive and obvious. But upon consideration it seems like the processing involved in unstructured learning would be far more computationally intensive than low-level sensorimotor skills, even once you have worked out how to do it. Finding statistical relationships between large numbers of concepts and sensory inputs just doesn't scale well compared to doing a bunch of calculus to predict and control movement of the body. Much of our high-level reasoning relies on this system - not just on shuffling things around in 7 slots of working memory.

There is something to the paradox, but you have to be rather careful when describing the distinction. "High-level reasoning vs. low-level sensorimotor skills" may not be the best way to look at it; "conscious vs. unconscious" is somewhat closer. It is also worth noting that even among skills that haven't evolved over hundreds of thousands of years in animals - the ones we learn with 10 years of solid practice - it is the unconscious skills that use the most computational resources.

Finding statistical relationships between large numbers of concepts and sensory inputs just doesn't scale well compared to doing a bunch of calculus to predict and control movement of the body.

Really? Correlation is just multiplying and adding, which both scale well and parallelize trivially. Scaling the calculus required to control pendulums is far less pretty - going from 1 to 2 makes the problem chaotic, and I imagine 3 is even less pretty. This looks like a common problem to give students, so the double pendulum can't be that difficult. My feeling, though, is that the toughest solved sensorimotor problems are a lot 'easier' than the toughest solved machine learning problems.
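
A sketch of the "multiplying and adding" point: estimating every pairwise correlation among hundreds of signals is a single matrix product (the sizes here are arbitrary). There is no comparably tidy one-liner for the double pendulum, whose equations of motion must be integrated numerically and diverge chaotically from nearby initial conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
signals = rng.standard_normal((10_000, 512))   # 10k samples of 512 signals
z = (signals - signals.mean(axis=0)) / signals.std(axis=0)
corr = z.T @ z / len(z)                        # 512 x 512 correlation matrix
print(corr.shape)
```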