
ciphergoth comments on Whole Brain Emulation: Looking At Progress On C. elegans - Less Wrong Discussion

40 points · Post author: jkaufman 29 October 2011 03:21PM




Comment author: ciphergoth 31 October 2011 08:22:52AM 5 points

While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.

Unbounded Scales, Huge Jury Awards, & Futurism:

I observe that many futuristic predictions are, likewise, best considered as attitude expressions. Take the question, "How long will it be until we have human-level AI?" The responses I've seen to this are all over the map. On one memorable occasion, a mainstream AI guy said to me, "Five hundred years." (!!)

Now the reason why time-to-AI is just not very predictable, is a long discussion in its own right. But it's not as if the guy who said "Five hundred years" was looking into the future to find out. And he can't have gotten the number using the standard bogus method with Moore's Law. So what did the number 500 mean?

As far as I can guess, it's as if I'd asked, "On a scale where zero is 'not difficult at all', how difficult does the AI problem feel to you?" If this were a bounded scale, every sane respondent would mark "extremely hard" at the right-hand end. Everything feels extremely hard when you don't know how to do it. But instead there's an unbounded scale with no standard modulus. So people just make up a number to represent "extremely difficult", which may come out as 50, 100, or even 500. Then they tack "years" on the end, and that's their futuristic prediction.

"How hard does the AI problem feel?" isn't the only substitutable question. Others respond as if I'd asked "How positive do you feel about AI?", only lower numbers mean more positive feelings, and then they also tack "years" on the end. But if these "time estimates" represent anything other than attitude expressions on an unbounded scale with no modulus, I have been unable to determine it.

Comment author: jkaufman 31 October 2011 11:51:57AM 3 points

My reasoning for saying hundreds of years was that this very simple subproblem has already taken us over 25 years. Say we'll solve it in another ten. The amount of discovery and innovation needed to simulate a nematode seems like maybe 1/100th of what's needed for a person. Naively this would say 100 * (25+10) = 3500 years. More people would probably work on this if we had initial successes and it looked practical, though. Maybe this gives us a 10x boost? Which still leaves (100/10) * (25+10), or ~350 years.

Very wide error bars, though.
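The back-of-the-envelope arithmetic above can be written out explicitly. A minimal sketch, using only the rough guesses from the comment itself (every number here is a stated assumption, not a measurement):

```python
# Fermi estimate from the comment: all inputs are the comment's own guesses.
years_so_far = 25        # time already spent on the nematode subproblem
years_remaining = 10     # guessed additional time to finish it
difficulty_ratio = 100   # human brain assumed ~100x the discovery/innovation needed
effort_boost = 10        # guessed speedup if many more people work on it

nematode_years = years_so_far + years_remaining        # 35 years total
naive_estimate = difficulty_ratio * nematode_years     # 3500 years
adjusted_estimate = naive_estimate / effort_boost      # ~350 years

print(adjusted_estimate)  # → 350.0
```

The estimate is dominated by the guessed 100x difficulty ratio and 10x effort boost, so (as the comment says) the error bars are very wide.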

Comment author: orthonormal 28 March 2012 03:37:46PM 0 points

You must have been very surprised by the progress pattern of the Human Genome Project, then. It's as if 90% of the real work was about developing the right methods rather than simply plugging along at the initial slow pace.

Comment author: jkaufman 28 March 2012 03:46:52PM 0 points

I'm not sure what you're responding to. I wasn't trying to say that the human brain was only 100x the size or complexity of a nematode's brain-like-thing. It's far larger and more complex than that. I was saying that even once we have a nematode simulated, we still have done only ~1% of the "real work" of developing the right methods.

Comment author: orthonormal 28 March 2012 03:48:23PM 2 points

Even once we have a nematode simulated we still have done only ~1% of the "real work" of developing the right methods.

I understand that this is your intuition, but I haven't seen any good evidence for it.

Comment author: jkaufman 28 March 2012 04:03:57PM 4 points

The evidence I have that the methods developed for the nematode are dramatically insufficient to apply to people:

  • nematodes are transparent
  • they're thin, so it's easy to get chemicals to all of their cells at once
  • their inputs and outputs are small enough to fully characterize
  • their neural structure doesn't change at runtime
  • while they do learn, they don't learn very much

It's not strong evidence, I agree. I'd like to get a better estimate here.

Comment author: orthonormal 13 April 2012 05:06:04PM 2 points

This lecture on uploading C. elegans is very relevant.

(In short, biophysicists have known where the neurons are located for a long time, but they've only just recently developed the ability to analyze the way they affect one another, and so there's fresh hope of "solving" the worm's brain. The new methods are also pretty awesome.)

Comment author: orthonormal 28 March 2012 04:10:08PM 2 points

My intuition is that most of the difficulty comes from the complexity of the individual cells: we don't understand nearly all of the relevant things they do that affect neural firing. This is basically independent of how many neurons there are or how they're wired, so I expect that correctly emulating a nematode brain would only happen when we're quite close to emulating larger brains.

If the "complicated wiring" problem were the biggest hurdle, then you'd expect a long gap between emulating a nematode and emulating a human.