davidad comments on Whole Brain Emulation: Looking At Progress On C. elegans - Less Wrong Discussion

40 Post author: jkaufman 29 October 2011 03:21PM

Comments (79)

Comment author: davidad 31 October 2011 09:46:47AM 30 points [-]

That's me. In short form, my justification for working on such a project where many have failed before me is:

  1. The "connectome" of C. elegans is not actually very helpful information for emulating it. Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.
  2. What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.
  3. With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
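
The perturb-and-record loop in point 2 can be sketched as an active-learning procedure. This is only an illustration under strong assumptions: the linear coupling model, the one-hot stimuli, and the "least-sampled neuron first" acquisition rule are all my own stand-ins, not davidad's actual framework (real neural dynamics are nonlinear, and his statistical machinery is unspecified here).

```python
import numpy as np

rng = np.random.default_rng(0)

N = 302  # neuron count in the C. elegans hermaphrodite
true_W = rng.normal(0.0, 0.3, (N, N))  # stand-in "ground truth" couplings (unknown in reality)

def record_response(stimulus):
    """Simulate one perturbation trial: stimulate, then read the noisy network response."""
    return true_W @ stimulus + rng.normal(0.0, 0.05, N)

stimuli, responses = [], []
stim_counts = np.zeros(N, dtype=int)
for trial in range(2000):
    # Acquisition rule (illustrative): perturb the neuron whose outgoing
    # couplings are so far least constrained by the collected data.
    target = int(np.argmin(stim_counts))
    stim_counts[target] += 1
    s = np.zeros(N)
    s[target] = 1.0
    stimuli.append(s)
    responses.append(record_response(s))

# Fit the coupling matrix by least squares: responses ≈ stimuli @ W.T
S, R = np.array(stimuli), np.array(responses)
est_W = np.linalg.lstsq(S, R, rcond=None)[0].T
err = np.linalg.norm(est_W - true_W) / np.linalg.norm(true_W)
print(f"relative reconstruction error: {err:.3f}")
```

Even this toy version shows why the feedback loop matters: with ~6-7 noisy trials per neuron the couplings are recovered to within a few percent, whereas a static connectome alone would pin down none of the weights.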

I'm a disciple of Kurzweil, and as such I'm prone to putting ridiculously near-future dates on major breakthroughs. In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.

Comment author: gwern 31 October 2011 05:04:10PM 7 points [-]

In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.

How would you nail those two predictions down into something I could register on PredictionBook.com?

Comment author: davidad 31 October 2011 05:43:34PM *  6 points [-]

"A complete functional simulation of the C. elegans nervous system will exist on 2014-06-08." 76% confidence

"A complete functional simulation of the C. elegans nervous system will exist on 2020-01-01." 99.8% confidence

Comment author: gwern 31 October 2011 06:28:10PM 3 points [-]

Bleh, I see I was again unclear about what I meant by nailing down; more precisely, how would one judge whatever has been accomplished by 2014/2020 as being 'complete' or 'functional'? Frequently there are edge cases (there's this paper reporting one group's abandoned simulation which seemed complete, oh, except this wave pattern didn't show up and they had to simplify that...). But since you were good enough to write them:

  1. http://predictionbook.com/predictions/4123
  2. http://predictionbook.com/predictions/4124

Comment author: davidad 02 November 2011 03:04:10AM 2 points [-]

Ah, I see. This is the sort of question that the X Prize Foundation has to wrestle with routinely. It generally takes a few months of work to take even a relatively clear problem statement and boil it down to a purely objective judging procedure. Since I already have an oracle for what it is I want to develop (does it feel satisfying to me?), and I'm not trying to incentivize other people to do it for me, I'm not convinced that I should do said work for the C. elegans upload project. I'm not even particularly interested in formalizing my prediction for futurological purposes, since it's probably planning fallacy anyway. However, I'm open to arguments to the contrary.

Comment author: gwern 02 November 2011 03:45:59AM 0 points [-]

I'm not convinced that I should do said work for the C. elegans upload project. I'm not even particularly interested in formalizing my prediction for futurological purposes since it's probably planning fallacy anyway.

Well, that's fine. I've made do with worse predictions than that.

Comment author: jkaufman 31 October 2011 06:40:23PM 0 points [-]

(Which paper are you referring to?)

Comment author: gwern 31 October 2011 07:24:26PM 2 points [-]

That was just a rhetorical example; I don't actually know what the edge cases will be in advance.

Comment author: JoshuaZ 31 October 2011 11:54:28PM *  4 points [-]

I'm curious where you'd estimate 50% chance of it existing and where you'd estimate 90%.

The jump from 76% to 99.8% is to my mind striking for a variety of reasons. Among other concerns, I suspect that many people here would put a greater than 0.2% chance on some sort of extreme civilization-disrupting event in that window. A 0.2% chance of a civilization-disrupting event in an 8-year period is roughly the same as a 2% chance of such an event occurring in the next hundred years, which doesn't look so unreasonable but for the fact that longer-term predictions should have more uncertainty. Overall, a 0.2% chance of disruption seems to be too high, and if your probability model is accurate then one should expect the functional simulation to arrive well before then. But note also that civilization collapsing is not the only thing that could block this sort of event. Events much smaller than a full-on collapse could do it, as could many more mundane issues.
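
The 8-year vs. hundred-year scaling can be checked with a constant-hazard calculation (the constant per-year rate is itself a simplifying assumption, made only for illustration):

```python
p_8yr = 0.002                          # 0.2% chance of disruption within 8 years
annual = 1 - (1 - p_8yr) ** (1 / 8)    # implied per-year probability, about 0.025%
p_100yr = 1 - (1 - annual) ** 100      # compounded over a century, about 2.5%

print(f"per-year: {annual:.4%}  per-century: {p_100yr:.2%}")
```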

An estimate that high seems likely to be vulnerable to the planning fallacy.

Overall, your estimate seems to be too confident, the 2020 estimate especially so.

Comment author: davidad 02 November 2011 02:58:03AM 1 point [-]

I would put something like a 0.04% chance on a neuroscience disrupting event (including a biology disrupting event, or a science disrupting event, or a civilization disrupting event). I put something like a 0.16% chance on uploading the nematode actually being so hard that it takes 8 years. I totally buy that this estimate is a planning fallacy. Unfortunately, being aware of the planning fallacy does not make it go away.

Comment author: JoshuaZ 02 November 2011 03:04:24AM 1 point [-]

Unfortunately, being aware of the planning fallacy does not make it go away.

True. But there are ways to calibrate for it. It seems that subtracting off 10-15% from technological predictions works well. If one wanted to be more careful, one would use not a fixed percentage but an adjustment that becomes less severe as the probability estimate of the event goes up, so that one can still express genuinely high confidence. But if in doubt, simply reducing the probability until the planning fallacy no longer looks likely is one way to approach things.
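
A correction that grows milder as the stated probability rises can be made concrete with log-odds shrinkage (the functional form and the `shrink` value here are my own illustrative assumptions, not something specified in the thread):

```python
import math

def deflate(p, shrink=0.5):
    """Shrink a stated probability toward even odds in log-odds space.

    shrink in (0, 1]: 1 leaves p unchanged; smaller values discount more.
    Unlike a flat 10-15% subtraction, the correction gets milder (in
    probability terms) as p approaches 0 or 1, so genuinely extreme
    confidence is still expressible.
    """
    logit = math.log(p / (1 - p))
    return 1 / (1 + math.exp(-shrink * logit))

print(round(deflate(0.76), 3))    # 76% deflates to roughly 64%
print(round(deflate(0.998), 3))   # 99.8% deflates to roughly 96%
```

Applied to the two predictions above, the nearer-term 76% takes a twelve-point haircut while the 99.8% loses only about four points, matching the intuition that high-confidence claims shouldn't be penalized as heavily in absolute terms.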

Comment author: gwern 08 June 2014 03:33:28PM 2 points [-]

Thoughts on that first prediction?

Comment author: ciphergoth 13 April 2012 07:49:13AM 1 point [-]

99.8% confidence - can I bet with you at those odds?

Comment author: shminux 31 October 2011 05:39:45PM *  4 points [-]

I expect to be finished with C. elegans in 2-3 years.

How would you nail those two predictions down into something I could register on PredictionBook.com?

Given the wild unwarranted optimism an average PhD student has in the first year or two of their research, I would expect that David will have enough to graduate 5 or 6 years after he started, but the outcome will not be anywhere close to the original goal, thus

90% that "No whole brain emulation of C. elegans by 2015"

Then again, he is not your average PhD student (the youngest person to ever start a graduate program at MIT -- take that, Sheldon!), so I hope to be proven wrong.

Comment author: Sickle_eye 16 January 2012 02:27:30PM 2 points [-]

Ha, I'll keep an eye out for your publications. I'm particularly interested in how far you'll have to go in gathering data, and in what you'll be able to make of what is already known. I expect that scans aiming for connectome description already contain some neuron-type data, due to morphological differences between neurons. I don't know what sets of sensors are used for those scans, but maybe capturing a broader spectrum could provide clues as to which neuron types occupy which space inside the connectome. SEM can, after all, determine the chemical composition of materials, can't it? As-is, this seems a pretty breakneck undertaking, but I wish you the best of luck.

In other news, there is, luckily, more and more work in this field: http://www.theverge.com/2011/11/16/2565638/mit-neural-connectivity-silicon-synapse

Predictions for silicon-based processors are pretty optimistic as well: Intel aims to reach 10nm by 2014, and nVidia is pushing a similar date. Past that date we may see some major leaps in available technology (or not), and the development of multi-processor computation algorithms is finally gaining momentum since Von Neumann's Big Mistake.

Maybe Kurzweil's 2025 date for brain emulation is a bit overoptimistic, but I don't expect it to take much longer. I do think that the first dozen successful neural-structure emulations will constitute a significant breakthrough, and that we'll see a rapid expansion similar to the one in genetic sciences not so long ago.

Comment author: jkaufman 31 October 2011 11:58:35AM 2 points [-]

"Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data."

This suggests that even a full 5nm SEM imaging pass over the brain would not capture enough information about the individual to emulate them.

Comment author: davidad 31 October 2011 05:37:47PM 9 points [-]

It's worth noting that a 5nm SEM imaging pass will give you loads more information than a connectome, especially in combination with fancy staining techniques. It just so happens that most people doing SEM imaging intend to extract a connectome from the results.

That said, given the current state of knowledge, I don't think there's good reason to expect any one particular imaging technology currently known to man to be capable of producing a human upload. It may turn out that as we learn more about stereotypical human neural circuits, we'll see that certain morphological features are very good predictors of important parameters. It may be that we can develop a stain whose distribution is a very good predictor of important parameters. Since we don't even know what the important parameters are, even in C. elegans, let alone mammalian cortex, it's hard to say with confidence that SEM will capture them.

However, none of this significantly impacts my confidence that human uploads will exist within my lifetime. It is an a priori expected feature of technologies that are a few breakthroughs away that it's hard to say what they'll look like yet.

Comment author: atucker 31 October 2011 04:28:27PM 0 points [-]

What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.

Am I hearing hints of Tononi here?

Comment author: davidad 31 October 2011 05:27:27PM 2 points [-]

It's fair to say that I am confident Tononi is on to something (although whether that thing deserves the label "consciousness" is a matter about which I am less confident). However, Tononi doesn't seem to have any particular interest in emulation, nor do the available tools for interfacing to live human brains have anything like the resolution that I'd expect to be necessary to get enough information for any sort of emulation.