Less Wrong is a community blog devoted to refining the art of human rationality.

Whole Brain Emulation: Looking At Progress On C. elegans

39 Post author: jkaufman 29 October 2011 03:21PM

Being able to treat the pattern of someone's brain as software to be run on a computer, perhaps in parallel or at a large speedup, would have a huge impact, both socially and economically.  Robin Hanson thinks it is the most likely route to artificial intelligence.  Anders Sandberg and Nick Bostrom of the Future of Humanity Institute put together a roadmap for whole brain emulation in 2008, which covers a huge amount of research in this direction, combined with some scale analysis of the difficulty of various tasks.

Because the human brain is so large, and we are so far from having the technical capacity to scan or emulate it, it's difficult to evaluate progress.  Some other organisms, however, have much smaller brains: the nematode C. elegans has only 302 cells in its entire nervous system.  It is extremely well studied and well understood, having gone through heavy use as a research animal for decades.  Since at least 1986 we've known the full neural connectivity of C. elegans, something that would take decades and a huge amount of work to get for humans.  At 302 neurons, simulation has been within our computational capacity for at least that long.  With 25 years to work on it, shouldn't we be able to 'upload' a nematode by now?

Reading through the research, there's been some work on modeling subsystems and components, but I find only three projects that have tried to integrate this research into a complete simulation: the University of Oregon's NemaSys (~1997), the Perfect C. elegans Project (~1998), and Hiroshima University's Virtual C. elegans project (~2004).  The latter two don't have web pages, but they did put out papers: [1], [2], [3].

Another way to look at this is to list the researchers who seem to have been involved with C. elegans emulation.  I find:

  • Hiroaki Kitano, Sony [1]
  • Shugo Hamahashi, Keio University [1]
  • Sean Luke, University of Maryland [1]
  • Michiyo Suzuki, Hiroshima University  [2][3]
  • Takeshi Goto, Hiroshima University [2]
  • Toshio Tsuji, Hiroshima University [2][3]
  • Hisao Ohtake, Hiroshima University [2]
  • Thomas Ferree, University of Oregon [4][5][6][7]
  • Ben Marcotte, University of Oregon [5]
  • Sean Lockery, University of Oregon [4][5][6][7]
  • Thomas Morse, University of Oregon [4]
  • Stephen Wicks, University of British Columbia [8]
  • Chris Roehrig, University of British Columbia [8]
  • Catharine Rankin, University of British Columbia [8]
  • Angelo Cangelosi, Rome Institute of Psychology [9]
  • Domenico Parisi, Rome Institute of Psychology [9]

This seems like a research area where multiple groups at different universities try for a while and then move on.  None of the simulation projects have gotten very far: their emulations are not complete, with some pieces filled in by guesswork, genetic algorithms, or other artificial sources.  Before I started looking, I was optimistic about finding successful simulation projects; now that I haven't found any, my estimate of how hard whole brain emulation would be has gone up significantly.  While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.

Note: I later reorganized this into a blog post, incorporating some feedback from these comments.

Papers:

[1] The Perfect C. elegans Project: An Initial Report (1998)

[2] A Dynamic Body Model of the Nematode C. elegans With Neural Oscillators (2005)

[3] A model of motor control of the nematode C. elegans with neuronal circuits (2005)

[4] Robust spatial navigation in a robot inspired by C. elegans (1998)

[5] Neural network models of chemotaxis in the nematode C. elegans (1997)

[6] Chemotaxis control by linear recurrent networks (1998)

[7] Computational rules for chemotaxis in the nematode C. elegans (1999)

[8] A Dynamic Network Simulation of the Nematode Tap Withdrawal Circuit: Predictions Concerning Synaptic Function Using Behavioral Criteria (1996)

[9] A Neural Network Model of Caenorhabditis elegans: The Circuit of Touch Sensitivity (1997)

Comments (76)

Comment author: slarson 01 November 2011 06:12:27PM *  9 points [-]

Hi all,

Glad there's excitement on this subject. I'm currently coordinating an open source project whose goal is to do a full simulation of C. elegans (http://openworm.googlecode.com). More on that in a minute.

If you are surveying past C. elegans simulation efforts, you should be sure not to leave out the following:

A Biologically Accurate 3D Model of the Locomotion of Caenorhabditis elegans, Roger Mailler, U. Tulsa http://j.mp/toeAR8

C. elegans Locomotion: An Integrated Approach -- Jordan Boyle, U. Leeds http://j.mp/fqKPEw

Back to Open Worm. We've just published a structural model of all 302 neurons (http://code.google.com/p/openworm/wiki/CElegansNeuroML) represented as NeuroML (http://neuroml.org). NeuroML allows the representation of multi-compartmental models of neurons (http://en.wikipedia.org/wiki/Biological_neuron_models#Compartmental_models). We are using this as a foundation to overlay the C. elegans connectivity graph and then add as much as we can find about the biophysics of the neurons. We believe this represents the first open source attempt to reverse-engineer the C. elegans connectome.
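
Since NeuroML is plain XML, a model like this can be inspected with nothing but a standard XML parser. A minimal sketch, using an invented three-cell fragment rather than the actual OpenWorm files (the real NeuroML schema is much richer):

```python
import xml.etree.ElementTree as ET

# A tiny NeuroML-style fragment for illustration; real OpenWorm files are
# far larger and use the full schema. The cell ids are C. elegans neuron
# names, but this fragment is invented, not taken from the project.
DOC = """<neuroml xmlns="http://www.neuroml.org/schema/neuroml2">
  <cell id="ADAL"/>
  <cell id="ADAR"/>
  <cell id="AVAL"/>
</neuroml>"""

NS = {"nml": "http://www.neuroml.org/schema/neuroml2"}

def cell_ids(xml_text):
    """Return the ids of all <cell> elements in a NeuroML document."""
    root = ET.fromstring(xml_text)
    return [c.get("id") for c in root.findall("nml:cell", NS)]

print(cell_ids(DOC))  # ['ADAL', 'ADAR', 'AVAL']
```

The same approach extends to listing segments or connections once the real document's element names are known.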

One of the comments mentioned Andrey Palyanov's mechanical model of C. elegans. He is part of our group and is currently focused on moving to a soft-body simulation framework rather than the rigid one they created here: http://www.youtube.com/watch?feature=player_embedded&v=3uV3yTmUlgo Our first goal is to combine the neuronal model with this physical model in order to go beyond the biophysical realism that has already been achieved in previous studies. The physical model will then serve as the "read out" to make sure that the neurons are doing appropriate things.

Our roadmap for the project is available here: http://code.google.com/p/openworm/wiki/Roadmap

We have a mailing list here: http://groups.google.com/group/openworm

We have regular meetings on Google+ Hangout. If you want to help, we can surely find a way to include you. If you are interested, please let us know and we'll loop you in.

Cheers, Stephen

Comment author: turchin 29 October 2011 04:37:55PM 8 points [-]

http://www.computerra.ru/interactive/589824 A. Palianov now works in Russia on a nematode brain emulation project

Comment author: multifoliaterose 30 October 2011 04:21:36PM *  6 points [-]

While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.

Does this assessment take into account the possibility of intermediate acceleration of human cognition?

Comment author: jkaufman 31 October 2011 02:08:11PM 1 point [-]

It doesn't.

Comment author: jkaufman 01 November 2011 05:37:40PM 5 points [-]

I wrote to Ken Hayworth who is a neuroscience researcher working on scanning and interested in whole brain emulation, and he wrote back:

I have not read much on the simulation efforts on C. elegans but I have talked several times to one of the chief scientists who collected the original connectome data and has been continuing to collect more electron micrographs (David Hall, in charge of www.wormatlas.org). He has said that the physiological data on neuron and synapse function in C. elegans is really limited and suggests that no one spend time simulating the worm using the existing datasets because of this. I.e. we may know the connectivity but we don't know even the sign of many synapses.

If you look at a system like the retina I would argue that we already have quite good models of its functioning and thus it is a perfect ground for testing emulation from known connectivity.

So the short answer is that I think it may be far easier to emulate a well characterized and mapped part of the mammalian brain than it is to emulate the worm despite its smaller size.

Comment author: jkaufman 01 November 2011 06:22:36PM 6 points [-]

Further exchange:

Me:

So even a nanoscale SEM pass over the whole brain wouldn't be enough unless we could find some way to visually read off the sign of a synapse, perhaps with a stain, perhaps by learning what different types of neurons look like, perhaps by something not yet discovered?

Hayworth:

That is right, but those telltale signs are well known for certain systems (like the retina) already, and will become clearer for others once large-scale EM imaging combined with functional recording becomes routine.

Comment author: slarson 01 November 2011 09:14:34PM *  5 points [-]

I would respectfully disagree with Dr. Hayworth.

I would challenge him to show a "well characterized and mapped out part of the mammalian brain" that has a fraction of the detail already known in C. elegans. Moreover, the prospect of building a simulation requires that you can constrain the inputs and the outputs to the simulation. While this is a hard problem in C. elegans, it's orders of magnitude more difficult to do well in a mammalian system.

There is still no retina connectome to work with (C. elegans has one). There are debates about cell types in the retina (C. elegans has unique names for all cells). The gene expression maps of the retina are not registered into a common space (C. elegans has that). Calcium imaging in the retina is expensive (orders of magnitude easier in C. elegans). Genetic manipulation in mouse retina is expensive and takes months to produce specific mutants (you can feed C. elegans RNAi and make a mutant immediately).

There are methods now, along the lines of GFP (http://en.wikipedia.org/wiki/Green_fluorescent_protein), to "read the signs of synapses". There is just very little interest from Government funding agencies in applying them to C. elegans. David Hall is one of the few who is pushing this kind of mapping work in C. elegans forward.

What confuses this debate is that unless you study neuroscience deeply it is hard to tell the "known unknowns" apart from the "unknown unknowns". Biology isn't solved, so there are a lot of "unknown unknowns". Even so, there are plenty of funded efforts in biology and neuroscience to do simulations. In C. elegans, however, there are likely to be many fewer "unknown unknowns", because we have far more comprehensive data about its biology than we do for any other species.

Building simulations of biological systems helps to assemble what you know, but can also allow you to rationally work with the "known unknowns". The "signs of synapses" is an example of a known unknown -- we can fit those into a simulation engine without precise answers today and fill them in tomorrow. The statement that no one should start simulating the worm based on the current data has no merit when you consider that there is a lot to be done just to get to a framework with the capacity to organize the "known unknowns", so that we can actually do something useful with them once we have them. More importantly, it makes the gaps much clearer. Right now, in the absence of any C. elegans simulations, data are being generated without a focused purpose of feeding into a global computational framework for understanding C. elegans behavior. I would argue that the field would be much better off collecting data in the context of filling the gaps of a simulation, rather than everyone working at cross purposes.
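
One way to picture "fitting known unknowns into a simulation engine": keep the measured wiring fixed, but make every unmeasured parameter (such as a synapse sign) an explicit placeholder that the simulation can run with today and overwrite tomorrow. A toy sketch; the three-cell circuit, weights, and update rule here are all invented for illustration:

```python
# Toy rate-based network in which each synapse sign is a "known unknown":
# it defaults to a guess but can be replaced once it is measured.
connectome = [("sensor", "inter"), ("inter", "motor")]  # wiring is known
signs = {("sensor", "inter"): None, ("inter", "motor"): None}  # signs are not

def step(rates, weight=1.0, default_sign=+1):
    """One synchronous update; unknown signs fall back to default_sign."""
    new = dict(rates)
    for pre, post in connectome:
        s = signs[(pre, post)] if signs[(pre, post)] is not None else default_sign
        new[post] = max(0.0, rates[post] + s * weight * rates[pre])
    return new

rates = {"sensor": 1.0, "inter": 0.0, "motor": 0.0}
r1 = step(rates)                # all signs guessed excitatory
signs[("inter", "motor")] = -1  # suppose a measurement finds it inhibitory
r2 = step(step(rates))          # the model tightens without being rebuilt
```

Once the circuit runs end to end, each new measurement just replaces a placeholder, and the gaps that matter most become visible.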

That's why we are working on the challenge of building not just a C. elegans simulation, but a general framework for doing so, over at the Open Worm project (http://openworm.googlecode.com).

Comment author: atucker 30 October 2011 04:06:38AM 11 points [-]

David Dalrymple is also trying to emulate all of C. elegans, and was at the Singularity Summit.

http://syntheticneurobiology.org/people/display/144/26

Comment author: davidad 31 October 2011 09:46:47AM 29 points [-]

That's me. In short form, my justification for working on such a project where many have failed before me is:

  1. The "connectome" of C. elegans is not actually very helpful information for emulating it. Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.
  2. What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.
  3. With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
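
A cartoon of the perturb-and-record loop in point 2: treat the worm as a black box, bump one neuron at a time, and average the responses to recover each connection's sign. Everything below is invented for illustration; a real system would use optogenetic stimulation, a nonlinear dynamical model, and an adaptive policy for choosing the next perturbation rather than a fixed schedule:

```python
import random

random.seed(0)

# Hidden ground truth: signed weights of a toy 4-neuron network.
# In a real experiment these are what we are trying to discover.
N = 4
true_w = [[0, 1, -1, 0],
          [0, 0, 1, 1],
          [-1, 0, 0, 1],
          [0, 0, 0, 0]]

def perturb(neuron, delta=1.0, noise=0.05):
    """Simulated experiment: bump one neuron, return every neuron's response."""
    return [true_w[neuron][j] * delta + random.gauss(0, noise) for j in range(N)]

# Perturb each neuron repeatedly and average the recorded responses.
trials = 20
est = [[0.0] * N for _ in range(N)]
for i in range(N):
    for _ in range(trials):
        resp = perturb(i)
        for j in range(N):
            est[i][j] += resp[j] / trials

# Threshold the averaged responses to recover connection signs.
recovered = [[0 if abs(x) < 0.5 else (1 if x > 0 else -1) for x in row]
             for row in est]
```

The "very good statistical modeling framework" enters exactly where this sketch just averages: choosing the stimulus size, the next target neuron, and the model class adaptively.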

I'm a disciple of Kurzweil, and as such I'm prone to putting ridiculously near-future dates on major breakthroughs. In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.

Comment author: gwern 31 October 2011 05:04:10PM 7 points [-]

In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.

How would you nail those two predictions down into something I could register on PredictionBook.com?

Comment author: davidad 31 October 2011 05:43:34PM *  6 points [-]

"A complete functional simulation of the C. elegans nervous system will exist on 2014-06-08." 76% confidence

"A complete functional simulation of the C. elegans nervous system will exist on 2020-01-01." 99.8% confidence

Comment author: gwern 31 October 2011 06:28:10PM 3 points [-]

Bleh, I see I was again unclear about what I meant by nailing down - more precisely, how would one judge whatever has been accomplished by 2014/2020 as being 'complete' or 'functional'? Frequently there are edge cases (there's this paper reporting one group's abandoned simulation which seemed complete oh except for this wave pattern didn't show up and they had to simplify that...). But since you were good enough to write them:

  1. http://predictionbook.com/predictions/4123
  2. http://predictionbook.com/predictions/4124
Comment author: davidad 02 November 2011 03:04:10AM 2 points [-]

Ah, I see. This is the sort of question that the X Prize Foundation has to wrestle with routinely. It generally takes a few months of work to take even a relatively clear problem statement and boil it down to a purely objective judging procedure. Since I already have an oracle for what it is I want to develop (does it feel satisfying to me?), and I'm not trying to incentivize other people to do it for me, I'm not convinced that I should do said work for the C. elegans upload project. I'm not even particularly interested in formalizing my prediction for futurological purposes since it's probably planning fallacy anyway. However, I'm open to arguments to the contrary.

Comment author: gwern 02 November 2011 03:45:59AM 0 points [-]

I'm not convinced that I should do said work for the C. elegans upload project. I'm not even particularly interested in formalizing my prediction for futurological purposes since it's probably planning fallacy anyway.

Well, that's fine. I've made do with worse predictions than that.

Comment author: jkaufman 31 October 2011 06:40:23PM 0 points [-]

(Which paper are you referring to?)

Comment author: gwern 31 October 2011 07:24:26PM 2 points [-]

That was just a rhetorical example; I don't actually know what the edge cases will be in advance.

Comment author: JoshuaZ 31 October 2011 11:54:28PM *  4 points [-]

I'm curious where you'd estimate 50% chance of it existing and where you'd estimate 90%.

The jump from 76% to 99.8% is, to my mind, striking for a variety of reasons. Among other concerns, I suspect that many people here would put greater than a 0.2% chance on some sort of extreme civilization-disrupting event in that period. A 0.2% chance of a civilization-disrupting event in an 8-year period is roughly the same as a 2% chance of such an event occurring in the next hundred years, which doesn't look so unreasonable, except that longer-term predictions should have more uncertainty. Overall, a 0.2% chance of disruption seems to be too high, and if your probability model is accurate then one should expect the functional simulation to arrive well before then. But note also that civilization collapsing is not the only thing that could block this sort of event. Events much smaller than full-on collapse could do it, as could many more mundane issues.
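
The 8-year-to-century conversion in the comment above can be checked under a constant-hazard-rate assumption:

```python
import math

p_8yr = 0.002  # 0.2% chance of a disrupting event within 8 years

rate = -math.log(1 - p_8yr) / 8      # implied annual hazard rate
p_100yr = 1 - math.exp(-rate * 100)  # same rate sustained for a century

print(round(p_100yr, 3))  # 0.025, i.e. roughly the 2% figure quoted
```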

That high an estimate seems to be likely vulnerable to the planning fallacy.

Overall, your estimate seems to be too confident, the 2020 estimate especially so.

Comment author: davidad 02 November 2011 02:58:03AM 1 point [-]

I would put something like a 0.04% chance on a neuroscience disrupting event (including a biology disrupting event, or a science disrupting event, or a civilization disrupting event). I put something like a 0.16% chance on uploading the nematode actually being so hard that it takes 8 years. I totally buy that this estimate is a planning fallacy. Unfortunately, being aware of the planning fallacy does not make it go away.

Comment author: JoshuaZ 02 November 2011 03:04:24AM 1 point [-]

Unfortunately, being aware of the planning fallacy does not make it go away.

True. But there are ways to calibrate for it. It seems that subtracting off 10-15% for technological predictions works well. If one were being more careful, one would subtract not a fixed percentage but an amount that becomes less severe as the probability estimate of the event goes up, so that one could still express genuinely high confidence. But if in doubt, simply reducing the probability until the planning fallacy no longer looks likely is one way to approach things.

Comment author: ciphergoth 13 April 2012 07:49:13AM 1 point [-]

99.8% confidence - can I bet with you at those odds?

Comment author: shminux 31 October 2011 05:39:45PM *  4 points [-]

I expect to be finished with C. elegans in 2-3 years.

How would you nail those two predictions down into something I could register on PredictionBook.com?

Given the wild unwarranted optimism an average PhD student has in the first year or two of their research, I would expect that David will have enough to graduate 5 or 6 years after he started, but the outcome will not be anywhere close to the original goal, thus

90% that "No whole brain emulation of C. elegans by 2015"

Then again, he is not your average PhD student (the youngest person to ever start a graduate program at MIT -- take that, Sheldon!), so I hope to be proven wrong.

Comment author: Sickle_eye 16 January 2012 02:27:30PM 2 points [-]

Ha, I'll keep an eye out for your publications. I'm particularly interested in how far you'll have to go in gathering data, and what you'll be able to make of what is already known. I expect that scans aiming for connectome description contain some neuron-type data already, due to morphological differences in neurons. I don't know what sets of sensors are used for those scans, but maybe capturing a broader spectrum could provide clues as to which neuron types occupy which space inside the connectome. SEM can, after all, determine the chemical composition of materials, can't it? As-is, this seems a pretty breakneck undertaking, although I wish you the best of luck.

In other news, there is, luckily, more and more work in this field: http://www.theverge.com/2011/11/16/2565638/mit-neural-connectivity-silicon-synapse

Predictions for silicon-based processors are pretty optimistic as well: Intel aims to achieve 10nm by 2014, and a similar date is targeted by Nvidia. Past that date we may see some major leaps in available technology (or not), and the development of multi-processor computation algorithms is finally gaining momentum after Von Neumann's Big Mistake.

Maybe Kurzweil's 2025 date for brain emulation is a bit overoptimistic, but I don't expect it to take much longer. I do think that the first dozen successful neural structure emulations will be a significant breakthrough, and we'll see a rapid expansion similar to the one genetics went through not so long ago.

Comment author: jkaufman 31 October 2011 11:58:35AM 2 points [-]

"Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data."

This suggests that even a full 5nm SEM imaging pass over the brain would not be enough information about the individual to emulate them.

Comment author: davidad 31 October 2011 05:37:47PM 9 points [-]

It's worth noting that a 5nm SEM imaging pass will give you loads more information than a connectome, especially in combination with fancy staining techniques. It just so happens that most people doing SEM imaging intend to extract a connectome from the results.

That said, given the current state of knowledge, I don't think there's good reason to expect any one particular imaging technology currently known to man to be capable of producing a human upload. It may turn out that as we learn more about stereotypical human neural circuits, we'll see that certain morphological features are very good predictors of important parameters. It may be that we can develop a stain whose distribution is a very good predictor of important parameters. Since we don't even know what the important parameters are, even in C. elegans, let alone mammalian cortex, it's hard to say with confidence that SEM will capture them.

However, none of this significantly impacts my confidence that human uploads will exist within my lifetime. It is an a priori expected feature of technologies that are a few breakthroughs away that it's hard to say what they'll look like yet.

Comment author: atucker 31 October 2011 04:28:27PM 1 point [-]

What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.

Am I hearing hints of Tononi here?

Comment author: davidad 31 October 2011 05:27:27PM 2 points [-]

It's fair to say that I am confident Tononi is on to something (although whether that thing deserves the label "consciousness" is a matter about which I am less confident). However, Tononi doesn't seem to have any particular interest in emulation, nor do the available tools for interfacing to live human brains have anything like the resolution that I'd expect to be necessary to get enough information for any sort of emulation.

Comment author: Lapsed_Lurker 29 October 2011 10:25:31PM 4 points [-]

How well can a single neuron or a few neurons be simulated? If we have good working models of those, which behave as we see in life, then that means WBE might be harder, if no such models yet exist, then the failures to model a 302-neuron system are not such good evidence for difficulty.

Comment author: Douglas_Knight 29 October 2011 11:57:45PM 7 points [-]

There are many models of neurons, at many levels of detail. I think that the NEURON program uses the finest detail of any existing software.

I see the primary purpose of simulating a nematode as measuring how well such models actually work. If they do work, it also lets us estimate the amount of detail needed, but the first question is whether these models are biologically realistic. An easier task would be to test whether the models accurately describe a bunch of neurons in a petri dish. The drawback of such an approach is that it is not clear what it would mean for a model to be adequate for that purpose, whereas in an organism we know what constitutes biologically meaningless noise. Also, realistic networks probably suppress certain kinds of noise.
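
At the simplest end of that spectrum of detail sits the leaky integrate-and-fire point neuron: one differential equation per cell, versus the many coupled compartments a simulator like NEURON can handle. A toy sketch with arbitrary parameters, not any of the models discussed above:

```python
def lif_spike_times(i_ext, t_max=0.1, dt=1e-4, tau=0.02, r=1.0,
                    v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: dV/dt = (-(V - v_rest) + r*i_ext) / tau.
    Integrates with forward Euler and returns the threshold-crossing times."""
    v, spikes = v_rest, []
    for k in range(int(t_max / dt)):
        v += dt * (-(v - v_rest) + r * i_ext) / tau
        if v >= v_thresh:
            spikes.append(k * dt)
            v = v_reset
    return spikes

# Below rheobase (r * i_ext <= v_thresh) the neuron never fires;
# above it, it fires periodically.
quiet = lif_spike_times(i_ext=0.9)
active = lif_spike_times(i_ext=1.5)
```

Even this crudest model already exposes the calibration question in the comment above: which parameters, and how much added detail, are needed before the model matches a real cell.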

Comment author: Lapsed_Lurker 30 October 2011 12:15:49AM 1 point [-]

When I googled for information on neuron emulation, that site came up as the first hit. I've used the search box to look for 'elegans' and 'nematode' - both 0 hits, so I figure no-one is discussing that stuff on their forum.

Comment author: slarson 01 November 2011 11:23:22PM *  1 point [-]

There is a good review of strategies for building computational models of neurons here:

http://www.ncbi.nlm.nih.gov/pubmed/17629781

Comment author: Risto_Saarelma 30 October 2011 01:15:29AM 9 points [-]

Maybe a more troubling situation for the feasibility of human brain emulation would be if we had had nematode emulation working for a decade or more but had made no apparent headway to emulating the next level of still not very impressive neural complexity, like a snail. At the moment there's still the possibility we're just missing some kind of methodological breakthrough, and once that's achieved there's going to be a massive push towards quickly developing emulations for more complex animals.

Comment author: slarson 01 November 2011 09:29:59PM 2 points [-]

I think you are right on. I would extend your comment a bit: we are not just missing a methodological breakthrough, we are not even really attempting to develop the necessary methods. The problem is not just scientific but also a matter of what is considered science worth funding.

Comment author: Douglas_Knight 29 October 2011 11:34:57PM 7 points [-]

Are these projects about emulation? The Oregon and Rome projects seem to treat the brain as a black box, rather than taking advantage of Brenner's connectome. I'm not sure about the others. That doesn't tell us much about the difficulty of emulation, except that they thought their projects were easier.

Brenner's connectome is not enough information. At the very least, you need to know whether synapses are excitatory or inhibitory. This pretty much needs to be measured, which is rather different from what Brenner did. It might not require a lot of measurement: once you've measured a few, maybe you can recognize the others. Or maybe not.

Comment author: jkaufman 30 October 2011 03:05:35AM 5 points [-]

The Oregon one looks to me like it was about emulation: "each of the 302 neurons will be implemented according to available anatomical and physiological data."

On the Rome one, I think you may be right.

Is the nematode too small to measure whether synapses are excitatory or inhibitory?

Comment author: Douglas_Knight 30 October 2011 03:14:30AM 2 points [-]

I was basing my judgement on the Oregon papers. I suppose that there may be emulation attempts lurking behind other non-emulation papers.

Comment author: jkaufman 30 October 2011 03:54:00AM 1 point [-]

It's also possible they only proposed to do emulation, but never got funded.

Comment author: Pfft 30 October 2011 09:55:07PM 3 points [-]

While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.

What kind of reasoning leads you to this time estimate? Hundreds of years is an awfully long time -- consider that two hundred years ago nobody even knew that cells existed, and there didn't exist any kind of computers.

From your description of the state of the field, I guess we won't see an uploaded nematode very soon, but getting there in a decade or two doesn't seem impossible. It seems a bit counterintuitive to me that learning "no nematode now, but maybe in ten years" would move the point estimate for human uploads by several centuries. Because, what if we had happened to do this literature survey ten years later, and found out that nematodes had indeed been successfully uploaded? If the estimate is sensitive to very small changes like that, it must be very uncertain.

Comment author: Logos01 31 October 2011 09:04:34AM 4 points [-]

What kind of reasoning leads you to this time estimate? Hundreds of years is an awfully long time

Humans are notoriously poor at providing estimates of probability, and our ability to accurately predict timescales beyond the immediate is just as poor. It seems likely that this "hundreds of years" was shorthand for "there does not seem to be a direct roadmap to achieving this goal from where we currently are, and therefore I must assign an arbitrarily distant point in the future as its most-likely-to-be-achieved date."

This is purely guesswork / projection on my part, however.

Comment author: ciphergoth 31 October 2011 08:22:52AM 5 points [-]

While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.

Unbounded Scales, Huge Jury Awards, & Futurism:

I observe that many futuristic predictions are, likewise, best considered as attitude expressions. Take the question, "How long will it be until we have human-level AI?" The responses I've seen to this are all over the map. On one memorable occasion, a mainstream AI guy said to me, "Five hundred years." (!!)

Now the reason why time-to-AI is just not very predictable, is a long discussion in its own right. But it's not as if the guy who said "Five hundred years" was looking into the future to find out. And he can't have gotten the number using the standard bogus method with Moore's Law. So what did the number 500 mean?

As far as I can guess, it's as if I'd asked, "On a scale where zero is 'not difficult at all', how difficult does the AI problem feel to you?" If this were a bounded scale, every sane respondent would mark "extremely hard" at the right-hand end. Everything feels extremely hard when you don't know how to do it. But instead there's an unbounded scale with no standard modulus. So people just make up a number to represent "extremely difficult", which may come out as 50, 100, or even 500. Then they tack "years" on the end, and that's their futuristic prediction.

"How hard does the AI problem feel?" isn't the only substitutable question. Others respond as if I'd asked "How positive do you feel about AI?", only lower numbers mean more positive feelings, and then they also tack "years" on the end. But if these "time estimates" represent anything other than attitude expressions on an unbounded scale with no modulus, I have been unable to determine it.

Comment author: jkaufman 31 October 2011 11:51:57AM *  2 points [-]

My reasoning for saying hundreds of years was that this very simple subproblem has taken us over 25 years. Say we'll solve it in another ten. The amount of discovery and innovation needed to simulate a nematode seems maybe 1/100th as much as for a person. Naively this would say 100 * (25+10). More people would probably work on this if we had initial successes and it looked practical, though. Maybe this gives us a 10x boost? Which still is (100/10) * (25+10) or ~350 years.

Very wide error bars, though.
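
For what it's worth, the estimate above written out as explicit arithmetic (every factor is, as stated, a rough guess):

```python
years_on_nematode = 25 + 10   # 25 years so far, plus a guessed 10 to finish
human_vs_nematode = 100       # naive ratio of discovery/innovation required
interest_speedup = 10         # guessed boost from more people joining in

naive = human_vs_nematode * years_on_nematode
boosted = naive / interest_speedup
print(naive, boosted)  # 3500 350.0
```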

Comment author: orthonormal 28 March 2012 03:37:46PM 0 points [-]

You must have been very surprised by the progress pattern of the Human Genome Project, then. It's as if 90% of the real work was about developing the right methods rather than simply plugging along at the initial slow pace.

Comment author: jkaufman 28 March 2012 03:46:52PM *  0 points [-]

I'm not sure what you're responding to. I wasn't trying to say that the human brain was only 100x the size or complexity of a nematode's brain-like-thing. It's far larger and more complex than that. I was saying that even once we have a nematode simulated, we still have done only ~1% of the "real work" of developing the right methods.

Comment author: orthonormal 28 March 2012 03:48:23PM 2 points [-]

Even once we have a nematode simulated we still have done only ~1% of the "real work" of developing the right methods.

I understand that this is your intuition, but I haven't seen any good evidence for it.

Comment author: jkaufman 28 March 2012 04:03:57PM *  3 points [-]

The evidence I have that the methods developed for the nematode are dramatically insufficient to apply to people:

  • nematodes are transparent
  • they're thin and so easy to get chemicals to all of them at once
  • their inputs and outputs are small enough to fully characterize
  • their neural structure doesn't change at runtime
  • while they do learn, they don't learn very much

It's not strong evidence, I agree. I'd like to get a better estimate here.

Comment author: orthonormal 13 April 2012 05:06:04PM 2 points [-]

This lecture on uploading C. elegans is very relevant.

(In short, biophysicists have known where the neurons are located for a long time, but they've only just recently developed the ability to analyze the way they affect one another, and so there's fresh hope of "solving" the worm's brain. The new methods are also pretty awesome.)

Comment author: orthonormal 28 March 2012 04:10:08PM 2 points [-]

My intuition is that most of the difficulty comes from the complexity of the individual cells; we don't understand nearly all of the relevant things they do that affect neural firing. This is basically independent of how many neurons there are or how they're wired, so I expect that correctly emulating a nematode brain would only happen when we're quite close to emulating larger brains.

If the "complicated wiring" problem were the biggest hurdle, then you'd expect a long gap between emulating a nematode and emulating a human.

Comment author: Humbug 29 October 2011 07:25:13PM *  4 points [-]

None of the simulation projects have gotten very far...this looks to me like it is a very long way out, probably hundreds of years.

Couldn't you say the same about AGI projects? It seems to me that one of the reasons that some people are being relatively optimistic about computable approximations to AIXI, compared to brain emulations, is that progress on EMs is easier to quantify.

Comment author: jkaufman 03 November 2011 11:53:52AM 2 points [-]

I've reorganized this into a blog post incorporating what I've learned in the comments here.

Comment author: Douglas_Knight 10 November 2011 09:16:15PM 1 point [-]

Could you be explicit about what you learned? I can't tell from comparing the two posts.

Comment author: jkaufman 10 November 2011 09:55:35PM 0 points [-]

Most of the blog post version is just reorganization and context for a different audience, but there are some changes reflecting learning about who is working on this. Specifically, I didn't know before about the OpenWorm project, Stephen Larson, David Dalrymple, or the 2009 and 2010 body model papers. While I think in a few years I'll be able to update my predictions based on their experiences, this new information about people currently working on the project didn't affect my understanding of how difficult or far away nematode simulation or WBE is.

Comment author: Hyena 29 October 2011 05:51:45PM 2 points [-]

This depends on whether the problem is the basic complexity of modeling a neural network or learning how to do it. If the former, then we may be looking at a long time. But if it's the latter, then we really just need more attempts, successful or not, to learn from and a framework which allows a leap in understanding could arrive.

Comment author: Logos01 31 October 2011 09:08:28AM 0 points [-]

But if it's the latter, then we really just need more attempts,

I don't know that repeatedly doing the wrong thing will help inform us how to do the right thing. This seems counterfactual to me. Certainly it informs us what the wrong thing is, but... without additional effort to more finely emulate the real-time biochemical actions of neurons, it seems that emulating what we already know won't lead us to deeper insights as to what we don't. The question becomes: how do we discern that missing information?

Emulations are certainly a vital part of that process, however: without them we cannot properly gauge how close we are to 'knowing enough for government work'.

Comment author: Hyena 31 October 2011 03:22:42PM 3 points [-]

Everything that fails does for a reason and in a way. In engineering, mere bugs aside, everything fails at the frontier of our knowledge and our failures carry information about the shape of that frontier back to us. We learn what problems need to be overcome and can, with many failures, generalize what the overall frontier is like, connect its problems and create concepts which solve many at once.

Comment author: Logos01 31 October 2011 05:32:19PM *  0 points [-]

Everything that fails does for a reason and in a way.

Oh, absolutely. But if they keep failing for the same reason and in the same way, re-running the simulations doesn't get you any unique or novel information. It only reinforces what you already know.

I acknowledged this as I said, "Emulations are certainly a vital part of that process, however: without them we cannot properly gauge how close we are to 'knowing enough for government work'."

Comment author: Hyena 31 October 2011 08:08:18PM 2 points [-]

I think the problem here is that you think that each instance of a simulation is actually an "attempt". A simulation is a model of some behavior; unlike climbing Everest (which I did in 2003), taming Pegasus (in -642) or repelling the Golden Horde (1257 - 1324, when I was called away on urgent business in Stockholm), each run of a model is a trial, not an attempt. Each iteration of the model is an attempt, as is each new model.

We need more attempts. We learn something different from each one.

Comment author: Logos01 01 November 2011 05:00:09AM 0 points [-]

I think the problem here is that you think that each instance of a simulation is actually an "attempt".

No, the problem here is more that I don't believe that it is any longer feasible to run a simulation and attempt to extract new information without direct observation of the simulated subject-matter.

We need more attempts. We learn something different from each one.

Yes, absolutely. But I don't believe we can do anything other than repeat the past by building models based on modeled output without direct observation at this time.

Comment author: Hyena 01 November 2011 01:45:15PM *  -1 points [-]

So why not just say "to clarify, I believe that we do not have enough knowledge of C. elegans' neuroanatomy to build new models at this time. We need to devote more work to studying that before we can build newer models"? That's a perfectly valid objection, but it contradicts your original post, which states that C. elegans is well understood neurologically.

If you believe that we cannot build effective models "without [additional] direct observation", then you have done two things: you've objected to the consensus that C. elegans is well understood and provided a criterion (an effective upload model of its neuroanatomy) for judging how well we understand it.

Comment author: Logos01 01 November 2011 06:14:20PM 0 points [-]

That's a perfectly valid objection, but it contradicts your original post, which states that C. elegans is well understood neurologically.

My original post stated, "without additional effort to more finely emulate the real-time biochemical actions of neurons, it seems that emulating what we already know won't lead us to deeper insights as to what we don't."

Your assertion (in-line quoted, this comment) is false. I said what I meant the first time 'round: we don't know enough about how neurons work yet and without that understanding any models we build now won't yield us any new insights into how they do.

This, furthermore, has nothing to do with C. elegans in specific.

you've objected to the consensus that C. elegans is well understood and provided a criterion (an effective upload model of its neuroanatomy) for judging how well we understand it.

Since the goal of these models is to emulate the behavior of C. elegans, and the models do not yet do this, it is clear that one of two things is true: either we do not understand C. elegans or we do not understand neurobiology sufficiently to achieve this goal.

I have made my assertion as to which this is, I have done so quite explicitly, and I have been consistent and clear in this from my first post in this thread.

So where's the confusion?

Comment author: Hyena 01 November 2011 09:35:57PM 0 points [-]

"The first time around" for the OPer is the OP, from which it is absent and in which you identify the problem as incomplete attempts.

Comment author: Logos01 02 November 2011 03:58:06AM 1 point [-]

I am not jkaufman. So I don't know that I follow what you're trying to say here. This means that either you or I are confused. In either case, no successful communication is currently occurring.

Could you please clarify what it is you're trying to say?

Comment author: Jordan 30 October 2011 06:54:16PM 3 points [-]

I was disappointed when I first looked into the C. elegans emulation progress. Now I'm not so sure it's a bad sign. It seems to me that at only 302 neurons the nervous system is probably far from the dominant system of the organism. Even with a perfect emulation of the neurons, it's not clear to me if the resulting model would be meaningful in any way. You would need to model the whole organism, and that seems very hard.

Contrast that with a mammal, where the brain is sophisticated enough to do things independently of feedback from the body, and where we can see these large-scale neural patterns with scanners. If we uploaded a mouse brain, presumably we could get a rough idea that the emulation was working without ever hooking it up to a virtual body.

Comment author: Douglas_Knight 31 October 2011 04:50:42AM 3 points [-]

The lobster stomach ganglion, 30 neurons but a ton of synapses, might be better for this, since its input and output are probably cleaner.

Comment author: slarson 01 November 2011 09:25:03PM 1 point [-]

Modeling lobster stomach ganglion work is going on at Brandeis and what they are finding is important: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2913134&tool=pmcentrez&rendertype=abstract

Given the results they are finding, and building on their methods, it is not inappropriate to start thinking one level up to C. elegans.

Comment author: khafra 31 October 2011 12:06:43PM 0 points [-]

Also because there's fictional prior art?

Comment author: bogdanb 31 October 2011 02:59:08PM 2 points [-]

Maybe there’s fictional prior art because the lobster stomach might be better.

Comment author: ciphergoth 01 November 2011 08:02:21AM 2 points [-]

If you're talking about Charlie Stross's Lobsters, yes this was inspired by Henry Abarbanel's work. He ran around the office going "They're uploading lobsters in San Diego!"

Comment author: jkaufman 31 October 2011 11:46:01AM 1 point [-]

"You would need to model the whole organism, and that seems very hard."

There are only ~100 muscle cells. People are trying to model the brain-body combination, but that doesn't sound unreasonably hard to me.

Comment author: [deleted] 02 November 2011 04:44:59AM 1 point [-]

You need more than just muscle cells to do a whole-body emulation here -- C. elegans has roughly 1000 cells all told (varies depending on sex; hermaphrodites have somewhat fewer, males somewhat more).

Comment author: DavidPlumpton 30 October 2011 07:19:12AM 0 points [-]

IBM claims to be doing a cat brain equivalent simulation at the moment, albeit 600 times slower and not covering all parts of the brain.

Comment author: ciphergoth 31 October 2011 08:11:00AM *  13 points [-]

Henry Markram of the Blue Brain Project described this claim as a "hoax and a PR stunt", "shameful and unethical", and "mass deception of the public".

Comment author: spuckblase 31 October 2011 12:47:58PM 1 point [-]

Typo in the title!

Comment author: jkaufman 31 October 2011 02:07:18PM 0 points [-]

fixed