Comment author: XiXiDu 13 April 2012 09:37:46AM *  5 points [-]

Could you write a few words on your perception of risks associated with technological progress and how projects like yours influence the chance of a possible positive or negative Singularity?

I'd also be interested in how you would answer these questions.

Comment author: davidad 13 April 2012 10:54:33AM 12 points [-]

There's the rub! I happen to value technological progress as an intrinsic good, so classifying a Singularity as "positive" or "negative" is not easy for me. (I reject the notion that one can factorize intelligence from goals, so that one could take a superintelligence and fuse it with a goal to optimize for paperclips. Perhaps one could give it a compulsion to optimize for paperclips, but I'd expect it either to put the compulsion on hold while it develops amazing fabrication, mining, and space-travel technologies, and never completely turn its available resources into paperclips, since that would mean no chance of more paperclips in the future, or, better yet, to rapidly expunge the compulsion through self-modification.) Furthermore, I favor Kurzweil's smooth exponentials over "FOOM": it may be even harder to believe not only that there will be superintelligences in the future, but that at no point between now and then will an objectively identifiable discontinuity happen; still, that picture seems more consistent with history. Although I expect present-human culture to be preserved, as a matter of historical interest if not status quo, I'm not partisan enough to prioritize human values over the Darwinian imperative. (The questions linked seem very human-centric, and turn on how far you are willing to go in defining "human," suggesting a disguised query. Most science is arguably already performed by machines.) In summary, I'm just not worried about AI risk.

The good news for AI worriers is that Eliezer has personally approved my project as "just cool science, at least for now" -- not likely to lead to runaway intelligence any time soon, no matter how reckless I may be. Given that and the fact that I've heard many (probably most) AI-risk arguments, and failed to become worried (quite probably because I hold the cause of technological progress very dear to my heart and am thus heavily biased - at least I admit it!), your time may be better spent trying to convince Ben Goertzel that there's a problem, since at least he's an immediate threat. ;)

Comment author: RomeoStevens 13 April 2012 08:04:55AM *  3 points [-]

I'm really happy you are working on this.

Comment author: davidad 13 April 2012 08:30:36AM 3 points [-]

Thanks! :)

Why I Moved from AI to Neuroscience, or: Uploading Worms

43 davidad 13 April 2012 07:10AM

This post is shameless self-promotion, but I'm told that's probably okay in the Discussion section. For context, as some of you are aware, I'm aiming to model C. elegans based on systematic high-throughput experiments - that is, to upload a worm. I'm still working on course requirements and lab training at Harvard's Biophysics Ph.D. program, but this remains the plan for my thesis.

Last semester I gave this lecture to Marvin Minsky's AI class, because Marvin professes disdain for everything neuroscience, and I wanted to give his students (and him) a fair perspective on how basic neuroscience might be changing for the better, and why it seems a particularly exciting field to be in right about now. The lecture is about 22 minutes long, followed by over an hour of questions and answers, which cover a lot of the memespace that surrounds this concept. Afterward, several students reported to me that their understanding of neuroscience was transformed.

I only just now got to encoding and uploading this recording; I believe that many of the topics covered could be of interest to the LW community (especially those with a background in AI and an interest in brains), perhaps worthy of discussion, and I hope you agree.

Comment author: gwern 31 October 2011 06:28:10PM 3 points [-]

Bleh, I see I was again unclear about what I meant by nailing down - more precisely, how would one judge whatever has been accomplished by 2014/2020 as being 'complete' or 'functional'? Frequently there are edge cases (there's this paper reporting one group's abandoned simulation which seemed complete - oh, except that this wave pattern didn't show up and they had to simplify that...). But since you were good enough to write them:

  1. http://predictionbook.com/predictions/4123
  2. http://predictionbook.com/predictions/4124
Comment author: davidad 02 November 2011 03:04:10AM 2 points [-]

Ah, I see. This is the sort of question that the X Prize Foundation has to wrestle with routinely. It generally takes a few months of work to take even a relatively clear problem statement and boil it down to a purely objective judging procedure. Since I already have an oracle for what it is I want to develop (does it feel satisfying to me?), and I'm not trying to incentivize other people to do it for me, I'm not convinced that I should do said work for the C. elegans upload project. I'm not even particularly interested in formalizing my prediction for futurological purposes, since it's probably planning fallacy anyway. However, I'm open to arguments to the contrary.

Comment author: JoshuaZ 31 October 2011 11:54:28PM *  4 points [-]

I'm curious what date you'd estimate a 50% chance of it existing by, and what date a 90% chance.

The jump from 76% to 99.8% is, to my mind, striking for a variety of reasons. Among other concerns, I suspect that many people here would put the chance of some sort of extreme civilization-disrupting event in that period at greater than 0.2%. A 0.2% chance of a civilization-disrupting event in an 8-year period is roughly the same as a 2% chance of such an event occurring in the next hundred years, which doesn't look so unreasonable, except that longer-term predictions should have more uncertainty. Overall, a 0.2% chance of disruption seems to be too high, and if your probability model is accurate then one should expect the functional simulation to arrive well before then. But note also that civilization collapsing is not the only thing that could easily block this sort of event. Events much smaller than a full-on collapse could do it, as could many more mundane issues.
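For anyone who wants to check the extrapolation above, here is a minimal sketch. It assumes the disruption risk acts as a constant, independent hazard in each 8-year period, which is itself a simplification not stated in the comment:

```python
# Rough check of the scaling above: a 0.2% chance per 8-year period,
# treated as an independent hazard, extrapolated to a century.
p_8yr = 0.002                        # 0.2% chance of a disrupting event in 8 years
periods = 100 / 8                    # number of 8-year periods in a century
p_100yr = 1 - (1 - p_8yr) ** periods
print(f"{p_100yr:.3%}")              # ~2.5%, i.e. roughly the 2% figure quoted
```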

An estimate that high also seems likely to be vulnerable to the planning fallacy.

Overall, your estimates seem overconfident, the 2020 one especially so.

Comment author: davidad 02 November 2011 02:58:03AM 1 point [-]

I would put something like a 0.04% chance on a neuroscience-disrupting event (including a biology-disrupting event, a science-disrupting event, or a civilization-disrupting event). I put something like a 0.16% chance on uploading the nematode actually being so hard that it takes more than 8 years. I totally buy that this estimate is a product of the planning fallacy. Unfortunately, being aware of the planning fallacy does not make it go away.
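Spelled out (this arithmetic is not in the original comment), those two failure modes sum to exactly the residual left by the earlier 99.8% estimate:

0.04% + 0.16% = 0.2% = 100% - 99.8%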

Comment author: gwern 31 October 2011 05:04:10PM 7 points [-]

In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.

How would you nail those two predictions down into something I could register on PredictionBook.com?

Comment author: davidad 31 October 2011 05:43:34PM *  6 points [-]

"A complete functional simulation of the C. elegans nervous system will exist on 2014-06-08." 76% confidence

"A complete functional simulation of the C. elegans nervous system will exist on 2020-01-01." 99.8% confidence

Comment author: jkaufman 31 October 2011 11:58:35AM 2 points [-]

"Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data."

This suggests that even a full 5nm SEM imaging pass over the brain would not provide enough information about the individual to emulate them.

Comment author: davidad 31 October 2011 05:37:47PM 9 points [-]

It's worth noting that a 5nm SEM imaging pass will give you loads more information than a connectome, especially in combination with fancy staining techniques. It just so happens that most people doing SEM imaging intend to extract a connectome from the results.

That said, given the current state of knowledge, I don't think there's good reason to expect any one particular imaging technology currently known to man to be capable of producing a human upload. It may turn out that as we learn more about stereotypical human neural circuits, we'll see that certain morphological features are very good predictors of important parameters. It may be that we can develop a stain whose distribution is a very good predictor of important parameters. Since we don't even know what the important parameters are, even in C. elegans, let alone mammalian cortex, it's hard to say with confidence that SEM will capture them.

However, none of this significantly impacts my confidence that human uploads will exist within my lifetime. It is, a priori, an expected feature of technologies that are a few breakthroughs away that it's hard to say yet what they'll look like.

Comment author: atucker 31 October 2011 04:28:27PM 0 points [-]

What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.

Am I hearing hints of Tononi here?

Comment author: davidad 31 October 2011 05:27:27PM 2 points [-]

It's fair to say that I am confident Tononi is on to something (although whether that thing deserves the label "consciousness" is a matter about which I am less confident). However, Tononi doesn't seem to have any particular interest in emulation, nor do the available tools for interfacing to live human brains have anything like the resolution that I'd expect to be necessary to get enough information for any sort of emulation.

Comment author: atucker 30 October 2011 04:06:38AM 10 points [-]

David Dalrymple is also trying to emulate all of C. elegans, and was at the Singularity Summit.

http://syntheticneurobiology.org/people/display/144/26

Comment author: davidad 31 October 2011 09:46:47AM 30 points [-]

That's me. In short form, my justification for working on such a project where many have failed before me is:

  1. The "connectome" of C. elegans is not actually very helpful information for emulating it. Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.
  2. What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very, very good statistical modeling framework which decides what perturbation to try next (a minimal sketch of such a loop follows this list).
  3. With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
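
The sketch below is purely illustrative of point 2: a closed loop that picks a perturbation, applies it, records the network-wide response, and updates a statistical model, with the model choosing the next perturbation. All names and the toy "model" here are invented for this example; a real system would drive optogenetic hardware and use a far more sophisticated inference framework.

```python
# Hypothetical active-learning perturbation loop (illustrative only).
import numpy as np

N_NEURONS = 302          # C. elegans hermaphrodite neuron count
N_ROUNDS = 1000          # number of perturbation trials in this toy run

rng = np.random.default_rng(0)

# Stand-in for the statistical model of network dynamics: a matrix of
# estimated pairwise influence weights plus a per-row uncertainty score.
weights = np.zeros((N_NEURONS, N_NEURONS))
uncertainty = np.ones((N_NEURONS, N_NEURONS))

def choose_perturbation():
    """Pick the neuron whose outgoing influences are currently least certain."""
    return int(uncertainty.sum(axis=1).argmax())

def apply_perturbation_and_record(neuron):
    """Placeholder for optogenetic stimulation plus whole-network imaging.
    Returns a vector of observed responses across all neurons (faked here)."""
    return rng.normal(size=N_NEURONS)

def update_model(neuron, response):
    """Crude online update: move estimates toward observations, shrink uncertainty."""
    lr = 0.1
    weights[neuron] += lr * (response - weights[neuron])
    uncertainty[neuron] *= (1 - lr)

for _ in range(N_ROUNDS):
    target = choose_perturbation()                 # model decides what to try next
    response = apply_perturbation_and_record(target)
    update_model(target, response)
```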

I'm a disciple of Kurzweil, and as such I'm prone to putting ridiculously near-future dates on major breakthroughs. In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.

Comment author: erratio 03 October 2010 11:06:28PM 37 points [-]

What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)

Comment author: davidad 13 October 2010 09:47:24PM 0 points [-]

Upvoted for underconfidence.
