This post is shameless self-promotion, but I'm told that's probably okay in the Discussion section. For context, as some of you are aware, I'm aiming to model C. elegans based on systematic high-throughput experiments - that is, to upload a worm. I'm still working on course requirements and lab training at Harvard's Biophysics Ph.D. program, but this remains the plan for my thesis.
Last semester I gave this lecture to Marvin Minsky's AI class, because Marvin professes disdain for everything neuroscience, and I wanted to give his students (and him) a fair perspective on how basic neuroscience may be changing for the better, and why it seems a particularly exciting field to be in right now. The lecture is about 22 minutes long, followed by over an hour of questions and answers, which cover a lot of the memespace that surrounds this concept. Afterward, several students reported to me that their understanding of neuroscience had been transformed.
I only just now got to encoding and uploading this recording; I believe that many of the topics covered could be of interest to the LW community (especially those with a background in AI and an interest in brains), perhaps worthy of discussion, and I hope you agree.
I doubt it. Nor do I believe that people like Jürgen Schmidhuber are a risk, apart from a very abstract possibility.
The reason is that they have been unable to show applicable progress on a par with IBM Watson or Siri. And if they claim that their work relies on a single mathematical breakthrough, I doubt that it would be justified, even in principle, to be confident in that prediction.
In short, either their work is incrementally useful, or it is based on wild speculation about the possible discovery of unknown unknowns.
The real risks, in my opinion, are 1) that together they make many independent discoveries and someone builds something out of them, 2) that a huge company like IBM, or a military project, builds something, and 3) the abstract possibility that some partly related field like neuroscience, or an unrelated field, provides the necessary insight to put two and two together.
Do you mean that intelligence is fundamentally interwoven with complex goals?
Do you mean that there is no point at which exploitation is favored over exploration?
I am not sure what you mean; could you elaborate? Do you mean something along the lines of what Ben Goertzel says in the following quote:
You further wrote:
What is your best guess at why people associated with SI are worried about AI risk?
If you had to fix the arguments for the proponents of AI risk, what would be the strongest argument in favor of it? Also, do you expect there to be anything that could possibly change your mind about the topic and make you worried?
Essentially, yes. I think that defining an arbitrary entity's "goals" is not obviously possible, unless one simply accepts the trivial definition of "its goals are whatever it winds up causing"; I think intelligence is fundamentally interwoven with causing complex effects.
I mean that there is no point at which exploitation is favored exclusively over exploration.
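To make that concrete, here is a minimal sketch (my own illustration, not anything from the discussion above, with made-up payoff numbers): a two-armed bandit agent that stops exploring entirely can lock onto the worse arm forever if its early samples were unlucky, while an agent that keeps even a small exploration rate eventually finds the better arm.

```python
import random

def run_agent(epsilon, steps=10_000, seed=0):
    """Epsilon-greedy agent on a two-armed bandit; returns average reward."""
    rng = random.Random(seed)
    payoffs = (0.4, 0.6)           # true success probabilities of arms 0 and 1
    counts = [0, 0]
    estimates = [0.0, 0.0]

    # Deliberately unlucky start: one sample per arm, and the better arm
    # (arm 1) happens to fail while the worse arm (arm 0) happens to pay out.
    for arm, reward in ((0, 1.0), (1, 0.0)):
        counts[arm] += 1
        estimates[arm] = reward

    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                          # explore
        else:
            arm = max((0, 1), key=lambda a: estimates[a])   # exploit
        reward = 1.0 if rng.random() < payoffs[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

print("pure exploitation:", run_agent(epsilon=0.0))   # stays stuck near 0.4
print("with exploration: ", run_agent(epsilon=0.1))   # climbs toward 0.6
```

The point of the sketch is only that never exploring again (epsilon = 0) forecloses correcting a mistaken estimate; keeping exploration alive never stops being useful.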