wedrifid comments on Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (102)
At one point in the dialog, Scott raises what I think is a valid objection to the "nine people in the basement" picture of FAI's development. He points out that this is not how science progresses, and so not how he expects this novel development to happen.
If we consider FAI as a mathematical problem that requires a substantial depth of understanding beyond what already exists to get right, any isolated effort is likely hopeless. Mathematical progress is a global effort. I can sorta expect a basement scenario if most of the required math happens to be already developed, so that the remaining challenge is to find the relevant math, assemble it in the right way, and see the answer. But that doesn't sound very likely.
Alternatively, a "team in the basement" could wait for the right breakthrough in mainstream mathematics and, being prepared, apply it to the problem faster than anyone else. This seems more realistic, but it may require the mainstream to know what to look for. Which involves playing with existential risk.
I would like to hear more from Eliezer on just how likely he thinks the 'nine people in the basement' development scenario is.
My own impression is that a more gradual development of AGI is more likely, but that 'basement development' is the only way there is even a remote possibility that the development will not lead to rapid human extinction. That would make the 'nine people in the basement' picture either wishful thinking or 'my best plan of action', depending on whether or not we are Eliezer.