NancyLebovitz comments on 2013 Survey Results - Less Wrong
The AI risk scenario that Eliezer Yudkowsky often uses is that of an AI solving the protein folding problem.
If you believe a "hard takeoff" to be probable, what reason is there to believe that the gap between a.) an AI capable of cracking that specific problem and b.) an AI triggering an intelligence explosion is too short for humans to do something similarly catastrophic with the resulting technological breakthrough before the AI does?
In other words, does solving the protein folding problem require an AI to reach a level of sophistication from which it, or the humans using it, could within days or months reach the stage of an intelligence explosion? How so?
My assumption is that the protein-folding problem is unimaginably easier than an AI doing recursive self-improvement without breaking itself.
Admittedly, Eliezer is describing something harder than the usual interpretation of the protein-folding problem, but it still seems a lot less general than a program making itself more intelligent.