gjm comments on 2013 Survey Results - Less Wrong

74 Post author: Yvain 19 January 2014 02:51AM


Comment author: XiXiDu 19 January 2014 03:45:45PM  1 point

The AI risk scenario that Eliezer Yudkowsky uses relatively often is that of an AI solving the protein folding problem.

If you believe a "hard takeoff" to be probable, what reason is there to believe that the gap between (a) an AI capable of cracking that specific problem and (b) an AI triggering an intelligence explosion is too short for humans to do something with the resulting technological breakthrough that is similarly catastrophic to what the AI itself would have done?

In other words, does solving the protein folding problem require an AI to reach a level of sophistication that would allow humans, or the AI itself, to reach the stage of an intelligence explosion within days or months? How so?

Comment author: gjm 19 January 2014 05:13:47PM  -1 points

I have no strong opinion on whether a "hard takeoff" is probable. (Because I haven't thought about it a lot, not because I think the evidence is exquisitely balanced.) I don't see any particular reason to think that protein folding is the only possible route to a "hard takeoff".

What is alleged to make for an intelligence explosion is having a somewhat-superhuman AI that's able to modify itself or make new AIs reasonably quickly. A solution to the protein folding problem might offer one way to make new AIs much more capable than oneself, I suppose, but it's hardly the only way one can envisage.