
Kaj_Sotala comments on Q&A with Michael Littman on risks from AI - Less Wrong Discussion

Post author: XiXiDu 19 December 2011 09:51AM

Comment author: Kaj_Sotala 20 December 2011 06:29:15PM 0 points

Meaning, a 1% chance of superhuman intelligence within 5 years, right?

Sorry, I meant to say that it does not seem unreasonable to me that an AGI might take five years to self-improve. 1% does seem unreasonably low. I'm not sure what probability I would assign to "superhuman AGI in 5 years", but anything under, say, 40% seems quite low.