beriukay comments on Survey: Risks from AI - Less Wrong Discussion

9 Post author: XiXiDu 13 June 2011 01:05PM

Comment author: beriukay 13 June 2011 02:54:35PM 0 points
  1. 10% at 2030. 50% at 2050. 90% at 2082 (the year I turn 100).

  2. The probability that the Singularity Institute fails in the bad way? Hmm. I'd say 40%.

  3. Hours, 5%. Days, 30%. Less than 5 years, 75%. If it can't do it in the time it takes the average person to make it through high school, then I don't think it will be able to do it at all. Or, put another way, it isn't even trying.

  4. Much more. I don't think we have too many chefs in the kitchen at this point.

  5. I seriously don't know. It seems like a very open question, like asking whether a bear is more dangerous than a tiger. Are we talking worst case? Then no, I think they both end the same way for humans. Are we talking likely case? Then I don't know enough about nanotech or AI to say.

  6. Realistically? I suppose that if, in the future, a consumer-grade computer had the computational power of our current best supercomputer, and there were some equivalent of the X-Prize for developing a human-level AI, I would expect someone to win the prize within 5 years.