
Thomas comments on Survey: Risks from AI - Less Wrong Discussion

9 Post author: XiXiDu 13 June 2011 01:05PM




Comment author: Thomas 13 June 2011 01:28:58PM 1 point
  1. 10% for 2015 or earlier. 50% for 2020 or earlier, 90% for 2030 or earlier.

  2. At least 50%.

  3. I think no human level AGI is necessary for that. A well-calibrated worm-level AGI could be enough. I am nearly sure that it is possible; the creation (accidental or not) of self-enhancing "worms" is at least 50% probable by 2030. It needn't be a catastrophe, but it may be. 50-50 prior again. The speed is almost certain to be fast. Say, within days after launch.

  4. I am not sure what they could do about this. FAI as a defense will most probably be too late anyway.

  5. Yes.

  6. Many. A theorem-proving Watson is just one of them. Or a WolframAlpha programmer, for example.

Comment author: Thomas 13 June 2011 02:36:41PM 0 points

A bug: I can count 1, 2, 3, 4, 5, 6, and did so in the above post. The numbering is visible under the Edit option, but not when published. Funny.

Comment author: [deleted] 13 June 2011 02:52:04PM 1 point

3, I think no human level AGI

Why is there a comma after the 3?

Comment author: Thomas 13 June 2011 02:54:48PM 0 points

Thanks. Now that the comma is gone, the numbers are okay.