
XiXiDu comments on Survey: Risks from AI - Less Wrong Discussion

9 Post author: XiXiDu 13 June 2011 01:05PM




Comment author: XiXiDu 14 June 2011 10:13:49AM, 3 points
  1. P(human-level AI by ? (year) | no wars ∧ no natural disasters ∧ beneficial political and economic development) = 10%/50%/90%/0%
  2. P(negative Singularity | badly done AI) = ?; P(extremely negative Singularity | badly done AI) = ? (where 'negative' = human extinction and 'extremely negative' = humans suffer).
  3. P(superhuman intelligence within hours | human-level AI on supercomputer with Internet connection) = ?; P(superhuman intelligence within days | human-level AI on supercomputer with Internet connection) = ?; P(superhuman intelligence within < 5 years | human-level AI on supercomputer with Internet connection) = ?
  4. How much money does the SIAI currently (this year) require (to be instrumental in maximizing your personal long-term goals, e.g. to survive the Singularity by solving friendly AI): less / no more / a little more / much more / vastly more?
  5. What existential risk is currently most likely to have the greatest negative impact on your personal long-term goals, under the condition that nothing is done to mitigate the risk?
  6. Can you think of any milestone such that if it were ever reached you would expect human-level machine intelligence to be developed within five years thereafter?
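The conditional probabilities in questions 1–3 chain together by the product rule. As a minimal sketch of that structure (all numeric values below are hypothetical placeholders, not actual survey answers, and "badly done AI" is assumed certain given human-level AI purely to get an upper bound):

```python
# Hypothetical placeholder answers, not data from the survey.
p_human_level_ai = 0.5      # P(human-level AI by some year), question 1
p_negative_given_bad = 0.3  # P(negative Singularity | badly done AI), question 2

# Product rule: P(A ∧ B) = P(B) * P(A | B).
# Treating "badly done AI" as certain given human-level AI, the product
# bounds the unconditional probability of a negative Singularity from above.
p_negative_upper_bound = p_human_level_ai * p_negative_given_bad

print(f"{p_negative_upper_bound:.2f}")  # 0.15
```

The same pattern extends to question 3: multiplying in P(superhuman intelligence | human-level AI) tightens the estimate for a fast, negative outcome.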