
endoself comments on Survey: Risks from AI - Less Wrong Discussion

Post author: XiXiDu 13 June 2011 01:05PM


Comment author: endoself 14 June 2011 03:32:25AM 1 point
  1. 2025, 2040, never.

  2. P(negative Singularity & badly done AGI) = 10%. P(negative Singularity | badly done AGI) ranges from 30% to 97%, depending on the specific definition of AGI. I'm not sure what 'extremely negative' means.
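An illustrative aside (not part of the original comment): the joint and conditional probabilities given above jointly imply a marginal probability for "badly done AGI", via the chain rule P(A ∧ B) = P(A | B) · P(B). A minimal sketch, assuming the 10% joint estimate and the 30%–97% conditional range:

```python
# Implied marginal probability of "badly done AGI" from the stated figures:
# P(negative & badly_done) = P(negative | badly_done) * P(badly_done)
# so P(badly_done) = P(negative & badly_done) / P(negative | badly_done).
p_joint = 0.10                   # P(negative Singularity & badly done AGI)
for p_cond in (0.30, 0.97):      # P(negative Singularity | badly done AGI)
    p_badly_done = p_joint / p_cond
    print(f"P(neg|bad) = {p_cond:.2f}  ->  P(badly done AGI) = {p_badly_done:.3f}")
```

On these assumptions the implied P(badly done AGI) ranges from about 0.103 (under the 97% conditional) up to about 0.333 (under the 30% conditional).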

  3. 'Human level' is extremely fuzzy. An AGI could be far above humans in terms of mind design but less capable due to inferior hardware or vice versa.

  4. Vastly more.

  5. Other risks, including nanotech, are more likely, though a FAI could obviously manage nanotech risks.

  6. I'm going to answer this for a Singularity in 5 years, due to my dispute of the phrase 'human-level'. A solution to logical uncertainty would be more likely than anything else I can think of to result in a Singularity in 5 years, but I still would not expect it to happen, especially if the researchers were competent. Extreme interest from a major tech company or a government in the most promising approaches would be more likely to cause a Singularity in 5 years, but I doubt that fits the implied criteria for a milestone.