steven0461 comments on Survey: Risks from AI - Less Wrong Discussion

9 Post author: XiXiDu 13 June 2011 01:05PM

Comments (19)

Comment author: steven0461 13 June 2011 08:10:18PM *  3 points
  1. define "global catastrophe halts progress"
  2. probability of what exactly conditional on what exactly?
  3. probability of what exactly conditional on what exactly?
  4. define "require"
  5. define "outweigh"

ETA: Since multiple people seem to find this comment objectionable for reasons I don't understand, let me clarify a little. For 1, it would make some difference to my estimate whether we're conditioning on a literal halt of progress or just a significant slowing, and on how global the event needs to be. (This is a relatively minor ambiguity, but 90th percentiles can be pretty sensitive to such things.) For 2, it's not clear to me whether it's asking for the probability that a negative singularity happens conditional on nothing, conditional on no disaster, or conditional on badly-done AI, or whether it's asking for the probability that such a singularity could possibly happen. All of these would have strongly different answers. For 3, something similar applies. For 4, it's not clear whether to interpret "require" as "it would be nice", "it would be the best use of marginal resources", "without it there's essentially no chance of success", or something else. For 5, "outweigh" could mean outweigh in probability, outweigh in marginal value of risk reduction, outweigh in expected negative value, or something else.
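To illustrate why the choice of conditioning event matters so much, here is a quick sketch using the law of total probability. The numbers are entirely made up for illustration; only the arithmetic relationship between the conditional and unconditional figures is the point.

```python
# Hypothetical, made-up numbers: how the conditioning event changes the answer.
p_bad = 0.5            # assumed P(AI is badly done)
p_neg_given_bad = 0.6  # assumed P(negative singularity | badly done AI)
p_neg_given_ok = 0.05  # assumed P(negative singularity | AI done well)

# Unconditional probability via the law of total probability:
# P(neg) = P(neg | bad) * P(bad) + P(neg | ok) * P(ok)
p_neg = p_bad * p_neg_given_bad + (1 - p_bad) * p_neg_given_ok
print(p_neg)  # 0.325, far from the 0.6 conditional figure
```

Under these toy assumptions, "probability conditional on badly-done AI" (0.6) and "probability conditional on nothing" (0.325) differ by nearly a factor of two, so an answer is uninterpretable unless the question pins down the conditioning event.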

Comment author: XiXiDu 14 June 2011 10:13:49AM *  3 points [-]
  1. P(human-level AI by ? (year) | no wars ∧ no natural disasters ∧ beneficial political and economic development) = 10%/50%/90%/0%
  2. P(negative Singularity | badly done AI) = ?; P(extremely negative Singularity | badly done AI) = ? (where 'negative' = human extinction and 'extremely negative' = humans suffer).
  3. P(superhuman intelligence within hours | human-level AI on supercomputer with Internet connection) = ?; P(superhuman intelligence within days | human-level AI on supercomputer with Internet connection) = ?; P(superhuman intelligence within < 5 years | human-level AI on supercomputer with Internet connection) = ?
  4. How much money does the SIAI currently (this year) require (to be instrumental in maximizing your personal long-term goals, e.g. to survive the Singularity by solving friendly AI): less / no more / little more / much more / vastly more?
  5. What existential risk is currently most likely to have the greatest negative impact on your personal long-term goals, under the condition that nothing is done to mitigate the risk?
  6. Can you think of any milestone such that if it were ever reached you would expect human-level machine intelligence to be developed within five years thereafter?