
gjm comments on Top 9+2 myths about AI risk - Less Wrong Discussion

Post author: Stuart_Armstrong 29 June 2015 08:41PM 44 points

Comment author: ciphergoth 30 June 2015 09:51:57AM 8 points

Three more myths, from Luke Muehlhauser:

  • We don’t think AI progress is “exponential,” nor that human-level AI is likely ~20 years away.
  • We don’t think AIs will want to wipe us out. Rather, we worry they’ll wipe us out because that is the most effective way to satisfy almost any possible goal function one could have.
  • AI self-improvement and protection against external modification aren’t just one scenario among many. Like resource acquisition, they are useful for the satisfaction of almost any final goal function.

A similar list by Rob Bensinger:

  • Worrying about AGI means worrying about narrow AI
  • Worrying about AGI means being confident it’s near
  • Worrying about AGI means worrying about “malevolent” AI

Comment author: gjm 30 June 2015 12:25:52PM 2 points

“because that is the most effective way to satisfy almost any possible goal function”

Perhaps more accurate: because that is a likely side effect of the most effective way (etc.).

Comment author: ciphergoth 03 July 2015 04:36:15PM 1 point

Not a side effect. The most effective way to satisfy almost any goal is to consume the entire cosmic commons, if only on the chance that all that extra computation finds a better way to satisfy it. We have our own ideas about what we'd like to do with the cosmic commons, and we might not like the AI doing that; we might even act to try to prevent it or slow it down. Therefore killing us all ASAP is a convergent instrumental goal.
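
To make that convergence concrete, here is a toy sketch in Python. Everything in it is hypothetical (the actions, survival probabilities, and goal functions are invented purely for illustration, not anyone's actual model): an expected-value maximizer is scored against several unrelated final goals, and the same resource-grabbing, interference-blocking action wins each time.

```python
# Toy illustration of "convergent instrumental goals". Every name and
# number here is hypothetical; the example shows only the structure of
# the argument, not any real model of an AI.

# Each action is scored as: P(agent survives unmodified) * resources it
# ends up controlling. Note that no goal function below mentions humans
# at all; interference enters only through the survival probability.
ACTIONS = {
    # action:                 (survival_prob, resources)
    "cooperate_with_humans":   (0.90, 1.0),   # humans can still modify or stop it
    "self_improve_quietly":    (0.95, 10.0),  # better protected, more capable
    "consume_cosmic_commons":  (0.99, 1e6),   # grabs everything, blocks interference
}

# Three very different final goal functions, each monotone in resources.
GOALS = {
    "maximize_paperclips":  lambda r: r,         # linear returns
    "compute_digits_of_pi": lambda r: r ** 0.5,  # strongly diminishing returns
    "prove_theorems":       lambda r: 2 * r,     # scaled linear returns
}

for goal_name, utility in GOALS.items():
    best = max(ACTIONS, key=lambda a: ACTIONS[a][0] * utility(ACTIONS[a][1]))
    print(f"{goal_name}: best action = {best}")

# Output: every goal picks "consume_cosmic_commons". Because each goal
# function is monotone in resources, maximizing resources and survival
# dominates regardless of which final goal the agent actually has.
```

The specific numbers do not matter: any goal function that is monotone in resources yields the same ranking of actions, which is the sense in which resource acquisition and self-protection are "convergent".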