utilitymonster comments on What is the best compact formalization of the argument for AI risk from fast takeoff? - Less Wrong

Post author: utilitymonster 13 March 2012 01:44AM




Comment author: utilitymonster 14 March 2012 08:20:36PM 1 point

I prefer this briefer formalization, since it avoids some of the vagueness of "adequate preparations" and makes premise (6) clearer.

  1. At some point in the development of AI, there will be a very swift increase in the optimization power of the most powerful AI, moving from a non-dangerous level to a level of superintelligence. (Fast take-off)
  2. This AI will maximize a goal function.
  3. Given fast take-off and maximization of a goal function, the superintelligent AI will have a decisive advantage unless adequate controls are used.
  4. Adequate controls will not be used. (E.g., the AI won't be boxed, or boxing won't work.)
  5. Therefore, the superintelligent AI will have a decisive advantage. (From 1–4)
  6. Unless that AI is designed with goals that stably and extremely closely align with ours, if the superintelligent AI has a decisive advantage, civilization will be ruined. (Friendliness is necessary)
  7. The AI will not be designed with goals that stably and extremely closely align with ours.
  8. Therefore, civilization will be ruined shortly after fast take-off. (From 5–7)
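
The deductive skeleton of the argument can be rendered in propositional logic, which makes it easy to check that the conclusion follows from the premises by two applications of modus ponens. The letter names below are my own labels, not from the original post:

```latex
% Propositional sketch of the argument above (labels are mine):
% F = fast take-off, G = the AI maximizes a goal function,
% C = adequate controls are used, D = decisive advantage,
% A = goals stably and extremely closely aligned with ours, R = ruin.
\begin{align*}
&1.\ F                                  &&\text{(premise)}\\
&2.\ G                                  &&\text{(premise)}\\
&3.\ (F \land G \land \lnot C) \to D    &&\text{(premise)}\\
&4.\ \lnot C                            &&\text{(premise)}\\
&5.\ D                                  &&\text{(modus ponens, 1--4)}\\
&6.\ (\lnot A \land D) \to R            &&\text{(premise)}\\
&7.\ \lnot A                            &&\text{(premise)}\\
&8.\ R                                  &&\text{(modus ponens, 5--7)}
\end{align*}
```

On this rendering the argument is formally valid, so disagreement has to target one of the five premises (1, 2, 3, 4, 6, or 7) rather than the inference steps.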