DanArmak comments on Holden Karnofsky's Singularity Institute Objection 2 - Less Wrong

Post author: ciphergoth 11 May 2012 07:18AM




Comment author: DanArmak 12 May 2012 03:35:55PM 3 points

My layman's understanding of the SI position is as follows:

  • Many different kinds of AIs are possible, and humans will keep building AIs of different types and power to achieve different goals.
  • Any such attempt has a chance of producing an AGI strong enough to reshape the world, and that AGI in turn has a chance of being uFAI. It doesn't matter whether the programmers' intent matches the result in these cases, or what the exact probabilities are. It matters only that the probability of uFAI of unbounded power is non-trivial (pick your own required minimum probability here).
  • The only way to prevent this is to make an FAI that will expand its power to become a singleton, preventing any other AI or agent from gaining superpowers within its future light cone. Again, it doesn't matter if there is a chance of failure in this mission, as long as success is likely enough (pick your required probability, but I think 10% has already been suggested as sufficient in this thread).
  • Making a super-powerful FAI would of course also solve a huge number of other problems humans have, which is a nice bonus.