falenas108 comments on Holden Karnofsky's Singularity Institute Objection 2 - Less Wrong

11 Post author: ciphergoth 11 May 2012 07:18AM




Comment author: falenas108 11 May 2012 02:12:15PM 0 points

If a programmer chooses to "unleash an AGI as an agent" with the hope of gaining power, it seems that this programmer will be deliberately ignoring conventional wisdom about what is safe in favor of shortsighted greed. I do not see why such a programmer would be expected to make use of any "Friendliness theory" that might be available. (Attempting to incorporate such theory would almost certainly slow the project down greatly, and thus would bring the same problems as the more general "have caution, do testing" counseled by conventional wisdom.)

But such programmers may believe they have a working theory of Friendliness when in fact it would produce a UFAI. If SI already had ready-made code that could be slapped on to ensure Friendliness, this type of programmer would use it.