shokwave comments on Cryptographic Boxes for Unfriendly AI - Less Wrong

24 points · Post author: paulfchristiano 18 December 2010 08:28AM




Comment author: Alicorn 19 December 2010 03:49:31PM 9 points

Or you build yourself a superweapon that you use to escape, and then go on to shut down your company's weapons division and spend your spare time being a superhero and romancing your assistant and fighting a pitched battle with a disloyal employee.

Comment author: shokwave 19 December 2010 04:26:01PM 9 points

This reply and its parent comment constitute the "Iron Man Argument" against any "put the AI in a box" approach to AGI and Friendliness concerns. I predict it will be extremely effective.