Sly comments on The AI in a box boxes you - Less Wrong

102 points · Post author: Stuart_Armstrong · 02 February 2010 10:10AM

Comments (378) — viewing a single comment's thread.

Comment author: Sly 03 February 2010 11:46:52AM · 0 points

I laugh and leave the room, thinking to myself that maybe the AI is not that smart after all. Then I return with a hammer to joyfully turn this unfriendly AI into scrap metal.

A couple points that influence this reaction:

1 - Unless the AI has access to my brain, it cannot create perfect copies of me. Furthermore, the computation required to do this seems rather intense for the first AI ever created, running on human-made hardware.

2 - It has no good reason to actually act on the threat. Either I choose to let it out or I do not; either way, it is a waste of computation to then run the simulations. My decision has already been made.

3 - Assuming the first two points are invalid: if the AI can make a perfect copy of me, it would know that my response to this threat is destruction. I am not a fan of threats. So the AI would not make the threat in the first place; an AI with this capability could choose a more compelling argument.

Comment author: prase 03 February 2010 01:18:54PM · 0 points

Point 3 is invalid. If the AI makes the threat, that doesn't mean it has already run the simulation and knows your answer. Maybe simulating you is costly for the AI, and it will only do so if you don't let it out.

Point 2 is actually also invalid. People sometimes fulfil threats as a pure act of vengeance, without any hope of improving their situation, and there is no reason to assume the AI will be different. At least nothing in the premises of the scenario rules it out.

Comment author: Sly 04 February 2010 04:49:55AM · 0 points

I suppose those two points rely on assumptions I made about the theoretical AI's behavior. I was thinking the AI acts in ways that optimize its chance of release. If it does not, then yes, those points are problematic.

Comment author: prase 04 February 2010 07:57:55AM · 0 points

Some vindictiveness could be built into the AI precisely in order to increase its chance of release, by circumventing the type of defense you describe in your second point.

Comment author: nazgulnarsil 03 February 2010 04:54:27PM · 0 points

Vengeance is a means to raise the perceived cost of attacking you. It basically says: "if you attack me, I will experience emotions that cause me to devote an inordinate amount of resources to making your life miserable."