pleeppleep comments on I attempted the AI Box Experiment (and lost) - Less Wrong Discussion

47 Post author: Tuxedage 21 January 2013 02:59AM

Comment author: pleeppleep 22 January 2013 01:51:40AM 1 point

You don't have to be specific, but how would grossing out the gatekeeper bring you closer to escape?

Comment author: Qiaochu_Yuan 22 January 2013 02:07:06AM 3 points

Psychological torture could help make the gatekeeper more compliant in general. I believe the keyword here is "traumatic bonding."

But again, I'm working from general principles here, e.g. those embodied in the tragedy of group selectionism. I have no reason to expect that "strategies that will get you out of the box" and "strategies that are not morally repugnant" have a large intersection. It seems much more plausible to me that most effective strategies will look like the analogue of cannibalizing other people's daughters than like the analogue of restrained breeding.