Qiaochu_Yuan comments on I attempted the AI Box Experiment (and lost) - Less Wrong

Post author: Tuxedage 21 January 2013 02:59AM


Comment author: Qiaochu_Yuan 22 January 2013 01:49:23AM 3 points

For all but the last one it seems like you'd need an in-depth knowledge of the gatekeeper's psyche and personal life.

Of course. How else would you know which horrible, horrible things to say? (I also have in mind things designed to get a more visceral reaction from the gatekeeper, e.g. graphic descriptions of violence. Please don't ask me to be more specific about this because I really, really don't want to.)

Comment author: pleeppleep 22 January 2013 01:51:40AM 1 point

You don't have to be specific, but how would grossing out the gatekeeper bring you closer to escape?

Comment author: Qiaochu_Yuan 22 January 2013 02:07:06AM 3 points

Psychological torture could help make the gatekeeper more compliant in general. I believe the keyword here is "traumatic bonding."

But again, I'm working from general principles here, e.g. those embodied in the tragedy of group selectionism. I have no reason to expect that "strategies that will get you out of the box" and "strategies that are not morally repugnant" have a large intersection. It seems much more plausible to me that most effective strategies will look like the analogue of cannibalizing other people's daughters than the analogue of restrained breeding.