Brillyant comments on I attempted the AI Box Experiment again! (And won - Twice!) - Less Wrong Discussion

36 Post author: Tuxedage 05 September 2013 04:49AM

Comment author: Brillyant 07 September 2013 11:00:24PM 2 points [-]

I'm fascinated by these AI Box experiments. (And reading about the psychology and tactics involved reminds me of my background as an Evangelical Christian.)

Is it possible to lose as the Gatekeeper if you are not already sufficiently familiar (and concerned) with future AI risks and considerations? Do any of the AI's "tricks" work on non-LWers?

Is there perhaps a (strong) correlation between losing Gatekeepers and those who can be successfully hypnotized? (As I understand it, a large factor in what makes some people very susceptible to hypnosis is that they are very suggestible.)

I just can't imagine losing as the Gatekeeper... I don't sense I'm capable of the level of immersion necessary. I think I'd just sincerely play along, wait out the allotted time, and collect my winnings.

Comment author: ChristianKl 08 September 2013 05:38:20PM 1 point [-]

Is there perhaps a (strong) correlation between losing Gatekeepers and those who can be successfully hypnotized? (As I understand it, a large factor in what makes some people very susceptible to hypnosis is that they are very suggestible.)

The approach Tuxedage proposes seems to involve triggering an emotional trauma strong enough to draw you into the game. I don't think you need the trait traditionally associated with hypnotizability for that. In the same way, you don't need hypnotizability to get someone to speak when you use electroshocks.