RedErin comments on I played as a Gatekeeper and came pretty close to losing in a couple of occasions. Logs and a brief recap inside. - Less Wrong

5 [deleted] 08 February 2015 04:32PM




Comment author: Luke_A_Somers 08 February 2015 04:53:21PM 7 points

Whoa, someone actually letting the transcript out. Has that ever been done before?

Comment author: RedErin 10 February 2015 08:08:00PM 1 point

> Whoa, someone actually letting the transcript out. Has that ever been done before?

Yes, but only when the gatekeeper wins. If the AI wins, the AI player wouldn't want the transcript to get out, because then their strategy would be less effective the next time they played.

Comment author: Jiro 17 February 2015 05:16:18PM 0 points

I would imagine that if we ever actually build such an AI, we would conduct AI-box experiments beforehand to discover AI strategies and figure out how to counter them. Humans who become gatekeepers for the actual AI would be given the transcripts of those experiment sessions to study as part of their gatekeeper training.

Letting out the transcript, then, would be a good thing. It would make the AI player's job harder, since in the next experiment the human player will be aware of those strategies — but that is exactly the situation we want: when facing an actual AI, the human gatekeeper will be aware of those strategies too.

Comment author: lmm 13 February 2015 07:24:30PM 0 points

Doesn't the same logic apply to the gatekeeper?

Comment author: RedErin 13 February 2015 09:20:33PM 0 points

The Gatekeeper usually wants to publish if they win, to brag. Their strategy usually isn't a secret; it's simply to resist.