shminux comments on I played the AI Box Experiment again! (and lost both games) - Less Wrong

35 Post author: Tuxedage 27 September 2013 02:32AM




Comment author: shminux 27 September 2013 09:44:06PM 2 points [-]

Conversations with Tuxedage indicate that substantive prior research on a gatekeeper opponent is a key element of an effective escape strategy. Such research seems to me to violate the spirit of the experiment -- the AI should know no more about the researcher than they reveal over the terminal.

That's not quite right. The AI and the researcher may have been interacting on a variety of issues before the AI decided to break out. This is nearly identical to Tuxedage talking to his future opponents on IRC or similar interactive media before they decided to run the experiment.

Comment author: Betawolf 27 September 2013 09:52:40PM 2 points [-]

What I was getting at is that the current setup allows for side-channel methods of gathering information on your opponent (digging up their identity, reading their Facebook page, etc.).

While I accept that this interaction could be one of many between the AI and the researcher, it can be simulated in the anonymous case via an 'I was previously GatekeeperXXX, I'm looking to resume a game with AIYYY' declaration in the public channel, while still preserving the player's anonymity.

Comment author: shminux 27 September 2013 10:54:48PM -1 points [-]

By the way, wouldn't Omegle with the common interests specified as AIBOX basically do the trick?

Comment author: Betawolf 28 September 2013 01:09:35AM 2 points [-]

For the basic interaction setup, yes. For a sense of community and for reliable collection of the logs, perhaps not. I'm also not sure how much anonymity Omegle affords users with respect to each other and to Omegle itself.