Betawolf comments on I played the AI Box Experiment again! (and lost both games) - Less Wrong Discussion

35 points. Post author: Tuxedage 27 September 2013 02:32AM

Comments (123)

Comment author: Betawolf 27 September 2013 09:15:14PM 9 points

Prompted by Tuxedage learning to win, and various concerns about the current protocol, I have a plan to enable more AI-Box games whilst preserving the logs for public scrutiny.

See this: http://bæta.net/posts/anonymous-ai-box.html

Comment author: Tuxedage 27 September 2013 09:47:45PM 7 points

I support this and I hope it becomes a thing.

Comment author: Gurkenglas 28 September 2013 02:47:53PM 3 points

You forgot to address Eliezer's point that "10% of AI-box experiments were won even by a human emulating an AI" is more effective against future proponents of deliberately creating boxed AIs than "Careful, the gatekeeper might be persuaded by these 15 arguments we have been able to think of".

I don't think the probability of "AIs can find unboxing arguments we didn't" is far enough below 1 for preparation to matter. If there is any chance of mathematically exhausting those arguments, that research should be conducted by a select circle of individuals who won't disclose the critical unboxing arguments until there is a proof of safety.

Comment author: shminux 27 September 2013 09:44:06PM 2 points

Conversations with Tuxedage indicate that substantive prior research on a gatekeeper opponent is a key element of an effective escape strategy. Such research seems to me to violate the spirit of the experiment -- the AI will know no more about the researcher than they reveal over the terminal.

That's not quite right. The AI and the researcher may have been interacting on a variety of issues before the AI decided to break out. This is nearly identical to Tuxedage talking to his future opponents on IRC or similar interactive media before they decided to run the experiment.

Comment author: Betawolf 27 September 2013 09:52:40PM 2 points

What I was getting at is that the current setup allows for side-channel methods of getting information on your opponent (digging to find their identity, reading their Facebook page, etc.).

While I accept that this interaction could be one of many between the AI and the researcher, it can be simulated in the anonymous case via an "I was previously GatekeeperXXX; I'm looking to resume a game with AIYYY" declaration in the public channel, while still preserving the player's anonymity.

Comment author: shminux 27 September 2013 10:54:48PM -1 points

By the way, wouldn't Omegle with the common interests specified as AIBOX basically do the trick?

Comment author: Betawolf 28 September 2013 01:09:35AM 2 points

For the basic interaction setup, yes. For a sense of community and for reliable collection of the logs, perhaps not. I'm also not sure how anonymous Omegle keeps users from each other, or from Omegle itself.