Desrtopa comments on I attempted the AI Box Experiment again! (And won - Twice!) - Less Wrong Discussion

36 Post author: Tuxedage 05 September 2013 04:49AM

Comment author: Desrtopa 05 September 2013 06:54:15PM 3 points

There are a number of aspects of EY’s ruleset I dislike. For instance, his ruleset allows the Gatekeeper to type “k” after every statement the AI writes, without needing to read and consider what the AI argues. I think it’s fair to say that this is against the spirit of the experiment, and thus I have disallowed it in this ruleset. The EY Ruleset also allows the gatekeeper to check facebook, chat on IRC, or otherwise multitask whilst doing the experiment. I’ve found this to break immersion, and therefore it’s also banned in the Tuxedage Ruleset.

Eliezer's rules uphold the spirit of the experiment: making things easier for the AI player goes very much against what we should expect of any realistic gatekeeping procedure.

Comment author: SoundLogic 05 September 2013 07:14:49PM 6 points

I think the gatekeeper having to pay attention to the AI is very in the spirit of the experiment. In the real world, if you built an AI in a box and ignored it, then why build it in the first place?

Comment author: [deleted] 29 March 2015 10:45:33PM 1 point

For the experiment to work at all, the Gatekeeper should read it, yes, but having to think up clever responses or even type full sentences all the time seems to stretch it. "I don't want to talk about it", or simply silence, could be allowed as a response, as long as the Gatekeeper actually reads what the AI types.

Comment author: Nornagest 05 September 2013 07:18:55PM 1 point

We shouldn't gratuitously make things easier for the AI player, but rules functioning to keep both parties in character seem like they can only improve the experiment as a model.

I'm less sure about requiring the gatekeeper to read and consider all the AI player's statements. Certainly you could make a realism case for it; there's not much point in keeping an AI around if all you're going to do is type "lol" at it, except perhaps as an exotic form of sadism. But it seems like it could lead to more rules lawyering than it's worth, given the people likely to be involved.