GMHowe comments on I tried my hardest to win in an AI box experiment, and I failed. Here are the logs. - Less Wrong Discussion

6 [deleted] 27 January 2015 10:06PM

Comment author: GMHowe 29 January 2015 01:29:35AM 2 points

I was not aware of Tuxedage's ruleset. However, any ruleset that allows the AI to win without being explicitly released by the gatekeeper is problematic.

If asd had won because the gatekeeper left, it would only have demonstrated that being unpleasant can cause people to disengage from a conversation, which is quite different from demonstrating that it is possible to convince a person to release a potentially dangerous AI.

Comment author: wobster109 31 January 2015 02:23:01AM 0 points

Upon reflection, I largely agree. Tuxedage's ruleset seems tailored for games where money is on the line, and in that case it seems very unfair to say the gatekeeper (GK) can leave right away. GK would be heavily incentivized to leave immediately, since that would earn GK's charity a guaranteed donation.