CarlShulman comments on I attempted the AI Box Experiment (and lost) - Less Wrong

47 Post author: Tuxedage 21 January 2013 02:59AM



Comment author: Eliezer_Yudkowsky 22 January 2013 12:56:34AM 9 points

Yeah, they'd both lack background knowledge to RP the conversation and would also, I presume, be much less willing to lose the money than if they'd ventured the bet themselves. Higher-stakes games are hard enough already (I was 1 for 3 on those when I called a halt). And if it did work against that demographic with unsolicited requests (which would surprise me) then there would be, cough, certain ethical issues.

Comment author: CarlShulman 22 January 2013 06:50:31AM 17 points

I was the 1 success out of 3, preceding the two losses. I went into it with an intention of being indifferent to the stakes, driven by interest in seeing the methods. I think you couldn't win against anyone with a meaningful outside-of-game motive to win (for money or for status), and you got overconfident after playing with me, leading you to accept the other >$10 challenges and lose.

So I would bet against you winning any random high-stakes (including people who go in eager to report that they won for internet cred, but not people who had put the money in escrow or the equivalent) game, and expect a non-decent success rate for this:

(I haven't played this one but would give myself a decent chance of winning, against a Gatekeeper who thinks they could keep a superhuman AI inside a box, if anyone offered me sufficiently huge stakes to make me play the game ever again.)

Comment author: V_V 22 January 2013 01:38:42PM -2 points

So you are basically saying that you didn't take the game seriously.

Even if your actual stakes were low, you should have played the role of a gatekeeper assigned to the task of guarding a potentially dangerous AI. Therefore, your player character should have had very high stakes.

Comment author: falenas108 22 January 2013 02:59:29PM 4 points

No, high in-game stakes do not imply high out-of-game stakes.

In game, the gatekeeper could be convinced that it would be worth it to let the AI out of the box. If this happens, the gatekeeper has no motivation not to. However, if there is an external bet, then the gatekeeper always has a motivation not to let the AI out, even if they think it would be best for the hypothetical world.

So, a game without stakes is actually most realistic, provided the gatekeeper is able to pretend they are actually in the scenario.

Comment author: V_V 22 January 2013 08:40:45PM 0 points

Well, in-game, the gatekeeper has no reason to believe anything the AI could promise or threaten.