Decius comments on I attempted the AI Box Experiment (and lost) - Less Wrong

47 Post author: Tuxedage 21 January 2013 02:59AM


Comment author: Tuxedage 21 January 2013 04:30:55AM * 7 points

<accolade> yeah

<accolade> I think for a superintelligence it would be a piece of cake to hack a human

<accolade> although I guess I'm Cpt. Obvious for saying that here :)

<Tuxedage> accolade, I actually have no idea what the consensus is, now that the experiment was won by EY

<Tuxedage> We should do a poll or something

<accolade> absolutely. I'm surprised that hasn't been done yet

Poll: Do you think a superintelligent AGI could escape an AI-Box, given that the gatekeepers are highly trained in resisting the AI's persuasive tactics, and that the guards are competent and organized?


Comment author: Decius 21 January 2013 07:37:49AM * 1 point

If the gatekeepers are evaluating the output of the AI and deciding whether or not to let the AI out, it seems trivial to say that there is something they could see that would cause them to let the AI out.

If the gatekeepers are simply playing a suitably high-stakes game where they lose iff they say they lose, I think no AI could ever beat a trained rationalist.