Pentashagon comments on I attempted the AI Box Experiment (and lost) - Less Wrong

47 Post author: Tuxedage 21 January 2013 02:59AM




Comment author: Tuxedage 21 January 2013 04:30:55AM *  7 points [-]

<accolade> yeah

<accolade> I think for a superintelligence it would be a piece of cake to hack a human

<accolade> although I guess I'm Cpt. Obvious for saying that here :)

<Tuxedage> accolade, I actually have no idea what the consensus is, now that the experiment was won by EY

<Tuxedage> We should do a poll or something

<accolade> absolutely. I'm surprised that hasn't been done yet

Poll: Do you think a superintelligent AGI could escape an AI-Box, given that the gatekeepers are highly trained in resisting the AI's persuasive tactics, and that the guards are competent and organized?


Comment author: Pentashagon 21 January 2013 07:08:56PM 4 points [-]

I'm not even sure we could build hardware that would keep an unfriendly AI in a box, even without any channel of communication with humans. There are many ways for EM radiation to leak between systems (see TEMPEST). There are bugs in hardware. And there are probably unknown laws of physics, and certainly unknown ways of exploiting the laws we do know.