The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Certainly I'd think Eliezer (or anyone) would have much more trouble with an AI-box game if he had to get one person to convince another to let him out.
Eliezer surely would, but the fact that observers would be surprised was the point of the AI-box experiment.
As a short, non-technical, and not precisely accurate summary: if people can be surprised once when they were very confident, and can then add on extra layers of precaution and become as confident as they were before, they can repeat that cycle forever.