MugaSofer comments on I attempted the AI Box Experiment (and lost) - Less Wrong

47 Post author: Tuxedage 21 January 2013 02:59AM




Comment author: MugaSofer 25 January 2013 09:22:24AM -1 points

I would be very interested to see such a challenge, although I would probably spend longer designing the scenario first.

My point is that there's no reason to assume every possible box can be escaped from. Of course, any box that lets us get cool toys out of the AI is one it can escape from, but still.

Comment author: shminux 25 January 2013 04:45:54PM 1 point

Once you think you've designed an AI-proof box, feel free to share.

Comment author: handoflixue 25 January 2013 10:07:57PM 2 points

I think a success would be indistinguishable from a rock, since you could never again interact with the AI via any channel. Removing it from your light cone, for instance. And praying that FTL is in fact impossible...

Comment author: MugaSofer 28 January 2013 12:44:40PM -1 points

Actually, I do seem to have underestimated the difficulty here.

OK, in a nuclear submarine (because why not) floating in water, surrounded on all six sides by concrete (as thick as you like), on a planet without native life or colonists (I suggest one of the more useless gas-giant moons). And it's running on regular off-the-shelf technology, so no nanites or quantum magic. It can probably blow up the sub, but then a) it's dead and b) no one cares.

Oh, and the concrete is lead-lined.

Comment author: Kawoomba 28 January 2013 02:20:47PM 0 points

Throw the AI down the well!