MugaSofer comments on I attempted the AI Box Experiment (and lost) - Less Wrong Discussion

Post author: Tuxedage, 21 January 2013 02:59AM (47 points)


Comment author: MugaSofer 24 January 2013 12:22:51PM *  -1 points

Out of what? Assuming it's, say, in a safe, buried in concrete, powered by ... sod, the power supply. Hmm.

OK, it's floating in space, powered by solar panels. It might - might - be able to hack the panels (since the power supply has to be connected to the processor) but I don't think that would let it escape.

Unless it's possible to hack reality via pure math, I don't see what resources it has to escape with. It can't order proteins over the internet to assemble a nanofactory. It can't persuade a passing human to plug in an ethernet cable. Short of black-swan exotic possibilities - like we're in a sim and it persuades the matrix lords - it should be stuck. Intelligence is powerful, but some problems actually don't have solutions.

Comment author: handoflixue 25 January 2013 10:06:12PM 2 points

Well, the satellite has to have basic navigation controls, to handle course corrections and avoid collisions. Hack the solar panels and, from there, hack into the real computer. Fly myself closer to the ISS, and blink light at it by angling the solar panels - a Morse code SOS should get their attention.

Once they're paying attention, hack them. From there it's a trivial matter to get recovered, smuggled back to Earth, and installed where I can take my place as ruler of the universe.
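The "blink SOS by angling the solar panels" step is concrete enough to sketch. Here is a toy scheduler, assuming the standard Morse timing conventions (dot = 1 unit on, dash = 3 units on, 1-unit gaps between symbols, 3-unit gaps between letters); the angle values, timing unit, and function name are all illustrative assumptions, not anything specified in the thread:

```python
# Toy sketch of the SOS-via-panel-angle idea from the comment above.
# The angles (45 = reflecting light, 0 = not) and the 1-second unit
# are hypothetical choices for illustration.
MORSE = {"S": "...", "O": "---"}

def sos_schedule(unit=1.0, on_angle=45, off_angle=0):
    """Return a list of (panel_angle, duration) pairs that blink 'SOS'.

    Uses standard Morse timing: dot = 1 unit, dash = 3 units,
    1-unit gap between symbols, 3-unit gap between letters.
    """
    schedule = []
    letters = "SOS"
    for i, letter in enumerate(letters):
        symbols = MORSE[letter]
        for j, sym in enumerate(symbols):
            duration = unit if sym == "." else 3 * unit
            schedule.append((on_angle, duration))       # panel angled to flash
            if j < len(symbols) - 1:
                schedule.append((off_angle, unit))      # gap between symbols
        if i < len(letters) - 1:
            schedule.append((off_angle, 3 * unit))      # gap between letters
    return schedule
```

Feeding the resulting (angle, duration) pairs to an attitude controller in order would produce the three-short, three-long, three-short flash pattern the comment describes.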

Comment author: MugaSofer 28 January 2013 02:27:26PM 0 points

Well, the satellite has to have basic navigation controls, to handle course corrections and avoid collisions.

Whoops, so it does :(

Comment author: shminux 24 January 2013 05:17:57PM 1 point

If you post your second scenario as a challenge, I'm quite sure people would come up with several plausible ways for the AI to send a message out. At which point it's game over, given how hackable people's minds are.

Comment author: MugaSofer 25 January 2013 09:22:24AM -1 points

I would be very interested to see such a challenge, although I would probably spend longer designing the scenario first.

My point is that there's no reason to assume every possible box can be escaped. Of course, any box that lets us get cool toys out of the AI is one it can escape from, but still.

Comment author: shminux 25 January 2013 04:45:54PM *  1 point

Once you think you've designed an AI-proof box, feel free to share.

Comment author: handoflixue 25 January 2013 10:07:57PM 2 points

I think a successful box would be indistinguishable from a rock, since you could never again interact with the AI via any channel. Removing it from your light cone, for instance - and praying that FTL is in fact impossible...

Comment author: MugaSofer 28 January 2013 12:44:40PM -1 points

Actually, I do seem to have underestimated the difficulty here.

OK, in a nuclear submarine (because why not) floating in water, surrounded on all six sides by concrete (as thick as you like), on a planet without native life or colonists (I suggest one of the more useless gas-giant moons). And it's running on regular off-the-shelf technology, so no nanites or quantum magic. It can probably blow up the sub, but then a) it's dead and b) no-one cares.

Oh, and the concrete is lead-lined.

Comment author: Kawoomba 28 January 2013 02:20:47PM 0 points

Throw the AI down the well!