DaFranker comments on I attempted the AI Box Experiment (and lost) - Less Wrong

47 Post author: Tuxedage 21 January 2013 02:59AM


Comment author: wedrifid 22 January 2013 02:26:07PM 15 points [-]

Better method: set up a script that responds to any and all text with "AI DESTROYED". If you have to wait for the person to start typing, they may try to bore you into opening your eyes wondering why the experiment hasn't started yet, and you might accidentally read something.
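A minimal sketch of such an auto-responder (a hypothetical illustration, not part of the original comment): every incoming message, whatever its content, gets the same fixed reply, so the gatekeeper never has to read anything the AI types.

```python
import sys

def respond(message: str) -> str:
    """Return the fixed reply for any input whatsoever."""
    return "AI DESTROYED"

if __name__ == "__main__":
    # Reply to every line the AI sends, without inspecting it.
    for line in sys.stdin:
        print(respond(line))
```

Hooking this up to an actual IRC channel would need a client library on top, but the point is that the response logic ignores its input entirely.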

All good security measures. The key feature seems to be that they are progressively better approximations of not having an unsafe AI with a gatekeeper and an IRC channel in the first place!

Comment author: DaFranker 22 January 2013 02:59:28PM 3 points [-]

Indeed. In fact, most of the solutions I've seen mentioned lately follow a single trend, edging closer and closer towards:

"Build a completely unsafe and suspicious AI, put it on a disconnected small computer with a bunch of nanites for self-modification and a large power reserve, with so many walls and physical barriers that it is impossible for the AI to get through with the amount of energy it could generate if it turned half of its materials into antimatter, and then put no input or output channels there of any kind, just have a completely useless multi-trillion-dollar marvel of science and engineering sitting in the practical equivalent of a black hole."

Comment author: MugaSofer 23 January 2013 03:00:06PM -2 points [-]

What if the AI uses the walls as fuel? Better to just keep it stuck on your server farm ;)