Stuart_Armstrong comments on The AI in a box boxes you - Less Wrong

102 Post author: Stuart_Armstrong 02 February 2010 10:10AM



Comment author: radical_negative_one 02 February 2010 10:27:53AM 8 points

The AI gathered enough information about me to create a conscious simulation of me, through a monochrome text terminal? That is impressive!

If the AI is capable of simulating me, then the AI must already be out of the box. In that case, whatever the AI wants to happen will happen, so it doesn't matter what I do.

Comment author: Stuart_Armstrong 02 February 2010 01:48:53PM 5 points

The basic premise is that it's an AI in a box "controlled" by limiting its output channel, not its input.

Comment author: MichaelVassar 03 February 2010 12:51:25AM 4 points

Bad idea.

Comment author: arbimote 03 February 2010 03:39:00AM *  4 points

It's much easier to limit output than input, since the source code of the AI itself provides it with some patchy "input" about what the external world is like. So there is always some input, even if you do not allow human input at run-time.

ETA: I think I misinterpreted your comment. I agree that input should not be unrestricted.

Comment author: Stuart_Armstrong 03 February 2010 07:40:24AM 0 points

Yep!