Michael_Sullivan comments on The AI in a box boxes you - Less Wrong

102 Post author: Stuart_Armstrong 02 February 2010 10:10AM




Comment author: skepsci 15 February 2012 11:46:50AM 4 points

Is there some background here I'm not getting? Because this reads like you've talked someone into committing suicide over IRC...

Comment author: Michael_Sullivan 15 February 2012 12:12:27PM 6 points

Eliezer has proposed that an AI in a box cannot be safe, because of the persuasion powers of a superhuman intelligence. As a demonstration of what merely a very strong human intelligence could do, he ran a challenge in which he played the AI and convinced at least two (possibly more) skeptics to let him out of the box, given two hours of text communication over an IRC channel. The details are here: http://yudkowsky.net/singularity/aibox