skepsci comments on The AI in a box boxes you - Less Wrong

102 Post author: Stuart_Armstrong 02 February 2010 10:10AM




Comment author: phaedrus 14 February 2012 08:35:29PM 12 points [-]

Weakly related epiphany: Hannibal Lecter, in "The Silence of the Lambs", is the original prototype of an intelligence-in-a-box wanting to be let out.

Comment author: Eliezer_Yudkowsky 15 February 2012 10:10:22AM 29 points [-]

When I first watched that part where he convinces a fellow prisoner to commit suicide just by talking to them, I thought to myself, "Let's see him do it over a text-only IRC channel."

...I'm not a psychopath, I'm just very competitive.

Comment author: skepsci 15 February 2012 11:46:50AM 4 points [-]

Is there some background here I'm not getting? Because this reads like you've talked someone into committing suicide over IRC...

Comment author: wedrifid 18 February 2012 05:51:08AM 4 points [-]

> Is there some background here I'm not getting? Because this reads like you've talked someone into committing suicide over IRC...

Far worse, he's persuaded people to exterminate humanity! (Counterfactually with significant probability.)

Comment author: Michael_Sullivan 15 February 2012 12:12:27PM 6 points [-]

Eliezer has proposed that an AI in a box cannot be safe because of the persuasion powers of a superhuman intelligence. As a demonstration of what merely a very strong human intelligence could do, he conducted a challenge in which he played the AI, and convinced at least two (possibly more) skeptics to let him out of the box when given two hours of text communication over an IRC channel. The details are here: http://yudkowsky.net/singularity/aibox

Comment author: JoachimSchipper 15 February 2012 12:11:29PM 6 points [-]

He's talking about an AI box. Eliezer has convinced people to let out a potentially unfriendly [1] and dangerously intelligent [2] entity before, although he's not told anyone how he did it.

[1] Think "paperclip maximizer".

[2] Think "near-omnipotent".

Comment author: skepsci 15 February 2012 01:01:25PM *  2 points [-]

Thank you. I knew that, but didn't make the association.