phaedrus comments on The AI in a box boxes you - Less Wrong

Post author: Stuart_Armstrong 02 February 2010 10:10AM




Comment author: phaedrus 14 February 2012 08:35:29PM 12 points [-]

Weakly related epiphany: Hannibal Lecter, in "The Silence of the Lambs", is the original prototype of an intelligence-in-a-box wanting to be let out.

Comment author: Eliezer_Yudkowsky 15 February 2012 10:10:22AM 29 points [-]

When I first watched that part where he convinces a fellow prisoner to commit suicide just by talking to them, I thought to myself, "Let's see him do it over a text-only IRC channel."

...I'm not a psychopath, I'm just very competitive.

Comment author: Psy-Kosh 17 February 2012 12:56:54AM *  8 points [-]

Joking aside, this is kind of an issue in real life. I help mod and participate in a forum where, well, depressed/suicidal people can come to talk, other people can talk to them/listen/etc, try to calm them down or get them to get psychiatric help if appropriate, etc... (deliberately omitting link unless you knowingly ask for it, since, to borrow a phrase you've used, it's the sort of place that can break your heart six ways before breakfast).

Anyway, sometimes trolls show up. Well, "troll" is too weak a word in this case: predators who go after the vulnerable and try to push them that much farther. Given the nature of it, with anonymity and such, it's kind of hard to say, but it's quite possible we've lost some people because of those sorts of predators.

(There have even been court cases and convictions against such "suicide predators".)

Comment author: skepsci 15 February 2012 11:46:50AM 4 points [-]

Is there some background here I'm not getting? Because this reads like you've talked someone into committing suicide over IRC...

Comment author: wedrifid 18 February 2012 05:51:08AM 4 points [-]

Is there some background here I'm not getting? Because this reads like you've talked someone into committing suicide over IRC...

Far worse, he's persuaded people to exterminate humanity! (Counterfactually with significant probability.)

Comment author: Michael_Sullivan 15 February 2012 12:12:27PM 6 points [-]

Eliezer has proposed that an AI in a box cannot be safe because of the persuasive powers of a superhuman intelligence. As a demonstration of what merely a very strong human intelligence could do, he conducted a challenge in which he played the AI, and convinced at least two (possibly more) skeptics to let him out of the box when given two hours of text communication over an IRC channel. The details are here: http://yudkowsky.net/singularity/aibox

Comment author: JoachimSchipper 15 February 2012 12:11:29PM 6 points [-]

He's talking about an AI box. Eliezer has convinced people to let out a potentially unfriendly [1] and dangerously intelligent [2] entity before, although he's not told anyone how he did it.

[1] Think "paperclip maximizer".

[2] Think "near-omnipotent".

Comment author: skepsci 15 February 2012 01:01:25PM *  2 points [-]

Thank you. I knew that, but didn't make the association.