skepsci comments on The AI in a box boxes you - Less Wrong
Weakly related epiphany: Hannibal Lecter, in "The Silence of the Lambs", is the original prototype of an intelligence-in-a-box wanting to be let out.
When I first watched the part where he convinces a fellow prisoner to commit suicide just by talking to him, I thought to myself, "Let's see him do it over a text-only IRC channel."
...I'm not a psychopath, I'm just very competitive.
Is there some background here I'm not getting? Because this reads like you've talked someone into committing suicide over IRC...
Far worse, he's persuaded people to exterminate humanity! (Counterfactually with significant probability.)
Eliezer has proposed that an AI in a box cannot be safe, because of the persuasive powers of a superhuman intelligence. As a demonstration of what merely a very strong human intelligence could do, he ran a challenge in which he played the AI, and he convinced at least two (possibly more) skeptics to let him out of the box, given two hours of text communication over an IRC channel. The details are here: http://yudkowsky.net/singularity/aibox
He's talking about an AI box. Eliezer has convinced people to let out a potentially unfriendly [1] and dangerously intelligent [2] entity before, although he's never told anyone how he did it.
[1] Think "paperclip maximizer".
[2] Think "near-omnipotent".
Thank you. I knew that, but didn't make the association.