Eliezer_Yudkowsky comments on The AI in a box boxes you - Less Wrong

102 Post author: Stuart_Armstrong 02 February 2010 10:10AM


Comment author: Eliezer_Yudkowsky 02 February 2010 07:24:40PM 3 points

Well, suppose the situation is arbitrarily worse - you can only prevent 3^^^3 dustspeckings by torturing millions of sentient beings.

Comment author: cousin_it 02 February 2010 08:28:33PM *  5 points

I think you misunderstood the question. Suppose the AI wants to prevent just 100 dustspeckings, but has reason enough to believe Dave will yield to the threat so no one will get tortured. Does this make the AI's behavior acceptable? Should we file this under "following reason off a cliff"?

Comment author: Eliezer_Yudkowsky 02 February 2010 08:34:06PM 9 points

If it actually worked, I wouldn't question it afterward. I try not to argue with superintelligences on occasions when they turn out to be right.

In advance, I have to say that the risk/reward ratio seems to imply an unreasonable degree of certainty about a noisy human brain, though.

Comment author: bogdanb 03 February 2010 12:21:10AM *  5 points

"In advance, I have to say that the risk/reward ratio seems to imply an unreasonable degree of certainty about a noisy human brain, though."

Also, a world where the (Friendly) AI is that certain about what that noisy brain will do after a particular threat but can't find any nice way to do it is a bit of a stretch.

Comment author: cousin_it 02 February 2010 08:39:33PM 5 points

What risk? The AI is lying about the torture :-) Maybe I'm too much of a deontologist, but I wouldn't call such a creature friendly, even if it's technically Friendly.

Comment author: arbimote 03 February 2010 03:53:18AM 4 points

I was about to point out that the fascinating and horrible dynamics of over-the-top threats are covered at length in Strategy of Conflict. But then I realised you're the one who made that post in the first place. Thanks, I enjoyed that book.