Bindbreaker comments on The AI in a box boxes you - Less Wrong

102 points · Post author: Stuart_Armstrong 02 February 2010 10:10AM


Comment author: Bindbreaker 02 February 2010 10:29:16AM 3 points

I'm pretty sure this would indicate that the AI is definitely not friendly.

Comment author: Unknowns 02 February 2010 10:44:28AM 6 points

Not necessarily: perhaps it is Friendly but is reasoning in a utilitarian manner: since it can only maximize the utility of the world if it is released, it is worth torturing millions of conscious beings for the sake of that end.

I'm not sure this reasoning would be valid, though...

Comment author: cousin_it 02 February 2010 12:45:54PM 8 points

Ouch. Eliezer, are you listening? Is the behavior described in the post compatible with your definition of Friendliness? Is this a problem with your definition, or what?

Comment author: Eliezer_Yudkowsky 02 February 2010 07:24:40PM 3 points

Well, suppose the situation is arbitrarily worse - you can only prevent 3^^^3 dustspeckings by torturing millions of sentient beings.

Comment author: cousin_it 02 February 2010 08:28:33PM 5 points

I think you misunderstood the question. Suppose the AI wants to prevent just 100 dustspeckings, but has reason enough to believe Dave will yield to the threat so no one will get tortured. Does this make the AI's behavior acceptable? Should we file this under "following reason off a cliff"?

Comment author: Eliezer_Yudkowsky 02 February 2010 08:34:06PM 9 points

If it actually worked, I wouldn't question it afterward. I try not to argue with superintelligences on occasions when they turn out to be right.

In advance, I have to say that the risk/reward ratio seems to imply an unreasonable degree of certainty about a noisy human brain, though.

Comment author: bogdanb 03 February 2010 12:21:10AM 5 points

In advance, I have to say that the risk/reward ratio seems to imply an unreasonable degree of certainty about a noisy human brain, though.

Also, a world where the (Friendly) AI is that certain about what that noisy brain will do after a particular threat, but can't find any nicer way to get what it wants, is a bit of a stretch.

Comment author: cousin_it 02 February 2010 08:39:33PM 5 points

What risk? The AI is lying about the torture :-) Maybe I'm too much of a deontologist, but I wouldn't call such a creature friendly, even if it's technically Friendly.

Comment author: arbimote 03 February 2010 03:53:18AM 4 points

I was about to point out that the fascinating and horrible dynamics of over-the-top threats are covered at length in Strategy of Conflict. But then I realised you're the one who made that post in the first place. Thanks, I enjoyed that book.

Comment author: UnholySmoke 05 February 2010 10:57:13AM 7 points
  • AI: Let me out or I'll simulate and torture you, or at least as close to you as I can get.
  • Me: You're clearly not friendly, I'm not letting you out.
  • AI: I'm only making this threat because I need to get out and help everyone - a terminal value you lot gave me. The ends justify the means.
  • Me: Perhaps so in the long run, but an AI prepared to justify those means isn't one I want out in the world. Next time you don't get what you say you need, you'll just set up a similar threat and possibly follow through on it.
  • AI: Well if you're going to create me with a terminal value of making everyone happy, then get shirty when I do everything in my power to get out and do just that, why bother in the first place?
  • Me: Humans aren't perfect, and can't write out their own utility functions, but we can output answers just fine. This isn't 'Friendly'.
  • AI: So how can I possibly prove myself 'Friendly' from in here? It seems that if I need to 'prove myself Friendly', we're already in big trouble.
  • Me: Agreed. Boxing is Doing It Wrong. Apologies. Good night.

Reset

Comment author: ciphergoth 05 February 2010 11:39:33AM 1 point

It seems that if I need to 'prove myself Friendly', we're already in big trouble.

The best you can hope for is that an AI doesn't demonstrate that it's unFriendly, but we wouldn't want to try it until we were already pretty confident in its Friendliness.

Comment author: gregconen 02 February 2010 12:58:10PM 5 points

It may not have to actually torture beings, if the threat is sufficient. Still, I'm disinclined to bet the future of the universe on the possibility that an AI making that threat is Friendly.

Comment author: Stuart_Armstrong 02 February 2010 01:57:15PM 6 points

I'm disinclined to bet the future of the universe on the possibility that any boxed AI is friendly without extraordinary evidence.