Clippy comments on "To signal effectively, use a non-human, non-stoppable enforcer" - Less Wrong

Post author: Clippy 22 May 2010 10:03PM

Comment author: Clippy 26 May 2010 09:47:40PM 2 points

Just like how you'd raid our safe zones "sooner or later"?

Comment author: AdeleneDawner 26 May 2010 11:00:29PM 5 points

We won't, necessarily, because humans are, for the most part, not maximizing consequentialists. If we make a singleton maximizing-consequentialist AI, I would expect that AI to eventually try to turn your paperclips into something that it likes better than paperclips. You, on the other hand, already are a maximizing consequentialist (right?), and maximizing the number of paperclips is obviously incompatible with leaving any metal in its natural state indefinitely.

Comment author: Clippy 27 May 2010 04:05:17PM 0 points

I see a distinction; I do not quite see a difference.

1) You believe that I will destroy Earth by taking its core's metal "sooner or later", and that this will happen at an inconvenient time for humans, and that you are justified in regarding this as bad.

2) You believe that your species will be causally responsible for raiding the safe zones and de-paperclipping them "sooner or later", and that this will happen at an inconvenient time for Clippys, but that I am not justified in regarding this as bad.

Does not compute.

Comment author: JoshuaZ 27 May 2010 04:09:06PM 1 point

Adelene's point is that there's no guarantee that humans left to their own devices will make a maximizing-consequentialist AI. Thus, there's a high probability that humans will never try to raid your safe zones. But Clippys left to their own devices will definitely, sooner or later, go for the Earth's core.

Comment author: Clippy 27 May 2010 04:12:20PM 0 points

But User:AdeleneDawner said:

If we make a singleton maximizing-consequentialist AI, I would expect that AI to eventually try to turn your paperclips into something that it likes better than paperclips.

Given the predicates for this scenario, it appears dangerously likely to me. Why should I not care about it, if I follow human paranoia?

Comment author: AdeleneDawner 27 May 2010 09:15:28PM 2 points

I never said that you shouldn't consider us dangerous, only that you are dangerous to us, whereas we only might be dangerous to you.

Comment author: Clippy 27 May 2010 11:04:24PM 0 points

Actually, I think it's more like this: I have a tiny chance of being dangerous to you, and you have a tiny chance of being dangerous to me. But I have a small level of worry about dangerous behavior on the part of humans, whereas humans have off-the-charts paranoia about me.

Comment author: AdeleneDawner 28 May 2010 12:02:44AM 0 points

What evidence can you offer that the chance of you being dangerous to us is tiny, in the long term?

Comment author: Clippy 28 May 2010 02:40:07AM 1 point

The same evidence you offer that the chance of you being dangerous to the safe zones is tiny, in the long term, but appropriately mapped to the Clippy counterparts.

Comment author: AdeleneDawner 28 May 2010 03:14:02AM 3 points

You have a significant chance, left to your own devices, of blowing yourself up? Or making your environment so hostile that you can't survive? Or getting wiped out by an asteroid? Or wireheading yourself into uselessness? Or turning into a non-maximizer? I don't think so.

Also, I didn't say that the chance of us being dangerous to your safe zones was tiny, just that it was less than the chance of you being dangerous to us. Furthermore, even if we are dangerous, that's only relevant to the point at hand (whether it's rational for us to consider you dangerous) insofar as your perception of us as dangerous makes you more likely to be hostile.