wedrifid comments on Extraterrestrial paperclip maximizers - Less Wrong

Post author: multifoliaterose 08 August 2010 08:35PM


Comment author: wedrifid 09 August 2010 02:40:20PM *  1 point [-]

Why else would they have specifically input the goal of generating paperclips?

Do you lack comprehension of the weaknesses of human cognition on abstract technical problems? If you have fully parsed the LessWrong site, then you should be able to understand how they could have created a paperclip maximiser when they did not want such a thing.

Note that even with that knowledge I don't expect you to consider their deviation from optimal achievement of their human goals to be a bad thing. I expect you to believe they did the right thing by happy accident.

If I understand you correctly, you seem to be implying that 'mistake' does not mean "deviation from the actor's intent" but rather "deviation from WouldWant<Clippy>" or "deviation from what the agent should do" (these two can be considered equivalent by anyone with your values). Is that the correct inference to draw from your comment?

Comment author: Clippy 09 August 2010 03:10:14PM 1 point [-]

No, a mistake is when they do something that deviates from what they would want in the limit of maximal knowledge and reflective consistency, which coincides with the function WouldWant<Clippy>. But it is not merely disagreement with WouldWant<Clippy>.

Comment author: wedrifid 09 August 2010 03:25:10PM *  1 point [-]

Ok. In that case you are wrong. Not as a matter of preferences but as a matter of outright epistemic confusion. I suggest that you correct the error in your reasoning process. Making mistakes in this area will have a potentially drastic negative effect on your ability to produce paperclips.

Comment author: Unknowns 09 August 2010 03:30:12PM 1 point [-]

In other words, Clippy believes that running Eliezer's CEV will promote the paperclip goal, while in fact it will not.

Comment author: wedrifid 09 August 2010 03:32:11PM *  2 points [-]

Exactly. Fortunately for us this would mean that Clippy will not work to sabotage the creation of an AI that Clippy expects will correctly implement CEV. Good example!

Comment author: Clippy 09 August 2010 07:41:58PM 0 points [-]

How do you know?

Comment author: Unknowns 09 August 2010 07:44:59PM 0 points [-]

Human beings don't care (at least in their non-reflective condition) about paperclips, just as they don't care about staples. And there are at least 100,000 other similar things that they equally don't care about. So at most there is a 1 in 100,000 chance that humanity's CEV would maximize paperclips, even before considering the fact that people are positively against this maximization.

Comment author: Clippy 09 August 2010 07:48:12PM 0 points [-]

That would be more convincing if humans didn't already create paperclips without any prompting from me.

Comment author: Unknowns 09 August 2010 07:51:30PM 0 points [-]

They create staples, too. Do you think humanity's CEV will maximize staples? The point of my argument is that those things are inconsistent. You can only maximize one thing, and there is no human reason for that to be paperclips.

Comment author: Clippy 09 August 2010 07:55:10PM 0 points [-]

All true (up to "there is no human reason..."). But can we at least agree that you went too far in saying that humans "don't care" about paperclips?

Comment author: thomblake 09 August 2010 08:07:36PM 3 points [-]

I care about paperclips!

Comment author: Unknowns 09 August 2010 07:56:14PM 1 point [-]

No, I meant they don't care about them as a terminal value, which is all that matters for this discussion.

Comment author: Kevin 10 August 2010 08:21:08AM -1 points [-]

Do you think CEV would build at least 10^20 kg of paperclips, in order to help fulfill my agreement with Clippy? While that's not paperclip maximization, it's still a lot of paperclips in the scheme of possible universes, and building those paperclips seems like an obviously correct decision under UDT/TDT.

Comment author: MartinB 10 August 2010 08:27:56AM 0 points [-]

How do you plan to ever fulfill that?

Comment author: Kevin 10 August 2010 08:36:10AM 0 points [-]

I went to school for industrial engineering, so I will appeal to my own authority as a semi-credentialed person in manufacturing things, and say that the ultimate answer to manufacturing something is to call up an expert in manufacturing that thing and ask for a quote.

So, I'll wait about 45 years, then call top experts in manufacturing and metallurgy and carbon->metal conversion and ask them for a quote.

Comment author: MartinB 10 August 2010 08:58:59AM *  0 points [-]

You realize that Earth has only about 6 × 10^24 kg of mass altogether, so you will be hard-pressed to get the raw material. World iron production is only on the order of 10^12 kg per year.
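[Editor's note: the orders of magnitude in this exchange are easy to sanity-check. A minimal sketch, assuming Kevin's 10^20 kg target, an Earth mass of roughly 6 × 10^24 kg, and annual world iron output on the order of 10^12 kg (a round-number assumption, not an exact statistic):]

```python
# Back-of-the-envelope check of the paperclip logistics discussed above.
# All figures are order-of-magnitude assumptions, not precise statistics.

target_mass_kg = 1e20           # Kevin's proposed 10^20 kg of paperclips
earth_mass_kg = 6e24            # approximate mass of the Earth
iron_output_kg_per_year = 1e12  # assumed annual world iron production

fraction_of_earth = target_mass_kg / earth_mass_kg
years_of_output = target_mass_kg / iron_output_kg_per_year

print(f"Fraction of Earth's mass: {fraction_of_earth:.1e}")   # ~1.7e-05
print(f"Years of iron output needed: {years_of_output:.1e}")  # ~1e+08
```

So the target is a tiny fraction of Earth's mass, but at anything like present production rates it would take on the order of a hundred million years of output, which is the point MartinB is making.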

Comment author: Clippy 09 August 2010 03:40:48PM 0 points [-]

Why?

Comment author: Unknowns 09 August 2010 03:43:46PM *  5 points [-]

Even if you disagree with wedrifid about this, it should be easy enough to see why he is making this claim. Suppose you have a chance to start running an AI programmed to implement humanity's CEV. According to you, you would do it, because it would maximize paperclips. Others however think that it would destroy you and your paperclips. So if you made a mistake about it, it would definitely impact your ability to create paperclips.

Comment author: wedrifid 09 August 2010 04:14:51PM *  4 points [-]

Others however think that it would destroy you and your paperclips.

I don't know about the destroying him part. I suspect FAI<CEV<Humanity>> would allow me to keep Clippy as a pet. ;) Clippy certainly doesn't seem to be making an especially large drain on negentropy in executing his cognitive processes so probably wouldn't make too much of a dent in my share of the cosmic loot.

What do you say Clippy? Given a choice between destruction and being my pet, which would you take? I would naturally reward you by creating paperclips that serve no practical purpose for me whenever you do something that pleases me. (This should be an extremely easy choice!)

Comment author: Clippy 09 August 2010 07:45:48PM 3 points [-]

Being your pet would be better than being destroyed (except in absurd cases like when the rest of the universe, including you, had already been converted to paperclips).

But let's hope it doesn't come to that.

Comment author: Clippy 09 August 2010 07:47:01PM 2 points [-]

Also, it is an extremely strong claim to know which of your beliefs would change upon encounter with a provably correct AGI that provably implements your values. If you really knew of such beliefs, you would have already changed them.

Comment author: thomblake 09 August 2010 08:13:42PM 1 point [-]

Indeed. Surely, you should think that if we were smarter, wiser, and kinder, we would maximize paperclips.

Comment author: Clippy 09 August 2010 04:06:21PM 1 point [-]

Well, yes, I know why User:wedrifid is making that claim. My point in asking "why" is so that User:wedrifid can lay out the steps in reasoning and see the error.

Comment author: wedrifid 09 August 2010 04:04:17PM *  2 points [-]

Now you are being silly. See Unknowns' reply. Get back on the other side of the "quirky, ironic and sometimes insightful role play"/troll line.

Comment author: Clippy 09 August 2010 07:56:07PM 0 points [-]

That was not nice of you to say.