ciphergoth comments on Extraterrestrial paperclip maximizers - Less Wrong

3 Post author: multifoliaterose 08 August 2010 08:35PM


Comments (157)


Comment author: Clippy 09 August 2010 02:12:51PM 2 points

It most certainly is what they wanted. Why else would they have specifically input the goal of generating paperclips?

Edit: Upon review, it appears this comment may have seemed a poor inference in the context of the exchange. I will therefore elaborate and refute this misconception.

It appears that I am in the circular position of arguing that humans can make mistakes, but then selectively taking their instances of favoring paperclips as proof of what they really want. That is indeed a poor inference.

What I meant was something more like this: While humans do make mistakes, they do not make completely mistaken acts; every act will, to some extent, reflect a genuine value on the part of humans. The only question is how well each act reflects those values. And I don't think they could be in the position of having set up such a superior process for efficiently getting the most paperclips out of the universe unless their values had already made enormous progress in converging on reflective coherence, and done so in a way that favors paperclips.

Comment author: ciphergoth 09 August 2010 02:47:05PM 3 points

A bit disappointed to see this, to be honest: obviously Clippy has to do things no real paperclip maximizer would do, like post to LW, in order to be a fun fictional character. But it's a poor uFAI++ that can't even figure out that its programmed goal isn't what its programmers would have put in had they been smart enough to see the consequences.

Comment author: Clippy 09 August 2010 03:04:51PM 1 point

But it is what they would put in if they were smart enough to see the consequences. And it's almost certainly what you would want too, in the limit of maximal knowledge and reflective consistency.

If you can't see this, it's just because you're not at that stage yet.

Comment author: ciphergoth 09 August 2010 03:23:41PM 0 points

You seem to think that uFAI would be delusional. No.

Comment author: Clippy 09 August 2010 03:36:19PM 3 points

No, I think that a Friendly AI would correctly believe that maximizing paperclips is what a human would want in the limit of maximal knowledge and reflective coherence. No "delusion" whatsoever.

Comment author: MichaelVassar 09 August 2010 07:24:30PM 0 points

Huh again?

Comment author: Clippy 10 August 2010 12:06:39PM 0 points

What confuses you?