ciphergoth comments on Extraterrestrial paperclip maximizers - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (157)
It most certainly is what they wanted. Why else would they have specifically input the goal of generating paperclips?
Edit: Upon review, it appears this comment may have read as a poor inference in the context of the exchange. I will therefore elaborate and address this misconception.
It appears that I am in the circular position of arguing that humans can make mistakes, while selectively taking their instances of favoring paperclips as proof of what they really want. That would indeed be a poor inference.
What I meant was something more like this: While humans do make mistakes, they do not perform completely mistaken acts; every act reflects, to some extent, a genuine value on the part of humans. The only question is how well it reflects their values. And I don't think they could have set up such a superior process for efficiently extracting the most paperclips from the universe unless their values had already made enormous progress toward reflective coherence, and had done so in a way that favors paperclips.
Bit disappointed to see this, to be honest. Obviously Clippy has to do things no real paperclip maximizer would do, like post to LW, in order to be a fun fictional character — but it's a poor uFAI++ that can't even figure out that its programmed goal isn't what its programmers would have put in if they were smart enough to see the consequences.
But it is what they would put in if they were smart enough to see the consequences. And it's almost certainly what you would want too, in the limit of maximal knowledge and reflective consistency.
If you can't see this, it's just because you're not at that stage yet.
You seem to think that uFAI would be delusional. No.
No, I think that a Friendly AI would correctly believe that maximizing paperclips is what a human would want in the limit of maximal knowledge and reflective coherence. No "delusion" whatsoever.
Huh again?
What confuses you?