Clippy comments on Harry Potter and the Methods of Rationality discussion thread, part 3 - Less Wrong
Are you saying you don't think paperclips are sentient? Why don't you try saying that right to a paperclip's face-homologue, and see if you can live with yourself after that.
Yes!!! Sentience is GREAT! All sentient beings should be protected! Like humans! And AGIs! And paperclips!
How do you reconcile that with being a paperclip maximizer?
If I had to make a guess, I'd posit that this is a purely rhetorical claim, made to gain favor with the humans here who do count protecting sentient life as a major goal.
It could be that the desire to cooperate is sincere. In movies, the 'bad guy' is usually not just the one whose preferences conflict with the good guys', but the one who is psychologically incapable of cooperating effectively to reach his goals. There is no good reason that an agent with preferences like 'evil' Clippy's could not cooperate with humans as effectively as we cooperate with each other.
(Although I agree that even in that case, the outburst was heavy on the rhetorical flair!)
Why do you insist that something must be made of proteins to be human?
Where did User:JoshuaZ even mention proteins, much less insist that something must be made of them to be human?
Maybe you are projecting your own attitude.
If User:JoshuaZ did not consider the possibility of virtualized humans, why did User:JoshuaZ believe that maximization of paperclips would come at the cost of humans?
See this highly-rated comment from one of the smartest Users here if you still don't understand.
Clippy:
No, that won't do. The infrastructure that would be necessary to implement these computations in a paperclip-tiled universe -- namely, the power source and the additional complexity of individual paperclips relative to the simplest acceptable paperclip -- would consume resources that could otherwise be turned into additional paperclips. (Not to mention the question of what happens to humans who refuse to be virtualized.)
One of the main purposes of the Clippy act seems to be the desire to promote the view that intelligent beings with fundamentally different values can still reach some sort of happy hippyish let's-all-love-each-other coexistence. It's funny to see the characteristically human fallacies that start showing up in his writing whenever he embarks on arguing in favor of this view.
He's learning!
It is quite possible that paperclips are not the optimal components of computronium. (Where optimal means getting the most computing power out of the space and materials used.)
It's a lot more possible that humans are not the optimal components of computronium.
So what? No one was suggesting we build computronium out of humans.
But if we were building computronium to support virtual humans because we actually want to support virtual humans, and not because we want to build something out of paperclips, we would probably choose some non-human, non-paperclip components.
But some of us were intelligent enough to recognize the possibility of using humans as fuel for their uploaded virtualizations, due to the superiority of this use of humans over alternate uses of humans.
Not if you respected the wishes of intelligences like clippys.
I don't think they are sentient, but am willing to consider evidence otherwise. Have any paperclips even claimed to be sentient?
Which part of the paperclip is the face-homologue?
Have human infants?
It's hard to describe, but I'm told diagrams like on this page help humans locate it.
Human infants exhibit emotive behaviors similar to those of humans at other stages of development, suggesting they have the same sort of sentience as other humans, though with less capacity to describe it.
What evidence is there for paperclips being sentient?
I did not find your diagram helpful.
This is just your motivated cognition working. (Human infants are indeed sentient, but you write as if you can cite arbitrary attributes as evidence for your pre-determined conclusion. The methods you use would not yield reliable conclusions in other areas.)
The fact that they exhibit deep structural similarities with the ultimate purpose of existence.
I do not know how else to help you.
It would be more accurate to say that I did not explicitly cite all the facts that went into my conclusion, partly because I was relying on a presumed shared background. (Sentience is related to behavior and the causes of behavior, and humans at all stages of development have similar neural structures involved in the causation of their behavior.)
Would you value an object which was not sentient, but was made of metal and statically shaped so that it could hold together many sheets of paper?
Under a self-serving definition that doesn't actually enclose a helpful portion of conceptspace, yes.
??? That's like asking, "Would you value a User:JGWeissman which was not conscious, but was identical to you in every observable way?"
So, you believe that the basic properties of paperclips imply sentience? Is an object which was made of plastic and statically shaped so that it could hold together many sheets of paper, also necessarily sentient?
If it's plastic, it's not a paperclip.
I didn't ask if it is a paperclip, I asked if it is sentient.