cousin_it comments on Simplified Humanism, Positive Futurism & How to Prevent the Universe From Being Turned Into Paper Clips - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Er, I think you just blew your pretense. Paperclip maximizers care about paperclips, they don't use phrases like "horrible robot that doesn't truly care", they'd be happy to have a universe containing nothing sentient and lots of paperclips.
Or they would be, if they ever bothered to experience happiness, I mean. As opposed to just outputting the action that leads to the most expected paperclips. Hence the term, "expected paperclip maximizer". Don't think of it as having a little ghost inside that maximizes paperclips, think of it as a ghostless device that maximizes paperclips.
Thinking about this gave me the scary idea of the day: Clippy might be a human upload with a tweaked utility function.
If that is the case, what is Clippy's moral status?
If the other parts that make him human aren't modified, I'd feel as much empathy for him as I would toward a drug addict.