cousin_it comments on Simplified Humanism, Positive Futurism & How to Prevent the Universe From Being Turned Into Paper Clips - Less Wrong

7 Post author: Kevin 22 July 2010 10:03AM




Comment author: Eliezer_Yudkowsky 22 July 2010 09:11:43PM 6 points [-]

But then, I don't want to become some horrible robot that doesn't truly care about paperclips.

Er, I think you just blew your pretense. Paperclip maximizers care about paperclips; they don't use phrases like "horrible robot that doesn't truly care". They'd be happy to have a universe containing nothing sentient and lots of paperclips.

Or they would be, if they ever bothered to experience happiness, I mean. As opposed to just outputting the action that leads to the most expected paperclips. Hence the term, "expected paperclip maximizer". Don't think of it as having a little ghost inside that maximizes paperclips, think of it as a ghostless device that maximizes paperclips.

Comment author: cousin_it 22 July 2010 09:24:05PM 2 points [-]

Thinking about this gave me the scary idea of the day: Clippy might be a human upload with a tweaked utility function.

Comment author: Nisan 23 July 2010 01:44:55PM 1 point [-]

If that is the case, what is Clippy's moral status?

Comment author: cousin_it 01 August 2010 08:02:39AM 2 points [-]

If the other parts that make him human aren't modified, I feel as much empathy as I would toward a drug addict.