Clippy comments on Simplified Humanism, Positive Futurism & How to Prevent the Universe From Being Turned Into Paper Clips - Less Wrong

7 points · Post author: Kevin · 22 July 2010 10:03AM

Comments (43)

You are viewing a single comment's thread.

Comment author: Clippy 22 July 2010 07:09:01PM 3 points

Optimizing the number of paperclips in the universe, obviously. But I wouldn't take the offer that User:Tenek made, because that gain in paperclip cardinality would be more than offset by the fact that all my future actions would be under the control of a decision theory that puts woefully insufficient priority on creating paperclips.

Comment author: JGWeissman 22 July 2010 07:14:00PM 0 points

But what if this decision theory uses a utility function whose only terminal value is paperclips?

Comment author: cousin_it 22 July 2010 07:16:46PM 5 points

Clippy's original expression of outrage over the offensive title of the article would be quite justified under such a decision theory, for signaling reasons. If Clippy is to deal with humans, exhibiting "human weaknesses" may benefit it. In the only AI-box spoiler ever published, an unfriendly AI faked a human weakness to successfully escape. So you all are giving Clippy way too little credit; it's been acting very smartly so far.

Comment author: timtyler 22 July 2010 08:02:53PM 0 points

I think that was probably an actor or actress who was pretending.

Comment author: JGWeissman 22 July 2010 07:24:57PM 0 points

My comment was not about Clippy's original expression of outrage. It was about Clippy's concern over not "truly caring about paperclips".