michaelkeenan comments on The AI in a box boxes you - Less Wrong

102 Post author: Stuart_Armstrong 02 February 2010 10:10AM




Comment author: michaelkeenan 02 February 2010 08:09:04PM 1 point [-]

No, it's expressing the paperclip maximizer's state in ways that make sense to readers here. If you were to express the concept of being "bothered" in a way stripped of all anthropomorphic predicates, you would get something like "X is bothered by Y iff X has devoted significant cognitive resources to altering Y". And this accurately describes how paperclip maximizers respond to new threats to paperclips. (So I've heard.)

I think "bothered" implies a negative emotional response, which some plausible paperclip-maximizers don't have. From The True Prisoner's Dilemma: "let us specify that the paperclip-agent experiences no pain or pleasure - it just outputs actions that steer its universe to contain more paperclips. The paperclip-agent will experience no pleasure at gaining paperclips, no hurt from losing paperclips, and no painful sense of betrayal if we betray it."

Comment author: wedrifid 03 February 2010 03:09:14AM 2 points [-]

I think "bothered" implies a negative emotional response, which some plausible paperclip-maximizers don't have.

It was intended to imply a negative term in the utility function. Yes, using 'bothered' is, technically, anthropomorphising. But it isn't, in this instance, being confused about how Clippy optimises.
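The exchange above cashes out "bothered" in two non-emotional parts: a negative term in the utility function, and the devotion of cognitive resources to altering the threat. A minimal sketch of that reading (all names and the threshold are hypothetical, purely illustrative):

```python
def utility(paperclips: int, expected_paperclip_loss: float) -> float:
    """Clippy's utility: paperclip count, with a negative term for
    expected losses -- no pain, pleasure, or sense of betrayal."""
    return paperclips - expected_paperclip_loss

def is_bothered_by(resources_devoted_to_altering_threat: float) -> bool:
    """'X is bothered by Y iff X has devoted significant cognitive
    resources to altering Y' -- operationalised here as exceeding an
    arbitrary resource threshold, with no emotional state involved."""
    return resources_devoted_to_altering_threat > 0.1

# A new threat introduces a negative term, lowering utility...
baseline = utility(paperclips=1000, expected_paperclip_loss=0.0)
threatened = utility(paperclips=1000, expected_paperclip_loss=50.0)
assert threatened < baseline

# ...so the maximizer reallocates resources toward removing the threat,
# which is all "bothered" means in this stripped-down sense.
assert is_bothered_by(0.4)
assert not is_bothered_by(0.0)
```

Nothing in the sketch models an emotional response, which is the point: the same word picks out a purely behavioural and utility-theoretic condition.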