Peterdjones comments on Pluralistic Moral Reductionism - Less Wrong

33 Post author: lukeprog 01 June 2011 12:59AM


Comment author: Peterdjones 07 June 2011 10:18:16AM 2 points

"…besides pointing out that its current actions are suboptimal for its goal in the long run?"

That sounds like a good rational argument to me. Is the paperclip maximiser supposed to have a different rationality, or just different values?

Another way to put it is that the rationality of killing sprees depends on the agent's values. I haven't read much of this site, but I'm getting the impression that a major project here is to accept this and figure out which initial values to give an AI. Simply ensuring that the AI is rational is not enough to protect our values.

Like so much material on this site, that tacitly assumes values cannot be reasoned about.