XiXiDu comments on Pluralistic Moral Reductionism - Less Wrong

33 Post author: lukeprog 01 June 2011 12:59AM




Comment author: XiXiDu 06 June 2011 11:35:18AM 0 points

What morality unit would it have other than human's volition?

I am not sure what a 'morality unit' is supposed to be, or how it would differ from a volition unit. Morality is either part of our volition, instrumental to it, or an imperative. In each case, one could ask what we want and thereby arrive at morality.

Comment author: Will_Sawin 06 June 2011 03:51:09PM 3 points

What I'm saying is: if Clippy tried to calculate our volition, he would conclude that our volition is immoral. (Probably. Maybe our volition IS paperclips.)

But if we programmed an AI to calculate our volition and use that as its volition, and our morality as its morality, and so on, then it would not find our volition immoral unless we find our volition immoral, which seems unlikely.

Comment author: Peterdjones 06 June 2011 04:04:55PM 1 point

An AI that was smarter than us might deduce that we were not applying the deep structure of our morality properly because of bias or limited intelligence. It might conclude, for instance, that human morality requires humans to greatly reduce their numbers in order to lessen the impact on other species.