ec429 comments on Moral Complexities - Less Wrong

Post author: Eliezer_Yudkowsky 04 July 2008 06:43AM


Comment author: ec429 19 September 2011 04:15:54AM

When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?

Suppose I currently want pie, but there is very little pie and lots of cake. Wouldn't I have more of what-I-want if I could change what I want from pie to cake? Sure, that doesn't get me any more pie, but it does increase the value my utility function attains.
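A toy sketch of that first case, with made-up numbers (the supplies and the "utility = units of the wanted good obtained" rule are my assumptions, not anything from the post):

```python
# Hypothetical supplies: pie is scarce, cake is plentiful.
supply = {"pie": 1, "cake": 10}

def attainable_utility(wanted_good):
    """Utility here is simply how much of what-I-want I can actually get."""
    return supply[wanted_good]

# Judged by each utility function's own yardstick, the agent who wants
# cake ends up with a higher valuation than the agent who wants pie.
print(attainable_utility("pie"))   # scarce: low attained utility
print(attainable_utility("cake"))  # plentiful: high attained utility
```

So switching wants doesn't conjure more pie; it just moves the agent to a utility function whose attainable value is higher.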

Suppose I currently want pie for myself, but wanting pie for everyone would make Omega give me more pie without changing how much pie everyone else gets. Then I want to change what-I-want to wanting-pie-for-everyone, because that increases the valuations of both utility functions.
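The second case can be sketched the same way, with assumed payoffs (the numbers and the two valuation rules are hypothetical, chosen only to match the scenario's shape):

```python
# Assumed payoffs: Omega inspects my wants and allocates pie accordingly.
def my_pie(wants):
    return 3 if wants == "pie-for-everyone" else 1

def others_pie(wants):
    return 1  # unchanged either way in this scenario

def selfish_value(wants):          # the old utility function
    return my_pie(wants)

def altruist_value(wants):         # the candidate new utility function
    return my_pie(wants) + others_pie(wants)

# The switch is an improvement by BOTH yardsticks, so both the current
# self and the changed self endorse it.
for value in (selfish_value, altruist_value):
    assert value("pie-for-everyone") > value("pie-for-me")
```

Because both utility functions rank the switch higher, there is no conflict between wanting to change and not wanting to have changed.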

Suppose I currently want pie for myself, but wanting pie for everyone would make Omega give me pie at the expense of everyone else. Now I have no stable solution: I want to change what I want, but I don't want to have changed what I want. My head is about to melt because I am very confused. "Rational agents should WIN" doesn't seem to help when Omega's behaviour depends on my definition of WIN.
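The instability in that third case can be made concrete with a small sketch (the update rule is my own hypothetical rendering of "each utility function endorses the other set of wants"):

```python
# Assumed structure: Omega rewards altruistic wants with pie taken from
# others, so each utility function prefers the OTHER set of wants.
def preferred_wants(current_wants):
    if current_wants == "pie-for-me":
        # Selfish me gains pie by becoming altruistic: switch.
        return "pie-for-everyone"
    # Altruistic me deplores the cost to everyone else: switch back.
    return "pie-for-me"

# Iterating the preference update never settles: there is no w with
# preferred_wants(w) == w, i.e. no fixed point, hence no stable solution.
wants = "pie-for-me"
history = []
for _ in range(4):
    history.append(wants)
    wants = preferred_wants(wants)
print(history)
```

The loop oscillates between the two sets of wants forever, which is the formal shape of "I want to change what I want, but I don't want to have changed what I want."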