WrongBot comments on Some Thoughts Are Too Dangerous For Brains to Think - Less Wrong
Not to evoke a recursive nightmare, but some utility function alterations appear to be strictly desirable.
As an obvious example, if I were on a diet and I could rewrite my utility function such that the utilities assigned to consuming spinach and cheesecake were swapped, I see no harm in making that edit. One could argue that my second-order (and all higher-order) utility functions should be collapsed into my first-order one, such that this would not really change my meta-utility function, but that objection just highlights the futility of trying to cram my complex, conflicting, and often-inconsistent desires into a single utility function.
This does raise an interesting question: if I'm a strictly selfish utilitarian, shouldn't I want my utility function to be whichever one will attain the highest expected utility? Selfishness isn't necessary; it just makes the question much simpler.
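To make that concrete, here is a minimal sketch of the usual way such an edit is judged: predict what an agent running each candidate function would do, then score the resulting outcomes under the current function. Everything in it (the numbers, the two-food menu, the toy assumption that eating spinach leads to health) is invented for illustration, not anyone's actual preferences.

```python
# Minimal sketch: judge a proposed utility-function edit by predicting the
# actions each candidate function produces, then scoring those outcomes
# under the CURRENT function. All values and the world model are invented.

current_utility = {"eat_spinach": 1, "eat_cheesecake": 10, "be_healthy": 50}
swapped_utility = {"eat_spinach": 10, "eat_cheesecake": 1, "be_healthy": 50}

def predicted_actions(utility):
    """The agent eats whichever food it assigns higher utility."""
    food = max(["eat_spinach", "eat_cheesecake"], key=utility.get)
    outcomes = [food]
    if food == "eat_spinach":  # toy world model: the diet actually works
        outcomes.append("be_healthy")
    return outcomes

def score(outcomes, utility):
    return sum(utility[o] for o in outcomes)

keep = score(predicted_actions(current_utility), current_utility)
edit = score(predicted_actions(swapped_utility), current_utility)
print(keep, edit)  # 10 51: the swap wins even by the pre-edit function
```

Note that the scoring here is done by the pre-edit function. That is what makes the swap come out strictly desirable, and also what makes the question above circular: "the utility function with the highest expected utility" is only well-defined relative to some function you already have.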
I wouldn't claim that any human is actually able to describe their own utility function; human utility functions are much too complex, and riddled with strange exceptions and pieces of craziness like hyperbolic discounting.
I also think that there's some confusion surrounding the whole idea of utility functions in reality, which I should have been more explicit about. Your utility function is just a description of what you want/value; it is not explicitly about maximizing happiness. For example, I don't want to murder people, even under circumstances where it would make me very happy to do so. For this reason, I would do everything within my power to avoid taking a pill that would change my preferences such that I would then generally want to murder people; this is the murder pill I mentioned.
As for swapping the utilities of spinach and cheesecake, I think the only sensible way to do it would be to change how you perceive their respective tastes, which isn't a change to your utility function at all. You still want to eat food that tastes good; changing that would have much broader and less predictable consequences.
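A toy decomposition may make that distinction clearer. Suppose (purely for illustration; these names and numbers are invented) that utility over food factors as a fixed preference over tastes composed with a perception map, utility(food) = preference(perceived_taste(food)). The edit described above swaps the perception map and leaves the preference alone:

```python
# Toy decomposition (illustrative names, not a real model of anyone's mind):
# utility(food) = preference(perceived_taste(food)).

perceived_taste = {"spinach": "bland", "cheesecake": "delicious"}

def preference(taste):
    """'Eat food that tastes good': this part is never edited."""
    return {"delicious": 10, "bland": 1}[taste]

def utility(food):
    return preference(perceived_taste[food])

print(utility("spinach"), utility("cheesecake"))  # 1 10

# The edit under discussion: change perception, leave preference intact.
perceived_taste = {"spinach": "delicious", "cheesecake": "bland"}
print(utility("spinach"), utility("cheesecake"))  # 10 1
```

Under this model, "you still want to eat food that tastes good" survives the swap untouched; editing preference itself would be the broader, less predictable change.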
Only if your current utility function is "maximize expected utility." (It isn't.)