pjeby comments on Utilons vs. Hedons - Less Wrong
This discussion has made me feel I don't understand what "utilon" really means. Hedons are easy: clearly happiness and pleasure exist, so we can try to measure them. But what are utilons?
"Whatever we maximize"? But we're not rational, quite inefficient, and whatever we actually maximize as we are today probably includes a lot of pain and failures and isn't something we consciously want.
"Whatever we self-report as maximizing"? Most of the time this is very different from what we actually try to maximize in practice, because self-reporting is signaling. And for a lot of people it includes plans or goals that, when achieved, are likely (or even intended) to change their top-level goals drastically.
"If we are asked to choose between two futures, and we prefer one, that one is said to be of higher utility." That's a definition, yes, but it doesn't really prove that the collection-of-preferred-universes can be described any more easily than the real decision function of which utilons are supposed to be a simplification. For instance, what if by minor and apparently irrelevant changes in the present, I can heavily influence all of people's preferences for the future?
Also a note on the post:
That definition feels too broad to me. Typically akrasia has two further attributes:
Improper time discounting: we don't spend an hour a day exercising even though we believe it would make us lose weight, with a huge hedonic payoff if we maximize hedons over a time horizon of a year (see the sketch after this list).
Feeling so bad about not doing the necessary task that we don't really enjoy ourselves no matter what we do instead (which frequently leads to doing nothing at all for long periods). Hedonically, even doing the homework usually feels a lot better (after the first ten minutes) than putting it off, and we know this from experience - but we just can't get started!
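Here's a minimal sketch of the "improper time discounting" point, with entirely made-up payoffs and discount parameters (none of this is from the post): a hyperbolic discounter prefers the big delayed payoff when both options are far away, then reverses and grabs the small immediate one when the moment arrives - which is exactly the exercise/homework pattern. An exponential discounter with a fixed rate never reverses like this.

    # Toy illustration of preference reversal under hyperbolic discounting.
    # All numbers (hedon values, delays, k) are arbitrary assumptions.

    def hyperbolic(value, delay_days, k=0.5):
        """Hyperbolically discounted value of a payoff `delay_days` away."""
        return value / (1 + k * delay_days)

    small_soon = (10, 0)      # (hedons, days until payoff): skip the workout today
    large_late = (1000, 365)  # weight loss after a year of daily exercise

    for label, shift in [("deciding a month ahead", 30), ("deciding today", 0)]:
        s = hyperbolic(small_soon[0], small_soon[1] + shift)
        l = hyperbolic(large_late[0], large_late[1] + shift)
        choice = "small-but-soon" if s > l else "large-but-late"
        print(f"{label}: small={s:.1f}, large={l:.1f} -> choose {choice}")

Run it and the agent picks the large delayed reward when planning a month ahead, but the small immediate one when the choice is right now - no change in values, just a change in vantage point.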
Which is why it's pretty blatantly obvious that humans aren't utility maximizers on our native hardware. We're not even contextual utility maximizers; we're state-dependent error minimizers, where which errors we're trying to minimize is determined heavily by short-term priming and longer-term time-decayed perceptual averages like "how much relaxation time I've had" or "how much I've gotten done lately".
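To make the contrast concrete, here's a toy sketch (mine, not anything pjeby specifies): a fixed-value utility maximizer always picks the same action, while a state-dependent error minimizer picks whatever best closes the gap between its time-decayed perceptual averages and their set points. All action names, weights, set points, and the 0.9 decay rate are invented for illustration.

    # Utility maximizer vs. state-dependent error minimizer (toy model).
    ACTIONS = {"do homework": {"progress": 1.0, "relaxation": 0.0},
               "watch TV":    {"progress": 0.0, "relaxation": 1.0}}

    def utility_maximizer(_state):
        # Scores actions on a fixed value function, ignoring recent history.
        value = lambda a: 2.0 * ACTIONS[a]["progress"] + 1.0 * ACTIONS[a]["relaxation"]
        return max(ACTIONS, key=value)

    def error_minimizer(state):
        # Picks the action that most reduces the gap between time-decayed
        # averages ("how much I've gotten done lately", "how much relaxation
        # time I've had") and their set points.
        set_points = {"progress": 0.5, "relaxation": 0.5}
        def error_after(a):
            return sum(abs(set_points[k] - (0.9 * state[k] + 0.1 * ACTIONS[a][k]))
                       for k in set_points)
        return min(ACTIONS, key=error_after)

    slacker = {"progress": 0.1, "relaxation": 0.9}   # been slacking off lately
    grinder = {"progress": 0.9, "relaxation": 0.1}   # been grinding lately
    print("utility maximizer, either state:", utility_maximizer(slacker))
    print("error minimizer, after slacking:", error_minimizer(slacker))
    print("error minimizer, after grinding:", error_minimizer(grinder))

The utility maximizer does homework no matter what; the error minimizer does homework after slacking but switches to TV after grinding, purely because its recent averages changed - which is the state-dependence being described.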
Consciously and rationally, we can argue that we ought to maximize utility, but our behavior and emotions are still controlled by the error-minimizing hardware - to the point that it motivates all sorts of bizarre rationalizations about utility, contorting the consciously appealing idea of utility maximization until it doesn't violate our error-minimizing intuitions too badly. (If we weren't error minimizers, we wouldn't feel the need to reduce the difference between our intuitive notions of morality and our more "logical" inclinations.)
Then, can you tell me what utility is? What is it that I ought to maximize? (As I expanded on in my top-level comment.)
Something that people argue they ought to maximize, but have trouble precisely defining. ;-)