conchis comments on Utilons vs. Hedons - Less Wrong
Comments (112)
This discussion has made me feel I don't understand what "utilon" really means. Hedons are easy: clearly happiness and pleasure exist, so we can try to measure them. But what are utilons?
"Whatever we maximize"? But we're not rational, quite inefficient, and whatever we actually maximize as we are today probably includes a lot of pain and failures and isn't something we consciously want.
"Whatever we self-report as maximizing"? Most of the time this is very different from what we actually try to maximize in practice, because self-reporting is signaling. And for a lot of people it includes plans or goals that, when achieved, are likely (or even intended) to change their top-level goals drastically.
"If we are asked to choose between two futures, and we prefer one, that one is said to be of higher utility." That's a definition, yes, but it doesn't really prove that the collection-of-preferred-universes can be described any more easily than the real decision function of which utilons are supposed to be a simplification. For instance, what if by minor and apparently irrelevant changes in the present, I can heavily influence all of people's preferences for the future?
Also a note on the post:
That definition feels too broad to me. Typically akrasia has two further attributes:
Improper time discounting: we don't spend an hour a day exercising even though we believe it would make us lose weight, with a huge hedonic payoff if we maximize hedons over a time horizon of a year (see the sketch after this list).
Feeling so bad about not doing the necessary task that we don't really enjoy ourselves no matter what we do instead (frequently leading to doing nothing for long periods). Hedonically, even doing the homework usually feels a lot better (after the first ten minutes) than putting it off, and we know this from experience - but we just can't get started!
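A rough formalization of the time-discounting point (my sketch, not the OP's): under exponential discounting, the present value of a hedonic payoff x arriving after a delay t is

V(x, t) = \delta^{t} x, \qquad 0 < \delta < 1,

which is time-consistent: if an hour of exercise beats skipping it when both are a month away, it still does when the morning arrives. Hyperbolic discounting,

V(x, t) = \frac{x}{1 + kt},

is not time-consistent: the year-horizon payoff of daily exercise dominates from a distance, yet each day the immediate cost looms larger - exactly the preference reversal involved in akrasia.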
I agree that the OP is somewhat ambiguous on this. For my own part, I distinguish between at least the following four categories of things-that-people-might-call-a-utility-function. Each involves a mapping from world histories into the reals, according to:
1. How happy/pleased the world history makes us feel (hedons).
2. How well the world history fulfills our self-regarding values.
3. How well the world history fulfills our values overall, impartially construed.
4. What our actual choices would in fact maximize.
Hedons are clearly the output of the first mapping. My best guess is that the OP is defining utilons as something like the output of 3, but it may be a broader definition that could also encompass the output of 2, or it could be 4 instead.
I guess that part of the point of rationality is to get the output of 4 to correspond more closely to the output of either 2 or 3 (or maybe something in between): that is, to help us act in greater accordance with our values, in either the self-regarding or impartial sense of the term.
"Values" are still a bit of a black box here though, and it's not entirely clear how to cash them out. I don't think we want to reduce them either to actual choices or simply to stated values. Believed values might come closer, but I think we probably still want to allow that we could be mistaken about them.
What's the difference between 1 and 2? If we're being selfish, then surely we just want to experience the most pleasurable emotional states. I would read "values" as an individual strategy for achieving this. Then being unselfish is valuing the emotional states of everyone equally... so long as they are capable of experiencing equally pleasurable emotions, which may be untestable.
Note: just re-read the OP, and I'm thinking about integrating instantaneous hedons/utilons over time and then maximising the integral, which the OP didn't seem to do.
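Spelled out (my formalization, not the OP's): if h(t) is the instantaneous hedonic (or utilonic) level at time t, the proposal is to rank world histories by

\int_{0}^{T} h(t) \, dt

and choose the history with the largest integral, rather than maximizing the instantaneous level at any given moment.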
We can value more than just our emotional states. The experience machine is the classic thought experiment designed to demonstrate this. Another example that was discussed a lot here recently was the possibility that we could value not being deceived.