timtyler comments on Applying utility functions to humans considered harmful - Less Wrong
I personally feel happy or sad about the present state of affairs, including expectation of future events ("Oh no, my parachute won't deploy! I sure am going to hit the ground fast."). I can call how satisfied I am with the current state of things as I perceive it "utility". Of course, by using that word, it's usually assumed that my preferences obey some axioms, e.g. von Neumann-Morgenstern, which I doubt your wrapping satisfies in any meaningful way.
Perhaps there's some retrospective sense in which I'd talk about the true utility of the actual situation at the time (in hindsight I have a more accurate understanding of how things really were and what the consequences for me would be), but as for my current assessment it is indeed entirely a function of my present mental state (including perceptions and beliefs about the state of the universe salient to me). I think we agree on that.
I'm still not entirely sure I understand the wrapping you described. It feels like it's too simple to be used for anything.
Perhaps it's this: given the life story of some individual (call her Ray), you can vacuously (in hindsight) model her decisions with the following story:
1) Ray always acts so that the immediately resulting state of things has the highest expected utility. Ray can be thought of as moving through time and having a utility at each time, which must include some factor for her expectation of her future e.g. health, wealth, etc.
2) Ray is very stupid and forms some arbitrary belief about the result of her actions, expecting with 100% confidence that her predicted future will come to pass. Her expectation at the next moment will usually revise many things she previously expected with certainty, i.e. she isn't actually predicting the future exactly.
3) Whatever Ray believed the outcome would be at each choice, she assigned utility 1. To all other possibilities she assigned utility 0.
That's the sort of fully-described scenario that your proposal evoked in me. If you want to explain how she's forecasting more than a singleton expectation set, and yet the expected utility for each decision she takes magically works out to be 1, I'd enjoy that.
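The vacuous model in steps 1-3 can be sketched in a few lines of Python. This is just an illustration of the point, with hypothetical names (`rays_choice`, `predict`) I've made up for the sketch: because Ray's belief puts probability 1 on a single predicted outcome, and that outcome alone gets utility 1, every available action has expected utility exactly 1, so "maximizing expected utility" constrains nothing.

```python
def rays_choice(actions, predict):
    """Pick the action with highest expected utility under Ray's
    degenerate beliefs. `predict(a)` is her arbitrary, fully
    confident forecast of the outcome of action `a`."""
    def expected_utility(action):
        predicted = predict(action)
        # Step 3: utility 1 for the single predicted outcome, 0 elsewhere.
        utility = lambda outcome: 1 if outcome == predicted else 0
        # Step 2: probability 1 on the predicted outcome, so EU = 1 * 1 = 1.
        return 1.0 * utility(predicted)
    # Every action ties at expected utility 1, so the "maximizing"
    # choice is unconstrained -- any behavior fits the model in hindsight.
    return max(actions, key=expected_utility)
```

Since every action ties, whatever Ray actually did can be rationalized after the fact as utility maximization, which is why the model is vacuous.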
In other words, I don't see any point in modeling intelligent yet not omniscient-and-deterministic decision making unless the utility at a given state includes some expectation of future states.
I certainly did not intend any such implication. Which set of axioms is using the word "utility" supposed to imply?
Perhaps check with the definition of "utility". It means something like "goodness" or "value". There isn't an obvious implication of any specific set of axioms.