Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
They may be a bad descriptive match. But in prescriptive terms, how do you "help" someone without a utility function?
You want a neuron dump? I don't have a utility function, I embody one, and I don't have read access to my coding.
I've put a bit of thought into this over the years, and don't have a believable theory yet. I have learned quite a bit from the exercise, though.
1) I have many utility functions. Different parts of my identity or different frames of thought engage different preference orders, and there is no consistent winner. I bite this bullet: personal identity is a lie - I am a collective of many distinct algorithms. I also accept that Arrow’s impossibility theorem applies to my own decisions.
2) There are at least three dimensions (time, intensity, and risk) to my...
Here's one data point. A few guidelines have helped me think about my utility curve over dollars, in both business and medical decisions. They would also work, I think, for anything you can treat as equivalent to money (e.g. willingness-to-pay or willingness-to-be-paid).
Over a small range, I am approximately risk neutral. For example, a 50-50 shot at $1 is worth just about $0.50, since the range we are talking about is only between $0 and $1. One way to think about this is that, over a small enough range, there is essentially no curvature in my utility curve over dollars, so it is approximately linear.
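To make the "locally linear" point concrete, here is a minimal sketch, assuming log utility of total wealth (just one common concave choice) and a made-up $50,000 of existing wealth; the certainty equivalent of the gamble tracks the risk-neutral value for small stakes and falls below it for large ones:

```python
import math

def certainty_equivalent(wealth, stakes, p=0.5, u=math.log, u_inv=math.exp):
    """Certainty equivalent of a gamble: win `stakes` with probability p, else win nothing,
    on top of existing wealth. Utility of total wealth is log here purely for illustration."""
    expected_utility = p * u(wealth + stakes) + (1 - p) * u(wealth)
    return u_inv(expected_utility) - wealth  # the sure amount with the same expected utility

wealth = 50_000  # hypothetical existing wealth
for stakes in (1, 100, 10_000, 100_000):
    ce = certainty_equivalent(wealth, stakes)
    print(f"50-50 shot at ${stakes:>7,}: worth about ${ce:,.2f} (risk-neutral value ${stakes / 2:,.2f})")
```

At $1 the two numbers agree to the cent, which is the point above; at $100,000 they clearly do not.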
This leads me to two possible conclusions
A third possibility: Humans aren't in general capable of accurately reflecting on their preferences.
Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
If utility functions are a bad match for human preferences, that would seem to imply that humans simply tend not to have very consistent preferences. What major premise does this invalidate?
thinking "If I had X, would I take Y instead", and "If I had Y, would I take X instead" very often resulted in a pair of "No"s
It's a well-known result that losing something produces roughly twice the disutility that gaining the same thing would produce in utility. (I.e., we "irrationally" prefer what we already have.)
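For concreteness, the usual way to write that down is a Kahneman-Tversky-style value function over gains and losses relative to a reference point; the parameters below are the commonly cited textbook estimates, not anything measured here:

```python
def subjective_value(x, alpha=0.88, lam=2.25):
    """Value of a gain/loss of size x relative to the status quo.
    lam > 1 encodes loss aversion: losses loom larger than equal-sized gains."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

print(subjective_value(100))   # gaining $100
print(subjective_value(-100))  # losing $100: a bit more than twice the magnitude
```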
I feel some people here are trying to define their utility functions via linear combinations of sub-functions which only depend on small parts of the world state.
Example: If I own X, that'll give me a utility of 5, if I own Y that'll give me a utility of 3, if I own Z, that'll give me a utility of 1.
Problem: Choose any two of {X, Y, Z}
Apparent Solution: {X, Y} for a total utility of 8.
But a human utility function is not a linear combination of such sub-functions; it is a function from global world states into the real numbers. Think about the above example wi...
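A toy illustration of the difference, with made-up bundle numbers: if X and Y are near-substitutes, the additive model still insists {X, Y} is the best pair, while a utility assigned to whole bundles (stand-ins for world states) can say otherwise:

```python
from itertools import combinations

# Additive model: bundle utility is the sum of per-item utilities.
item_utility = {"X": 5, "Y": 3, "Z": 1}

def additive_utility(bundle):
    return sum(item_utility[i] for i in bundle)

# Whole-bundle model: utility is assigned to complete bundles, so interactions
# between items can be expressed. Hypothetical numbers: X and Y are near-substitutes.
bundle_utility = {
    frozenset("XY"): 5.5,  # far less than 5 + 3
    frozenset("XZ"): 6.0,
    frozenset("YZ"): 4.0,
}

for pair in combinations("XYZ", 2):
    print(pair, "additive:", additive_utility(pair), "whole-bundle:", bundle_utility[frozenset(pair)])
```

The additive model picks {X, Y} (total 8); the whole-bundle numbers pick {X, Z}.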
This is a good exercise, I'll see what I can do for MY utility function.
First of all, a utility function is a function
f: X --> R
Where X is some set. What should that set be? Certainly it shouldn't be the set of states of the universe, because then you can't say that you enjoy certain processes (such as bringing up a child, as opposed to the child just appearing). Perhaps the set of possible histories of the universe is a better candidate. Even if we identify histories that are microscopically different but macroscopically identical, and apply some cru...
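In type-sketch form (Python used purely as notation, and the encodings are hypothetical), the two candidate domains look like this:

```python
from typing import Callable, Dict, Sequence

State = Dict[str, object]    # a macroscopic snapshot of the world (hypothetical encoding)
History = Sequence[State]    # an entire trajectory of such snapshots

# f : X -> R, with the two candidate choices of X:
UtilityOverStates = Callable[[State], float]       # can only score how things end up
UtilityOverHistories = Callable[[History], float]  # can also score the process that got there

def toy_history_utility(history: History) -> float:
    """Toy example: the same end state scores higher if it was reached gradually."""
    child_grown = bool(history) and bool(history[-1].get("child_is_grown"))
    return (1.0 if len(history) > 1 else 0.5) if child_grown else 0.0
```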
So, we're just listing how much we'd buy things for? I don't see why it's supposed to be hard.
I guess it gets a bit complicated when you consider combinations of things, rather than just their marginal value. For example, once I have a computer with an internet connection, I care for little else. Still, I just have to figure out what would be about neutral, and decide how much I'd pay an hour (or need to be paid an hour) to go from that to something else.
Playing a vaguely interesting game on the computer = 0.
Doing something interesting = 1-3.
Talking to a ...
I realize that my utility function is inscrutable and I trust the unconscious part of me to make accurate judgments of what I want. When I've determined what I want, I use the conscious part of me to determine how I'll achieve it.
"Utility functions are really bad match for human preferences, and one of the major premises we accept is wrong." Given the sheer messiness and mechanical influence involved in human brains, it's not even clear we have real 'values' which could be examined on a utility function, rather than simple dominant-interestedness that happens for largely unconscious and semi-arbitrary reasons.
Interesting exercise. After trying for a while I completely failed; I ended up with terms that are completely vague (e.g. "comfort"), and actually didn't even begin to scratch the surface of a real (hypothesized) utility function. If it exists it is either extremely complicated (too complicated to write down perhaps) or needs "scientific" breakthroughs to uncover its simple form.
The result was also laughably self-serving, more like "here's roughly what I'd like the result to be" than an accurate depiction of what I do.
The re...
What counts as a "successful" utility function?
In general terms there are two, conflicting, ways to come up with utility functions, and these seem to imply different metrics of success.
The first assumes that "utility" corresponds to something real in the world, such as some sort of emotional or cognitive state. On this view, the goal, when specifying your utility function, is to get numbers that reflect this reality as closely as possible. You say "I think x will give me 2 emotilons", and "I think y will give me 3 emot
Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
Human utility functions are relative, contextual, and include semi-independent positive-negative axes. You can't model all that crap with one number.
The study of affective synchrony shows that humans have simultaneously-active positive and negative affect systems. At extreme levels in either system, the other is shut down, but the rest of the time, they can support or oppose each other. (And in positions of opposition, we experience conflict...
Your observation is interesting. Note that I can't write down my wave function, either, but that doesn't mean I don't have one.
For a thread entitled "Post Your Utility Function" remarkably few people have actually posted what they think their utility function is.
Are people naturally secretive about what they value? If so, why might that be?
Do people not know what their utility function is? That seems strange for such a basic issue.
Do people find their utility function hard to express? Why might that be?
Suppose individuals have several incommensurable utility functions: would this present a problem for decision theory? If you were presented with Newcomb's problem, but were at the same time worried about accepting money you didn't earn, would these sorts of considerations have to be incorporated into a single algorithm?
If not, how do we understand such ethical concerns as being involved in decisions? If so, how do we incorporate such concerns?
I think I see some other purpose to thinking that you have a numerically well-defined utility function. It's a pet theory of mine, but here we go:
It pays off to reason in the "mathematical" mode. This "mathematical" reasoning is the thing that kicks in when I ask you what 67 + 49 is, or when I say "if x < y and y < z, is x < z?" Even putting your decision problem into just a vague algebraic structure will let you reason comparatively about the options, even if you cannot for the life of y...
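A small sketch of what even that weak structure buys you: a handful of pairwise "at least as good as" judgments plus transitivity answers comparisons you never stated directly, with no numbers anywhere (the items below are made up):

```python
def at_least_as_good(a, b, judgments, seen=frozenset()):
    """Transitive-closure query over stated pairwise judgments ('x is at least as good as y')."""
    if (a, b) in judgments:
        return True
    return any(x == a and y not in seen and at_least_as_good(y, b, judgments, seen | {a})
               for x, y in judgments)

judgments = {("health", "free time"), ("free time", "gadgets"), ("gadgets", "clutter")}
print(at_least_as_good("health", "clutter", judgments))  # True, though never stated directly
```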
Some of the difficulty might be because the availability heuristic is causing us to focus on things which are relatively small factors in our global preferences, and ignore larger but more banal factors; e.g. being accepted within a social group, being treated with contempt, receiving positive and negative "strokes", demonstrating our moral superiority, etc.
Another problem is that although we seem to be finely attuned to small changes in social standing, as far as I know there have been no attempts to quantify this.
I vote for the first possibility - that utility functions are not a particularly good match for human preferences - for the following reasons: 1) I have never seen one, at least not one valid outside a very narrow subject matter. That implies that people are not good at drawing up these functions, which may be because the functions are in reality very complicated, if they even exist. So even if my preferences are consistent with some utility function, any practical application would apply some strongly simplified model of the function, which could diffe...
Because everyone wants more money,
Why do people keep saying things like this? Intuition suggests, and research confirms, that there's a major diminishing returns factor involved with money, and acquiring lots of it can actually make people unhappy.
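A quick way to see the diminishing-returns point, assuming log utility of wealth purely for illustration: the value of the next dollar shrinks roughly in proportion to how much you already have.

```python
import math

def value_of_next_dollar(wealth):
    # Log utility of wealth, used here only to illustrate diminishing returns.
    return math.log(wealth + 1) - math.log(wealth)

for wealth in (1_000, 10_000, 100_000, 1_000_000):
    print(f"at ${wealth:>9,}: the next dollar adds {value_of_next_dollar(wealth):.2e} utils")
```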
I want more money only up to a point; beyond that I wouldn't want more. My utility function does not assign money a fixed, positive value.
Do people care more about money's absolute value, or more about its value relative to what other people have? Does our utility function have a term for other people in it that puts it in conflict with other people's utility functions?
A lot of rationalist thinking about ethics and economics assumes we have very well-defined utility functions - knowing our preferences between states and events exactly, not only being able to compare them (I prefer X to Y), but assigning precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). Because everyone wants more money, you should theoretically even be able to use money as a yardstick and assign exact numerical values to the positive outcomes in your life.
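For what that bookkeeping would actually look like, here is a minimal sketch with hypothetical outcomes and numbers; the indifference condition "a p% chance of X equals a q% chance of Y" is just p*u(X) = q*u(Y), measured against a common "nothing happens" zero point:

```python
def equivalent_probability(p, u_x, u_y):
    """Given utilities u_x and u_y (relative to a shared zero point), return the q for which
    a q chance of Y is exactly as good as a p chance of X, i.e. p * u_x == q * u_y."""
    return p * u_x / u_y

# Hypothetical numbers, just to show the bookkeeping the premise requires:
u_new_laptop, u_month_off = 25, 60
q = equivalent_probability(0.90, u_new_laptop, u_month_off)
print(f"a 90% chance of a new laptop ~ a {q:.0%} chance of a month off")
```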
I did a small experiment of making a list of things I wanted and giving them point values. I must say this experiment ended in failure - thinking "If I had X, would I take Y instead", and "If I had Y, would I take X instead" very often resulted in a pair of "No"s. Even weighing multiple Xs/Ys against one Y/X usually led me to decide they were really incomparable. Outcomes related to a similar subject were relatively comparable; those in different areas of life usually were not.
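That failure mode is easy to check for mechanically. A sketch with made-up answers, flagging every pair where both swap questions come back "No":

```python
from itertools import combinations

# Answers to "If I had A, would I swap it for B?" - made-up introspection data.
would_swap = {
    ("X", "Y"): False, ("Y", "X"): False,  # the pair of "No"s described above
    ("X", "Z"): False, ("Z", "X"): True,
    ("Y", "Z"): False, ("Z", "Y"): True,
}

for a, b in combinations("XYZ", 2):
    if not would_swap[(a, b)] and not would_swap[(b, a)]:
        print(f"{a} vs {b}: a pair of No's - either genuine indifference or the endowment effect; "
              f"a single consistent number for each would force exact indifference")
```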
I finally decided on some vague numbers and evaluated the results two months later. My success in some areas was substantial, in others nonexistent, and the only thing that was clear was that the numbers I had assigned were completely wrong.
This leads me to two possible conclusions:
Has anybody else tried assigning numeric values to different outcomes outside a very narrow subject matter? Have you succeeded and want to share some pointers? Or failed and want to share some thoughts on that?
I understand that details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.