Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
They may be a bad descriptive match. But in prescriptive terms, how do you "help" someone without a utility function?
To help someone, you don't need him to have a utility function, just preferences. Those preferences do have to have some internal consistency, but the consistency criteria you need in order to help someone seem strictly weaker than the ones needed to establish a utility function. Among the von Neumann-Morgenstern axioms, maybe only completeness and transitivity are needed.
For example, suppose I know someone who currently faces choices A and B, and I know that if I also offer him choice C, his preferences will remain complete and transitive. Then I'd be helping him, or at least not hurting him, if I offered him choice C, without knowing anything else about his beliefs or values.
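Those two axioms are easy to state as mechanical checks over a finite option set. A minimal sketch, where the rank-based weak-preference relation and the option names are invented for illustration:

```python
from itertools import combinations, permutations

def is_complete(options, prefers):
    # Completeness: every pair must be ranked one way or the other
    # (weak preference, so mutual preference counts as indifference).
    return all(prefers(a, b) or prefers(b, a) for a, b in combinations(options, 2))

def is_transitive(options, prefers):
    # Transitivity: if a is weakly preferred to b and b to c,
    # then a must be weakly preferred to c.
    return all(not (prefers(a, b) and prefers(b, c)) or prefers(a, c)
               for a, b, c in permutations(options, 3))

# Hypothetical relation derived from a rank, so both checks pass;
# adding a choice "C" here preserves completeness and transitivity.
rank = {"A": 2, "B": 1, "C": 3}
prefers = lambda x, y: rank[x] >= rank[y]

print(is_complete(["A", "B", "C"], prefers))   # True
print(is_transitive(["A", "B", "C"], prefers)) # True
```

Any relation induced by a numeric rank passes both checks; the interesting cases are elicited human choices, which often fail one or the other.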
Or did you have some other notion of "help" in mind?
You want a neuron dump? I don't have a utility function, I embody one, and I don't have read access to my coding.
I've put a bit of thought into this over the years, and don't have a believable theory yet. I have learned quite a bit from the exercise, though.
1) I have many utility functions. Different parts of my identity or different frames of thought engage different preference orders, and there is no consistent winner. I bite this bullet: personal identity is a lie - I am a collective of many distinct algorithms. I also accept that Arrow’s impossibility theorem applies to my own decisions.
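Point 1 can be illustrated with the classic Condorcet cycle: three internally consistent sub-rankings whose pairwise majority vote is intransitive, which is the situation Arrow's theorem generalizes. The sub-agents and options below are invented for illustration:

```python
# Three hypothetical sub-agents, each with a transitive ranking (best first).
sub_agents = [
    ["work", "rest", "play"],
    ["rest", "play", "work"],
    ["play", "work", "rest"],
]

def majority_prefers(x, y):
    # x beats y if a majority of sub-agents rank x above y.
    votes = sum(1 for ranking in sub_agents if ranking.index(x) < ranking.index(y))
    return votes > len(sub_agents) / 2

# Pairwise majority vote yields a cycle: no consistent collective winner.
print(majority_prefers("work", "rest"))  # True
print(majority_prefers("rest", "play"))  # True
print(majority_prefers("play", "work"))  # True
```

Each sub-agent is perfectly transitive on its own; the inconsistency only appears when their votes are aggregated.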
2) There are at least three dimensions (time, intensity, and risk) to my...
Here's one data point. A few guidelines have helped me when thinking about my utility curve over dollars, in both business and medical decisions. They would also work, I think, for things you can treat as equivalent to money (e.g. willingness-to-pay or willingness-to-be-paid).
Over a small range, I am approximately risk neutral. For example, a 50-50 shot at $1 is worth just about $0.50, since the range we are talking about is only between $0 and $1. One way to think about this is that, over a small enough range, there is
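The local-risk-neutrality point can be sketched numerically. Assuming, purely for illustration, a logarithmic utility over total wealth (a standard risk-averse curve, not the commenter's actual one):

```python
import math

def certainty_equivalent(wealth, stake):
    # Dollar value of a 50-50 gamble on `stake`, under log utility of
    # total wealth; math.exp inverts the log to get back to dollars.
    eu = 0.5 * math.log(wealth) + 0.5 * math.log(wealth + stake)
    return math.exp(eu) - wealth

# With $10,000 of background wealth, log utility is nearly linear over
# a $1 range, so the gamble is worth almost exactly its expected value...
print(certainty_equivalent(10_000, 1))       # ~0.50
# ...but over a $10,000 range the concavity shows up clearly.
print(certainty_equivalent(10_000, 10_000))  # ~4142, well below 5000
```

Any smooth concave utility behaves this way: locally linear, hence locally risk-neutral, with risk aversion only appearing when the stake is large relative to wealth.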
This leads me to two possible conclusions
A third possibility: Humans aren't in general capable of accurately reflecting on their preferences.
Utility functions are really bad match for human preferences, and one of the major premises we accept is wrong.
If utility functions are a bad match for human preferences, that would seem to imply that humans simply tend not to have very consistent preferences. What major premise does this invalidate?
thinking "If I had X, would I take Y instead", and "If I had Y, would I take X instead" very often resulted in a pair of "No"s
It's a well-known result that losing something produces roughly twice the disutility that gaining the same thing would produce in utility. (I.e., we "irrationally" prefer what we already have.)
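A minimal sketch of that asymmetry, using a simplified prospect-theory-style value function: linear except for the loss-aversion kink at the reference point, with the factor set to the 2 mentioned above (Kahneman and Tversky's published estimate was about 2.25):

```python
def value(x, loss_aversion=2.0):
    # Gains count at face value; losses are scaled up by the
    # loss-aversion factor. Reference point is the status quo (0).
    return x if x >= 0 else loss_aversion * x

print(value(100))   # 100
print(value(-100))  # -200

# A 50-50 bet of +/- $100 therefore has negative subjective value,
# even though its expected dollar value is zero:
print(0.5 * value(100) + 0.5 * value(-100))  # -50.0
```

This is why a fair coin flip for equal stakes feels like a bad deal: the loss side is weighted roughly twice as heavily as the gain side.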
I feel some people here are trying to define their utility functions via linear combinations of sub-functions which only depend on small parts of the world state.
Example: If I own X, that'll give me a utility of 5, if I own Y that'll give me a utility of 3, if I own Z, that'll give me a utility of 1.
Problem: Choose any two of {X, Y, Z}
Apparent Solution: {X, Y} for a total utility of 8.
But human utility functions are not a linear combination of such sub-functions, but functions from global World states into the real numbers. Think about the above example wi...
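The failure mode can be sketched with a hypothetical interaction term. Suppose X and Y are partial substitutes, so owning both adds less than the sum of their stand-alone values (the penalty of 4 is invented for illustration):

```python
from itertools import combinations

additive = {"X": 5, "Y": 3, "Z": 1}

def additive_utility(bundle):
    # Naive model: sum the stand-alone values.
    return sum(additive[item] for item in bundle)

def global_utility(bundle):
    # World-state model: X and Y are substitutes, so owning both
    # is worth less than the sum of their parts.
    u = additive_utility(bundle)
    if "X" in bundle and "Y" in bundle:
        u -= 4  # interaction penalty, chosen for illustration
    return u

pairs = list(combinations(["X", "Y", "Z"], 2))
print(max(pairs, key=additive_utility))  # ('X', 'Y'), total 8
print(max(pairs, key=global_utility))    # ('X', 'Z'), total 6 vs. 4 for {X, Y}
```

The additive model and the global model pick different pairs, which is exactly why per-item utilities can't in general be summed.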
This is a good exercise, I'll see what I can do for MY utility function.
First of all, a utility function is a function
f: X --> R
where X is some set. What should that set be? Certainly it shouldn't be the set of states of the universe, because then you can't say that you enjoy certain processes (such as bringing up a child, as opposed to the child just appearing). Perhaps the set of possible histories of the universe is a better candidate. Even if we identify histories that are microscopically different but macroscopically identical, and apply some cru...
So, we're just listing how much we'd buy things for? I don't see why it's supposed to be hard.
I guess it gets a bit complicated when you consider combinations of things, rather than just their marginal value. For example, once I have a computer with an internet connection, I care for little else. Still, I just have to figure out what would be about neutral, and decide how much I'd pay an hour (or need to be paid an hour) to go from that to something else.
Playing a vaguely interesting game on the computer = 0.
Doing something interesting = 1-3.
Talking to a ...
I realize that my utility function is inscrutable, and I trust the unconscious part of me to make accurate judgments of what I want. Once I've determined what I want, I use the conscious part of me to determine how I'll achieve it.
"Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong." Given the sheer messiness and mechanical influence involved in human brains, it's not even clear we have real 'values' that could be examined with a utility function, rather than simple dominant-interestedness that happens for largely unconscious and semi-arbitrary reasons.
Interesting exercise. After trying for a while I completely failed; I ended up with terms that are completely vague (e.g. "comfort"), and actually didn't even begin to scratch the surface of a real (hypothesized) utility function. If it exists it is either extremely complicated (too complicated to write down perhaps) or needs "scientific" breakthroughs to uncover its simple form.
The result was also laughably self-serving, more like "here's roughly what I'd like the result to be" than an accurate depiction of what I do.
The re...
What counts as a "successful" utility function?
In general terms there are two, conflicting, ways to come up with utility functions, and these seem to imply different metrics of success.
The first assumes that "utility" corresponds to something real in the world, such as some sort of emotional or cognitive state. On this view, the goal, when specifying your utility function, is to get numbers that reflect this reality as closely as possible. You say "I think x will give me 2 emotilons", and "I think y will give me 3 emot
Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
Human utility functions are relative, contextual, and include semi-independent positive-negative axes. You can't model all that crap with one number.
The study of affective synchrony shows that humans have simultaneously-active positive and negative affect systems. At extreme levels in either system, the other is shut down, but the rest of the time, they can support or oppose each other. (And in positions of opposition, we experience conflict...
Your observation is interesting. Note that I can't write down my wave function, either, but that doesn't mean I don't have one.
For a thread entitled "Post Your Utility Function", remarkably few people have actually posted what they think their utility function is.
Are people naturally secretive about what they value? If so, why might that be?
Do people not know what their utility function is? That seems strange for such a basic issue.
Do people find their utility function hard to express? Why might that be?
Suppose individuals have several incommensurable utility functions: would this present a problem for decision theory? If you were presented with Newcomb's problem, but were at the same time worried about accepting money you didn't earn, would these sorts of considerations have to be incorporated into a single algorithm?
If not, how do we understand such ethical concerns as being involved in decisions? If so, how do we incorporate such concerns?
I think I see some other purpose to thinking that you have a numerically well-defined utility function. It's a pet theory of mine, but here we go:
It pays off to reason in the "mathematical" mode. This "mathematical" reasoning is the one that kicks in when I ask you what 67 + 49 is; it is the thing that kicks in when I say "if x < y and y < z, is x < z?" Even putting your decision problems into a vague algebraic structure will let you reason comparatively about them, even if you cannot for the life of y...
Some of the difficulty might be because the availability heuristic is causing us to focus on things which are relatively small factors in our global preferences, and ignore larger but more banal factors; e.g. being accepted within a social group, being treated with contempt, receiving positive and negative "strokes", demonstrating our moral superiority, etc.
Another problem is that although we seem to be finely attuned to small changes in social standing, as far as I know there have been no attempts to quantify this.
I vote for the first possibility - that utility functions are not a particularly good match for human preferences - for the following reasons: 1) I have never seen one, at least not one valid outside a very narrow subject matter. That implies that people are not good at writing these functions down, which may be because they could in reality be very complicated, if they even exist. So even if my preferences are consistent with some utility function, any practical application would apply some strongly simplified model of the function, which could diffe...
Because everyone wants more money,
Why do people keep saying things like this? Intuition suggests, and research confirms, that there's a major diminishing returns factor involved with money, and acquiring lots of it can actually make people unhappy.
I want more money only up to a point; beyond that, I wouldn't want more. My utility function does not assign a fixed positive value to money.
Do people care more about money's absolute value, or more about its value relative to what other people have? Does our utility function have a term for other people in it which is in conflict with other people's utility functions?
Wow, -5! People here don't seem to appreciate this sort of challenge to their conceptual framework.
I'm just saying that human beings have no way to model their preferences except by modeling experiences, in human-sensory terms.
I agree, but I wonder if I failed to communicate the distinction I was attempting to make. The human-sensory experience of being embedded in a concrete, indifferent reality is (drugs, fantasies, and dreams aside) basically constant. It's a fundamental thread underlying our entire history of experience.
It's this indifference to our mental state that makes it special. A preference expressed in terms of "reality" has subjective properties that it would otherwise lack. Maybe I want the sky to be blue so that other people will possess a similar experience of it that we can share. "Blueness" may still be a red herring, but my preference now demands some kind of invariant between minds that seemingly cannot be mediated except through a shared external reality. You might argue that I really just prefer shared experiences, but this ignores the implied consistency between such experiences and all other experiences involving the external reality, something I claim to value above and beyond any particular experience.
Even if you phrase this as, "I prefer the sky to be actually blue, even if I don't know it", it is still a lie, because now you are modeling an experience of the sky being blue, plus an experience of you not knowing it.
This is where the massive implicit context enters the scene. "Even if I don't know it" is modeled after experience only in the degenerate sense that it's modeled after experience of indifferent causality. A translation might look like "I prefer to experience a reality with the sorts of consequences I would predict from the sky being blue, even if I don't consciously perceive blue skies". That's still an oversimplification, but it's definitely more complex than just invoking a generic memory of "not having known something" and applying it to blue skies.
The two notions are basically isomorphic, so where's the value in the distinction?
Well, it makes clear some of the limits of certain endeavors that are often discussed here. It dissolves confusions about the best ways to make people happy, and whether a world should be considered "real" or "virtual", and whether it's somehow "bad" to be virtual.
I don't see how any of that is true. I can easily think of different concrete realizations of "real" and "virtual" that would interact differently with my experience of reality, thus provoking different labellings of "good" and "bad". If your point is merely that "real" is technically underspecified, then I agree. But I don't see how you can draw inferences from this underspecification.
For another person, it might be associated with the blinding heat of the desert and a sensation of thirst... and these two people can then end up arguing endlessly about whether a blue sky is obviously good or bad.
And both are utterly deluded to think that their preferences have anything to do with reality.
I'm going to have to turn your own argument against you here. To the extent that you have a concept of reality that is remotely consistent with your everyday experience, I claim that "in reality, blue skies are bad because they provoke suffering" is a preference stated in terms of an extremely similar reality-concept, plus a suffering-concept blended together from first-hand experience and compassion (itself also formed in terms of reality-as-connected-to-other-minds). For you to say it has "nothing to do with reality" is pure semantic hogwash. What definition of "reality" can you possibly be using to make this statement, except the one formed by your lifetime's-worth of experience with indifferent causality? You seem to be denying the use of the term to relate your concept of reality to mine, despite their apparent similarity.
However, to the extent that our preferences produce negative experiences, it is saner to remove the negative portion of the preference.
This doesn't make sense to me. Whether or not an experience is "negative" is a function of our preferences. If a preference "produces" negative experiences, then either they're still better than the alternative (in which case it's a reasonable preference, and it's probably worthwhile to change your perception of the experience) or they're not (in which case it's not a true preference, just delusion).
Luckily, human beings are not limited to, or required to have, bidirectional preferences. Feeling pain at the absence of something is not required in order to experience pleasure at its presence, in other words. (Or vice versa.)
That's a property of pain and pleasure, not preference. I may well decide not to feel pain due to preference X being thwarted, but I still prefer X, and I still prefer pleasure to the absence of pleasure.
Awareness of this fact, combined with an awareness that it is really the experience we prefer (and mainly, the somatic markers we have attached to the experience) makes it plain that the logical thing to do is to remove the negative label, and leave any positive labels in place.
This is where I think your oversimplification of "experience vs reality" produces invalid conclusions. Those labels don't just apply to one experience or another, they apply to a massively complicated network of experience that I can't even begin to hold in my mind at once. Given that, your logic doesn't follow at all, because I really don't know what I'm relabeling.
This relates to a general reservation I have with cavalier attitudes toward mind-hacks: I know full well that my preferences are complex, difficult to understand, and grossly underspecified in any conscious realization, so it's not at all obvious to me that optimizing a simple preference concerning one particular scenario doesn't carry loads of unintended consequences for the rest of them. I've had direct experience with my subconsciously directed behavior "making decisions for me" that I had conscious reasons to optimize against, only later to find out that my conscious understanding of the situation was flawed and incomplete. I think that ignoring the intuitive implications of an external reality leads to similar contradictions.
You seem to mostly be arguing against a strawman; as I said, I'm not saying reality doesn't exist or that it's not relevant to our experiences. What I'm saying is that the preferences are composed of map, and while there are connections between that map and external reality, we are essentially deluded to think our preferences refer to actual reality, and that this delusion leads us to believing that changing external reality will change our internal experience, when more often the reverse is more likely true. (That is, changing our internal experience wi...
A lot of rationalist thinking about ethics and economics assumes we have very well-defined utility functions - knowing our preferences between states and events exactly, not only being able to compare them (I prefer X to Y), but assigning precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). Because everyone wants more money, you should theoretically even be able to assign exact numerical values to positive outcomes in your life.
I did a small experiment of making a list of things I wanted and giving them point values. I must say this experiment ended in failure - thinking "If I had X, would I take Y instead?" and "If I had Y, would I take X instead?" very often resulted in a pair of "No"s. Even weighing multiple Xs/Ys against one Y/X usually led me to decide they were really incomparable. Outcomes related to similar subjects were relatively comparable; those in different areas of life usually were not.
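That pair-of-"No"s pattern can be made precise: it reveals an incomplete preference relation, which violates the completeness axiom and so cannot be represented by any single utility function. A sketch of the check, with invented outcome names and answers:

```python
from itertools import combinations

# Hypothetical elicited answers: would_swap[(have, offered)] is the answer to
# "If I had `have`, would I take `offered` instead?"
would_swap = {
    ("vacation", "new_laptop"): False,
    ("new_laptop", "vacation"): False,  # a pair of "No"s
    ("vacation", "gym_year"): True,
    ("gym_year", "vacation"): False,
}

def incomparable_pairs(answers):
    # A pair of "No"s means neither outcome is preferred to the other:
    # the relation is incomplete, so no utility function can assign
    # the two outcomes comparable numbers.
    items = {x for pair in answers for x in pair}
    return [(a, b) for a, b in combinations(sorted(items), 2)
            if answers.get((a, b)) is False and answers.get((b, a)) is False]

print(incomparable_pairs(would_swap))  # [('new_laptop', 'vacation')]
```

Running this over a full elicited table would also expose missing answers and, with a little extension, intransitive cycles.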
I finally decided on some vague numbers and evaluated the results two months later. I had succeeded greatly in some areas and not at all in others, and the only thing that was clear was that the numbers I had assigned were completely wrong.
This leads me to two possible conclusions:
Has anybody else tried assigning numeric values to different outcomes outside a very narrow subject matter? Did you succeed and want to share some pointers? Or fail and want to share some thoughts on that?
I understand that details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.