A lot of rationalist thinking about ethics and economics assumes we have well-defined utility functions: that we know our preferences between states and events exactly, and can not only compare them (I prefer X to Y) but also assign precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). And since money is a rough common currency that everyone wants more of, you should in theory even be able to attach exact numerical values to the positive outcomes in your life.
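To make that equivalence concrete, here is the standard expected-utility reading of it (a minimal sketch, assuming utilities are measured relative to a do-nothing outcome whose utility is set to zero):

```latex
% "A p% chance of X equals a q% chance of Y" under expected utility,
% with u(\text{status quo}) = 0:
p \cdot u(X) = q \cdot u(Y)
\qquad\Longrightarrow\qquad
\frac{u(X)}{u(Y)} = \frac{q}{p}
```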
I ran a small experiment: I made a list of things I wanted and gave each of them a point value. I have to say the experiment ended in failure. Asking myself "If I had X, would I take Y instead?" and "If I had Y, would I take X instead?" very often produced a pair of "No"s. Even weighing several Xs or Ys against a single Y or X usually led me to conclude that they were simply incomparable. Outcomes in similar areas were reasonably comparable; outcomes in different areas of life usually were not.
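If anyone wants to try the same thing, here is a minimal sketch of the consistency check the experiment amounts to (the items, point values, and Yes/No answers below are hypothetical, not the ones from my list):

```python
# Hypothetical point values assigned to outcomes (illustrative only).
points = {
    "finish project": 10,
    "weekly exercise habit": 8,
    "trip abroad": 6,
}

# Answers to "If I had X, would I take Y instead?" -- True means "yes, I'd swap".
# A consistent scoring should make me swap exactly when points[Y] > points[X].
would_swap = {
    ("finish project", "weekly exercise habit"): False,
    ("weekly exercise habit", "finish project"): False,  # the "pair of No's" pattern
    ("trip abroad", "finish project"): True,
}

for (have, offered), answer in would_swap.items():
    implied = points[offered] > points[have]
    if answer != implied:
        print(f"Inconsistent: holding {have!r}, offered {offered!r}: "
              f"answered {answer}, but the point values imply {implied}")
```

Any printed line marks a pairwise judgment the assigned numbers fail to reproduce; a pair of "No"s between two items with different scores always triggers at least one.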
I eventually settled on some vague numbers and evaluated the results two months later. My success in some areas was substantial, in others nonexistent, and the only thing that was clear was that the numbers I had assigned were completely wrong.
This leads me to two possible conclusions:
- I don't know how to construct utility functions, but they are a good model of my preferences, and I could learn how to do it.
- Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
Has anybody else tried assigning numeric values to different outcomes outside a very narrow subject area? Did you succeed and want to share some pointers? Or did you fail and want to share some thoughts on that?
I understand that the details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.
Well, this discussion might not be useful to either of us at this point, but I'll give it one last go. My reason for bringing in talk of signaling is that throughout this conversation, it seems like one of the claims you have been making is that
Now, I brought up signaling because I and many others already accept a form of (A), in which we've evolved to deceive others and ourselves about our real priorities because such signalers appear to others to be better potential friends, lovers, etc. It looks perfectly meaningful to me to declare such preferences "illusory", since in point of fact we find rationalizations for choosing not what we signaled we prefer, but rather the least costly available signs of these 'preferences'.
However, kin altruism appears to be a clear case where not all action is signaling: making decisions that are optimized to actually benefit my relatives confers an advantage in total fitness on my genes.
While my awareness and my decisions exist on separate tracks, my decisions seem to come out as they would for a certain preference relation, one of whose attributes is a concern for my relatives' welfare. Less concern, of course, than I consciously think I have for them, but roughly the amount that Hamilton's Rule of kin selection would predict.
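For reference, Hamilton's Rule is the condition under which a gene promoting kin-directed altruism is favored by selection:

```latex
% Hamilton's Rule: an altruistic act toward a relative is selected for when
r B > C
% where r is the coefficient of relatedness (e.g., 1/2 for a full sibling),
% B is the fitness benefit to the relative, and C is the fitness cost to the actor.
```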
My understanding, then, is that I have both conscious and real preferences; the former are what I directly feel, but the latter determine parts of my action and are partially revealed by analysis of how I act. (One component of my real preferences is social, and even includes the preference to keep signaling my conscious preferences to myself and others when it doesn't cost me too much; this at least gives my conscious preferences some role in my actions.) If my actions predictably come out in accordance with the choices of an actual preference relation, then the term "preference" has to be applied there if it's applied anywhere.
There's still the key functional sense in which my anticipation of future world-states (and not just of future mind-states) enters into my real preferences: I feel an emotional response now about the possibility of my sister dying and me never knowing, because that is the form my evaluation of that imagined world takes. Furthermore, the reason I feel that response in that situation is that it is advantageous to have one's real preferences tuned more finely to a "model of the future world" than to a "model of the future mind", because that leads to decisions that actually help when my help is needed.
This is what I mean by saying my real preferences sometimes care about the state of the future world (as modeled by my present mind) rather than just my future experience (ditto). Do you disagree on a functional level, and if so, in what situation do you predict a person would feel or act differently than I'd predict? If our disagreement is just about what sort of language is helpful or misleading when talking about the mind, then I'd be relieved.
The confusion here is that kin altruism is only "about" your relatives from outside of you. Within your own map, there is no such thing as "kin altruism", any more than a thermostat's map contains "temperature regulation". You have features that execute to produce kin altruism, just as a thermostat's features produce temperature regulation. However, just as a thermostat simply tries to make its sensor reading match its setting, so too do your preferences simply try to keep your "sensors" within a desired range.
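A toy sketch of the thermostat point (everything here is illustrative): the controller's internal state holds only a sensor reading and a setpoint, and "temperature regulation" appears nowhere inside it, only in its behavior as seen from outside.

```python
# Toy thermostat: its "map" contains only a sensor reading and a setpoint.
# Nothing inside it represents "temperature regulation"; that description
# exists only for an outside observer watching its behavior.
def thermostat_step(sensor_reading: float, setpoint: float) -> str:
    """Choose the action that pushes the sensor reading toward the setting."""
    if sensor_reading < setpoint - 0.5:
        return "heater on"
    if sensor_reading > setpoint + 0.5:
        return "heater off"
    return "hold"

print(thermostat_step(18.0, 21.0))  # -> "heater on"
```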