A lot of rationalist thinking about ethics and economics assumes we have very well-defined utility functions - knowing our preferences between states and events exactly, not only being able to compare them (I prefer X to Y), but assigning precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). Because everyone wants more money, money can serve as a common yardstick: in theory you should even be able to assign exact numerical values to the positive outcomes in your life.
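To make that lottery comparison concrete, here is a tiny sketch - the utility numbers and probabilities are invented purely for illustration - of how an expected-utility calculation treats a p% chance of X as equal to a q% chance of Y:

```python
# Minimal sketch of the lottery comparison above; the utilities and
# probabilities are hypothetical, chosen only to make the arithmetic visible.

def expected_utility(probability, utility):
    """Expected utility of a lottery that pays `utility` with `probability`."""
    return probability * utility

u_x = 40.0   # hypothetical utility of outcome X
u_y = 80.0   # hypothetical utility of outcome Y

p = 0.50            # a 50% chance of X...
q = p * u_x / u_y   # ...is matched by a 25% chance of Y (0.5 * 40 / 80)

assert expected_utility(p, u_x) == expected_utility(q, u_y)  # both equal 20.0
```

With numbers like these, any two lotteries over outcomes become directly comparable; the question in this post is whether such numbers can be assigned to one's actual preferences at all.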
I did a small experiment of making a list of things I wanted and assigning each a point value. I must say this experiment ended in failure - asking myself "If I had X, would I take Y instead?" and "If I had Y, would I take X instead?" very often produced a pair of "No"s. Even weighing multiple Xs/Ys against a single Y/X usually led me to decide they were really incomparable. Outcomes related to a similar subject were relatively comparable; those in different areas of life usually were not.
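For illustration, the kind of consistency check I was attempting looks roughly like the following toy sketch - the items, point values, and gut answers here are invented, but the "pair of No"s pattern is the one that kept showing up:

```python
# Toy sketch of the consistency check described above. The items, point
# values, and "would I swap?" answers are invented for illustration only.

points = {
    "finish thesis": 80,
    "learn to cook well": 30,
    "deeper friendships": 70,
}

# swap[(have, offered)] == True means: "If I had `have`, I would take `offered` instead."
swap = {
    ("finish thesis", "learn to cook well"): False,
    ("learn to cook well", "finish thesis"): True,   # consistent: thesis scores higher
    ("finish thesis", "deeper friendships"): False,
    ("deeper friendships", "finish thesis"): False,  # a pair of "No"s - incomparable?
}

for (have, offered), would_swap in swap.items():
    implied = points[offered] > points[have]   # what the point values say the answer should be
    if would_swap != implied:
        print(f"Inconsistent: holding {have!r}, offered {offered!r}: "
              f"points say {implied}, gut says {would_swap}")
```

If the point values were a faithful model of my preferences, the loop would print nothing; in practice the cross-domain pairs kept coming out inconsistent.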
I finally settled on some vague numbers and evaluated the results two months later. My success in some areas was really big, in others nonexistent, and the only thing that was clear was that the numbers I had assigned were completely wrong.
This leads me to two possible conclusions:
- I don't know how to construct utility functions, but they are a good model of my preferences, and I could learn how to do it.
- Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
Has anybody else tried assigning numeric values to different outcomes outside a very narrow subject matter? Did you succeed and want to share some pointers? Or fail and want to share some thoughts on that?
I understand that the details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.
The confusion you have here is that kin altruism is only "about" your relatives from outside of you. Within the map that you have, there is no such thing as "kin altruism", any more than a thermostat's map contains "temperature regulation". You have features that execute to produce kin altruism, just as a thermostat's features produce temperature regulation. However, just as a thermostat simply tries to make its sensor match its setting, so too do your preferences simply try to keep your "sensors" within a desired range.
This is true regardless of the evolutionary, signaling, functional, or other assumed "purposes" of your preferences, because the reality in which those other concepts exist is not contained within the system those preferences operate in. It is a self-applied mind projection fallacy to think otherwise, for reasons that have been done utterly to death in my interactions with Vladimir Nesov in this thread. If you follow that logic, you'll see how preferences, aboutness, and "natural categories" can be completely reduced to illusions of the mind projection fallacy upon close examination.
Well, if this is just a disagreement over whether our typical uses of the word "about" are justified, then I'm satisfied with letting go of this thread; is that the case, or do you think there is a disagreement on our expectations for specific human thoughts and actions?
I suggest, by the way, that your novel backwards application of the Mind Projection Fallacy needs its own name so as not to get it confused with the usual one. (Eliezer's MPF denotes the problem with exporting our mental/intentional concepts outside the sphere of human beings; you seem to be asserting that we imported the notion of preferences from the external world in the first place.)