A lot of rationalist thinking about ethics and economics assumes we have well-defined utility functions: we know our preferences between states and events exactly, and can not only compare them (I prefer X to Y) but also assign precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). Because everyone wants more money, you should in theory even be able to assign exact numerical values (say, dollar amounts) to positive outcomes in your life.
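For instance, on the standard expected-utility reading (and taking the utility of the status quo as zero), being indifferent between a p% chance of X and a q% chance of Y pins down a ratio of utilities:

$$p \cdot U(X) = q \cdot U(Y) \quad\Rightarrow\quad \frac{U(X)}{U(Y)} = \frac{q}{p}$$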
I did a small experiment: I made a list of things I wanted and assigned each a point value. I must say the experiment ended in failure. Asking myself "If I had X, would I take Y instead?" and "If I had Y, would I take X instead?" very often produced a pair of "No"s. Even considering trades of multiple Xs for one Y (or multiple Ys for one X) usually led me to conclude that they were simply incomparable. Outcomes in the same area of life were relatively comparable; those in different areas usually were not.
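As a rough sketch of what checking such a list might look like (the outcomes, point values, and swap answers below are made-up illustrations, not my actual list), one can compare the assigned numbers against the gut answers to the pairwise trade questions:

```python
# A minimal sketch of the point-value experiment. All outcomes, values,
# and answers below are hypothetical illustrations.

# Point values assigned up front to things I want.
values = {
    "finish side project": 30,
    "get promoted": 80,
    "run a marathon": 50,
}

# Gut answers to "If I had X, would I take Y instead?"
# swap[(X, Y)] == True means: holding X, I would trade it for Y.
swap = {
    ("finish side project", "get promoted"): True,
    ("get promoted", "finish side project"): False,
    ("finish side project", "run a marathon"): False,
    ("run a marathon", "finish side project"): False,   # a pair of "No"s
    ("get promoted", "run a marathon"): False,
    ("run a marathon", "get promoted"): True,
}

# Compare the gut answers with what the point values predict.
for (x, y), would_swap in swap.items():
    predicted = values[y] > values[x]
    if would_swap != predicted:
        print(f"Inconsistent: holding {x!r}, gut says "
              f"{'swap' if would_swap else 'keep'}, "
              f"but the numbers say {'swap' if predicted else 'keep'}.")

# Pairs where both directions are "No" look incomparable rather than ranked.
for x in values:
    for y in values:
        if x < y and not swap.get((x, y), False) and not swap.get((y, x), False):
            print(f"Apparently incomparable: {x!r} vs {y!r}")
```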
I finally settled on some vague numbers and evaluated the results two months later. I made a lot of progress in some areas and none at all in others, and the only clear conclusion was that the numbers I had assigned were completely wrong.
This leads me to two possible conclusions:
- I don't know how to construct utility functions, but they are a good model of my preferences, and I could learn how to do it.
- Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
Has anybody else tried assigning numeric values to different outcomes outside a very narrow subject area? Have you succeeded and want to share some pointers? Or failed and want to share some thoughts on that?
I understand that the details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.
Utility is about the territory in the same sense that the map is about the territory: the map tells us the way the territory is; utility tells us the way we want the territory to be. We non-wireheaders want an accurate map because it's the territory we care about.
Supposing utility were not about the territory but about the map, we would get people who want nothing more than to sabotage their own map-making capabilities. If the territory is not what we care about, maintaining the correspondence of map to territory would be a pointless waste of effort. Wireheading would look like an unambiguously good idea, not just to some people but to everyone.
Conchis' example of wanting his friends to really like and respect him is correct. He may have failed to explicitly point out that he also has an unrelated preference for not being beaten up. He's also in the unfortunate position of valuing something he can't know about without using long chains of messy inductive inferences. But his values are still about the territory, and he wants his map to accurately reflect the territory because it's the territory he cares about.
I am only saying that the entire stack of concepts you have just mentioned exists only in your map.
Permit me to translate: supposing utility...