A lot of rationalist thinking about ethics and economics assumes we have very well-defined utility functions: we know our preferences between states and events exactly, and can not only compare them (I prefer X to Y) but assign precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). Because everyone wants more money, it can serve as a common unit, so you should theoretically even be able to assign exact numerical (monetary) values to the positive outcomes in your life.
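To make the arithmetic concrete, here is a minimal sketch of what a fully specified utility function would buy you (Python; the outcomes and numbers are invented purely for illustration): once every outcome has a number, any two gambles over outcomes become directly comparable through their expected utilities.

```python
# A toy utility function: outcomes mapped to made-up numeric values.
# Both the outcomes and the numbers are hypothetical, just to illustrate the idea.
utility = {
    "promotion": 50.0,
    "month_long_vacation": 30.0,
    "new_car": 20.0,
}

def expected_utility(lottery):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

# "p% chance of X equals q% chance of Y" just means the expected utilities match:
lottery_a = {"promotion": 0.40}            # 40% chance of a promotion
lottery_b = {"month_long_vacation": 0.65}  # 65% chance of a long vacation

print(expected_utility(lottery_a))  # 0.40 * 50 = 20.0
print(expected_utility(lottery_b))  # 0.65 * 30 = 19.5
# With fully specified numbers, you should be indifferent whenever these are equal.
```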
I ran a small experiment: I made a list of things I wanted and assigned each a point value. I must say the experiment ended in failure. Asking myself "If I had X, would I take Y instead?" and "If I had Y, would I take X instead?" very often produced a pair of "No"s. Even weighing multiple Xs/Ys against one Y/X usually led me to conclude they were really incomparable. Outcomes related to a similar subject were relatively comparable; those in different areas of life usually were not.
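For what it's worth, the failure mode has a crisp form. Here is a rough sketch (Python, with hypothetical outcomes and answers) of what those trade questions imply as constraints on a utility function: every "would you trade?" answer becomes an inequality, and a pair of "No"s forces an exact equality, leaving no way to say "these two simply don't compare".

```python
# Hypothetical answers to pairwise "would you trade?" questions.
# answers[(X, Y)] == True means "if I had X, I would take Y instead",
# i.e. Y is strictly preferred to X.
answers = {
    ("new_car", "promotion"): True,               # would trade the car for the promotion
    ("promotion", "new_car"): False,
    ("month_long_vacation", "promotion"): False,  # the pair of "No"s:
    ("promotion", "month_long_vacation"): False,  # neither trade looks appealing
}

def implied_constraints(answers):
    """Translate trade answers into constraints any utility function must satisfy."""
    constraints = []
    for (x, y), would_trade in answers.items():
        if would_trade:
            constraints.append(f"U({y}) > U({x})")
        else:
            constraints.append(f"U({y}) <= U({x})")
    return constraints

for c in implied_constraints(answers):
    print(c)
# A pair of "No"s yields U(X) <= U(Y) and U(Y) <= U(X), i.e. U(X) == U(Y):
# the model reads "incomparable" as "exactly equal", which is not what it felt like.
```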
I finally settled on some vague numbers and evaluated the results two months later. My success in some areas was substantial, in others nonexistent, and the only thing that was clear was that the numbers I had assigned were completely wrong.
This leads me to two possible conclusions:
- I don't know how to construct utility functions, but they are a good model of my preferences, and I could learn how to do it.
- Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
Has anybody else tried assigning numeric values to different outcomes outside a very narrow subject matter? Did you succeed and want to share some pointers? Or fail and want to share some thoughts on that?
I understand that the details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.
My point is that your entire argument consists of pointing to the map and claiming it's the territory. In cases where reality and your belief conflict, you won't know it. Your behavior will be exactly the same either way, so the distinction is moot.
When you try to imagine "my spouse is cheating and I think she isn't", you aren't imagining that situation; you are actually imagining yourself perceiving that to be the case. That is, your map contains the idea of being deceived, classifies this as an example of being deceived, and therefore marks it as bad.
None of that has anything to do with the reality over which you claim to be expressing a preference, because if it were the reality, you would not know you were being deceived.
This is just one neat little example of systemic bias in the systems we use to represent and reflect on preferences. They are designed to react to perceived circumstances, rather than to produce consistent reasoning about how things ought to be. So if you ever imagine that they are "about" reality, outside the relatively narrow range of the here-and-now moment, you are on the path to error.
And just as errors accumulate in Newtonian physics as you approach the speed of light, so too do reasoning errors tend to accumulate as you turn your reasoning towards (abstract) self-reflection.