A lot of rationalist thinking about ethics and economics assumes we have well-defined utility functions - knowing our preferences between states and events exactly, not only being able to compare them (I prefer X to Y), but assigning precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). And because everyone wants more money, you should theoretically even be able to assign exact numerical values to the positive outcomes in your life.
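For concreteness: the textbook way to pin down such numbers is the von Neumann-Morgenstern "standard gamble". Here is a minimal Python sketch of it, with a hypothetical `prefers_gamble` oracle and made-up utilities standing in for a person actually answering questions:

```python
# Standard-gamble elicitation sketch: fix U(worst) = 0 and U(best) = 1,
# then for each outcome find the probability p at which you are indifferent
# between the outcome for sure and a lottery (p: best, 1-p: worst).
# That indifference point p is the outcome's utility.

def elicit_utility(outcome, prefers_gamble, tolerance=0.01):
    """Bisect on p until indifference; returns U(outcome) in [0, 1]."""
    lo, hi = 0.0, 1.0
    while hi - lo > tolerance:
        p = (lo + hi) / 2
        if prefers_gamble(outcome, p):
            # The gamble is still preferred at probability p, so U(outcome) < p.
            hi = p
        else:
            lo = p
    return (lo + hi) / 2

# Hypothetical respondent whose true utilities we pretend to know,
# just to show the loop converging to them.
true_utility = {"new job": 0.7, "finish novel": 0.4}
answers = lambda outcome, p: p > true_utility[outcome]

for outcome in true_utility:
    print(outcome, round(elicit_utility(outcome, answers), 2))
```

The catch, of course, is that the whole procedure assumes the person can give consistent answers to every such question - which is exactly where my experiment broke down.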
I tried a small experiment: I made a list of things I wanted and assigned each a point value. I must say this experiment ended in failure - asking myself "If I had X, would I take Y instead?" and "If I had Y, would I take X instead?" very often resulted in a pair of "No"s. Even weighing multiple Xs/Ys against one Y/X usually led me to decide they were genuinely incomparable. Outcomes related to similar subjects were relatively comparable; those in different areas of life usually were not.
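A pair of "No"s is a violation of the completeness axiom, which (along with transitivity) any utility function requires. Here is a small Python sketch of the checks a point-value list implicitly claims to pass; the outcomes and answers are invented for illustration:

```python
from itertools import combinations, permutations

# comparisons[(x, y)] is True if, already having x, I would swap it for y.
# The failure mode described above: neither direction is a "yes".
comparisons = {
    ("health", "friends"): False,
    ("friends", "health"): False,   # the incomparable pair
    ("money", "health"): True,
    ("health", "money"): False,
}

def strictly_prefers(a, b):
    # a is strictly preferred to b if, having b, I would swap it for a.
    return comparisons.get((b, a), False)

items = {x for pair in comparisons for x in pair}

# Completeness: for every pair, at least one direction must be a "yes"
# (or genuine indifference, which point values could still represent).
for a, b in combinations(items, 2):
    if (a, b) in comparisons and (b, a) in comparisons:
        if not comparisons[(a, b)] and not comparisons[(b, a)]:
            print(f"incomplete: neither {a} nor {b} is preferred")

# Transitivity: a > b and b > c must imply a > c (no cycles).
for a, b, c in permutations(items, 3):
    if strictly_prefers(a, b) and strictly_prefers(b, c) and strictly_prefers(c, a):
        print(f"cycle: {a} > {b} > {c} > {a}")
```

With this example data the incomparable pair is flagged and no cycle exists - which matches my experience: the answers were not so much inconsistent as simply missing.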
I finally settled on some vague numbers and evaluated the results two months later. I had succeeded greatly in some areas and not at all in others, and the only thing that was clear was that the numbers I had assigned were completely wrong.
This leads me to two possible conclusions:
- I don't know how to construct utility functions, but they are a good model of my preferences, and I could learn how to do it.
- Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
Has anybody else tried assigning numeric values to different outcomes outside a very narrow subject matter? Did you succeed and want to share some pointers? Or fail and want to share some thoughts on that?
I understand that the details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.
Of course they could. And they would not get as good results from either an experiential or practical perspective as the person who explicitly committed to actual, concrete results, for the reasons previously explained.
The brain makes happen what you decide to have happen, at the level of abstraction you specify. If you decide in the abstract to be a good person, you will only be a good person in the abstract.
In the same way, if you "precommit to reflective consistency", then reflective consistency is all that you will get.
It is more useful to commit to obtaining specific, concrete, desired results, since you will then obtain specific, concrete assistance from your brain for achieving those results, rather than merely abstract, general assistance.
Edit to add: In particular, note that a precommitment to reflective consistency does not rule out the possibility of one's exercising selective attention and rationalization as to which calculations to perform or observe. This sort of "commit to being a certain kind of person" thing tends to produce hypocrisy in practice, when used in the abstract. So much so, in fact, that it seems to be an "intentionally" evolved mechanism for self-deception and hypocrisy. (Which is why I consider it a particularly heinous form of error to try to use it to escape the need for concrete commitments -- the only thing I know of that saves one from hypocrisy!)
I can't understand you.