A lot of rationalist thinking about ethics and economics assumes we have well-defined utility functions - that we know our preferences between states and events exactly, not only being able to compare them (I prefer X to Y), but assigning precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). Since everyone wants more money, you should in theory even be able to assign exact monetary values to positive outcomes in your life.
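To make the "p% chance of X equals q% chance of Y" idea concrete, here's a toy sketch of what such a comparison would look like; the outcomes, point values, and probabilities are made up purely for illustration:

```python
# Illustrative only: a toy expected-utility comparison.
# The outcomes and utility numbers below are hypothetical.

utilities = {
    "new_job": 100.0,      # made-up point values
    "finish_book": 60.0,
}

def expected_utility(outcome, probability):
    """Expected utility of receiving `outcome` with the given probability."""
    return probability * utilities[outcome]

# "p% chance of X equals q% chance of Y" means the expected utilities match:
# 0.6 * U(new_job) == 1.0 * U(finish_book)  ->  60.0 == 60.0
print(expected_utility("new_job", 0.6) == expected_utility("finish_book", 1.0))
```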
I did a small experiment: making a list of things I wanted and assigning each a point value. I must say this experiment ended in failure - asking myself "If I had X, would I take Y instead?" and "If I had Y, would I take X instead?" very often resulted in a pair of "No"s. Even weighing multiple Xs or Ys against a single Y or X usually led me to conclude they're really incomparable. Outcomes related to a similar subject were relatively comparable; those in different areas of life usually were not.
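For anyone who wants to repeat the experiment, here is a rough sketch of the consistency check I was implicitly running. The outcome names and answers are hypothetical:

```python
# Sketch of the pairwise trade test from the experiment.
# If preferences fit on a single point scale, then for any two outcomes
# at least one of "swap X for Y" / "swap Y for X" should be acceptable.
# A pair of "No"s means no assignment of numbers can reproduce the answers.

# Hypothetical answers to "If I had A, would I take B instead?"
would_swap = {
    ("better_health", "interesting_job"): False,
    ("interesting_job", "better_health"): False,
}

checked = set()
for a, b in would_swap:
    if (b, a) in checked:
        continue
    checked.add((a, b))
    if not would_swap[(a, b)] and not would_swap.get((b, a), True):
        print(f"{a} / {b}: a pair of 'No's - incomparable on any single scale")
```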
I finally decided on some vague numbers and evaluated the results two months later. My success in some areas was substantial, in others nonexistent, and the only thing that was clear was that the numbers I had assigned were completely wrong.
This leads me to two possible conclusions:
- I don't know how to draw utility functions, but they are a good model of my preferences, and I could learn how to do it.
- Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
Has anybody else tried assigning numeric values to different outcomes outside a very narrow subject area? Have you succeeded and want to share some pointers? Or failed and want to share some thoughts on that?
I understand that the details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.
And does this alarm system have "preferences" that are "about" reality? Or does it merely generate outputs in response to inputs, according to the "values it implements"?
My argument is simply that humans are no different than this hypothetical alarm system; the things we call preferences are no different than variables in the alarm system's controller - an implementation of values that are not our own.
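To make the analogy concrete, here is a toy sketch of such a controller; the threshold and names are purely hypothetical:

```python
# A toy alarm controller. Its only "preference" is the threshold variable,
# set by whoever built it; the controller just maps inputs to outputs.

SMOKE_THRESHOLD = 0.7   # the maker's value, baked in as a number

def alarm_output(smoke_level: float) -> bool:
    """Sound the alarm whenever the input exceeds the threshold."""
    return smoke_level > SMOKE_THRESHOLD

# Nothing in this mapping is "about" fire; it is just a comparison of numbers.
print(alarm_output(0.9))  # True
print(alarm_output(0.2))  # False
```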
If there are any "preferences about reality" in the system, they belong to the maker of the alarm system, as it is merely an implementation of the maker's values.
By analogy, if our preferences are the implementation of any values, they are the "values" of natural selection, not our own.
If you now say that natural selection doesn't have any preferences or values, then we are left with no preferences anywhere - merely an isomorphism between control systems and their environments. Saying this isomorphism is "about" something is saying that a mental entity (the "about" relationship) exists in the real world, i.e., supernaturalism.
In short, what I'm saying is that anybody who argues human preferences are "about" reality is anthropomorphizing the alarm system.
However, if you say that the alarm system does have preferences by some reductionistic definition of "preference", and you assert that human preference is exactly the same, then we are still left to determine the manner in which these preferences are "about" reality.
If nobody made the alarm system, but it just happened to be formed by a spontaneous jumbling of parts, can it still be said to have preferences? Are its "preferences" still "about" reality in that case?
Both. You are now trying to explain away the rainbow by insisting that it consists of atoms, which can't in themselves possess the properties of a rainbow.