A lot of rationalist thinking about ethics and economics assumes we have well-defined utility functions: we know our preferences between states and events exactly, and can not only compare them (I prefer X to Y) but assign precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). Because everyone wants more money, money can serve as a common unit, so you should theoretically even be able to assign exact numerical values to positive outcomes in your life.
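Under expected-utility theory, that "p% chance of X equals q% chance of Y" claim has a precise form: you are indifferent when p·U(X) = q·U(Y). A minimal sketch of what assigning such numbers would commit you to (the outcome labels and point values here are hypothetical, purely for illustration):

```python
# Hypothetical point values for outcomes; the labels are illustrative only.
utility = {"new job": 100.0, "finish novel": 40.0}

def indifference_prob(u_x: float, u_y: float, p: float) -> float:
    """Given a p chance of X, return the q that makes a q chance of Y
    equally attractive under expected utility: p * U(X) == q * U(Y)."""
    return p * u_x / u_y

# A 20% chance of the 100-point outcome should trade evenly against
# a 50% chance of the 40-point outcome: 0.2 * 100 == 0.5 * 40.
q = indifference_prob(utility["new job"], utility["finish novel"], p=0.2)
```

The experiment described below amounts to testing whether any assignment of numbers like these actually reproduces one's felt preferences.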
I ran a small experiment: I made a list of things I wanted and assigned each a point value. I must say the experiment ended in failure. Asking myself "If I had X, would I take Y instead?" and "If I had Y, would I take X instead?" very often produced a pair of "No"s. Even weighing multiple Xs/Ys against a single Y/X usually led me to conclude they were genuinely incomparable. Outcomes related to a similar subject were relatively comparable; those in different areas of life usually were not.
I finally settled on some vague numbers and evaluated the results two months later. In some areas my success was substantial, in others nonexistent, and the only thing that was clear was that the numbers I had assigned were completely wrong.
This leads me to two possible conclusions:
- I don't know how to construct utility functions, but they are a good model of my preferences, and I could learn how to do it.
- Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
Has anybody else tried assigning numeric values to different outcomes outside a very narrow domain? If you succeeded, would you share some pointers? If you failed, would you share some thoughts on that?
I understand that the details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.
And a thermostat's map is also "entangled" with the territory, but as loqi pointed out, what it really prefers is only that its input sensor match its temperature setting!
I am not saying there are no isomorphisms between the shape of our preferences and the shape of reality, I am saying that assuming this isomorphism means the preferences are therefore "about" the territory is mind projection.
If you look at a thermostat, you can project that it was made by an optimizing process that "wanted" it to do certain things in response to the territory, and that thus the thermostat's map is "about" the territory. In the same way, you can look at a human and project that it was made by an optimizing process (evolution) that "wanted" it to do certain things in response to the territory.
However, the "aboutness" of the thermostat does not reside in the thermostat; it resides in the maker of the thermostat, if it can be said to exist at all! (In fact, this "aboutness" cannot exist, because it is not a material entity; it's a mental entity - the idea of aboutness.)
So despite the existence of inputs and outputs, both the human and the thermostat do their "preference" calculations inside the closed box of their respective models of the world.
It just so happens that humans' model of the world also includes a Mind Projection device, that causes humans to see intention and purpose everywhere they look. And when they look through this lens at themselves, they imagine that their preferences are about the territory... which then keeps them from noticing various kinds of erroneous reasoning and subgoal stomps.
For that matter, it keeps them from noticing things like the idea that if you practice being a pessimist, nothing good can last for you, because you've trained yourself to find something bad in anything. (And vice versa for optimists.)
Ostensibly, optimism and pessimism are "about" the outside world, but in fact, they're simply mechanical, homeostatic processes very much like a thermostat.
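The thermostat analogy can be made concrete. Note that nothing in a sketch like the following refers to the room itself; the device's entire "preference" is that its sensor reading match its setpoint (the function and values here are an illustrative toy, not drawn from the original discussion):

```python
def thermostat_step(sensor_reading: float, setpoint: float,
                    tolerance: float = 0.5) -> str:
    """One tick of a homeostatic loop. The only thing 'preferred' is
    agreement between the input signal and the internal setting --
    the territory (the actual room) never appears in the computation."""
    if sensor_reading < setpoint - tolerance:
        return "heat"
    if sensor_reading > setpoint + tolerance:
        return "cool"
    return "idle"

action = thermostat_step(sensor_reading=18.0, setpoint=21.0)  # "heat"
```

On this view, an ingrained pessimism or optimism works the same way: a fixed rule driving an internal signal toward an internal setting, whatever the outside world is doing.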
I am not a solipsist, nor do I believe that people "create their own reality" with respect to the actual territory. What I'm saying is that people are deluded about the degree of isomorphism between their preferences and reality, because they confuse the map with the territory. And even with maximal isomorphism between preference and reality, they are still living in the closed box of their model.
It is reasonable to assume that existence actually exists, but all we can actually reason about is our experience of it, "inside the box".