A lot of rationalist thinking about ethics and economics assumes we have well-defined utility functions: that we know our preferences between states and events exactly, not only being able to compare them (I prefer X to Y) but assigning precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). Because everyone wants more money, you should in theory even be able to assign exact monetary values to the positive outcomes in your life.
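To make that concrete, here is a minimal sketch of the standard "probability equivalent" construction for assigning those numbers. All the outcomes and indifference probabilities below are made up for illustration:

```python
# A minimal sketch of probability-equivalent ("standard gamble") utility
# elicitation. All outcomes and indifference probabilities here are made up.

# Anchor the scale: utility(best) = 1.0, utility(worst) = 0.0.
BEST, WORST = "dream job", "status quo"

# For each intermediate outcome, record the probability p at which you are
# indifferent between getting it for sure and a gamble that gives BEST with
# probability p (and WORST otherwise). Under the von Neumann-Morgenstern
# construction, that p *is* the outcome's utility.
indifference_probabilities = {
    "learn to paint": 0.2,
    "run a marathon": 0.35,
    "write a novel": 0.6,
}

utilities = {BEST: 1.0, WORST: 0.0, **indifference_probabilities}

def expected_utility(gamble):
    """gamble: dict mapping outcomes to probabilities summing to 1."""
    return sum(p * utilities[outcome] for outcome, p in gamble.items())

# Now any two gambles are comparable by a single number:
print(expected_utility({"write a novel": 0.5, WORST: 0.5}))   # ~0.3
print(expected_utility({"run a marathon": 0.8, WORST: 0.2}))  # ~0.28
```

Once every outcome has a number, every trade-off is just arithmetic; the question is whether you can actually elicit those numbers.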
I ran a small experiment: I made a list of things I wanted and assigned each a point value. I must say this experiment ended in failure. Asking myself "If I had X, would I take Y instead?" and "If I had Y, would I take X instead?" very often yielded a pair of "No"s. Even weighing multiple Xs/Ys against a single Y/X usually led me to decide they were genuinely incomparable (a toy version of this check is sketched below). Outcomes in the same area of life were relatively comparable; those in different areas usually were not.
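For what it's worth, the failure has a precise shape: my answers formed only a partial order, not the total order a utility function requires. A toy sketch of that check, with hypothetical outcomes and answers:

```python
# A toy version of the pairwise check: from raw "if I had X, would I take
# Y instead?" answers, find pairs that are incomparable (a "No" both ways).
# The outcomes and answers below are hypothetical.
from itertools import combinations

outcomes = ["fitness goal", "shipped side project", "savings target"]

# would_trade[(x, y)] is True if, holding x, I would swap it for y.
would_trade = {
    ("fitness goal", "shipped side project"): False,
    ("shipped side project", "fitness goal"): False,  # "No" both ways
    ("fitness goal", "savings target"): True,
    ("savings target", "fitness goal"): False,
    ("shipped side project", "savings target"): True,
    ("savings target", "shipped side project"): False,
}

incomparable = [
    (x, y)
    for x, y in combinations(outcomes, 2)
    if not would_trade[(x, y)] and not would_trade[(y, x)]
]
print(incomparable)  # [('fitness goal', 'shipped side project')]

# A real-valued utility function would make every pair comparable, so even
# one entry in `incomparable` rules out assigning consistent numbers.
```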
I finally settled on some vague numbers and evaluated the results two months later. I had succeeded greatly in some areas and not at all in others, and the only thing that was clear was that the numbers I had assigned were completely wrong.
This leads me to two possible conclusions:
- I don't know how to construct utility functions, but they are a good model of my preferences, and I could learn how to do it.
- Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
Has anybody else tried assigning numeric values to different outcomes outside a very narrow subject matter? Have you succeeded and want to share some pointers? Or failed and want to share some thoughts on that?
I understand that the details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.
That preference is not universal, which to me makes it absolutely part of the map. And it's not just the fictional evidence of Cypher wanting to go back into the Matrix and forget: guys routinely pay women for various forms of fantasy fulfillment, willingly suspending disbelief in order to be deceived.
Not enough? How about the experimental philosophers who re-ran the virtual-world thought experiment until they found that people's decision about living in a fantasy world they believed was real depended heavily on 1) whether they had already been living in the fantasy, 2) whether their experience of life would significantly change, and 3) whether their friends and loved ones were also in the fantasy world.
If anything, those results should be quite convincing that it's philosophers and extreme rationalists who have a pathological fear of deception, rather than there being an inbuilt human preference for actually knowing the truth... and that most likely, if we do have an inbuilt preference against deception, it's aimed at obtaining social consensus rather than at finding truth.
All that having been said, I will concede that perhaps you could find some irreducible microkernel of "map" that actually corresponds to "territory". I just don't think it makes sense (on the understanding-people side) to worry about it. If you're trying to understand what people want or how they'll behave, the territory is absolutely the LAST place you should be looking, since the distinctions they're using, and the meanings they attach to those distinctions, are 100% in the map.
I don't see how it's supposed to follow, from the fact that not everyone prefers not-being-deceived, that those who claim to prefer not-being-deceived must be wrong about their own preferences. Could you explain why you think it does?
The claim others are defending here (as I understand it) is not that everyone's preferences are really over the territory; merely that some people's are. Pointing out that some people's preferences aren't about the territory isn't a counterargument to that claim.