A lot of rationalist thinking about ethics and economics assumes we have well-defined utility functions - knowing our preferences between states and events exactly, not only being able to compare them (I prefer X to Y), but assigning precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). Because everyone wants more money, you should in theory even be able to assign exact monetary values to the positive outcomes in your life.
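(For concreteness, the standard story for where such numbers come from is the von Neumann-Morgenstern lottery procedure: pin the worst outcome you're considering at 0 and the best at 1, then for any other outcome find the probability p at which a gamble between those two reference points feels exactly as good as having that outcome for certain. Here's a minimal sketch in Python; the outcomes and indifference probabilities are invented purely for illustration.)

```python
# Minimal sketch of von Neumann-Morgenstern utility elicitation.
# Reference outcomes: the worst considered gets utility 0, the best gets 1.
# For any other outcome X, U(X) is the probability p at which you are
# indifferent between "X for certain" and "best with probability p,
# worst otherwise". All outcomes and numbers below are hypothetical.

def elicit_utility(indifference_prob: float) -> float:
    """U(X) = p, the indifference probability against the reference lottery."""
    assert 0.0 <= indifference_prob <= 1.0
    return indifference_prob

utilities = {
    "worst outcome considered": 0.0,
    "best outcome considered": 1.0,
    "finish the big project": elicit_utility(0.6),
    "two-week vacation": elicit_utility(0.4),
}

# The theory then prices gambles by expected utility: a 50% chance of
# finishing the project is worth 0.5 * 0.6 = 0.3, i.e. less than the
# vacation for certain (0.4).
for outcome, u in sorted(utilities.items(), key=lambda kv: kv[1]):
    print(f"U({outcome!r}) = {u}")
```

In principle this makes every pair of outcomes comparable through the shared 0-1 scale. What follows is what happened when I actually tried it.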
I did a small experiment: I made a list of things I wanted and assigned each a point value. I have to say the experiment ended in failure - asking myself "If I had X, would I take Y instead?" and "If I had Y, would I take X instead?" very often yielded a pair of "No"s. Even weighing multiple Xs/Ys against a single Y/X usually led me to conclude they're genuinely incomparable. Outcomes in the same area of life were relatively comparable; outcomes in different areas usually were not.
I finally settled on some vague numbers and evaluated the results two months later. I had great success in some areas and none at all in others, and the only clear conclusion was that the numbers I had assigned were completely wrong.
This leads me to two possible conclusions:
- I don't know how to construct utility functions, but they are a good model of my preferences, and I could learn how to do it.
- Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
Has anybody else tried assigning numeric values to different outcomes outside a very narrow subject matter? Did you succeed and want to share some pointers? Or fail and want to share some thoughts on that?
I understand that the details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.
It's funny that you talk of wordplay a few comments back, as it seems that you're the one making a technically-correct-but-not-practically-meaningful argument here.
If I may attempt to explore your position: Suppose someone claims a preference for "blue skies". The wirehead version of this that you endorse is "I prefer experiences that include the perception I label 'blue sky'". The "anti-wirehead" version you seem to be arguing against is "I prefer actual states of the world where the sky is actually blue".
You seem to be saying that since the preference is really about the experience of blue skies, it makes no sense to talk about the sky actually being blue. Chasing after external definitions involving photons and atmospheric scattering is beside the point, because the actual preference wasn't formed in terms of them.
This becomes another example of the general rule that it's impossible to form preferences directly about reality, because "reality" is just another label on our subjective map.
As far as specifics go, I think the point you make is sound: Most (all?) of our preferences can't just be about the territory, because they're phrased in terms of things that themselves don't exist in the territory, but at best simply point at the slice of experience labeled "the territory".
That said, I think this perspective grossly downplays the practical importance of that label. It has very distinct subjective features connecting in special ways to other important concepts. For the non-solipsists among us, perhaps the most important role it plays is establishing a connection between our subjective reality and someone else's. We have reason to believe that it mediates experiences we label as "physical interactions" in a manner causally unaffected by our state of mind alone.
When I say "I prefer the galaxy not to be tiled by paperclips", I understand that, technically, the only building blocks I have for that preference are labeled experiences and concepts that aren't themselves the "stuff" of their referents. In fact, I freely admit that I'm not exactly sure what constitutes "the galaxy", but the preference I just expressed actually contains a massive number of implicit references to other concepts that I consider causally connected to it via my "external reality" label. What's more, most people I communicate with can easily access a seemingly similar set of connections to their "external reality" label, assuming they don't talk themselves out of it.
The territory concept plays a role similar to that of an opaque reference in a programming language. Its state may not be invariant, but its identity is. I don't have to know any true facts about its actual structure for it to be meaningful and useful. Just as no explicit knowledge of photons is required to subjectively perceive a blue sky, the ontological status of my territory concept doesn't really change its meaning or importance, which is acquired through its intimate connection to massive amounts of raw experience.
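To make the analogy concrete, here's a minimal sketch in Python (the Territory class and its fields are invented for illustration): the handle's hidden state changes freely, but its identity stays fixed, and code holding the reference can use it meaningfully without ever inspecting what it "really" is.

```python
# Sketch of the "opaque reference" analogy: identity is invariant,
# state is not, and users of the handle never see its internals.
# The Territory class here is purely illustrative.

class Territory:
    """Opaque handle; callers shouldn't depend on its internals."""
    def __init__(self):
        self._state = {"sky": "blue"}   # hidden, mutable state

    def observe(self, key):
        return self._state.get(key)

    def perturb(self, key, value):      # the state changes over time...
        self._state[key] = value

territory = Territory()
before = id(territory)                  # identity of the referent
territory.perturb("sky", "grey")        # ...but it's still the same referent
after = id(territory)

assert before == after                  # same identity, new state
print(territory.observe("sky"))         # 'grey'
```

The assert is the point of the analogy: mutation changes what you observe through the handle, never which thing the handle picks out.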
Claiming my preferences about the territory are really just about my map is true in the narrow technical sense that it's impossible for me to refer directly to "reality", but doing so completely glosses over the deep, implicit connections expressed by such preferences, first and foremost the connection between myself and the things I label "other consciousnesses". In contrast, the perception of these connections seems to come for free by "confusing" the invariant identity of my territory concept with the invariant "existence" of a real external world. The two notions are basically isomorphic, so where's the value in the distinction?
That depends on whether you're talking about "blue" in terms of human experience, or whether you're talking about wavelengths of light. The former is clearly "map", whereas discussing wavelengths of light at least might be considered "about" the territory in some sense.
However, if you are talking about preferences, I don't think there's any way for a preference to escape ...