orthonormal comments on Post Your Utility Function - Less Wrong
Of course, because the immediate pain of the thought of choosing B would outweigh the longer-term lesser pain of the thought of losing contact with your sister.
This has nothing to do with whether the events actually occur, and everything to do with your mental mapping of what experiencing those conditions would be like, as you imagine them for the purpose of making a decision.
That is, the model you make of the future may refer to a hypothetical reality, but the thing you actually evaluate is not that reality, but your own reaction to that reality -- your present-tense experience in response to a constructed fiction built from previous experiences.
It so happens that there is some correspondence between this (real) process and the way we would prefer to think we establish and evaluate our preferences. Specifically, both models will generate similar results, most of the time. It's just that the reasons we end up with for the responses are quite different.
But calling that latter concept "territory" is still a category error, because what you are using to evaluate it is still your perception of how you would experience the change.
We do not have preferences that are not about experience or our emotional labeling thereof; to the extent that we have "rational" preferences, it is because they will ultimately lead to some desired emotion or sensation.
However, our brains are constructed in such a way as to allow us to plausibly overlook and deny this fact, so that we can be honestly "sincere" in our altruism... specifically by claiming that our responses are "really" about things outside ourselves.
For example, your choice of "A" allows you to self-signal altruism, even if your sister would actually prefer death to being imprisoned on Mars for the rest of her life! Your choice isn't about making her life better, it's about you feeling better for the brief moment that you're aware you did something.
(That is, if you cared about something closer to the reality of what happens to your sister, rather than your experience of it, you'd have hesitated in that choice long enough to ask Omega whether she would prefer death to being imprisoned on Mars.)
Be charitable in your interpretation, and remember the Least Convenient Possible World principle. I was presuming that the setup was such that being alive on Mars wouldn't be a 'fate worse than death' for her; if it were, I'd choose differently. If you prefer, take the same hypothetical but with me on Mars, choosing whether she stayed alive on Earth; or let choice B include subjecting her to an awful fate rather than death.
I would say rather that my reaction is my evaluation of an imagined future world. The essence of many decision algorithms is to model possible futures and compare them to some criteria. In this case, I have complicated unconscious affective criteria for imagined futures (which dovetail well with my affective criteria for states of affairs I directly experience), and my affective reaction generally determines my actions.
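As a minimal sketch of that structure (all names here are hypothetical, and the affective criterion is deliberately an opaque black box):

```python
# Minimal sketch of "model possible futures and compare them to criteria".
# Illustrative names only; nothing here is a real decision-theory API.

def choose(actions, imagine, affective_score):
    """Pick the action whose *imagined* outcome scores best.

    actions: candidate actions
    imagine: maps an action to an imagined future (a model,
             not the future itself)
    affective_score: the agent's reaction to that imagined future
    """
    return max(actions, key=lambda a: affective_score(imagine(a)))
```

Note that `affective_score` only ever sees the output of `imagine` -- which is the sense in which the evaluation operates on the map rather than the territory.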
To the extent that this is true (in the sense of my previous sentence), it is a tautology. I understand what you're arguing against: the notion that what we actually execute matches a rational consequentialist calculus of our conscious ideals. I am not asserting this; I believe that our affective algorithms often operate on more selfish and basic criteria, and that they fixate on the most salient possibilities instead of weighing probabilities properly, among other things.
However, these affective algorithms do appear to respond more strongly to certain facets of "how I expect the world to be" than to facets of "how I expect to think the world is" when the two conflict (with an added penalty for the expectation of being deceived), and I don't find that problematic on any level.
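To make the "fixating on the most salient possibilities instead of weighing probabilities properly" contrast concrete, here is a toy example with entirely hypothetical numbers, comparing proper expected-value weighting against a rule that reacts only to the most vivid (extreme) imagined outcome:

```python
# Toy contrast (hypothetical numbers): weighing probabilities properly
# versus fixating on the single most vivid (extreme) outcome.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

def salient_value(outcomes):
    """Ignore probabilities; react only to the most extreme outcome."""
    return max(outcomes, key=lambda pv: abs(pv[1]))[1]

flight = [(0.999, 5.0), (0.001, -1000.0)]  # almost certainly fine; rare vivid disaster
print(expected_value(flight))  # 3.995 -- worth taking on balance
print(salient_value(flight))   # -1000.0 -- "feels" like a catastrophe
```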
As I said, it's still going to be about your experience during the moments until your memory is erased.
I took that as a given, actually. ;-) What I'm really arguing against is the naive self-applied mind projection fallacy that causes people to see themselves as decision-making agents -- i.e., beings with "souls", if you will. Asserting that your preferences are "about" the territory is the same sort of error as saying that the thermostat "wants" it to be a certain temperature. The "wanting" is not in the thermostat, it's in the thermostat's maker.
Of course, it makes for convenient language to say it wants, but we should not confuse this with thinking the thermostat can really "want" anything but for its input and setting to match. And the same goes for humans.
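For what that "wanting" amounts to, a minimal sketch (hypothetical names): the entirety of the thermostat's "preference" is a comparison between its input and its setting.

```python
# The thermostat's whole "preference": does the input match the setting?
# Hypothetical sketch; names are illustrative.

def thermostat_step(reading, setpoint, tolerance=0.5):
    """Return a heater command given the current temperature reading."""
    if reading < setpoint - tolerance:
        return "heat_on"
    if reading > setpoint + tolerance:
        return "heat_off"
    return "no_change"
```

Nothing in `thermostat_step` refers to the room itself; reading it as "wanting the room to be warm" is a projection by the designer or observer.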
(This is not a mere fine point of tautological philosophy; human preferences in general suffer from high degrees of subgoal stomp, chaotic loops, and other undesirable consequences arising as a direct result of this erroneous projection. Understanding the actual nature of preferences makes it easier to dissolve these confusions.)