I think we should stop talking about utility functions.
In the context of ethics for humans, anyway. In practice I find utility functions to be, at best, an occasionally useful metaphor for discussions about ethics and, at worst, an idea that some people start taking too seriously, one which actively makes them worse at reasoning about ethics. To the extent that we care about helping people become better at reasoning about ethics, it seems like we ought to be able to do better than this.
The funny part is that the failure mode I worry the most about is already an entrenched part of the Sequences: it's fake utility functions. The soft failure is people who think they know what their utility function is and say bizarre things about what this implies that they, or perhaps all people, ought to do. The hard failure is people who think they know what their utility function is and then do bizarre things. I hope the hard failure is not very common.
It seems worth reflecting on the fact that the point of the foundational LW material discussing utility functions was to make people better at reasoning about AI behavior and not about human behavior.
The most problematic unstated assumption behind applying VNM rationality to humans, I think, is the assumption that we're actually trying to maximize something.
To elaborate, one of the VNM theorem's axioms, completeness, states that for any two lotteries A and B, one of the following holds: A is preferred to B, B is preferred to A, or the agent is indifferent between them.
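In standard notation (my own gloss, not wording from the original axioms), writing $A \succeq B$ for "A is weakly preferred to B", completeness says that any two lotteries are comparable:

$$\forall A, B:\quad A \succeq B \;\text{ or }\; B \succeq A$$

Strict preference and indifference are then derived from this: $A \succ B$ means $A \succeq B$ but not $B \succeq A$, and $A \sim B$ means both directions hold.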
So basically, a “preference” as defined by the axioms is a function that (given the state of the agent and the state of the world in general) outputs an agent’s decision between two or more choices. Now suppose that the agent’s preferences violate the von Neumann-Morgenstern axioms, so that in one situation it prefers to make a deal that causes it to end up with an apple rather than an orange, and in another situation it prefers to make a deal that causes it to end up with an orange rather than an apple. Is that, by itself, an argument against having such circular preferences?
By itself, it's not. It simply establishes that the function that outputs the agent’s actions behaves differently in different situations. The usual way to establish that this is bad is to assume that all choices are between monetary payouts, and that an agent with inconsistent preferences can be Dutch Booked and made to lose money. An alternative way, which doesn't require us to assume that all the choices are between monetary payouts, is to construct a series of trades that leaves us with fewer resources than we started with.
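To make that money-pump intuition concrete, here is a minimal sketch (my own illustration; the items, fee, and trade sequence are made up): an agent with cyclic pairwise preferences can be walked around the cycle by a trader who charges a small fee per trade, ending up holding the same thing it started with, but poorer.

```python
# Minimal money-pump sketch (illustrative only; items, fee, and offers are made up).
# The agent's pairwise preferences are cyclic: apple > orange > banana > apple.

# Pairwise choices: maps an unordered pair of items to the one the agent picks.
PREFERRED = {
    frozenset({"apple", "orange"}): "apple",
    frozenset({"orange", "banana"}): "orange",
    frozenset({"banana", "apple"}): "banana",
}

def picks(a: str, b: str) -> str:
    """Which of the two items the agent chooses when offered the pair."""
    return PREFERRED[frozenset({a, b})]

def money_pump(start_item: str, fee_cents: int, offers: list[str]) -> tuple[str, int]:
    """Offer each item in turn in exchange for the agent's current item plus a fee.

    We assume the fee is small enough never to flip the agent's choice, so it
    accepts whenever it strictly prefers the offered item. Returns the final
    item held and the total fees paid, in cents.
    """
    item, paid = start_item, 0
    for offered in offers:
        if picks(item, offered) == offered:
            item = offered
            paid += fee_cents
    return item, paid

if __name__ == "__main__":
    final_item, total_paid = money_pump("banana", fee_cents=10, offers=["orange", "apple", "banana"])
    print(final_item, total_paid)  # -> banana 30: same fruit as before, 30 cents poorer
```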
Stated that way, this sounds kinda bad. But then there are things that kind of fit that description, but which we would intuitively think of as good. For instance, some time back I asked:
In response, I was told that
But then I asked: if we accept this, what real-life situation does count as an actual circular preference in the VNM sense, given that just about every potential circularity I can think of is of the kind "I prefer A to B at time t1 and B to A at time t2"? And I didn't get very satisfactory replies.
Intuitively, there are a lot of real-life situations that feel kind of like losing out due to inconsistent preferences, like someone who wants to get into a relationship when he's single and then wants to be single when he gets into a relationship. But there the actual problem is that the person spends a lot of time being unhappy, not that he makes different choices in different situations. Whereas with the couple, we think that's fine, because they get enjoyment from the "trades".
The general problem that I'm trying to get at is that in order to hold up VNM rationality as a normative standard, we would need a meta-preference: a preference over preferences, stating that it would be better to have preferences that lead to some particular outcomes. The standard Dutch Book example smuggles in that assumption by talking about money, and thus makes us imagine a situation where we are only trying to maximize money and care about nothing else. And if you really are trying to maximize only a single concrete variable or resource and care about nothing else, then you really should make sure that your choices follow the VNM axioms. If you run a betting office, then do make sure that nobody can Dutch Book you.
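As a concrete illustration of that last point (again my own sketch, not from the original discussion, and with made-up odds): a betting office whose posted odds imply probabilities summing to less than one can be Dutch Booked by a bettor who simply backs every outcome.

```python
# Dutch-Booking a bookmaker with incoherent odds (illustrative numbers only).
# "Decimal odds" d means a winning 1-unit stake returns d units, stake included,
# so the implied probability of that outcome is 1/d.

def dutch_book_stakes(decimal_odds: list[float], target_return: float) -> list[float]:
    """Stakes that return `target_return` no matter which single outcome occurs."""
    return [target_return / d for d in decimal_odds]

def guaranteed_profit(decimal_odds: list[float], target_return: float) -> float:
    """Profit locked in by betting on every outcome, positive whenever the
    implied probabilities sum to less than 1."""
    implied = sum(1.0 / d for d in decimal_odds)
    total_staked = target_return * implied
    return target_return - total_staked

if __name__ == "__main__":
    # The bookmaker posts 2.1 on both sides of a coin toss:
    # implied probabilities 1/2.1 + 1/2.1 ≈ 0.95 < 1, so the book can be exploited.
    odds = [2.1, 2.1]
    print(dutch_book_stakes(odds, 100.0))   # ≈ [47.62, 47.62] staked, 100 returned either way
    print(guaranteed_profit(odds, 100.0))   # ≈ 4.76 profit, whichever side wins
```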
But we don't have such a clear normative standard for life in general. It would be reasonable to try to argue that the couple having sex were rational but the person who kept vacillating about being in a relationship was irrational, on the grounds that the couple got happiness whereas the other person was unhappy... but we also care about things other than happiness (or pleasure), so we aren't optimizing just for pleasure. And unless you're a hedonistic utilitarian, you're unlikely to say that we should optimize only for pleasure, either.
So basically, if you want to say that people should be VNM-rational, then you need to have some specific set of values or goals that you think people should strive towards. If you don't have that, then VNM rationality is basically irrelevant, aside from the small set of special cases where people really do have a clear, explicit goal that's valued above other things.
I'm not sure I follow in what sense this is a violation of the vNM axioms. A vNM agent has preferences over world-histories; in general one can't isolate the effect of having an apple vs. having an orange without looking at how that affects the entire future history of the world.