Comment author: drethelin 20 June 2014 06:05:53PM 3 points [-]

Not that I fully support utility functions as a useful concept, but having a consistent one also keeps you from Dutch-booking yourself. You can interpret any decision as a bet using utility, and people often make decisions that cost them effort and energy but leave them in the same place where they started. So it's possible that trying to figure out one's utility function can help prevent, e.g., anxious looping behavior.

Comment author: Qiaochu_Yuan 22 June 2014 06:25:05PM 2 points [-]

Sure, if you're right about your utility function. The failure mode I'm worried about is people believing they know what their utility function is and being wrong, maybe disastrously wrong. Consistency is not a virtue if, in reaching for consistency, you make yourself consistent in the wrong direction. Inconsistency can be a hedge against making extremely bad decisions.

Comment author: Caspar42 20 June 2014 10:31:48PM 0 points [-]

To me it seems that utility functions are the most general (deterministic) way to model preferences, so if we model preferences by "something else", it will usually be a special case of a utility function. Or do you have something even more general than utility functions that is not based on flipping a coin? Or do you propose that we model preferences with randomness?

Comment author: Qiaochu_Yuan 22 June 2014 06:21:10PM *  3 points [-]

There are helpful models and there are unhelpful models. I can model the universe as a wave function in a gigantic Hilbert space, and this is an incredibly general model as it applies to any quantum-mechanical system, but it's not necessarily a helpful model for making predictions at the level I care about most of the time. My claim is that, even if you believe that utility functions can model human preferences (which I also dispute), then it's still true that utility functions are in practice an unhelpful model in this sense.

Comment author: David_Gerard 21 June 2014 09:31:24PM *  1 point [-]

The idea is that the universe offers you Dutch-book situations and you make and take bets on uncertain outcomes implicitly.

That said, I concur with your basic point: universal overarching utility functions - not just small ones for a given situation, but a single large one for you as a human - are something humans don't have, and I think can't have. Realising how mathematically helpful it would be if we did have them still doesn't mean we can, and trying to turn oneself into an expected utility maximiser is unlikely to work.

(And, I suspect, will merely leave you vulnerable to everyday human-level exploits - remember that the actual threat model we evolved in is beating other humans, and as long as we're dealing with humans we need to deal with humans.)

Comment author: Qiaochu_Yuan 22 June 2014 06:18:54PM *  3 points [-]

The idea is that the universe offers you Dutch-book situations

But does it in fact do that? To the extent that you believe that humans are bad Bayesians, you believe that the environment in which humans evolved wasn't constantly Dutch-booking them, or that if it was then humans evolved some defense against this which isn't becoming perfect Bayesians.

Comment author: Kaj_Sotala 20 June 2014 07:34:29AM *  9 points [-]
Comment author: Qiaochu_Yuan 20 June 2014 05:33:05PM 2 points [-]

Thanks for the links!

Comment author: MrMind 20 June 2014 08:32:30AM *  1 point [-]

I think the article is important because it fails critically: that is, it serves to identify the fact that morality is important precisely when it's not the result of aggregated preferences.

And we should all know by now how dangerous a sub-optimal morality can be.

Comment author: Qiaochu_Yuan 20 June 2014 05:22:32PM 12 points [-]

And we should all know by now how dangerous a sub-optimal morality can be.

Agh, but if you want to solve that problem, the solution is not to criticize everyone who offers a proposal. That is not how you incentivize people to solve a problem.

Comment author: Kaj_Sotala 20 June 2014 10:29:11AM *  17 points [-]

The biggest problematic unstated assumption behind applying VNM-rationality to humans, I think, is the assumption that we're actually trying to maximize something.

To elaborate, the VNM theorem defines preferences by the axiom of completeness, which states that for any two lotteries A and B, one of the following holds: A is preferred to B, B is preferred to A, or one is indifferent between them.

So basically, a “preference” as defined by the axioms is a function that (given the state of the agent and the state of the world in general) outputs an agent’s decision between two or more choices. Now suppose that the agent’s preferences violate the Von Neumann-Morgenstern axioms, so that in one situation it prefers to make a deal that causes it to end up with an apple rather than an orange, and in another situation it prefers to make a deal that causes it to end up with an orange rather than an apple. Is that an argument against having circular preferences?

By itself, it's not. It simply establishes that the function that outputs the agent’s actions behaves differently in different situations. Now the normal way to establish that this is bad is to assume that all choices are between monetary payouts, and that an agent with inconsistent preferences can be Dutch Booked and made to lose money. An alternative way, which doesn't require us to assume that all the choices are between monetary payouts, is to construct a series of trades between resources that leaves us with fewer resources than when we started.

Stated that way, this sounds kinda bad. But then there are things that kind of fit that description, but which we would intuitively think of as good. For instance, some time back I asked:

Suppose someone has a preference to have sex each evening, and is in a relationship with someone with a similar level of sexual desire. So each evening they get into bed, undress, make love, get dressed again, get out of bed. Repeat the next evening.

How is this different from having exploitable circular preferences? After all, the people involved clearly have cycles in their preferences - first they prefer getting undressed to not having sex, after which they prefer getting dressed to having (more) sex. And they're "clearly" being the victims of a Dutch Book, too - they keep repeating this set of trades every evening, and losing lots of time because of that.

In response, I was told that

The circular preferences that go against the axioms of utility theory, and which are Dutch book exploitable, are not of the kind "I prefer A to B at time t1 and B to A at time t2", like the ones of your example. They are more like "I prefer A to B and B to C and C to A, all at the same time".

The couple, if they had to pay a third party a cent to get undressed and then a cent to get dressed, would probably do it and consider it worth it---they end up two cents short but having had an enjoyable experience. Nothing irrational about that. To someone with the other "bad" kind of circular preferences, we can offer a sequence of trades (first A for B and a cent, then C for A and a cent, then B for C and a cent) after which they end up three cents short but otherwise exactly as they started (they didn't actually obtain enjoyable experiences, they made all the trades before anything happened). It is difficult to consider this rational.
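The "bad" kind of money pump described above can be sketched in a few lines. This is an illustrative toy model, not anything from the original discussion: the good names A, B, C and the one-cent fee are stand-ins, and `better` encodes the simultaneously held intransitive preference relation.

```python
# A hypothetical money pump: the agent holds circular strict preferences
# A > B, B > C, C > A (all at the same time), and will pay a cent to
# trade the good it currently holds for one it strictly prefers.
better = {("A", "B"), ("B", "C"), ("C", "A")}  # (preferred, dispreferred) pairs

def run_pump(holding, cents, offers):
    """Offer each good in turn in exchange for the agent's holding plus a cent."""
    for offered in offers:
        if (offered, holding) in better:   # agent strictly prefers the offer
            holding, cents = offered, cents - 1
    return holding, cents

# One cycle of trades: A for B, then C for A, then B for C.
holding, cents = run_pump("B", 0, ["A", "C", "B"])
# The agent ends holding B again, exactly where it started, but three cents poorer.
```

Each trade looks like an improvement to the agent, yet the cycle as a whole strictly loses money, which is what makes this kind of circularity hard to call rational.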

But then I asked: if we accept this, what real-life situation does count as an actual circular preference in the VNM sense, given that just about every potential circularity I can think of is of the kind "I prefer A to B at time t1 and B to A at time t2"? And I didn't get very satisfactory replies.

Intuitively, there are a lot of real-life situations that feel kind of like losing out due to inconsistent preferences, like someone who wants to get into a relationship when he's single and then wants to be single when he gets into a relationship, but there our actual problem is that the person spends a lot of time being unhappy, rather than with the fact that he makes different choices in different situations. Whereas with the couple, we think that's fine because they get enjoyment from the "trades".

The general problem that I'm trying to get at is that in order to hold up VNM rationality as a normative standard, we would need to have a meta-preference: a preference over preferences, stating that it would be better to have preferences that lead to some particular outcomes. The standard Dutch Book example kind of smuggles in that assumption by the way that it talks about money, and thus makes us think that we are in a situation where we are only trying to maximize money and care about nothing else. And if you really are trying to only maximize a single concrete variable or resource and care about nothing else, then you really should try to make sure that your choices follow the VNM axioms. If you run a betting office, then do make sure that nobody can Dutch Book you.

But we don't have such a clear normative standard for life in general. It would be reasonable to try to construct an argument for why the couple having sex were rational but the person who kept vacillating about being in a relationship was irrational by suggesting that the couple got happiness whereas the other person was unhappy... but we also care about things other than happiness (or pleasure), and thus aren't optimizing just for pleasure. And unless you're a hedonistic utilitarian, you're unlikely to say that we should optimize only for pleasure, either.

So basically, if you want to say that people should be VNM-rational, then you need to have some specific set of values or goals that you think people should strive towards. If you don't have that, then VNM-rationality is basically irrelevant aside from the small set of special cases where people really do have a clear explicit goal that's valued above other things.

Comment author: Qiaochu_Yuan 20 June 2014 05:21:37PM 5 points [-]

Now suppose that the agent’s preferences violate the Von Neumann-Morgenstern axioms, so that in one situation it prefers to make a deal that causes it to end up with an apple rather than an orange, and in another situation it prefers to make a deal that causes it to end up with an orange rather than an apple. Is that an argument against having circular preferences?

I'm not sure I follow in what sense this is a violation of the vNM axioms. A vNM agent has preferences over world-histories; in general one can't isolate the effect of having an apple vs. having an orange without looking at how that affects the entire future history of the world.

Comment author: [deleted] 20 June 2014 02:59:03PM 1 point [-]

On the one hand, you are correct regarding philosophy for humans: we do ethics and meta-ethics to reduce our uncertainty about our utility functions, not as a kind of game-tree planning based on already knowing those functions.

On the other hand, the Von-Neumann-Morgenstern Theorem says blah blah blah blah.

On the third hand, if you have a mathematical structure we can use to make no-Dutch-book decisions that better models the kinds of uncertainty we deal with as embodied human beings in real life, I'm all ears.

In response to comment by [deleted] on Against utility functions
Comment author: Qiaochu_Yuan 20 June 2014 05:18:18PM 7 points [-]

I don't think Dutch book arguments matter in practice. An easy way to avoid being Dutch booked is to refuse bets being offered to you by people you don't trust.

Comment author: Nisan 20 June 2014 04:57:26PM 3 points [-]

That comment is about utilitarianism and doesn't mention "utility functions" at all.

Comment author: Qiaochu_Yuan 20 June 2014 05:16:32PM *  3 points [-]

I can't help but suspect, though, that LW people are drawn to utilitarianism because of what they see as the inevitability of using utility functions to model preferences. Maybe this impression is mistaken.

Comment author: Qiaochu_Yuan 20 June 2014 03:30:25AM 18 points [-]

I'm annoyed at how negative the comments on this post are. I think this is a great example of making progress on an apparently philosophical problem by bringing in some nontrivial mathematics (in this case, the idea of using eigenvector decompositions to make sense of circular definitions), and it seems extremely uncharitable to me to judge it for failing to be a fully general and correct solution to the problem when it's obviously not intended to be.
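The eigenvector idea can be illustrated with a short power-iteration sketch. To be clear, the matrix below and the "importance" framing are my own illustrative stand-ins, not the post's actual construction; the point is only how a circular definition ("a node matters to the extent that nodes that matter endorse it") resolves into a fixed point.

```python
import numpy as np

# Circular definition: "a node is important to the extent that important
# nodes endorse it." The self-consistent solution is the principal
# eigenvector of the endorsement matrix, found here by power iteration.
# The endorsement matrix is an arbitrary illustrative example.
M = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0]])

v = np.ones(M.shape[0])          # start from a uniform guess
for _ in range(100):
    v = M @ v                    # each node's score: endorsements it receives
    v /= np.linalg.norm(v)       # renormalise to keep the iteration stable

# v is now (approximately) a fixed point of the circular definition:
# each entry is proportional to the importance-weighted endorsements it gets.
```

For a nonnegative, strongly connected endorsement matrix, Perron-Frobenius guarantees this fixed point exists and is unique up to scale, which is what rescues the definition from vicious circularity.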

Comment author: evec 19 June 2014 09:29:05PM 4 points [-]

I think your original post would have been better if it included any arguments against utility functions, such as those you mention under "e.g." here.

Besides being a more meaningful post, we would also be able to discuss your comments. For example, without more detail, I can't tell whether your last comment is addressed sufficiently by the standard equivalence of normal-form and extensive-form games.

Comment author: Qiaochu_Yuan 20 June 2014 03:17:26AM *  12 points [-]

Essentially every post would have been better if it had included some additional thing. Based on various recent comments I was under the impression that people want more posts in Discussion, so I've been experimenting with that, and I'm keeping the bar for quality deliberately low so that I'll post at all.
