The post Coherent decisions imply consistent utilities demonstrates some situations in which an agent that isn't acting as if it is maximizing a real-valued utility function over lotteries is dominated by an agent that does, and promises that this applies in general.

However, one intuitively plausible way to make decisions that doesn't involve a real-valued utility, and that the arguments in the post don't seem to rule out, is to have lexicographic preferences: say, each lottery has payoffs represented as a sequence (u_1, u_2, ...), and you compare them by first comparing u_1, and if and only if their u_1s are the same compare u_2, and so on, with probabilities multiplying through each u_i and payoffs being added element-wise. The VNM axioms exclude this by requiring continuity, which a payoff evaluated like this violates: for example (0,0) < (0,1) < (1,0), but there is no probability p for which a p probability of a (1,0) payoff and a 1-p probability of a (0,0) payoff is exactly as good as a certainty of a (0,1) payoff (any p > 0 makes the mixture strictly better, and p = 0 makes it strictly worse).
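To make this concrete, here is an editorial sketch (the payoff vectors are just the illustrative ones above, not anything from the original post) of lexicographic comparison of expected payoff vectors, checking that no probability p satisfies the continuity condition:

```python
from fractions import Fraction

def mix(p, lottery_a, lottery_b):
    """Expected payoff vector of getting lottery_a with probability p, else lottery_b."""
    return tuple(p * a + (1 - p) * b for a, b in zip(lottery_a, lottery_b))

# Python tuples already compare lexicographically, so > is the lexicographic preference.
low, mid, high = (0, 0), (0, 1), (1, 0)
assert high > mid > low

# Continuity would require some p with mix(p, high, low) indifferent to mid.
# But any p > 0 gives the mixture a positive first coordinate, so it beats mid,
# while p = 0 gives exactly low, which loses to mid.
for p in [Fraction(1, 10**k) for k in range(1, 8)]:
    assert mix(p, high, low) > mid
assert mix(0, high, low) < mid
```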

Are there coherence theorems that also exclude lexicographic preferences like this?

johnswentworth

In the same way that we work with delta functions as though they were ordinary functions, we can work with lexicographic preferences as though they were ordinary utility-function preferences.

There's a pattern which comes up in math sometimes where some rules hold for all the functions in some set, and the rules also apply "in the limit" to things "on the boundary" of the set, even if the "boundary" is kinda weird - e.g. a boundary at infinity. Delta functions are a prototypical example: we usually define them as a limit of ordinary functions, and then work with them as though they were ordinary functions. (Though this is not necessarily the best way to think of delta functions - the operator formalism is cleaner in some ways.)

Lexicographic preferences fit this pattern: they are a limiting case of ordinary utility functions. Specifically, given the lexicographic utility sequence u_1, u_2, u_3, ... and some real number a, we can construct a single utility function U_a = u_1 + u_2/a + u_3/a^2 + .... In the limit as a -> infinity, this converges to the same preferences as the lexicographic utility. So, just like we can work with delta functions like ordinary functions and then take the limit later if we have to (or, more often, just leave the limit implicit everywhere), we can work with lexicographic preferences like ordinary utilities and then take the limit later if we have to (or, more often, just leave the limit implicit everywhere).
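A quick numerical check of this construction (an editorial sketch, not part of johnswentworth's answer; the 1/a^(i-1) weights are the reconstruction above, and the example payoff vectors are made up):

```python
def collapsed_utility(u, a):
    """Single real-valued utility U_a = u_1 + u_2/a + u_3/a^2 + ... for a payoff vector u."""
    return sum(ui / a**i for i, ui in enumerate(u))

# Hypothetical expected-payoff vectors, with each coordinate bounded (here within [0, 10]).
payoffs = [(0, 9, 3), (1, 0, 0), (1, 2, 7), (1, 2, 8), (4, 0, 1)]

a = 1000.0  # large compared to the spread of the u_i
by_lex = sorted(payoffs)                                        # tuples sort lexicographically
by_Ua = sorted(payoffs, key=lambda u: collapsed_utility(u, a))
assert by_lex == by_Ua  # for large enough a, the single utility ranks them the same way
```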

To answer your exact question: lexicographic utilities are consistent with the coherence theorems, as long as we drop smoothness/boundedness assumptions. They shouldn't be excluded by coherence considerations.

Slider

If one uses surreals for chances, then one can provide the required chance to make continuity happen.

There might be multiple ways to map a lexicographic weight onto 1 + w + w^2 + w^3..., but I would expect that, similarly to how real functions can be scaled for no effect, using different transfinites would just be a matter of consistency. I.e. whether you map (1,2,0) to 1 + 2w or 1 + 2w^2 is a choice that can be made freely as long as it is stuck to.

Then you can have e = 1/w, where 0 < e < 1, which can function as the p that makes the lottery mix exactly ambipreferable.
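One way to cash this out (an editorial sketch, not from Slider's comment): map a payoff pair (x, y) to the surreal number x·ω + y, so the outcomes from the question become ordinary surreals and an infinitesimal probability restores continuity:

```latex
% Editorial sketch: map payoff pairs to surreals via (x, y) -> x*omega + y,
% so A = (0,0) -> 0, B = (0,1) -> 1, C = (1,0) -> omega. Then with p = 1/omega:
\[
  p \cdot C + (1 - p) \cdot A
  = \frac{1}{\omega}\cdot\omega + \Bigl(1 - \frac{1}{\omega}\Bigr)\cdot 0
  = 1
  = B ,
\]
% i.e. the mixture is exactly as good as a certainty of B, which is the p that
% the continuity axiom demanded.
```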

Nisan

Consider the following game: At any time t in [0,1), you may say "stop!", in which case you'll get the lottery that resolves to an outcome you value at (1,0) with probability 1-t, and to an outcome you value at (0,1) with probability t. If you don't say "stop!" in that time period, we set t = 1.

Let's say at every instant in [0,1) you can decide to either say "stop!" or to wait a little longer. (A dubious assumption, as it lets you make infinitely many decisions in finite time.) Then you'll naturally wait until t = 1 and get a payoff of (0,1). It would have been better for you to say "stop!" at t = 0, in which case you'd get (1,0).

You can similarly argue that it's irrational for your utility to be discontinuous in the amount of wine in your glass: Otherwise you'll let the waiter fill up your glass and then be disappointed the instant it's full.

[This comment is no longer endorsed by its author]

Why would you wait until t = 1? It seems like at any time t the expected payoff will be (1-t, t), which is strictly decreasing with t.

Nisan
Oh you're right, I was confused.

I've no idea if this example has appeared anywhere else. I'm not sure how seriously to take it.
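For concreteness, a tiny editorial sketch of the objection above, assuming the payoffs (1,0) and (0,1) as written in the reconstructed game: the expected payoff of stopping at time t is lexicographically worse for every later t, so there is no reason to wait at all.

```python
def expected_payoff(t):
    """Stop at time t: outcome (1, 0) with probability 1 - t, outcome (0, 1) with probability t."""
    return (1 - t, t)

times = [0.0, 0.25, 0.5, 0.75, 0.999]
payoffs = [expected_payoff(t) for t in times]
# Tuples compare lexicographically, so this checks the payoff strictly decreases as t grows.
assert all(earlier > later for earlier, later in zip(payoffs, payoffs[1:]))
```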

tailcalled

I imagine you would be able to solve this by replacing real-valued utilities with utilities in a number system that contains infinitesimals. However, it seems to me that it would not matter much in practice, since if you had even a tiny chance of affecting the leading term, then that chance would outweigh all of the other terms in the calculation, and so in practice only the leading term would matter for the decisions.
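To illustrate (an editorial sketch with made-up numbers, using the element-wise expectation rule from the question): any positive real chance of moving the leading term outweighs a certain, arbitrarily large gain confined to the lower-priority terms.

```python
def expected_vector(lottery):
    """Lottery = list of (probability, payoff_vector); expectation taken coordinate-wise."""
    dims = len(lottery[0][1])
    return tuple(sum(p * payoff[i] for p, payoff in lottery) for i in range(dims))

tiny_chance_at_leading_term = [(1e-12, (1, 0)), (1 - 1e-12, (0, 0))]
huge_sure_gain_in_tail      = [(1.0, (0, 10**9))]

# Tuples compare lexicographically: any positive chance of moving the leading
# term beats any certain gain confined to the lower-priority terms.
assert expected_vector(tiny_chance_at_leading_term) > expected_vector(huge_sure_gain_in_tail)
```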

Or to give an argument from reflection, there's a cost to making decisions, so an agent with a lexicographic ordering would probably want to self-modify into an agent that only considers the leading term in the preference ordering, since then it doesn't have to spend resources on the other terms.

The "leading term will dominate" effect is broken if there are infinidesimal chances around.

It might be sensible for an agent, for some purposes, to assume away some risks, i.e. treat them as having 0 chance. However, it might come about that in some circumstances those risks can't be assumed away. So a transformation in the other direction, turning a dirty hack into an actual engine that can accurately work on edge cases, might be warranted.

tailcalled
No, when there are infinitesimal chances, you take the ones which favor your leading term.
Slider
The trouble is that for some combinations of infinitesimal and transfinite, their product can be an ordinary finite real. One can't thus keep them strictly separated in different "lanes".
tailcalled
Oh, you meant literal infinitesimal (non-real) probabilities. I thought you meant very small probabilities. The argument against lexicographic ordering is also an argument against infinitesimal probabilities. Which is to say, suppose you have some lotteries l1, l2 that you are considering choosing between (the argument can be iterated to apply to a larger number of lotteries), where each lottery ln can have payouts rn1, rn2, rn3, ... with probabilities pn1, pn2, pn3, .... Suppose that rather than being real numbers, the probabilities and payouts are ordered lexicographically, with payout m being of shape (rnm1, rnm2, rnm3, ...) and probability m being of shape (pnm1, pnm2, pnm3, ...). In order for the infinitesimal tail of the sequence to matter, you need the sum over r1m1*p1m1 to be exactly, precisely, 100% equal to the sum over r2m1*p2m1 (that is, after all, the definition of lexicographic orderings). If they differ by even the slightest margin (even by 10^-(3^^^^^^3)), this difference outweighs anything in the infinitesimal tail. While you could obviously set up such a scenario mathematically, it seems like it would never happen in reality. So while you could in principle imagine lexicographic preferences, I don't think they would matter in reality, and a proper implementation of them would act isomorphic to, or want to self-modify into, something that only considers the leading term.
Slider
You need the product to be exactly equal, and you don't necessarily need to do it factor by factor. (0,1)(1,0) can equal (1,0)(0,1), that is, a negligible chance of a finite reward is as good as a certainty of a negligible reward. Because they cross lanes in this way, knowing your rewards doesn't mean you can just take the most pressing factor and forget the rest, as the probabilities might have impacts that make the expected values switch places.

In application it is not straightforward which things would be well attributed to small finite chances and which would be well attributed to infinitesimal chances. As a guess, say that you know that some rocks would break under one meteor impact and other rocks would break under two meteor impacts, but you don't know how likely meteor impacts are and assume them to be 0. It kind of still remains true that the hard rocks need to have twice as good valuables in them in order to justify waiting around them rather than the soft rocks. If they might contain valuables of immense value, at some point it starts to make sense to be more curious about the unknown risks and rewards rather than the known and modelled risks and rewards. Some of the actions and scenarios that are not assumed to be 0 will lean more heavily on the unmodeled parts, for example a long plan might have triple the chance of meteors, whatever it is, compared to a short plan.

If one insists that everything needs to be real, then making up arbitrarily small finites for parts of the model that you have little to reason with might get very noisy. By keeping several Archimedean fields around, one isn't forced to squish everything into a single one. That is, if your ordinary plans have an expected value difference of 0.0005, then you can estimate that if meteor impacts have less effect than that, you know your assumptions are effectively safe. However, if the differences are 0.000000000000002, then you might be more paranoid and start to actually look at whether all the "assumed 0" assumptions should actually hold.
tailcalled
In your example, r1m1*p1m1 is exactly equal to r2m1*p2m1; 0*1=0=1*0. The point is that if in the former case, you instead have (eps, 1)(1, 0), or in the latter case, you instead have (1, 0)(eps, 1), then immediately the tail of the sequence becomes irrelevant, because the heads differ. So only when the products of the heads are exactly equal do the tails become relevant. This requires you to not use infinitesimal probabilities. 0.000000000000002 is still infinitely bigger than infinitesimals.
Slider
I am tripping over the notation a little bit. I was representing eps times 1 as (0,1)(1,0), so (eps, 1)(1, 0) and (1, 0)(eps, 1) both evaluate to 2 eps in my mind, which would make the tail relevant. If we just have pure lexicographics, then it can be undefined whether we can multiply them together. In my mind I am turning the lexicographics into surreals by using them as weights in a Cantor normal form and then using surreal multiplication. So in effect I have something like (a,b)(c,d) = (ac, ad+bc, bd). I guess I know about "approximately equal" when two numbers would round to the same nearest real number.

The picture might also be complicated by whether immense chances exist. That is, if you have a coin that has a finite chance to come up with something and I give you a more-than-finite number of tries to get it, there is only a negligible chance of failure. Then ordinarily, if an option has no finite chances of paying out it could be ignored, but an immense exploration of a "finite-null" coin which has negligible chances of paying out could matter even at the finite level. And I guess in the other direction are the Pascal wagers: negligible chances of immense rewards. So there are sources other than aligning the finite multipliers to get effects that matter on the finite level.
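For what it's worth, the product written above is just convolution of the "lanes" (treating position i as the coefficient of the i-th power of an infinitesimal); a minimal editorial sketch, not from the original comment:

```python
def lane_product(x, y):
    """Multiply two 'lane' vectors, treating position i as the coefficient of eps^i.
    E.g. (a, b) * (c, d) = (a*c, a*d + b*c, b*d), as in the comment above."""
    out = [0] * (len(x) + len(y) - 1)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            out[i + j] += xi * yj
    return tuple(out)

assert lane_product((0, 1), (1, 0)) == (0, 1, 0)   # negligible chance of a finite reward
assert lane_product((1, 0), (0, 1)) == (0, 1, 0)   # certainty of a negligible reward
# The two products land in the same lane, illustrating the "lane crossing" point.
```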
tailcalled
Surely this would be your representation for infinitesimals, not for eps? By eps, I mean epsilon in the sense of standard real numbers, i.e. a small but positive real number. The issue is that in the real world, there will always be complexity, noise and uncertainty that will introduce small-but-positive real gaps, and these gaps will outweigh any infinitesimal preference or probability that you might have.
Slider
Ah, I think I am starting to follow. It is a bit ambiguous whether it is supposed to be two instances of one arbitrarily small finite or two (perhaps different) arbitrarily small finites. If it is only one then the tails are again relevant. "Always" is a bit of a risky word, especially in connection with infinity. I guess the basic situation is that modelling infinitesimal chances has not proven to be handy, but I don't see that the task has been shown to necessarily be frustrated. One could assume that while in theory one could model something in a lexicographic way, in reality there is some exchange rate between the "lanes", and in that way the blurriness could aid instead of hinder applicability. Somebody who really likes real-only probabilities could insist that unifying the preferences should be done early, but there might be benefits in doing it late.
tailcalled
I don't see what you mean. It doesn't make a difference whether you have only one or two small-but-finite quantities (though usually you do have one for each quantity you are dealing with), as long as they are in general position. For instance, in my modification to the example you gave, I only used one: while it's true that the tail becomes relevant in (0, 1)(1, 0) vs (1, 0)(0, 1) because 0*1 = 1*0, it is not true that the tail becomes relevant in the slightly modified (eps, 1)(1, 0) vs (1, 0)(0, 1) for any real eps != 0, as eps*1 != 1*0. So the only case where infinitesimal preferences are relevant is in astronomically unlikely situations where the head of the comparison exactly cancels out.
Slider
I thought we were comparing (eps, 1)(1, 0) and (1, 0)(eps, 1). If eps = eps, strict equality. If it was (a,1)(1,0) and (1,0)(b,1) and it's possible that a != b, it is unsure whether there is equality. Yeah, (eps,0) behaves differently from (0,1).
tailcalled
We were comparing epsilon to no-epsilon (what I had in mind with my post). Anyway, the point is that strict equality would require an astronomical coincidence, and so would only happen on a set of measure 0. So outside of toy examples it would be a waste to consider lexicographic preferences or probabilities.
1 comment

Not sure if relevant, but we might want to view lexicographic preferences as a formalization of satisficers with different weights combining into one preference structure.