Some people think that they have unbounded utility functions. This isn't necessarily crazy, but it presents serious challenges to conventional decision theory. I think it probably leads to abandoning probability itself as a representation of uncertainty (or at least any hope of basing decision theory on such probabilities). This may seem like a drastic response, but we are talking about some pretty drastic inconsistencies.

This result is closely related to standard impossibility results in infinite ethics. I assume it has appeared in the philosophy literature, but I couldn't find it in the SEP entry on the St. Petersburg paradox so I'm posting it here. (Even if it's well known, I want something simple to link to.)

(ETA: this argument is extremely similar to Beckstead and Thomas' argument against Recklessness in A paradox for tiny probabilities and enormous values. The main difference is that they use transitivity + "recklessness" to get a contradiction whereas I argue directly from "non-timidity." I also end up violating a dominance principle which seems even more surprising to violate, but at this point it's kind of like splitting hairs. I give a slightly stronger set of arguments in Better impossibility results for unbounded utilities.)

Weak version

We'll think of preferences as relations ≺ over probability distributions over some implicit space of outcomes Ω (and we'll identify outcomes with the corresponding constant probability distributions). We'll show that there is no relation ≺ which satisfies three properties: Antisymmetry, Unbounded Utilities, and Dominance.

Note that we assume nothing about the existence of an underlying utility function. We don't even assume that the preference relation is complete or transitive.

The properties

Antisymmetry: It's never the case that both A ≺ B and B ≺ A.

Unbounded Utilities: there is an infinite sequence of outcomes X₁, X₂, X₃, … each "more than twice as good" as the last.[1] More formally, there exists an outcome X₀ such that:

  • X₀ ≺ Xₖ for every k.
  • Xₖ ≺ ½X₀ + ½Xₖ₊₁ for every k.[2]

That is, X₁ is not as good as a ½ chance of X₂, which is not as good as a ¼ chance of X₃, which is not as good as a ⅛ chance of X₄… This is nearly the weakest possible version of unbounded utilities.[3]
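As a purely illustrative sanity check, any assignment of numerical utilities growing faster than 2ᵏ satisfies these conditions, and then the expected utility of the lottery mixing the Xₖ with probabilities 2⁻ᵏ diverges. A minimal sketch, assuming the hypothetical utilities u(X₀) = 0 and u(Xₖ) = 3ᵏ (these numbers are invented for illustration, not part of the argument):

```python
# Hypothetical utilities (not from the post): u(X_0) = 0, u(X_k) = 3**k.
def u(k):
    return 0 if k == 0 else 3 ** k

for k in range(1, 20):
    assert u(0) < u(k)                          # X_0 is worse than every X_k
    assert u(k) < 0.5 * u(0) + 0.5 * u(k + 1)   # X_k is worse than 1/2 X_0 + 1/2 X_{k+1}

# Partial expected utilities of the lottery sum_k 2^-k X_k grow without bound:
partials = [sum(2.0 ** -k * u(k) for k in range(1, n)) for n in (5, 10, 15)]
assert partials[0] < partials[1] < partials[2]
```

Each weighted term 2⁻ᵏ·3ᵏ = (3/2)ᵏ grows geometrically, which is exactly the St. Petersburg-style divergence the paradox exploits.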

Dominance: let A₁, A₂, … and B₁, B₂, … be sequences of lotteries, and p₁, p₂, … be a sequence of probabilities that sum to 1. If Aₖ ≺ Bₖ for all k, then p₁A₁ + p₂A₂ + … ≺ p₁B₁ + p₂B₂ + ….
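One reason Dominance is hard to reject: whenever preferences come from expected utilities (with everything finite), it holds automatically by linearity of expectation. A small randomized check, with all numbers invented for illustration:

```python
# If EU(A_k) <= EU(B_k) termwise, then mixtures satisfy
# sum_k p_k EU(A_k) <= sum_k p_k EU(B_k), by linearity.
import random

rng = random.Random(1)
n = 50
p = [rng.random() for _ in range(n)]
s = sum(p)
p = [x / s for x in p]                         # probabilities summing to 1
eu_a = [rng.uniform(-10, 10) for _ in range(n)]
eu_b = [a + rng.uniform(0, 5) for a in eu_a]   # EU(B_k) >= EU(A_k) for every k

mix_a = sum(pi * a for pi, a in zip(p, eu_a))
mix_b = sum(pi * b for pi, b in zip(p, eu_b))
assert mix_a <= mix_b
```

The impossibility result below is about what happens when this finite picture is pushed to countable mixtures with unbounded utilities.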

Inconsistency proof

Consider the lottery X∞ = ⅓X₀ + ⅓X₁ + ⅙X₂ + 1/12X₃ + 1/24X₄ + …

We can write X∞ as a mixture:

X∞ = ⅓X₁ + ⅓(½X₀ + ½X₂) + ⅙(½X₀ + ½X₃) + 1/12(½X₀ + ½X₄) + …

By definition X₀ ≺ X₁. And for each k, Unbounded Utilities implies that Xₖ ≺ ½X₀ + ½Xₖ₊₁. Comparing the two expressions term by term, Dominance implies X∞ ≺ X∞, contradicting Antisymmetry.
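To see that the two decompositions really describe the same lottery, we can check the coefficients with exact rational arithmetic. The sketch below assumes the decomposition X∞ = ⅓X₀ + ⅓X₁ + ⅙X₂ + 1/12X₃ + … (coefficient ⅓·2^(1−k) on Xₖ for k ≥ 1) and re-expresses it as ⅓X₁ plus terms of the form qₖ(½X₀ + ½Xₖ₊₁), truncating the infinite sums at K terms:

```python
from fractions import Fraction

K = 40
# Mixture A: X_inf = 1/3 X_0 + sum_{k>=1} (1/3) 2^(1-k) X_k
a = {0: Fraction(1, 3)}
for k in range(1, K):
    a[k] = Fraction(1, 3) * Fraction(1, 2) ** (k - 1)

# Mixture B: X_inf = 1/3 X_1 + sum_{k>=1} q_k (1/2 X_0 + 1/2 X_{k+1}), q_k = (1/3) 2^(1-k)
b = {0: Fraction(0), 1: Fraction(1, 3)}
for k in range(1, K - 1):
    q = Fraction(1, 3) * Fraction(1, 2) ** (k - 1)
    b[0] += q / 2
    b[k + 1] = b.get(k + 1, Fraction(0)) + q / 2

# Coefficients agree; the X_0 coefficient converges to 1/3 as K grows.
for k in range(1, K):
    assert a[k] == b[k]
assert abs(a[0] - b[0]) < Fraction(1, 2) ** (K - 3)
```

Since both mixtures place identical probability on every outcome, Dominance really is comparing a lottery with itself.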

How to avoid the paradox?

By far the easiest way out is to reject Unbounded Utilities. But that's just a statement about our preferences, so it's not clear we get to "reject" it.

Another common way out is to assume that any two "infinitely good" outcomes are incomparable, and therefore to reject Dominance.[4] This results in being indifferent to receiving an extra $1 in every world (if the expectation is already infinite), or to doubling the probability of all good worlds, which seems pretty unsatisfying.

Another option is to simply ignore small probabilities, which again leads to rejecting even the finite version of Dominance---sometimes when you mix together lotteries, something will fall below the "ignore it" threshold, causing the direction of your preference to reverse. I think this is pretty bizarre behavior, and in general ignoring small probabilities is much less appealing than rejecting Unbounded Utilities.
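To make the reversal concrete, here is a toy decision rule that drops any branch with probability below 1% (the threshold and payoffs are invented for illustration):

```python
# A decision rule that ignores branches with probability < 1% can reverse
# its preference when both options are mixed with a third lottery.
THRESHOLD = 0.01

def truncated_ev(lottery):
    # lottery: dict mapping payoff -> probability; small branches are ignored
    return sum(p * x for x, p in lottery.items() if p >= THRESHOLD)

A = {1.0: 1.0}                   # a sure $1
B = {1000.0: 0.02, 0.0: 0.98}    # a 2% chance of $1000

assert truncated_ev(B) > truncated_ev(A)   # outright, B is preferred (20 vs 1)

def mix(lottery, p, other):
    # the mixture p * lottery + (1 - p) * other
    out = {}
    for x, q in lottery.items():
        out[x] = out.get(x, 0.0) + p * q
    for x, q in other.items():
        out[x] = out.get(x, 0.0) + (1 - p) * q
    return out

C = {0.0: 1.0}  # a sure $0
# Mixing both options 1:2 with C pushes B's jackpot below the threshold,
# so the preference between the mixtures reverses:
assert truncated_ev(mix(A, 1 / 3, C)) > truncated_ev(mix(B, 1 / 3, C))
```

Before mixing, B beats A; after an identical mixture with C, A's mixture beats B's, which is exactly the finite Dominance violation described above.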

All of these options seem pretty bad to me. But in the next section, we'll show that if the unbounded utilities are symmetric---if there are both arbitrarily good and arbitrarily bad outcomes---then things get even worse.

Strong version

I expect this argument is also known in the literature, but I don't feel like people around LW usually grapple with exactly how bad it gets.

In this section we'll show there is no relation ≺ which satisfies three properties: Antisymmetry, Symmetric Unbounded Utilities, and Weak Dominance.

(ETA: actually I think that even with only positive utilities you already violate something very close to Weak Dominance, which Beckstead and Thomas call Prospect-Outcome dominance. I find this version of Weak Dominance slightly more compelling, but Symmetric Unbounded Utilities is a much stronger assumption than Unbounded Utilities or non-Timidity, so it's probably worth being aware of both versions. In a footnote[5] I also define an even weaker dominance principle that we are forced to violate.)

The properties

Antisymmetry: It's never the case that both A ≺ B and B ≺ A.

Symmetric Unbounded Utilities. There is an infinite sequence of outcomes X₁, X₂, X₃, … each of which is "more than twice as important" as the last but with opposite sign. More formally, there is an outcome X₀ such that:

  • For every even k: Xₖ ≺ X₀ and ⅔Xₖ + ⅓Xₖ₊₁ ≻ X₀
  • For every odd k: X₀ ≺ Xₖ and ⅔Xₖ + ⅓Xₖ₊₁ ≺ X₀

That is, a certainty of X₁ is outweighed by a ½ chance of X₂, which is outweighed by a ¼ chance of X₃, which is outweighed by a ⅛ chance of X₄….

Weak Dominance.[5] For any outcome X, any sequence of lotteries L₁, L₂, …, and any sequence of probabilities p₁, p₂, … that sum to 1:

  • If X ≺ Lₖ for every k, then X ≺ p₁L₁ + p₂L₂ + ….
  • If Lₖ ≺ X for every k, then p₁L₁ + p₂L₂ + … ≺ X.

Inconsistency proof

Now consider the lottery X∞ = ½X₁ + ¼X₂ + ⅛X₃ + 1/16X₄ + …

We can write X∞ as the mixture:

X∞ = ½X₁ + ⅜(⅔X₂ + ⅓X₃) + 3/32(⅔X₄ + ⅓X₅) + …

By Symmetric Unbounded Utilities each of these terms is ≻ X₀. So by Weak Dominance, X∞ ≻ X₀.

But we can also write X∞ as the mixture:

X∞ = ¾(⅔X₁ + ⅓X₂) + 3/16(⅔X₃ + ⅓X₄) + 3/64(⅔X₅ + ⅓X₆) + …

By Symmetric Unbounded Utilities each of these terms is ≺ X₀. So by Weak Dominance, X∞ ≺ X₀. This contradicts Antisymmetry.
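Numerically, this is the familiar fact that a conditionally-but-not-absolutely convergent series can be regrouped to look positive or negative. A sketch under the hypothetical utility assignment u(Xₖ) = (−1)^(k+1)·3^(k−1) (alternating sign, each more than twice the previous magnitude; invented for illustration), with weight 2⁻ᵏ on Xₖ:

```python
from fractions import Fraction

def u(k):
    # alternating signs, magnitudes tripling: 1, -3, 9, -27, ...
    return Fraction((-1) ** (k + 1) * 3 ** (k - 1))

def term(k):
    # contribution of X_k to the lottery's "expected utility": 2^-k * u(X_k)
    return Fraction(1, 2) ** k * u(k)

N = 30
# Grouping 1: X_1 alone, then pairs (X_2, X_3), (X_4, X_5), ... -- every group positive
groups1 = [term(1)] + [term(k) + term(k + 1) for k in range(2, N, 2)]
assert all(g > 0 for g in groups1)

# Grouping 2: pairs (X_1, X_2), (X_3, X_4), ... -- every group negative
groups2 = [term(k) + term(k + 1) for k in range(1, N, 2)]
assert all(g < 0 for g in groups2)
```

The same infinite lottery looks strictly better than X₀ under one grouping and strictly worse under the other, which is exactly what the two mixtures above exploit.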

Now what?

As usual, the easiest way out is to abandon Unbounded Utilities. But if that's just the way you feel about extreme outcomes, then you're in a sticky situation.

You could allow for unbounded utilities as long as they only go in one direction. For example, you might be open to the possibility of arbitrarily bad outcomes but not the possibility of arbitrarily good outcomes.[6] But the asymmetric version of unbounded utilities doesn't seem very intuitively appealing, and you still have to give up the ability to compare any two infinitely good outcomes (violating Dominance).

People like talking about extensions of the real numbers, but those don't help you avoid any of the contradictions above. For example, if you want to extend ≺ to a preference order over hyperreal lotteries, it's just even harder for it to be consistent.

Giving up on Weak Dominance seems pretty drastic. At that point you are talking about probability distributions, but I don't think you're really using them for decision theory---it's hard to think of a more fundamental axiom to violate. Other than Antisymmetry, which is your other option.

At this point I think the most appealing option, for someone committed to unbounded utilities, is actually much more drastic: I think you should give up on probabilities as an abstraction for describing uncertainty, and should not try to have a preference relation over lotteries at all.[7] There are no ontologically fundamental lotteries to decide between, so this isn't necessarily so bad. Instead you can go back to talking directly about preferences over uncertain states of affairs, and build a totally different kind of machinery to understand or analyze those preferences.

ETA: replacing dominance

Since writing the above I've become more sympathetic to violations of Dominance and even Weak Dominance---it would be pretty jarring to give up on them, but I can at least imagine it. I still think violating "Very Weak Dominance"[5] is pretty bad, but I don't think it captures the full weirdness of the situation.

So in this section I'll try to replace Weak Dominance by a principle I find even more robust: if I am indifferent between X and each of the lotteries L₁, L₂, …, then I'm also indifferent between X and any mixture p₁L₁ + p₂L₂ + … of those lotteries. This isn't strictly weaker than Weak Dominance, but violating it feels even weirder to me. At any rate, it's another fairly strong impossibility result constraining unbounded utilities.

The properties

We'll work with a relation ⪯ over lotteries. We write A ∼ B if both A ⪯ B and B ⪯ A. We write A ≺ B if A ⪯ B but not B ⪯ A. We'll show that ⪯ can't satisfy four properties: Transitivity, Intermediate mixtures, Continuous symmetric unbounded utilities, and Homogeneous mixtures.

Intermediate mixtures. If A ≺ B, then A ≺ ½A + ½B ≺ B.

Transitivity. If A ⪯ B and B ⪯ C, then A ⪯ C.

Continuous symmetric unbounded utilities. There is an infinite sequence of lotteries X₁, X₂, X₃, … each of which is "exactly twice as important" as the last but with opposite sign. More formally, there is an outcome X₀ such that:

  • For every even k: Xₖ ≺ X₀ and ⅔Xₖ + ⅓Xₖ₊₁ ∼ X₀
  • For every odd k: X₀ ≺ Xₖ and ⅔Xₖ + ⅓Xₖ₊₁ ∼ X₀

That is, a certainty of X₁ is exactly offset by a ½ chance of X₂, which is exactly offset by a ¼ chance of X₃, which is exactly offset by a ⅛ chance of X₄….

Intuitively, this principle is kind of like symmetric unbounded utilities, but we assume that it's possible to dial down each of the outcomes in the sequence (perhaps by mixing it with X₀) until the inequalities become exact indifferences.

Homogeneous mixtures. Let X be an outcome, L₁, L₂, … a sequence of lotteries, and p₁, p₂, … a sequence of probabilities summing to 1. If Lₖ ∼ X for all k, then p₁L₁ + p₂L₂ + … ∼ X.

Inconsistency proof

Consider the lottery X∞ = ½X₁ + ¼X₂ + ⅛X₃ + 1/16X₄ + …

We can write X∞ as the mixture:

X∞ = ¾(⅔X₁ + ⅓X₂) + 3/16(⅔X₃ + ⅓X₄) + 3/64(⅔X₅ + ⅓X₆) + …

By Continuous Symmetric Unbounded Utilities each of these terms is ∼ X₀. So by Homogeneous Mixtures, X∞ ∼ X₀.

But we can also write X∞ as the mixture:

X∞ = ½X₁ + ⅜(⅔X₂ + ⅓X₃) + 3/32(⅔X₄ + ⅓X₅) + …

By Continuous Symmetric Unbounded Utilities each of these terms other than the first is ∼ X₀. So by Homogeneous Mixtures, the combination of all terms other than the first is ∼ X₀. Together with the fact that X₀ ≺ X₁, Intermediate Mixtures and Transitivity imply X∞ ≻ X₀. But that contradicts X∞ ∼ X₀.
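The exactly-offsetting case is a probabilistic version of Grandi-style regrouping: under the hypothetical assignment u(Xₖ) = (−1)^(k+1)·2^(k−1) (invented for illustration), every weighted term contributes exactly ±½, so pairing from the start gives 0 everywhere while holding out the first term leaves ½:

```python
from fractions import Fraction

def contrib(k):
    # weight 2^-k times the hypothetical utility u(X_k) = (-1)**(k+1) * 2**(k-1)
    return Fraction(1, 2) ** k * Fraction((-1) ** (k + 1) * 2 ** (k - 1))

N = 40
# Every term contributes exactly +1/2 or -1/2:
assert all(contrib(k) == Fraction((-1) ** (k + 1), 2) for k in range(1, N))

# Pairing from the start: every pair is exactly 0, suggesting X_inf ~ X_0
pairs_from_start = [contrib(k) + contrib(k + 1) for k in range(1, N, 2)]
assert all(p == 0 for p in pairs_from_start)

# Holding out X_1 and pairing the rest: the remainder is 0, X_1 contributes +1/2
rest = [contrib(k) + contrib(k + 1) for k in range(2, N, 2)]
assert all(p == 0 for p in rest) and contrib(1) == Fraction(1, 2)
```

The series ½ − ½ + ½ − ½ + … has no well-defined sum, which is the arithmetic shadow of the preference-level contradiction above.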

  1. ^

    Note that we could replace "more than twice as good" with "at least 0.00001% better" and obtain exactly the same result. You may find this modified version of the principle more appealing, and it is closer to non-timidity as defined in Beckstead and Thomas. Note that the modified principle implies the original by applying transitivity 100000 times, but you don't actually need to apply transitivity to get a contradiction, you can just apply Dominance to a different mixture.

  2. ^

    You may wonder why we don't just compare Xₖ directly with a ½ chance of Xₖ₊₁. If we did this, we'd need to introduce an additional assumption that ½X₀ + ½A ≺ ½X₀ + ½B if A ≺ B. This would be fine, but it seemed nicer to save some symbols and make a slightly weaker assumption.

  3. ^

    The only plausibly-weaker definition I see is to say that there is an outcome X₀ and an infinite sequence X₁, X₂, … such that ½X₀ + ½Xₖ₊₁ ⪰ Xₖ for all k. If we replaced the ⪰ with ≻ then this would be stronger than our version, but with the weak inequality it's not actually sufficient for a paradox.

    To see this, consider a universe with three outcomes A ⪯ B ⪯ C and a preference order that always prefers lotteries with a higher probability of C and breaks ties by preferring a higher probability of B. This satisfies all of our other properties. It satisfies the weaker version of the axiom by taking Xₖ = C for all k, and it wouldn't be crazy to say that it has "unbounded" utilities.

  4. ^

    For realistic agents who think unbounded utilities are possible, it seems like they should assign positive probability to encountering a St. Petersburg paradox such that all decisions have infinite expected utility. So this is quite a drastic thing to give up on. See also: Pascal's mugging.

  5. ^

    I find this principle pretty solid, but it's worth noting that the same inconsistency proof would work for the even weaker "Very Weak Dominance": for any pair of outcomes with X ≺ Y, and any sequence of lotteries L₁, L₂, … each strictly better than Y, any mixture of the Lₖ should at least be strictly better than X!

  6. ^

    Technically you can also violate Symmetric Unbounded Utilities while having both arbitrarily good and arbitrarily bad outcomes, as long as those outcomes aren't comparable to one another. For example, suppose that worlds have a real-valued amount of suffering and a real-valued amount of pleasure. Then we could have a lexical preference for minimizing expected suffering (considering all worlds with infinite expected suffering as incomparable), and try to maximize pleasure only as a tie-breaker (considering all worlds with infinite expected pleasure as incomparable).

  7. ^

    Instead you could keep probabilities but abandon infinite probability distributions. But at this point I'm not exactly sure what unbounded utilities means---if each decision involves only finitely many outcomes, then in what sense do all the other outcomes exist? Perhaps I may face infinitely many possible decisions, but each involves only finitely many outcomes? But then what am I to make of my parents' decisions while raising me, which affected my behavior in each of those infinitely many possible decisions? It seems like they face an infinite mixture of possible outcomes. Overall, it seems to me like giving up on infinitely big probability distributions implies giving up on the spirit of unbounded utilities, or else going down an even stranger road.

Comments

111 comments follow; some are truncated due to high volume.

I am not a fan of unbounded utilities, but it is worth noting that most (all?) of the problems with unbounded utilities are actually a problem with utility functions that are not integrable with respect to your probabilities. It feels basically okay to me to have unbounded utilities as long as extremely good/bad events are also sufficiently unlikely.
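A quick numeric sketch of this point (all distributions invented for illustration): the same unbounded utility u(k) = 2ᵏ has finite expectation under a fast-decaying distribution but divergent partial sums under the St. Petersburg distribution:

```python
# Unbounded utility u(k) = 2**k over outcomes k = 1, 2, 3, ...
u = lambda k: 2.0 ** k

def partial_eu(prob, n):
    return sum(prob(k) * u(k) for k in range(1, n + 1))

fast = lambda k: 2.0 * 3.0 ** -k   # P(k) = 2/3^k sums to 1; EU converges to 4
st_pete = lambda k: 2.0 ** -k      # P(k) = 1/2^k sums to 1; EU diverges

assert abs(partial_eu(fast, 80) - 4.0) < 1e-9       # converged
assert partial_eu(st_pete, 1000) == 1000.0          # grows linearly in n
```

Unboundedness of u alone is harmless; it's the combination with probabilities that decay too slowly that makes the expectation undefined.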

The space of allowable probability functions that go with an unbounded utility can still be closed under finite mixtures and conditioning on positive probability events. 

Indeed, if you think of utility functions as coming from VNM, and you have a space of lotteries closed under finite mixtures but not arbitrary mixtures, I think there are VNM preferences that can only correspond to unbounded utility functions, and the space of lotteries is such that you can't make St. Petersburg paradoxes. (I am guessing, I didn't check this.)

  1. I strongly agree that the key problem with St. Petersburg (and Pasadena) paradoxes is utility not being integrable with respect to the lotteries/probabilities. Non-integrability is precisely what makes 𝔼U undefined (as a real number), whereas unboundedness of U alone does not.
  2. However, it’s also worth pointing out that the space of functions which are guaranteed to be integrable with respect to any probability measure is exactly the space of bounded (measurable) functions. So if one wants to save utilities’ unboundedness by arguing from integrability, that requires accepting some constraints on one’s beliefs (e.g., that they be finitely supported). If one doesn’t want to accept any constraints on beliefs, then accepting a boundedness constraint on utility looks like a very natural alternative.
  3. I agree with your last paragraph. If the state space of the world is ℝ, and the utility function is the identity function, then the induced preferences over finitely-supported lotteries can only be represented by unbounded utility functions, but are also consistent and closed under finite mixtures.
  4. Finite support feels like a really harsh constraint on beliefs. I wonder if there are some ot
... (read more)
Scott Garrabrant:
Note that if P dominates Q in the sense that there is a c>0 such that P(E)>c⋅Q(E) for all events E, and U is integrable wrt P, then I think U is integrable wrt Q. I propose the space of all probability distributions dominated by a given distribution P. Conveniently, if we move to semi-measures, we can take P to be the universal semi-measure. I think we can have our space of utility functions be anything integrable WRT the universal semi-measure, and our space of probabilities be anything lower semi-computable, and everything will work out nicely.

I think bounded functions are the only computable functions that are integrable WRT the universal semi-measure. I think this is equivalent to de Blanc 2007?

The construction is just the obvious one: for any unbounded computable utility function, any universal semi-measure must assign reasonable probability to a St Petersburg game for that utility function (since we can construct a computable St Petersburg game by picking a utility randomly then looping over outcomes until we find one of at least the desired utility).
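The construction sketched above can be written out directly. Here U is a stand-in for an arbitrary unbounded computable utility (the identity, for simplicity); the sampler draws k with probability 2⁻ᵏ and then searches for the first outcome with utility at least 2ᵏ, and the search halts precisely because U is unbounded:

```python
import random

def U(outcome):
    # Stand-in for an unbounded computable utility; identity is the simplest example.
    return outcome

def st_petersburg_sample(rng):
    # Sample k = j with probability 2^-j.
    k = 1
    while rng.random() < 0.5:
        k += 1
    # Search outcomes 0, 1, 2, ... for the first with utility >= 2**k.
    # This loop halts for every k exactly because U is unbounded.
    outcome = 0
    while U(outcome) < 2 ** k:
        outcome += 1
    return outcome

rng = random.Random(0)
samples = [st_petersburg_sample(rng) for _ in range(5)]
assert all(U(s) >= 2 for s in samples)   # k >= 1, so every sample has utility >= 2
```

Since the whole procedure is computable, the universal semi-measure assigns it non-negligible probability, and its expected utility diverges, which is the heart of the de Blanc-style argument.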

AlexMennen:
Compact support still seems like an unreasonably strict constraint to me, not much less so than finite support. Compactness can be thought of as a topological generalization of finiteness, so, on a noncompact space, compact support means assigning probability 1 to a subset that's infinitely tiny compared to its complement.
Scott Garrabrant:
I observe that I probably miscommunicated. I think multiple people took me to be arguing for a space of lotteries with finite support. That is NOT what I meant. That is sufficient, but I meant something more general when I said "lotteries closed under finite mixtures". I did not mean that there are only finitely many atomic worlds in the lottery. I only meant that there is a space of lotteries, some of which may have infinite support if you want to think about atomic worlds, and for any finite set of lotteries, you can take a finite mixture of those lotteries to get a new lottery in the space. The space of lotteries has to be closed under finite mixtures for VNM to make sense, but the emphasis is on the fact that it is not closed under all possible countable mixtures, not that the mixtures have finite support.
Charlie Steiner:
Hm, what would that last thing look like? Like, I agree that you can have gambles closed under finite but not countable gambling and the math works. But it seems like reality is a countably-additive sort of a place. E.g. if these different outcomes of a lottery are physical states of some system, QM is going to tell you to take some infinite sums. I'm just generally having trouble getting a grasp on what the world (and our epistemic state re. the world) would look like for this finite gambles stuff to make sense.
Scott Garrabrant:
Note that you can take infinite sums, without being able to take all possible infinite sums.  I suspect it looks like you have a prior distribution, and the allowable probability distributions are those that you can get to from this distribution using finitely many bits of evidence.
[comment deleted]

I think this argument is cool, and I appreciate how distilled it is.

Basically just repeating what Scott said but in my own tongue: this argument leaves open the option of denying that (epistemic) probabilities are closed under countable combination, and deploying some sort of "leverage penalty" that penalizes extremely high-utility outcomes as extremely unlikely a priori.

I agree with your note that the simplicitly prior doesn't implement leverage penalties. I also note that I'm pretty uncertain myself about how to pull off leverage penalties correctly, assuming they're a good idea (which isn't clear to me).

I note further that the issue as I see it arises even when all utilities are finite, but some are ("mathematically", not merely cosmically) large (where numbers like 10^100 are cosmically large, and numbers like 3^^^3 are mathematically large). Like, why are our actions not dominated by situations where the universe is mathematically large? When I introspect, it doesn't quite feel like the answer is "because we're certain it isn't", nor "because utility maxes out at the cosmological scale", but rather something more like "how would you learn that there may or may not be 3^^^3 hap... (read more)

TL;DR: I think that the discussion in this post is most relevant when we talk about the utility of whole universes. And for that purpose, I think a leverage penalty doesn't make sense.

A leverage penalty seems more appropriate for saying something like "it's very unlikely that my decisions would have such a giant impact," but I don't think that should be handled in the utility function or decision theory.

Instead, I'd say: if it's possible to have "pivotal" decisions that affect 3^^^3 people, then it's also possible to have 3^^^3 people in "normal" situations all making their separate (correlated) decisions, eating 3^^^3 sandwiches, and so the stakes of everything are similarly mathematically big.

plus a sense that you should be suspicious that any given action is more likely to get 3^^^3 utility than any other

I think that if utilities are large but bounded, then I feel like everything "adds up to normality"---if there is a way to get 3^^^3 utility, it seems like "maximize option value, figure out what's going on, stay sane" is a reasonable bet for maximizing EV (e.g. by maximizing probability of the great outcome).

Intuitively, this also seems like what you should end up doing even if... (read more)

if it's possible to have "pivotal" decisions that affect 3^^^3 people, then it's also possible to have 3^^^3 people in "normal" situations all making their separate (correlated) decisions, eating 3^^^3 sandwiches, and so the stakes of everything are similarly mathematically big.

Agreed.

This seems to put you in a strange position though: you are not only saying that high-value outcomes are unlikely, but that you have no preferences about them. That is, they aren't merely impossible-in-reality, they are impossible-in-thought-experiments.

Perhaps I'm being dense, but I don't follow this point. If I deny that my epistemic probabilities are closed under countable weighted sums, and assert that the hypothesis "you can actually play a St. Petersburg game for n steps" is less likely than it is easy-to-describe (as n gets large), in what sense does that render me unable to consider St. Petersburg games in thought experiments?

How would you learn that there may or may not be a 10^100 future people with our choices as the fulcrum? Why would the same process not generalize? (And if it may happen in the future but not now, is that 0 probability?)

The same process generalizes.

My point was... (read more)

paulfchristiano:
My point was that this doesn't seem consistent with anything like a leverage penalty. My point was that we can say lots about which actions are more or less likely to generate 3^^^3 utility even without knowing how the universe got so large. (And then this appears to have relatively clear implications for our behavior today, e.g. by influencing our best guesses about the degree of moral convergence.) In terms of preferences, I'm just saying that it's not the case that for every universe, there is another possible universe so much bigger that I care only 1% as much about what happens in the smaller universe. If you look at a 10^20 universe and the 10^30 universe that are equally simple, I'm like "I care about what happens in both of those universes. It's possible I care about the 10^30 universe 2x as much, but it might be more like 1.000001x as much or 1x as much, and it's not plausible I care 10^10 as much." That means I care about each individual life less if it happens in a big universe. This isn't why I believe the view, but one way you might be able to better sympathize is by thinking: "There is another universe that is like the 10^20 universe but copied 10^10 times. That's not that much more complex than the 10^20 universe. And in fact total observer counts were already dominated by copies of those universes that were tiled 3^^^3 times, and the description complexity difference between 3^^^3 and 10^10 x 3^^^3 are not very large." Of course unbounded utilities don't admit that kind of reasoning, because they don't admit any kind of reasoning. And indeed, the fact that the expectations diverge seem very closely related to the exact reasoning you would care most about doing in order to actually assess the relative importance of different decisions, so I don't think the infinity thing is a weird case, it seems absolutely central and I don't even know how to talk about what the view should be if the infinites didn't diverge. I'm not very intuitively drawn to vie

My point was that this doesn't seem consistent with anything like a leverage penalty.

I'm not particulalry enthusiastic about "artificial leverage penalties" that manually penalize the hypothesis you can get 3^^^3 happy people by a factor of 1/3^^^3 (and so insofar as that's what you're saying, I agree).

From my end, the core of my objection feels more like "you have an extra implicit assumption that lotteries are closed under countable combination, and I'm not sold on that." The part where I go "and maybe some sufficiently naturalistic prior ends up thinking long St. Petersburg games are ultimately less likely than they are simple???" feels to me more like a parenthetical, and a wild guess about how the weakpoint in your argument could resolve.

(My guess is that you mean something more narrow and specific by "leverage penalty" than I did, and that me using those words caused confusion. I'm happy to retreat to a broader term, that includes things like "big gambles just turn out not to unbalance naturalistic reasoning when you're doing it properly (eg. b/c finding-yourself-in-the-universe correctly handles this sort of thing somehow)", if you have one.)

(My guess is that part of the ... (read more)

From my end, the core of my objection feels more like "you have an extra implicit assumption that lotteries are closed under countable combination, and I'm not sold on that." [...] It seems to me that your argument contains a fourth, unlisted assumption, which is that lotteries are closed under countable combination. Do you agree?

My formal argument is even worse than that: I assume you have preferences over totally arbitrary probability distributions over outcomes!

I don't think this is unlisted though---right at the beginning I said we were proving theorems about a preference ordering ≺ defined over the space of probability distributions over a space of outcomes Ω. I absolutely think it's plausible to reject that starting premise (and indeed I suggest that someone with "unbounded utilities" ought to reject this premise in an even more dramatic way).

If you're trying to object to some other thing I said about leverage penalties, my guess is that I miscommunicated my position

It seems to me that our actual situation (i.e. my actual subjective distribution over possible worlds) is divergent in the same way as the St Petersburg lottery, at least with respect to quantiti... (read more)

Ok, cool, I think I see where you're coming from now.

I don't think this is unlisted though ...

Fair! To a large degree, I was just being daft. Thanks for the clarification.

It seems to me that our actual situation (i.e. my actual subjective distribution over possible worlds) is divergent in the same way as the St Petersburg lottery, at least with respect to quantities like expected # of happy people.

I think this is a good point, and I hadn't had this thought quite this explicitly myself, and it shifts me a little. (Thanks!)

(I'm not terribly sold on this point myself, but I agree that it's a crux of the matter, and I'm sympathetic.)

But at that point it seems much more likely that preferences just aren't defined over probability distributions at all

This might be where we part ways? I'm not sure. A bunch of my guesses do kinda look like things you might describe as "preferences not being defined over probability distributions" (eg, "utility is a number, not a function"). But simultaneously, I feel solid in my ability to use probabliity distributions and utility functions in day-to-day reasoning problems after I've chunked the world into a small finite number of possible acti... (read more)

paulfchristiano:
I agree with this: (i) it feels true and would be surprising not to add up to normality, (ii) coherence theorems suggest that any preferences can be represented as probabilities+utilities in the case of finitely many outcomes. This is my view as well, but you still need to handle the dependence on subjective uncertainty. I think the core thing at issue is whether that uncertainty is represented by a probability distribution (where utility is an expectation). (Slightly less important: my most naive guess is that the utility number is itself represented as a sum over objects, and then we might use "utility function" to refer to the thing being summed.) I don't mean that we face some small chance of encountering a St Petersburg lottery. I mean that when I actually think about the scale of the universe, and what I ought to believe about physics, I just immediately run into St Petersburg-style cases:

* It's unclear whether we can have an extraordinarily long-lived civilization if we reduce entropy consumption to ~0 (e.g. by having a reversible civilization). That looks like at least 5% probability, and would suggest the number of happy lives is much more than 10^100 times larger than I might have thought. So does it dominate the expectation?
* But nearly-reversible civilizations can also have exponential returns to the resources they are able to acquire during the messy phase of the universe. Maybe that happens with only 1% probability, but it corresponds to a yet bigger civilization. So does that mean we should think that colonizing faster increases the value of the future by 1%, or by 100% since these possibilities are bigger and better and dominate the expectation?
* But also it seems quite plausible that our universe is already even-more-exponentially spatially vast, and we merely can't reach parts of it (but a large fraction of them are nevertheless filled with other civilizations like ours). Perhaps that's 20%. So it actually looks more likely than the "long-li
So8res:
(I, in fact, lifted it off of you, a number of years ago :-p) Of course. (And noting that I am, perhaps, more openly confused about how to handle the subjective uncertainty than you are, given my confusions around things like logical uncertainty and whether difficult-to-normalize arithmetical expressions meaningfully denote numbers.) Running through your examples: I agree. Separately, I note that I doubt total Fun is linear in how much compute is available to civilization; continuity with the past & satisfactory completion of narrative arcs started in the past is worth something, from which we deduce that wiping out civilization and replacing it with another different civilization of similar flourish and with 2x as much space to flourish in, is not 2x as good as leaving the original civilization alone. But I'm basically like "yep, whether we can get reversibly-computed Fun chugging away through the high-entropy phase of the universe seems like an empirical question with cosmically large swings in utility associated therewith." This seems fairly plausible to me! For instance, my best guess is that you can get more than 2x the Fun by computing two people interacting than by computing two individuals separately. (Although my best guess is also that this effect diminishes at scale, \shrug.) By my lights, it sure would be nice to have more clarity on this stuff before needing to decide how much to rush our expansion. (Although, like, 1st world problems.) Sure, this is pretty plausible, but (arguendo) it shouldn't really be factoring into our action analysis, b/c of the part where we can't reach it. \shrug Sure. And again (arguendo) this doesn't much matter to us b/c the others are beyond our sphere of influence. I think this is where I get off the train (at least insofar as I entertain unbounded-utility hypotheses). Like, our ability to reversibly compute in the high-entropy regime is bounded by our error-correction capabilities, and we really start needing to up
Vanessa Kosoy:
A side note: IB physicalism solves at least a large chunk of naturalism/counterfactuals/anthropics but is almost orthogonal to this entire issue (i.e. physicalist loss functions should still be bounded for the same reason cartesian loss functions should be bounded), so I'm pretty skeptical there's anything in that direction. The only part which is somewhat relevant is: IB physicalists have loss functions that depend on which computations are running, so two exact copies of the same thing definitely count as the same and not twice as much (except potentially in some indirect way, such as being involved together in a single more complex computation).
2So8res
I am definitely entertaining the hypothesis that the solution to naturalism/anthropics is in no way related to unbounded utilities. (From my perspective, IB physicalism looks like a guess that shows how this could be so, rather than something I know to be a solution, ofc. (And as I said to Paul, the observation that would update me in favor of it would be demonstrated mastery of, and unravelling of, my own related confusions.))
4Vanessa Kosoy
In the parenthetical remark, are you talking about confusions related to Pascal-mugging-type thought experiments, or other confusions?
2So8res
Those & others. I flailed towards a bunch of others in my thread w/ Paul. Throwing out some taglines:

* "does logic or physics come first???"
* "does it even make sense to think of outcomes as being mathematical universes???"
* "should I even be willing to admit that the expression "3^^^3" denotes a number before taking time proportional to at least log(3^^^3) to normalize it?"
* "is the thing I care about more like which-computations-physics-instantiates, or more like the-results-of-various-computations??? is there even a difference?"
* "how does the fact that larger quantum amplitudes correspond to more magical happening-ness relate to the question of how much more I should care about a simulation running on a computer with wires that are twice as thick???"

Note that these aren't supposed to be particularly well-formed questions. (They're more like handles for my own confusions.) Note that I'm open to the hypothesis that you can resolve some but not others. From my own state of confusion, I'm not sure which issues are interwoven, and it's plausible to me that you, from a state of greater clarity, can see independences that I cannot. Note that I'm not asking you to show me how IB physicalism chooses a consistent set of answers to some formal interpretations of my confusion-handles. That's the sort of (non-trivial and virtuous!) feat that causes me to rate IB physicalism as a "plausible guess". In the specific case of IB physicalism, I'm like "maaaybe? I don't yet see how to relate this Γ that you suggestively refer to as a 'map from programs to results' to a philosophical stance on computation and instantiation that I understand" and "I'm still not sold on the idea of handling non-realizability with inframeasures (on account of how I still feel confused about a bunch of things that inframeasures seem like a plausible guess for how to solve)" and etc. Maybe at some point I'll write more about the difference, in my accounting, between plausible guesses
4Vanessa Kosoy
Hmm... I could definitely say stuff about, what's the IB physicalism take on those questions. But this would be what you specifically said you're not asking me to do. So, from my perspective addressing your confusion seems like a completely illegible task atm. Maybe the explanation you alluded to in the last paragraph would help.
3So8res
I'd be happy to read it if you're so inclined and think the prompt would help you refine your own thoughts, but yeah, my anticipation is that it would mostly be updating my (already decent) probability that IB physicalism is a reasonable guess. A few words on the sort of thing that would update me, in hopes of making it slightly more legible sooner rather than later/never: there's a difference between giving the correct answer to metaethics ("'goodness' refers to an objective (but complicated, and not objectively compelling) logical fact, which was physically shadowed by brains on account of the specifics of natural selection and the ancestral environment"), and the sort of argumentation that, like, walks someone from their confused state to the right answer (eg, Eliezer's metaethics sequence). Like, the confused person is still in a state of "it seems to me that either morality must be objectively compelling, or nothing truly matters", and telling them your favorite theory isn't really engaging with their intuitions. Demonstrating that your favorite theory can give consistent answers to all their questions is something, it's evidence that you have at least produced a plausible guess. But from their confused perspective, lots of people (including the nihilists, including the Bible-based moral realists) can confidently provide answers that seem superficially consistent. The compelling thing, at least to me and my ilk, is the demonstration of mastery and the ability to build a path from the starting intuitions to the conclusion. In the case of a person confused about metaethics, this might correspond to the ability to deconstruct the "morality must be objectively compelling, or nothing truly matters" intuition, right in front of them, such that they can recognize all the pieces inside themselves, and with a flash of clarity see the knot they were tying themselves into. At which point you can help them untie the knot, and tug on the strings, and slowly work your way
4Vanessa Kosoy
I don't think I'm capable of writing something like the metaethics sequence about IB, that's a job for someone else. My own way of evaluating philosophical claims is more like:

* Can we build an elegant, coherent mathematical theory around the claim?
* Does the theory meet reasonable desiderata?
* Does the theory play nicely with other theories we have high confidence in?
* If there are compelling desiderata the theory doesn't meet, can we show that meeting them is impossible?

For example, the way I understood objective morality is wrong was by (i) seeing that there's a coherent theory of agents with any utility function whatsoever and (ii) understanding that, in terms of the physical world, "Vanessa's utility function" is more analogous to "coastline of Africa" than to "fundamental equations of physics". I agree that explaining why we have certain intuitions is a valuable source of evidence, but it's entangled with messy details of human psychology that create a lot of noise. (Notice that I'm not saying you shouldn't use intuition, obviously intuition is an irreplaceable core part of cognition. I'm saying that explaining intuition using models of the mind, while possible and desirable, is also made difficult by the messy complexity of human minds, which in particular introduces a lot of variables that vary between people.) Also, I want to comment on your last tagline, just because it's too tempting: I haven't written the proofs cleanly yet (because I'm prioritizing other projects atm), but it seems that IB physicalism produces a rather elegant interpretation of QM. Many-worlds turns out to be false. The wavefunction is not "a thing that exists". Instead, what exists is the outcomes of all possible measurements. The universe samples those outcomes from a distribution that is determined by two properties: (i) the marginal distribution of each measurement has to obey the Born rule and (ii) the overall amount of computation done by the universe should be minimal. It fol
1TAG
What's ordinary randomness?
7Vanessa Kosoy
I think that this confusion results from failing to distinguish between your individual utility function and the "effective social utility function" (the result of cooperative bargaining between all individuals in a society). The individual utility function is bounded on a scale which is roughly comparable to Dunbar's number[1]. The effective social utility function is bounded on a scale comparable to the current size of humanity. When you conflate them, the current size of humanity seems like a strangely arbitrary parameter, so you're tempted to decide the utility function is unbounded.

The reason why distinguishing between those two is so hard is that there are strong social incentives to conflate them, incentives which our instincts are honed to pick up on. Pretending to unconditionally follow social norms is a great way to seem trustworthy. When you combine it with an analytic mindset that's inclined to reasoning with explicit utility functions, this self-deception takes the form of modeling your intrinsic preferences by utilitarianism.

Another complication is, larger universes tend to be more diverse and hence more interesting. But this also saturates somewhere (having e.g. 10^100 books to choose from is not noticeably better than having 10^50 books to choose from).

----------------------------------------

1. It seems plausible to me both for explaining how people behave in practice and in terms of evolutionary psychology.

The proof doesn't run for me. The only way I know of to be able to rearrange the terms in an infinite series is if the starting series converges and the resultant series converges. The series here doesn't fulfill that condition, so I am not convinced the rewrite is a safe step.

I am a bit unsure about my maths, so I am going to exaggerate the kind of flawed logic I read into the proof. Start with a series that might not converge, 1+1+1+1+1+1... (oh, it indeed blatantly diverges), then split each term to include a non-effective addition: (1+0)+(1+0)+(1+0)+(1+0)... . Blatantly disregard the safety rules about parentheses messing with series and just treat them as parentheses that follow familiar rules, 1+0+1+0+1+0+1+0+1..., so 1+1+1+1... is not equal to itself. (The unsafe step leads to nonsense.)
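The danger Slider is gesturing at — that regrouping terms changes the behavior of a series that isn't absolutely convergent — can be checked numerically. A minimal sketch (my own illustration, not part of the comment), using the classic Grandi-style terms 1 − 1 + 1 − 1 + …, whose grouped form (1 − 1) + (1 − 1) + … behaves differently:

```python
from itertools import accumulate, islice

def partial_sums(terms, n):
    """Return the first n partial sums of an iterable of terms."""
    return list(islice(accumulate(terms), n))

# Grandi-style series 1 - 1 + 1 - 1 + ...
def grandi():
    while True:
        yield 1
        yield -1

# The same terms grouped as (1 - 1) + (1 - 1) + ... = 0 + 0 + ...
def grouped():
    while True:
        yield 0

print(partial_sums(grandi(), 6))   # [1, 0, 1, 0, 1, 0] -- oscillates forever
print(partial_sums(grouped(), 6))  # [0, 0, 0, 0, 0, 0] -- settles at 0
```

Same terms, different grouping, different limiting behavior — which is exactly why regrouping is only licensed for absolutely convergent series.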

With a convergent series it doesn't matter whether we approach the limit "twice as fast", but the "rate of ascension" might matter to whatever analog of a value a divergent series would have.

7Maximum_Skull
The correct condition for real numbers would be absolute convergence (otherwise the sum after rearrangement might become different and/or infinite) but you are right: the series rearrangement is definitely illegal here.
4paulfchristiano
But in the post I'm rearranging a series of probabilities, 1/2, 1/4, …, which is very legal. The fact that you can't rearrange infinite sums is an intuitive reason to reject Weak Dominance, and then the question is how you feel about that.
5Maximum_Skull
Those probabilities are multiplied by the X_i's, which makes it more complicated. If I try running it with the X's being real numbers (which is probably the most popular choice for utility measurement), the proof breaks down. If I, for example, allow negative utilities, I can rearrange the series from a divergent one into a convergent one and vice versa, trivially leading to a contradiction just from the fact that I am allowed to do weird things with infinite series, and not because the proposed axioms are contradictory. EDIT: concisely, your axioms do not imply that the rearrangement should result in the same utility.
8LGS
The rearrangement property you're rejecting is basically what Paul is calling the "rules of probability" that he is considering rejecting. If you have a probability distribution over infinitely (but countably) many probability distributions, each of which is of finite support, then it is in fact legal to "expand out" the probabilities to get one distribution over the underlying (countably infinite) domain.  This is standard in probability theory, and it implies the rearrangement property that bothers you.
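The "expand out" operation LGS describes can be sketched for the finite/truncated case: a mixture over distributions of finite support flattens into a single distribution over the underlying domain, with probabilities multiplied through. A small illustration (my own sketch; the outcome labels are made up):

```python
from collections import defaultdict

def flatten(mixture):
    """Flatten a mixture of pmfs into one pmf over outcomes.

    `mixture` is a list of (q_i, pmf_i) pairs, where q_i is the outer
    probability of the i-th lottery and pmf_i is a dict mapping outcomes
    to probabilities (finite support).
    """
    flat = defaultdict(float)
    for q, pmf in mixture:
        for outcome, p in pmf.items():
            flat[outcome] += q * p
    return dict(flat)

# A 50/50 mixture of two coin-flip lotteries over outcomes A, B, C.
mixture = [
    (0.5, {"A": 0.5, "B": 0.5}),
    (0.5, {"B": 0.5, "C": 0.5}),
]
result = flatten(mixture)
print(result)  # {'A': 0.25, 'B': 0.5, 'C': 0.25}
```

The flattened probabilities still sum to 1, which is the countable-additivity bookkeeping that makes the expansion legal in standard probability theory.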
2Maximum_Skull
Oh, thanks, I did not think about that! Now everything makes much more sense.
6paulfchristiano
I'm not rearranging a sum of real numbers. I'm showing that no relationship < over probability distributions satisfies a given dominance condition.
5Slider
I am not familiar enough with the rules of lotteries and mixtures to know whether the mixture rewrite is valid or not. If the outcomes were, for example, money payouts, then the operations carried out would be invalid. I would be surprised if somehow the rules for lotteries made this okay. There are too many implicit steps here for me; I would benefit from baby-stepping through this process, or at least pointers to what I need to learn to be convinced of this.
6paulfchristiano
I'm using the usual machinery of probability theory, and particularly countable additivity. It may be reasonable to give up on that, and so I think the biggest assumption I made at the beginning was that we were defining a probability distribution over arbitrary lotteries and working with the space of probability distributions. A way to look at it is: the things I'm taking sums over are the probabilities of possible outcomes. I'm never talking anywhere about utilities or cash payouts or anything else. The fact that I labeled some symbols X_8 does not mean that the real number 8 is involved anywhere. But these sums over the probabilities of worlds are extremely convergent. I'm not doing any "rearrangement," I'm just calculating ∑_{k=n+1}^∞ 1/2^k = 1/2^n.
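The geometric tail sum Paul computes here (the probabilities 1/2^k for k > n summing to exactly 1/2^n) is easy to sanity-check numerically; a trivial sketch of my own:

```python
def tail_sum(n, terms=60):
    """Partial sum of 1/2^k for k = n+1 .. n+terms (approximates the infinite tail)."""
    return sum(2.0 ** -k for k in range(n + 1, n + 1 + terms))

# The tail past position n of the geometric series equals 1/2^n.
for n in range(5):
    assert abs(tail_sum(n) - 2.0 ** -n) < 1e-15
print("tail identity holds for n = 0..4")
```

These are the only infinite sums the proof manipulates — sums of probabilities, all absolutely convergent — which is Paul's point.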
9gjm
So there are some missing axioms here, describing what happens when you construct lotteries out of other lotteries. Specifically, the rearranging step Slider asks about is not justified by the explicitly given axioms alone: it needs something along the lines of "if for each i we have a lottery ∑_j p_{ij} X_j, then the values of the lotteries ∑_i q_i (∑_j p_{ij} X_j) and ∑_j (∑_i q_i p_{ij}) X_j are equal". (Your derivation only actually uses this in the special case where for each i only finitely many of the p_{ij} are nonzero.) You might want to say either that these two "different" lotteries have equal value, or else that they are in fact the same lottery. In either case, it seems to me that someone might dispute the axiom in question (intuitively obvious though it seems, just like the others). You've chosen a notation for lotteries that makes an analogy with infinite series; if we take this seriously, we notice that this sort of rearrangement absolutely can change whether a series converges, and to what value if so. How sure are you that rearranging lotteries is safer than rearranging sums of real numbers? (The sums of the probabilities are extremely convergent, yes. But the probabilities are (formally) multiplying outcomes whose values we are supposing are correspondingly divergent. Again, I am not sure I want to assume that this sort of manipulation is safe.)
4paulfchristiano
I'm handling lotteries as probability distributions over an outcome space Ω, not as formal sums of outcomes. To make things simple you can assume Ω is countable. Then a lottery A assigns a real number A(ω) to each ω∈Ω, representing its probability under the lottery A, such that ∑_{ω∈Ω} A(ω) = 1. The sum ∑ p_i A_i is defined by (∑ p_i A_i)(ω) = ∑ p_i A_i(ω). And all these infinite sums of real numbers are in turn defined as the suprema of the finite sums, which are easily seen to exist and to still sum to 1. (All of this is conventional notation.) Then ∑_i q_i (∑_j p_{ij} A_j) and ∑_j (∑_i q_i p_{ij}) A_j are exactly equal.
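For finite supports, the pointwise definition Paul gives can be spelled out directly. The sketch below (my own illustration; the outcome labels and weights are arbitrary) checks that mixing the inner mixtures and mixing once with the combined weights give exactly the same distribution, pointwise:

```python
def mix(weights, pmfs):
    """Pointwise mixture: (sum_i q_i A_i)(omega) = sum_i q_i A_i(omega)."""
    support = set().union(*pmfs)
    return {omega: sum(q * pmf.get(omega, 0.0)
                       for q, pmf in zip(weights, pmfs))
            for omega in support}

# Degenerate outcome lotteries A_j, outer weights q_i, inner weights p_ij.
A = [{"a": 1.0}, {"b": 1.0}, {"c": 1.0}]
q = [0.5, 0.5]
p = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5]]

# sum_i q_i (sum_j p_ij A_j)  versus  sum_j (sum_i q_i p_ij) A_j
lhs = mix(q, [mix(p[i], A) for i in range(2)])
rhs = mix([sum(q[i] * p[i][j] for i in range(2)) for j in range(3)], A)
assert lhs == rhs  # equal pointwise, as claimed
print(lhs)
```

Nothing here touches utilities — only probabilities are being summed, which is why every sum involved converges.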
4gjm
OK! But I still feel like there's something being swept under the carpet here. And I think I've managed to put my finger on what's bothering me. There are various things we could require our agents to have preferences over, but I am not sure that probability distributions over outcomes is the best choice. (Even though I do agree that the things we want our agents to have preferences over have essentially the same probabilistic structure.) A weaker assumption we might make about agents' preferences is that they are over possibly-uncertain situations, expressed in terms of the agent's epistemic state. And I don't think "nested" possibly-uncertain-situations even exist. There is no such thing as assigning 50% probability to each of (1) assigning 50% probability to each of A and B, and (2) assigning 50% probability to each of A and C. There is such a thing as assigning 50% probability now to assigning those different probabilities in five minutes, and by the law of iterated expectations your final probabilities for A, B, C must then obey the distributive law, but the situations are still not literally the same, and I think that in divergent-utility situations we can't assume that your preferences depend only on the final outcome distribution. Another way to say this is that, given that the A_i and B_i are lotteries rather than actual outcomes and that combinations like ∑ p_i A_i mean something more complicated than they may initially look like they mean, the dominance axioms are less obvious than the notation makes them look; and even though there are no divergences in the sums-over-probabilities that arise when you do the calculations, there are divergences in the implied something-like-sums-over-weighted-utilities, and in my formulation you really are having to rearrange outcomes as well as probabilities when you do the calculations.
4paulfchristiano
I agree that in the real world you'd have something like "I'm uncertain about whether X or Y will happen, call it 50/50. If X happens, I'm 50/50 about whether A or B will happen. If Y happens, I'm 50/50 about whether B or C will happen." And it's not obvious that this should be the same as being 50/50 between B or X, and conditioned on X being 50/50 between A or C. Having those two situations be different is kind of what I mean by giving up on probabilities---your preferences are no longer a function of the probability that outcomes occur, they are a more complicated function of your epistemic state, and so it's not correct to summarize your epistemic state as a probability distribution over outcomes. I don't think this is totally crazy, but I think it's worth recognizing it as a fairly drastic move.
1Bunthut
Would a decision theory like this count as "giving up on probabilities" in the sense in which you mean it here?
1davidad
To anyone who is still not convinced—that last move, ∑_i ∑_j q_i p_{ij} A_j = ∑_j ∑_i q_i p_{ij} A_j, is justified by Tonelli's theorem, merely because q_i p_{ij} A_j(ω) ≥ 0 (for all i, j, ω).
6davidad
The way I look at this is that objects like (1/2)X_0 + (1/2)X_1 live in a function space like X → ℝ_{≥0}, specifically the subspace of that where the functions f are integrable with respect to counting measure on X and ∑_{x∈X} f(x) = 1. In other words, objects like f_1 := (1/2)X_0 + (1/2)X_1 are probability mass functions (pmf). f_1(X_0) is 1/2, and f_1(X_1) is 1/2, and f_1 of anything else is 0. When we write what looks like an infinite series λ_1 f_1 + λ_2 f_2 + ⋯, what this really means is that we're defining a new f by pointwise infinite summation: f(x) := ∑_{i=1}^∞ λ_i f_i(x). So only each collection of terms that contains a given X_k needs to form a convergent series in order for this new f to be well-defined. And for it to equal another f′, the convergent sums only need to be equal pointwise (for each X_k, f(X_k) = f′(X_k)). In Paul's proof above, the only X_k for which the collection of terms containing it is even infinite is X_0. That's the reason he's "just calculating" that one sum.
2Slider
The outcomes have the property that they are step-wise more than double the worth. In X_∞ = (1/2)X_0 + (1/4)X_1 + (1/8)X_2 + (1/16)X_4 + … the real part only halves on each term. So as the series goes on, each term gets bigger and bigger instead of smaller and smaller as in a convergent-like scenario. So it seems to me that even in isolation this is a divergent-like series.
2justinpombrio
Here's a concrete example. Start with a sum that converges to 0 (in fact every partial sum is 0): 0 + 0 + ... Regroup the terms a bit: = (1 + -1) + (1 + -1) + ... = 1 + (-1 + 1) + (-1 + 1) + ... = 1 + 0 + 0 + ... and you get a sum that converges to 1 (in fact every partial sum is 1). I realize that the things you're summing are probability distributions over outcomes and not real numbers, but do you have reason to believe that they're better behaved than real numbers in infinite sums? I'm not immediately seeing how countable additivity helps. Sorry if that should be obvious.
2tailcalled
Your argument doesn't go through if you restrict yourself to infinite weighted averages with nonnegative weights.
4justinpombrio
Aha. So if a sum of non-negative numbers converges, then any rearrangement of that sum will converge to the same number, but not so for sums of possibly-negative numbers? Ok, another angle. If you take Christiano's lottery: X_∞ = (1/2)X_0 + (1/4)X_1 + (1/8)X_2 + (1/16)X_4 + ... and map outcomes to their utilities, setting the utility of X_0 to 1, of X_1 to 2, etc., you get: 1/2 + 1/2 + 1/2 + 1/2 + ... Looking at how the utility gets rearranged after the "we can write X_∞ as a mixture" step, the first "1/2" term is getting "smeared" across the rest of the terms, giving: 3/4 + 5/8 + 9/16 + 17/32 + ..., which is a sequence of utilities that are pairwise higher. This is an essential part of the violation of Antisymmetry/Unbounded/Dominance. My intuition says that a strange thing happened when you rearranged the terms of the lottery, and maybe you shouldn't do that. Should there be another property, called "Rearrangement"? Rearrangement: you may apply an infinite number of commutativity (x+y=y+x) and associativity ((x+y)+z=x+(y+z)) rewrites to a lottery. (In contrast, I'm pretty sure you can't get an Antisymmetry/Unbounded/Dominance violation by applying only finitely many commutativity and associativity rearrangements.) I don't actually have a sense of what "infinite lotteries, considered equivalent up to finite but not infinite rearrangements" look like. Maybe it's not a sensible thing.
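The dichotomy being confirmed here is a standard fact: rearranging a series of nonnegative terms never changes its sum, while a conditionally convergent series with mixed signs can be rearranged to a different value (Riemann's rearrangement theorem). A sketch of my own using the alternating harmonic series, whose classic "one positive, two negative" rearrangement converges to half the original limit:

```python
import math
import random

def alt_harmonic_terms(n):
    """First n terms of 1 - 1/2 + 1/3 - 1/4 + ..., which sums to ln 2."""
    return [(-1) ** k / (k + 1) for k in range(n)]

# Nonnegative terms: any (finite) shuffle leaves the sum unchanged.
nonneg = [abs(t) for t in alt_harmonic_terms(50)]
shuffled = nonneg[:]
random.shuffle(shuffled)
assert abs(sum(nonneg) - sum(shuffled)) < 1e-12

# Mixed signs: take one positive term, then two negative terms, repeatedly.
# This rearrangement of the same terms converges to (1/2) ln 2, not ln 2.
def rearranged_terms(n_blocks):
    out = []
    for k in range(n_blocks):
        out.append(1 / (2 * k + 1))   # next unused positive term
        out.append(-1 / (4 * k + 2))  # next two unused negative terms
        out.append(-1 / (4 * k + 4))
    return out

print(abs(sum(alt_harmonic_terms(10 ** 6)) - math.log(2)))    # ~0: original limit
print(abs(sum(rearranged_terms(10 ** 6)) - math.log(2) / 2))  # ~0: halved limit
```

This is why the step in the post is only safe because it rearranges probabilities (all nonnegative), never signed utility terms.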
2Slider
I am having trouble trying to translate between the infinity-hiding style and the explicit-infinity style. My grievance might be stupid. Take (1/2)X_0, split X_0 into an equal number of parts to get the final form (1/2)(εX_0 + εX_0 + εX_0 + ...), move the scalar in: (1/4)εX_0 + (1/8)εX_0 + (1/16)εX_0 + ..., combine scalars: (ε/4)X_0 + (ε/8)X_0 + (ε/16)X_0 + ... Take each of these separately to the rest of the original terms: ((ε/4)X_0 + (1/4)X_1) + ((ε/8)X_0 + (1/8)X_2) + ((ε/16)X_0 + (1/16)X_4) + ... Combine scalars to try to hit closest to the target form: (1/2)((ε/2)X_0 + (1/2)X_1) + (1/4)((ε/2)X_0 + (1/2)X_2) + (1/8)((ε/2)X_0 + (1/2)X_4) + ... But (ε/2)X_0 + (1/2)X_1 is then quite far from (1/2)X_0 + (1/2)X_1. Within real precision a single term hasn't moved much: (ε/2)X_0 + (1/2)X_1 ∼ (1/2)X_1. This suggests to me that somewhere there are "levels of calibration" mixing, corresponding to members of different archimedean fields trying to intermingle here. Normally if one is allergic to infinity levels there are ways to dance around it / think about it in different terms. But I am not efficient at translating between them.
2Slider
New attempt. X_∞ = (1/2)X_0 + (1/4)X_1 + (1/8)X_2 + (1/16)X_4 + … I think I now agree that X_0 can be written as (1/2)X_0 + (1/4)X_0 + (1/8)X_0 + ... However, this uses a "de novo" indexing, and gets only to (1/2)((1/2)X_0 + (1/4)X_0 + (1/8)X_0 + ...) + (1/4)X_1 + (1/8)X_2 + (1/16)X_4 + … Taking terms out from the inner thing crosses term lines for the outer summation, which counts as "messing with indexing" in my intuition. The suspect move just maps them out one to one: ((1/4)X_0 + (1/4)X_1) + ((1/8)X_0 + (1/8)X_2) + ((1/16)X_0 + (1/16)X_4) + ... But why is this the permitted way, and could I jam the terms differently, say apply it to every other term: ((1/4)X_0 + (1/4)X_1) + ((1/8)X_2) + ((1/8)X_0 + (1/16)X_4) + (1/32)X_8 + ((1/16)X_0 + (1/64)X_16) + ... If I have (∑_{i=0}^a x_i) + (∑_{j=0}^a y_j), I am more confident that they "index at the same rate" to make ∑_{u=0}^c (x_u + y_u). However, if I have (∑_i^a x_i) + (∑_j^b y_j), I need more information about the relation of a and b to make sure that mixing them plays nicely. Say in the case of b = 2a, it is not okay to think only of the terms when mixing.
2mikehawk
I had the same initial reaction. I believe the logic of the proof is fine (it is similar to the Mazur swindle), basically because it is not operating on real numbers, but rather on mixtures of distributions. The issue is more: why would you expect the dominance condition to hold in the first place? If you allow for unbounded utility functions, then you have to give it up anyway, for kind of trivial reasons. Consider two sequences A_i and B_i of gambles such that E[A_i] < E[B_i], where ∑_i p_i E[A_i] and ∑_i p_i E[B_i] both diverge. Does it follow that E[∑_i p_i A_i] < E[∑_i p_i B_i]? Obviously not, since both quantities diverge. At best you can say ≤. A bit more formally: in real analysis/measure theory one works with the so-called extended real numbers, in which the value "infinity" is assigned to any divergent sum, with this value defined by the algebraic property x ≤ ∞ for any x. In particular, there is no x in the extended real numbers such that ∞ < x. So at least in standard axiomatizations of measure theory, you cannot expect the strict dominance condition to hold in complete generality; you will have to make some kind of exception for infinite values. Similar considerations apply to the Intermediate Mixtures assumption.
2Slider
With surreals I might have transfinite quantities that can reliably compare every which way despite both members being beyond any finite bound. For "tame" entities all kinds of nice properties are easy to get/prove. The game of "how wild can my entities get while retaining a certain property" is a very different game. "These properties are impossible to get even for super-wild things" is even harder. The Mazur swindle seems (at least based on the Wikipedia article) not to be a proof of certain things, so it warrants special interest whether its applicability conditions are met or not.
1Jalex Stark
The sum we're rearranging isn't a sum of real numbers, it's a sum in ℓ1. Ignoring details of what ℓ1 means... the two rearrangements give the same sum! So I don't understand what your argument is. Abstracting away the addition and working in an arbitrary topological space, the argument goes like this: L = lim x_n = lim y_n. For all n, f(x_n) = 0 and f(y_n) = 1. Therefore, f is not continuous (else 0 = 1).
2Slider
If ℓ1 is something weird then I don't necessarily even know that x+y = y+x; it is not a given at all that rearrangement would be permissible. In order to sensibly compare lim x_n and lim y_n, it would be nice if they both existed and were not infinities. L = lim x_n = lim y_n = ∞ is not useful for transiting equalities between x and y.
1Jalex Stark
L is not equal to infinity; that's a type error. L is equal to (1/2)A_0 + (1/4)A_1 + (1/8)A_2 + ... ℓ1 is a bona fide vector space -- addition behaves as you expect. The points are infinite sequences (x_i) such that ∑_i |x_i| is finite. This sum is a norm, and the space is Banach with respect to that norm. Concretely, our interpretation is that x_i is the probability of being in world A_i. A utility function is a linear functional, i.e. a map from points to real numbers such that the map commutes with addition. The space of continuous linear functionals on ℓ1 is ℓ∞, which is the space of bounded sequences. A special case of this post is that unbounded linear functionals are not continuous. I say 'special case' because the class of "preferences between points" is richer than the class of utility functions. You get a preference order from a utility function via "map to real numbers and use the order there." The utility function framework e.g. forces every pair of worlds to be comparable, but the more general framework doesn't require this -- Paul's theorem follows from weaker assumptions.
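The ℓ1/ℓ∞ picture can be made concrete with a finite truncation (my own sketch; the particular pmf and utility sequences are just examples): the point (1/2, 1/4, 1/8, …) is summable, a bounded utility sequence gives a convergent expectation, and the doubling utilities from the post give partial expectations that grow without bound:

```python
def partial_expectation(pmf, u, n):
    """Partial expected utility: sum over i < n of pmf(i) * u(i)."""
    return sum(pmf(i) * u(i) for i in range(n))

pmf = lambda i: 2.0 ** -(i + 1)   # probabilities 1/2, 1/4, 1/8, ... (a point of l^1)
bounded_u = lambda i: 1.0         # a bounded functional (lives in l^infinity)
doubling_u = lambda i: 2.0 ** i   # unbounded: each outcome twice as good as the last

print(partial_expectation(pmf, bounded_u, 50))   # converges toward 1.0
print(partial_expectation(pmf, doubling_u, 50))  # adds 1/2 per term: 25.0, diverging
```

Same ℓ1 point, two functionals: the bounded one is continuous and yields a finite value; the unbounded one is exactly the St. Petersburg-style divergence the thread is about.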
1Slider
The presentation tries to deal with unbounded utilities. Assuming ∑_i |x_i| to be finite excludes the target of investigation from the scope. Supposedly there are multiple text input methods, but at least on the website I can highlight text and use an f(x) button to get math rendering. I don't know enough about the fancy spaces to say whether a version where the norm can take on transfinite or infinitesimal values makes sense, or one where the elements are just sequences without a condition to converge. Either (real number times an outcome) is a type for which a finiteness check doesn't make sense, or the allowable conversions from outcomes to real numbers force the sum to be bigger than any real number.
1Jalex Stark
Requiring ∑_i |x_i| to be finite is just part of assuming the x_i form a probability distribution over worlds. I think you're confused about the type difference between the A_i and the utility of A_i. (Where in the context of this post, the utility is just represented by an element of a poset.) I'm not advocating for or making arguments about any fanciness related to infinitesimals or different infinite values or anything like that.

Maybe I'm just a less charitable person - it seems very easy to me for someone to say the words "I have unbounded utility" without actually connecting any such referent to their decision-making process.

We can show that there's a tension between that verbal statement and the basic machinery of decision-making, which also illustrates how the practical decision-making process people use every day doesn't act as if expected utilities diverge.

And I think the proper response to seeing something like this happen to you is definitely not to double down on the verbal statement that sounded good. It's to stop and think very skeptically about whether this verbal statement fits with what you can actually ask of reality, and what you might want to ask for that you can actually get. (I've written too many posts about why it's the wrong move to want an AI to "just maximize my utility function." Saying that you want to be modeled as if you have unbounded utility [of this sort that lets you get divergent EV] is the same order of mistake.)

If you think people can make verbal statements that are "not up for grabs," this probably seems like gross uncharitableness.

6paulfchristiano
I can easily imagine people being mistaken about "would you prefer X or Y?" questions (either in the sense that their decisions would change on reflection, or their utterances aren't reflective of what should rightly be called their preferences, or whatever). That said, I also don't think it's obvious that uncertainty should be represented as probabilities, with preferences depending only on the probability of outcomes. All things considered, I feel like bounded utility functions are much more appealing than the other options. Mostly I wrote this post to help explain my serious skepticism about unbounded utility functions (and about how nonchalantly the prospect of unbounded utility functions is thrown around).
4Davidmanheim
Just posting to say I'm strongly in agreement that unbounded utility functions aren't viable - and we tried to deal with some of the issues raised by philosophers, with more or less success, in our paper here: https://philpapers.org/rec/MANWIT-6
5Davidmanheim
This is basically what I tried to argue in my preprint with Anders on infinite value - to quote: "We have been unfortunately unable to come up with a clear defense of the conceivability of infinities and infinitesimals used for decisionmaking, but will note a weak argument to illustrate the nonviable nature of the most common class of objection. The weak claim is that people can conceive of infinitesimals, as shown by the fact that there is a word for it, or that there is a mathematical formalism that describes it. But, we respond, this does not make a claim for the ability to conceive of a value any better than St. Anselm’s ontological proof of the existence of God. More comically, we can say that this makes the case approximately the same way someone might claim to understand infinity because they can draw an 8 sideways — it says nothing about their conception, much less the ability to make decisions on the basis of the infinite or infinitesimal value or probability. "
1FireStormOOO
This seems plausible to me for people who don't live and breathe math but still think Expected Utility is a tool they can't afford not to use. I would be surprised if the typical person, even here, picks up on the subtlety with any of the infinite sums, and its weird implications, on the first pass. I don't think infinite sums (and their many pitfalls) are typically taught at all until Calc II, which is not even a graduation requirement for non-STEM undergrad degrees. People also get a lot of mileage out of realizing that IRL most problems aren't edge cases and even fewer are corner cases -- rightly skipping most of the rigor that's necessary when discussing philosophy and purposely seeking out weird edge cases. Now, if someone who is actually well versed in the math and the philosophizing is saying this while understanding all the implications, that's an interesting discussion I want to read.

People like talking about extensions of the real numbers, but those don't help you avoid any of the contradictions above. For example, if you want to extend < to a preference order over hyperreal lotteries, it's just even harder for it to be consistent.

I'm a recent proponent of hyperreal utilities. I totally agree that hyperreals don't solve issues with divergent / St. Petersburg-style lotteries. I just think hyperreals are perfect for describing and comparing potentially-infinite-utility universes, though not necessarily lotteries over those universes. (This doesn't contradict Paul; I'm just clarifying.)

Separately, while cases like this do make it feel like we "should give up on probabilities as an abstraction for describing uncertainty," this conclusion makes me feel quite nihilistic about decision-under-uncertainty; I will be utterly shocked if "a totally different kind of machinery to understand or analyze those preferences" is satisfactory.

4paulfchristiano
My main concern is that unbounded utilities (and hence I assume also hyperreal utilities, unless you are just using them to express simple lexical preferences?) have a really hard time playing nicely with lotteries. But then if you aren't describing preferences over lotteries, why do you want to have scalar utilities at all? I think I'd be less shocked by a totally different framework than probability, but I do agree that it looks kind of bleak. I would love to just reject unbounded utilities out of hand based on this kind of argument, and personally I don't find unbounded utilities very appealing, but if someone swings that way I don't feel like you can very well tell them to just change their preferences.
7abramdemski
I'm also a fan of hyperreal probabilities/utilities -- not that I think humans (do/should) use them, particularly, but that I think they're not ruled out by appealing rationality principles.

I think the Jeffrey-Bolker axioms are more appealing than lottery-based elucidations of utility theory, and don't have the sorts of problems you're pointing to. In particular, you can just drop their version of the continuity axiom, which Jeffrey also feels is unmotivated. Preferences can be represented by a probability distribution and an expected-utility distribution, but you can't necessarily go the other way, from arbitrary probabilities and utilities to coherent preferences. So you can't just define arbitrary lotteries like you do in the OP. The agent has to actually believe these to be possible. So, rather than an impossibility result for unbounded utilities, I think you get that unbounded utilities are only consistent with specific beliefs.

EDIT: I still have to think about this more, but I think I misrepresented things a little. If we want expectations to be real-valued, but unbounded, then I think we can keep all the axioms including continuity, but the agent can't believe in the possibility of lotteries corresponding to divergent sums. If we are OK with hyperreal values, then I think we drop continuity, and belief in lotteries with divergent sums is OK (but we still don't need to deal with arbitrary lotteries, because that's just not a very natural thing to do in the JB framework -- we only want to work with what an agent believes is possible). However, it's possible that your Dominance axiom gets violated. Intuitively, I don't think this is a necessary thing (IE, it seems like we could add a Dominance-like axiom). In particular, your arguments don't go through, since we don't get to construct arbitrary lotteries; we only get to examine what the agent actually believes in and has preferences about. (So, to work with arbitrary lotteries, you have to add axioms ass
1davidad
I’m really interested in this direction (largely because I’m already interested in pointless-topology/geometric-logic approaches to world-modeling), but I have a couple of concerns off the bat. (Maybe if I read more about Jeffrey-Bolker and the surrounding literature I can answer my own questions here, but I thought I’d ask now anyway.)

1. One of the neat things about the standard interpretation of vNM is that it gives me an algorithmic recipe for (a) eliciting my beliefs-and-preferences about simple events, and then (b) deducing uniquely-valid consequences about what my preferences about complex events have to be (by computing integrals), so I don’t have to think about the complex events directly. Is there anything analogous to this in the Jeffrey-Bolker world?
2. If there is, can I apply it to ordinary lotteries that I definitely believe are possible, like prediction-market payouts?
3. If so, what does it look like mechanistically when I try to apply it to a St. Petersburg lottery? Where exactly are the guardrails between ordinary lotteries and impossible lotteries, and how might I come to realize (from within the Jeffrey-Bolker framework) that I’m not allowed to believe that a St. Petersburg lottery is real?
3abramdemski
Jeffrey does talk about this in his book! Denote the probability of an event as P(E), and its expected value as V(E). Now suppose we cut an event A into parts B and C. We must have V(A) = V(B)P(B|A) + V(C)P(C|A). Using this, we can cut the world up into small events which we're comfortable assigning values to, and then put things back together into expected values for larger events. Basically exactly what you'd do normally, but no events are distinguished as "outcomes," so you can start wherever you want.

The JB axioms don't assume anything like countable additivity, so an event like "St. Petersburg lottery" needs a valuation which is consistent (in the above-mentioned sense) with the value of all other events, but there isn't (necessarily) a computation which tells you the value of the infinite sum, the way there is for finite sums. We can add axioms which constrain values in situations like that, to avoid absurd things; but since it isn't clear how to evaluate infinite sums in general, it makes sense to keep those axioms weak. I think of this as a true result: the value of infinite lotteries is subjective (even after we know how to value all their finite parts). It has a lot of coherence constraints, but not enough to fully pin down a value. This means we don't have to worry about all the messy nonsense of trying to evaluate divergent sums.

However, I think if we assume that any sub-event of the St. Petersburg lottery in which we still have a chance of winning something has a positive value (which seems very reasonable), then we can prove that the total value of the lottery is not any real number, by splitting off more and more of the finite sub-lotteries and arguing that the total value must exceed each. The continuity axiom is what stops you from having non-Archimedean values (just like with vNM), so that's where the buck stops. If our preferences respect continuity, then we have to choose between believing St. Petersburg-like lotteries are possible, vs be
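As a quick sanity check, here's a toy computation of this decomposition (the outcome space, probabilities, and utilities below are made up for illustration):

```python
# Toy check of Jeffrey's value decomposition V(A) = V(B)P(B|A) + V(C)P(C|A),
# where A is partitioned into disjoint events B and C.

# A finite "world" of atomic outcomes with (probability, utility) pairs.
worlds = {
    "w1": (0.1, 5.0),
    "w2": (0.2, 1.0),
    "w3": (0.3, -2.0),
    "w4": (0.4, 0.5),
}

def P(event):
    return sum(worlds[w][0] for w in event)

def V(event):
    # Expected utility conditional on the event.
    return sum(worlds[w][0] * worlds[w][1] for w in event) / P(event)

A = {"w1", "w2", "w3"}
B = {"w1"}          # B and C partition A
C = {"w2", "w3"}

lhs = V(A)
rhs = V(B) * (P(B) / P(A)) + V(C) * (P(C) / P(A))
assert abs(lhs - rhs) < 1e-12
```

The same identity lets you start from whichever events you find easiest to value and assemble the rest, as described above.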
1Zach Stein-Perlman
I think assigning real (or hyperreal) values to possible universes can give really aesthetic properties (edit: at least to me; probably much less aesthetic for those who "don't find unbounded utilities very appealing") that I'd roughly call "additivity" or "linearity," like: if A and B are systems, U(universe containing A) + U(universe containing B) = U(universe containing A and B). (This assumes that value is local, or something, which seems reasonable.) Utilities contain more than just lexical-ordering information if we can use them to describe the utility of new possible universes. Perhaps more importantly, real and hyperreal utilities seem to play nice with finite lotteries, which seems quite desirable (and quite enough reason to have scalar utilities), even though it isn't as strong as we'd hope.

I think the dominance principle used in this post is too strong and relatively easy to deny. I think that the Better impossibility results for unbounded utilities are actually significantly better.

4Raemon
This seems useful to be flagged as a review, so it shows up in some review UI later. Mind if I convert it? (You can create reviews by clicking the Review button at the top of the post)
2niplav
I guess in the context of the Review I consider the two posts as one.
4paulfchristiano
I think that's reasonable; this is the one with the discussion and it has a forward link, so it would be better to review them as a unit.

The examples in this post all work by not just having divergent sums but unbounded single elements. When I look at this, my immediate takeaway is that we need a model that includes time. Outcomes should be of the form (u, t), where t indicates at which timestep they happen.

You are then allowed to look at infinite sequences iff the t_i are strictly increasing. The sums do not need to converge, but there does need to be a global bound B for all individual utilities, i.e. a B that upper-bounds all u_i.

This avoids all the problems i... (read more)
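One way to make this concrete (my own formalization as an illustration, not necessarily what the comment intends) is the overtaking criterion on partial sums, which stays well-defined precisely because per-step utilities are bounded:

```python
# Sketch of comparing time-indexed utility streams (u_t at timestep t) by the
# "overtaking" criterion: A beats B if A's partial sums are eventually always
# strictly ahead. Per-step utilities are assumed globally bounded, so every
# partial sum is finite even when the total diverges.
from itertools import accumulate

def overtakes(a, b, horizon=1000):
    """True if partial sums of `a` strictly exceed those of `b` from some
    timestep onward (checked up to `horizon`)."""
    pa = list(accumulate(a[:horizon]))
    pb = list(accumulate(b[:horizon]))
    ahead_from = None
    for t, (x, y) in enumerate(zip(pa, pb)):
        if x > y:
            if ahead_from is None:
                ahead_from = t
        else:
            ahead_from = None
    return ahead_from is not None

# Both streams have divergent totals; the second is eventually ahead
# despite a slow start.
a = [1.0] * 1000
b = [0.0, 0.0, 4.0] + [1.0] * 997
assert overtakes(b, a)
assert not overtakes(a, b)
```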

4paulfchristiano
Yes, I think that having bounded single elements but infinitely big universes is potentially fine. Though if the utilities of worlds are described by unboundedly-big numbers then of course you have exactly the same problem over worlds. See Joe's recent post On infinite ethics which prompted this post. I was especially responding to Part X which relied on the assumption that individual experiences can be arbitrarily good in order to argue that UDASSA-like schemes don't really avoid the trouble with infinities. But I think they do avoid the distinctive trouble with infinitely-big universes, and that arbitrarily-good experiences are more deeply problematic in their own right.
2AlexMennen
Replacing single utilities with time-indexed sequences of utilities doesn't help for representing preferences. If you have to make a decision between two options, each of which will result in a different sequence of time-indexed utilities, you still need to decide which option is better overall, which means you'll need a one-dimensional scale to compare these utility-sequences on. The VNM theorem tells you that, under certain fairly weak assumptions, a single real-valued utility is the appropriate measure to use for this.
2Slider
Reals might not be "continuous enough" to do the job. A hard limit case:

Option A: 1 utility on day 1, 0 utility for the rest of days
Option B: 2 utility on day 1, 0 utility for the rest of days
Option C: 3 utility on day 1, 0 utility for the rest of days
Option D: 1 utility on day 1, 1 utility for the rest of days
Option E: 2 utility on day 1, 1 utility for the rest of days

Continuity means that when L < M < N, there should be a p such that pL + (1−p)N ∼ M. So there are values p1, p2, p3 with p1·A + (1−p1)·C ∼ B, p2·A + (1−p2)·D ∼ B, and p3·A + (1−p3)·E ∼ B. If p1, p2 and p3 are from the reals and different, then they should be finite multiples of each other. So while one can do with one real to differentiate between A, B, C and between D, E, to me it seems the jump between the types of cases is not finite, and the reals can't provide that at the same time as keeping the resolution on differentiating between day-1 utilities. With surreals the probabilities could be infinitesimal and the missing probabilities exist.
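A quick numeric sketch of this point (truncating "the rest of days" to a finite horizon T is my own illustration): the required mixing probability for the D/E cases creeps toward 1 as the horizon grows, while the A/B/C case stays at 1/2.

```python
# Truncate "the rest of days" to a finite horizon T and solve
# p*U(L) + (1-p)*U(N) = U(M) for the mixing probability p.
# Options follow the comment above; each is (day-1 utility, per-day utility).

def U(day1, per_day, T):
    """Total utility: day-1 utility plus `per_day` for T further days."""
    return day1 + per_day * T

def continuity_p(L, N, M, T):
    """Solve p*U(L) + (1-p)*U(N) = U(M) for p."""
    uL, uN, uM = U(*L, T), U(*N, T), U(*M, T)
    return (uM - uN) / (uL - uN)

A, B, C, D, E = (1, 0), (2, 0), (3, 0), (1, 1), (2, 1)

for T in [10, 1_000, 1_000_000]:
    p1 = continuity_p(A, C, B, T)   # stays exactly 1/2 at every horizon
    p3 = continuity_p(A, E, B, T)   # equals T/(T+1), creeping toward 1
    print(T, p1, p3)
```

In the infinite-horizon limit the "missing" probability gap 1 − p3 is infinitesimal, which real-valued probabilities can't represent while still distinguishing A, B, C.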
6AlexMennen
If you have surreal-valued utilities, you can just round infinitesimals to 0 to get real-valued utilities, and then continuity can be satisfied with real-valued probabilities again. The resulting real-valued utility function is correct about your preferences whenever it assigns higher utility to one option than the other, and is deficient only in the case where it assigns the same utility to two different options that you value differently. But it is very unlikely for two arbitrary reals to be exactly the same, and even when this does happen, the difference is infinitesimally unimportant compared to other preferences, so this isn't a big loss.
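A minimal sketch of this rounding idea, modeling a utility with a single infinitesimal part as a pair (standard part, infinitesimal coefficient) compared lexicographically (an illustrative simplification, not the full surreal construction):

```python
# Utilities as (standard, infinitesimal) pairs: u = (a, b) stands for
# a + b*epsilon. Comparison is lexicographic; "rounding" keeps only the
# standard part.

def lex_less(u, v):
    return u < v  # Python compares tuples lexicographically

def rounded(u):
    return u[0]   # drop the infinitesimal component

u = (3.0, 5.0)   # 3 + 5*epsilon
v = (4.0, -2.0)  # 4 - 2*epsilon
w = (3.0, 7.0)   # 3 + 7*epsilon

# Whenever rounded utilities differ, they agree with the full order:
assert lex_less(u, v) and rounded(u) < rounded(v)

# Ties after rounding are exactly where infinitesimal information is lost:
assert lex_less(u, w) and rounded(u) == rounded(w)
```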

Note: I've turned on two-axis voting for this post to see if it's helpful for this kind of discussion.

Relevant comment from the sequences (I had this in mind when writing parts of the OP but didn't remember who wrote it, and failed to recognize the link because it was about Newcomb's problem):

Another example:  It was argued by McGee that we must adopt bounded utility functions or be subject to "Dutch books" over infinite times.  But:  The utility function is not up for grabs.  I love life without limit or upper bound:  There is no finite amount of life lived N where I would prefer a 80.0001% probability of living N years to an 0.00

... (read more)

(Note: I've edited this comment a lot since first posting it, mostly small corrections, improving the definition of the order to be better behaved and adding more points.)

I think orders which satisfy (Symmetric) Unbounded Utilities but not (Weak) Dominance or Homogeneous Mixtures can be relatively nice, so we shouldn't be too upset about giving up (Weak) Dominance and Homogeneous Mixtures. Basically no nice order will be sensitive to the kinds of probability rearrangements you've done (which we can of course conclude from your results, if we want "nice" to ... (read more)

2davidad
I think this order does satisfy Homogeneous Mixtures, but not Intermediate Mixtures. Homogeneous Mixtures is a theorem if you model lotteries as measures, because it’s asking that your preference ordering respect a straight-up equality of measures (which it must if it’s reflexive). Intermediate Mixtures and Weak Dominance are asking that your preference ordering be willing to strictly order mixtures if it would strictly order their components in a certain way, and the ordering you’ve proposed preserves sanity by sometimes refusing to rank pathological mixtures.
2MichaelStJules
Hmm, I do think Intermediate Mixtures is violated, and in a very bad way: it can flip the order. Consider:

1. A = 0, constant
2. B = ∑_{i=1}^∞ (1/2^i) δ_{3^i}, i.e. 3^i with probability 1/2^i for each i = 1, 2, ….

Note that B > A. Let's check if B > (1/2)A + (1/2)B. For a given q sufficiently close to 1 (away from 1/2, at least), the integral of the quantile for (1/2)A + (1/2)B will be much further into the series terms of B than the integral of the quantile of B, because (1/2)A + (1/2)B has to cram the probabilities of B into half as much space. Because the expected values of the terms grow exponentially, halving the probabilities is outweighed by summing the series at a faster rate (with respect to the probabilities). In other words, the quantile integral of (1/2)A + (1/2)B diverges to infinity faster than B's. So, I think it's actually the case that B < (1/2)A + (1/2)B, which seems very bad, bad enough to reject this approach. (I think the order I defined here will be better behaved.)
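For reference, the divergence this example relies on is easy to check numerically (a sketch; it only verifies that B's partial expected values blow up geometrically, not the quantile-integral claims):

```python
# The lottery B pays 3^i with probability 1/2^i. Its partial expected
# values grow like (3/2)^K, so the full expectation diverges -- the
# St. Petersburg-style behavior the comment relies on.
from math import isclose

def partial_expectation(K):
    """Expected value of B restricted to its first K outcomes."""
    return sum((3 ** i) * (0.5 ** i) for i in range(1, K + 1))

# Closed form: sum_{i=1}^{K} (3/2)^i = 3 * ((3/2)^K - 1).
for K in [5, 10, 20]:
    assert isclose(partial_expectation(K), 3 * (1.5 ** K - 1))
```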
1MichaelStJules
Which equality (not preference equivalence) of measures are you talking about for Homogeneous Mixtures? The order doesn't satisfy Homogeneous Mixtures, but maybe it also doesn't satisfy Intermediate Mixtures. For Homogeneous Mixtures, using lotteries over actual utilities (i.e. the payoff/outcome is its utility), for i = 0, 1, 2, …:

1. X = 0
2. A_i = (1/2)δ_{2^i} + (1/2)δ_{−2^i}, i.e. 2^i or −2^i, each with probability 1/2. This is equivalent to X = 0, since we're ranking based on expected utility when both lotteries are bounded.
3. p_i = 1/2^{i+1}

It doesn't order ∑_{i=1}^∞ p_i A_i vs X, because fixing q and taking p → 0, the integral for ∑_i p_i A_i ≥ 0 diverges to −∞, and the integral for 0 ≥ ∑_i p_i A_i diverges to −∞. Note that ∑_{i=1}^∞ p_i A_i = (1/2)A_+ + (1/2)A_−, where A_+ = ∑_{i=0}^∞ (1/2^{i+1}) δ_{2^i} and A_− = ∑_{i=0}^∞ (1/2^{i+1}) δ_{−2^i}, so a mixture of two lotteries, one diverging to +∞ and the other diverging to −∞.

On the other hand, if you require q = 1 − p, then the integral will actually be 0 for each p for this particular lottery, since the lottery is symmetric around p = 1/2, so you do get equivalence. I suspect we can come up with another counterexample by messing around with how fast each tail is approached and get the liminf of the integral to be positive, and so rank ∑_i p_i A_i > 0. Maybe instead defining A_i this way would work: A_i = (1 − 1/(1 + 2^i)) δ_{2^i} + (1/(1 + 2^i)) δ_{−4^i}. The idea is that the negative terms get much less far for the same p, because far more of the weight is in much lower-probability events. The integral for ∑_i p_i A_i ≥ 0 is roughly counting the number of positive terms whose probability is above cp and subtracting the number of negative terms whose probability is above cp, for some constant c (I can't be bothered to figure out exactly which c). This should go to +∞ as p → 0.

I think you can make things worse, too, again with q = 1 − p. You can choose A_i < 0 for each i, but have ∑_i p_i A_i > 0, by replacing the −4^i with −4^i − 1. I think we can even get the gap between A_i and 0 to diverge as i → ∞, with something like −4^i − i(1 + 2^i) or even −4^i − 3^i instead of −4^i. If we allow p and q to vary
2MichaelStJules
Here are some ways to get more strict inequalities (less incomparability or equivalence):

1. Require q = 1 − p to handle some more cases with both positive and negative expected infinities, but I'm not sure that the results would always be intuitive. There might be other relationships between p and q, depending on the particular lotteries, that work better. You could test the liminfs under multiple relationships, q = 1 − f(p), for different f from a specific set.
2. Replace the strict inequality condition with lim_{(p,q)→(0,1), 0<p<q<1} sgn(∫_p^q (Q_{U(B)}(t) − Q_{U(A)}(t)) dt) = 1. Equivalently, there are p_0, q_0 with 0 < p_0 < q_0 < 1 such that ∫_p^q (Q_{U(B)}(t) − Q_{U(A)}(t)) dt > 0 for all p, q with 0 < p < p_0 < q_0 < q < 1. A < B would mean that the integral for A never catches up with that for B in the limit.
1MichaelStJules
pp. 37-38 in Goodsell, 2023 give a better proposal, which is to clip/truncate the utilities into the range [−t, t] and compare the expected clipped utilities in the limit as t → ∞. This will still suffer from St. Petersburg lottery problems, though.
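A sketch of what that truncation looks like on the classic St. Petersburg lottery (pay 2^i with probability 1/2^i; the instantiation and code are my own illustration, not from Goodsell):

```python
# Clip utilities to [-t, t] and take expectations. For the St. Petersburg
# lottery, the expected utility clipped at t = 2^K comes out to (almost
# exactly) K + 1, so the limit as t -> infinity diverges.

def expected_clipped(t, terms=200):
    """E[min(X, t)] for X paying 2^i with probability 2^-i, i = 1, 2, ..."""
    return sum(min(2 ** i, t) * 0.5 ** i for i in range(1, terms + 1))

for K in [3, 10, 30]:
    assert abs(expected_clipped(2 ** K) - (K + 1)) < 1e-9
```

The K + 1 growth is why the clipping order still ranks St. Petersburg above every constant lottery, as the comment notes it must still face St. Petersburg problems.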

Maybe some of the problem is coming from trying to extend dominance to infinite combinations of lotteries. If we're saying that the utility function is the thing that witnesses some coherence in the choices we make between lotteries, maybe it makes sense to ask for choices between finite combinations of lotteries but not infinite ones? Any choice we actually make, we end up making by doing some "finitary" sort of computation (not sure what this really means, if anything), and perhaps in particular it's always understandable as a choice between finite lotte... (read more)

6paulfchristiano
I think you avoid any contradiction if you reject Weak Dominance but accept a finite version of Dominance. For example, in that case you can simply declare all lotteries with infinite support to be incomparable to each other or to any finite lottery. If you furthermore require your preferences to be complete, even when asking about infinite lotteries, such that either A>B or A<B or A=B, then I suspect you are back in trouble. But if you just restrict preferences to finite lotteries, you are fine and can compare them with expected value.
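A sketch of that restricted order (the representation, with None standing in for infinite support, is my own illustration):

```python
# Finite-support lotteries are compared by expected utility; anything with
# infinite support is declared incomparable.

def compare(A, B):
    """Return '<', '>', '=', or 'incomparable' for lotteries given as
    {utility: probability} dicts; None marks infinite support."""
    if A is None or B is None:
        return "incomparable"
    ev_a = sum(u * p for u, p in A.items())
    ev_b = sum(u * p for u, p in B.items())
    if ev_a < ev_b:
        return "<"
    if ev_a > ev_b:
        return ">"
    return "="

coin = {0.0: 0.5, 10.0: 0.5}
sure = {4.0: 1.0}
st_petersburg = None  # infinite support: deliberately not represented

assert compare(sure, coin) == "<"      # 4 < 5 in expectation
assert compare(coin, st_petersburg) == "incomparable"
```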
2TekhneMakre
Yeah, maybe just truncating off finitely many summands in an infinite lottery induces constraints that force your examples to have infinite value? Maybe you can have complete hyperreal-valued preferences and finite dominance...?

Another common way out is to assume that any two "infinitely good" outcomes are incomparable, and therefore to reject Dominance.[3] This results in being indifferent to receiving $1 in every world (if the expectation is already infinite), or doubling the probability of all good worlds, which seems pretty unsatisfying.

Why are these unsatisfying? Intuitively, if I already have infinite money (in expectation) then why should I care about getting to infinite + 1 or infinite x 2 money?

The point you mention about all decisions having infinite utility in exp... (read more)

7paulfchristiano
To take it further, suppose that with 1% probability you are able to play a St. Petersburg game, and in the other 99% of worlds there is a billion years of torture. Then the story is that you don't care about whether the probabilities are 1% and 99%, or 99% and 1%. Whether or not you find that unsatisfying is a personal call, but I find it extremely bad.

(But this proof doesn't show that's an inevitable consequence of Unbounded Utilities; it just shows that violating Dominance is an inevitable conclusion. So you might well think that this torture case is pretty unsatisfying but you can take or leave Dominance itself. I think that's not crazy, but I think you'd be able to run a similar argument to get to any particular unsatisfying Dominance-violation.)

(I personally find violating Weak Dominance much more surprising, and that's the point where I'm saying that you should just give up on talking about probabilistic mixtures. Though that may be too drastic. I'm phrasing this whole post in terms of dominance principles because I want to make the point that unbounded utilities basically force you to abandon very basic parts of your decision-theoretic machinery, so you shouldn't go on as if you have unbounded utilities but an otherwise normal decision theory.)

Basically just Pascal's mugging. Under universal / non-dogmatic distributions, there is some probability on "Someone controls the universe, specifically searches for a series of outcomes with really large utility, and then runs the St. Petersburg game." (Of course for aggregative utilitarians you don't even need to go there; any not-insane probability distribution over the size of the reachable universe is just obviously going to have infinite expectation.)
1Signer
If an infinitely valuable outcome is possible at all (has non-zero probability) after a decision, then multiplying the infinite utility by any non-zero probability, you always get infinite expected utility.
2Slider
If the system allows infinitesimals, this need not be the case.

This argument is extremely similar to Beckstead and Thomas' argument against Recklessness in A paradox for tiny probabilities and enormous values.

If I understand correctly, this argument also appeared in Eliezer Yudkowsky's post "The Lifespan Dilemma", which itself credits one of Wei Dai's comments here. The argument given in The Lifespan Dilemma is essentially identical to the argument in Beckstead and Thomas' paper.

6paulfchristiano
I think Eliezer and Wei Dai's comments (and the early part of Beckstead and Thomas) are just direct intuitive arguments against Recklessness. This post (and the later part of Beckstead and Thomas) argue that Recklessness is not merely intuitively unappealing, but that it requires violating pretty weak dominance principles. You have to believe that there is a set of lotteries Ai each individually better than X, whose mixture is not at least as good as X. Someone who already bought the intuitive argument against Recklessness doesn't need to read these posts; they are for someone who already bit the bullet on the lifespan dilemma and wants more bullets.

I spent some time trying to fight these results, but have failed!

Specifically, my intuition said we should just be able to look at the flattened distributions-over-outcomes. Then obviously the rewriting makes no difference, and the question is whether we can still provide a reasonable decision criterion when the probabilities and utilities don't line up exactly. To do so we need some defined order or limiting process for comparing these infinite lotteries.

My thought was to use something like "choose the lottery whose samples look better". For instance, exa... (read more)

I really like this post because it directly clarified my position on ethics, namely making me abandon unbounded utilities. I want to give this post a Δ and +4 for doing that, and for being clearly written and fairly short.

One interesting case where this theorem doesn't apply would be if there are only finitely many possible outcomes. This is physically plausible: consider multiplying the maximum data density¹ by the spacetime hypervolume of your future light cone from now until the heat death of the universe.

¹ <https://physics.stackexchange.com/questions/2281/maximum-theoretical-data-density>

I just quickly browsed this post. Based on the overall topic, you might also be interested in these inconsistency results in infinitary utilitarianism written by my PhD advisor (a set theorist) and his wife (a philosopher).

http://jdh.hamkins.org/infinitary-utilitarianism/

Your proofs all rely on lotteries over infinite numbers of outcomes. Is that necessary? Maybe a restriction to finite lotteries avoids the paradox.

Thank you for making explicit the idea that the problems with "unbounded utilities" don't even require the existence of a utility function, just a very weak assumption about preference ordering with respect to probability mixtures and the existence of "arbitrarily strong" outcomes.

I should note that this is more or less the same thing that Alex Mennen and I have been pointing out for quite some time, even if the exact framework is a little different. You can't both have unbounded utilities, and insist that expected utility works for infinite gambles.

IMO the correct thing to abandon is unbounded utilities, but whatever assumption you choose to abandon, the basic argument is an old one due to Fisher, and I've discussed it in previous posts! (Even if the framework is a little different here, this seems essentially similar.)

I'm glad t... (read more)

4paulfchristiano
I agree that "unbounded utilities" don't refer to anything at all in the usual sense of "utility function" and that this observation is basically as old as VNM itself. I usually cite de Blanc 2007 to point out that unbounded utilities are just totally busted for non-dogmatic priors (but this is also a formalization of a much older argument about "contagion"). The point of these posts was to observe that this isn't just an artifact of utility functions, and that changing the formalism doesn't help you get around the problems. So this isn't really an argument against utility functions, it's a much more direct argument against a certain kind of preferences. There just don't exist any transitive preferences with unbounded-utility-like-behavior and weak outcome-lottery dominance.
4Sniffnoy
Oh, that's a good citation, thanks. I've used that rough argument in the past, knowing I'd copied it from someone, but I had no recollection of what specifically or that it had been made more formal. Now I know! My comment above was largely just intended as "how come nobody listens when I say it?" grumbling. :P

Promoted to curated: I've been thinking about unbounded utilities for a while, and I agree with a bunch of the top commenters that the actually relevant element is the integrability of utilities, and that this can save some unbounded utility assignments (in a way that feels relevant to my current beliefs on the topic). 

I do think nevertheless that this post is the best distillation of a bunch of the impossibility arguments I've seen floating around for the last decade, and I think it is an actually important question when trying to decide how to relate to the future. So curating it seems appropriate. 

The examples and results in your post are very interesting and surprising. Thanks for writing this.

I'm inclined to reject the dominance axioms you've assumed, at least for mixtures of infinitely many lotteries. I think stochastic dominance is a more fundamental axiom, avoids inconsistency and doesn't give any obviously wrong answers on finite payoff lotteries (even mixtures of infinitely many lotteries or outcomes with intuitively infinite expected value, including St. Petersburg and Pasadena). See Christian Tarsney's "Exceeding Expectations: Stochastic Do... (read more)

3paulfchristiano
If we define A<B whenever B stochastically dominates A, then I think that you have Dominance (since mixtures preserve stochastic dominance) but not Unbounded Utilities (since it's impossible for a smaller chance of a good outcome to dominate a higher chance of a less-good outcome), right?
1MichaelStJules
If you're defining the order purely based on stochastic dominance (and no stronger), then ya, I think you'll have Dominance but not Unbounded Utilities for the reasons you give. However, I think stochastic dominance is consistent with Unbounded Utilities in general and the expected utility order when choosing between bounded lotteries, since the order based on expected utilities is well-defined and stronger than the stochastic dominance one over bounded lotteries. That is, if A strictly (or weakly) stochastically dominates B, and both are bounded lotteries, then A has a higher expected utility than B (or, at least as high, respectively). So, you could use an order that's at least as strong as using expected utilities, when they're well-defined, and also generally at least as strong as stochastic dominance, but that doesn't imply Dominance for mixtures of infinitely many lotteries. Your specific example with X∞ would prove that Dominance does not hold. Also, if you're extending expected utility anyway, you'd probably want to go with something stronger than stochastic dominance, something that also implies sequential dominance or some kind of independence.
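For concreteness, here's a small sketch of checking first-order stochastic dominance between finite lotteries via their CDFs (the representation and tolerance are my own choices):

```python
# A stochastically dominates B iff F_A(x) <= F_B(x) everywhere, with strict
# inequality somewhere (first-order stochastic dominance, finite support).
from itertools import chain

def cdf(lottery, x):
    return sum(p for v, p in lottery.items() if v <= x)

def stochastically_dominates(A, B):
    xs = sorted(set(chain(A, B)))  # all support points of either lottery
    leq = all(cdf(A, x) <= cdf(B, x) + 1e-12 for x in xs)
    strict = any(cdf(A, x) < cdf(B, x) - 1e-12 for x in xs)
    return leq and strict

better = {1.0: 0.5, 10.0: 0.5}
worse = {1.0: 0.5, 5.0: 0.5}

assert stochastically_dominates(better, worse)
assert not stochastically_dominates(worse, better)
```

As discussed above, for bounded lotteries this relation is weaker than the expected-utility order, so an order extending both is consistent on that domain.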
1MichaelStJules
EDIT: pp. 37-38 in Goodsell, 2023 give a better proposal, which is to clip/truncate the utilities into the range [−t, t] and compare the expected clipped utilities in the limit as t → ∞. This will still suffer from St. Petersburg lottery problems, though.

Here's an order that's as strong as both expected utility and stochastic dominance, and overall seems promising to me:

tl;dr: For lotteries with finite utility payoffs (but possibly unbounded utility payoffs and infinite expected utility), we can take expectations through any subset with finite and well-defined expected utility, and then compare the resulting lotteries with stochastic dominance. We just need to find any pair of well-behaved "expected utility collapses" for which one lottery stochastically dominates the other. Allowing expected utility collapses over the infinite expected utilities can lead to A < A, so I rule that out. In practice, you might just take one expectation over everything but the top X% and bottom Y% of each lottery, and compare those lotteries with stochastic dominance, for different values of X and Y. This allows you to focus on the tails of heavy-tailed distributions.

For a lottery X, a utility function U, and a countable (possibly finite and possibly empty) set of mutually exclusive non-empty measurable subsets of the measure space, P = {Q_1, Q_2, …, Q_n} (or basically a set of binary random variables whose sum is at most 1), and letting P^C = (∪_{Q∈P} Q)^C be the complement of their union (so, for their indicator binary random variables, 1_{P^C} = 1 − ∑_{Q∈P} 1_Q), the expected utility collapse of X over P is:

X_P = E[U(X) | Q] if Q, for Q ∈ P, and X_P = X|_{P^C} otherwise.

Or, in lottery notation, letting L(c) be the constant lottery with constant value c:

X_P = P(P^C) X|_{P^C} + ∑_{Q∈P} P(Q) L(E[U(X) | Q]).

In other words, we replace probability subsets of X with its expected utility over those subsets. If, furthermore, E[U(X) | Q] is well-defined and finite for each Q ∈ P, we call the expected utility collapse well-behaved.

Then, we
1MichaelStJules
From section 3 of Tarsney's paper:
1[comment deleted]
TLW10

This doesn't hold if you restrict to utility functions that asymptote to a finite value, correct?

LGS10

Very nice, thanks. I agree with others that if one were intent on keeping unbounded utilities, it seems simplest to give up on probability distributions that have infinite support (it seems one can avoid all these paradoxes by restricting oneself to finite-support distributions only). I guess this is similar to what you mean by "abandoning probability itself", but do note that you can keep probability so long as all the supports are finite.