I didn't really read much of the post, but I think you are unfairly rejecting the weighting of people by simplicity here.
Imagine you flip a fair coin until it comes up tails, and either A) you suffer if you flip >100 times, or B) you suffer if you flip <100 times. I think you should prefer action A.
However, if you think of there as being a countable collection of possible outcomes, one for each possible number of flips, you are creating "infinite" suffering rather than "finite" suffering, so you should prefer B.
I think the above argument for B is wrong and similar to the argument you are giving.
Note that the choice of where we draw the boundary between outcomes mattered, and similarly the choice of where we draw the boundary between people matters in your reasoning. You need to make choices about what counts as different people vs. the same person for this reasoning to even make sense. And even if it does make sense, you are still not taking seriously the proposal that we care about the total simplicity of good/bad experience rather than the total count of good/bad experience.
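As a quick sanity check on the coin-flip example (my own illustration, not from the comment): the two actions attach suffering to complementary, wildly unequal probability masses, which is why A looks clearly preferable despite B's "finitely many" bad outcomes.

```python
from fractions import Fraction

# Number of flips N until the first tails: P(N = k) = (1/2)**k.
def p_exactly(k):
    return Fraction(1, 2) ** k

# Action A: you suffer if you flip more than 100 times,
# i.e. the first 100 flips all come up heads.
p_suffer_A = Fraction(1, 2) ** 100

# Action B: you suffer if you flip fewer than 100 times,
# i.e. tails arrives on one of flips 1..99.
p_suffer_B = sum(p_exactly(k) for k in range(1, 100))

assert p_suffer_A == Fraction(1, 2 ** 100)
assert p_suffer_B == 1 - Fraction(1, 2 ** 99)
assert p_suffer_A < p_suffer_B
```

So B makes suffering all but certain while A makes it astronomically unlikely, even though B's "bad" outcomes form a finite set of flip-counts and A's form an infinite one.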
Indeed, I think the lesson of the whole infinite ethics thing is mostly just grappling with we ...
This was an excellent post, thanks for writing it!
But, I think you unfairly dismiss the obvious solution to this madness, and I completely understand why, because it's not at all intuitive where the problem in the setup of infinite ethics is. It's in your choice of proof system and interpretation of mathematics! (Don't use non-constructive proof systems!)
This is a bit of an esoteric point and I've been planning to write a post or even sequence about this for a while, so I won't be able to lay out the full arguments in one comment, but let me try to convey the gist (apologies to any mathematicians reading this and spotting stupid mistakes I made):
Joe, I don’t like funky science or funky decision theory. And fair enough. But like a good Bayesian, you’ve got non-zero credence on them both (otherwise, you rule out ever getting evidence for them), and especially on the funky science one. And as I’ll discuss below, non-zero credence is enough.
This is where things go wrong. The actual credence of seeing a hypercomputer is zero, because a computationally bounded observer can never observe such an object in such a way that differentiates it from a finite approximation. As such, ...
Rough take on this: to me, a lot of this reasoning seems not to pay close enough attention to the relation between maths and reality; in practice, the problems of infinite ethics are more likely to be solved at the level of maths than at the level of ethics and thinking about what this means for actual decisions.
Why:
A general problem with how "infinities" are used in this text is that it seems to import the assumption that something like ZFC tells us something fundamental and true about reality. Subsequently, a lot of the problems with infinities seem to be basically "imported from math" (how do you sum an infinite series?).
I'm happy to bite this bullet:
- our default math being based on ZFC axioms is to a large extent a random historical fact
- how ZFC deals with infinities tells us very little about real infinities
- default infinite ethics tells us something about ethical problems in ZFC-based mathematical universes; as I don't assume ZFC is some fundamental base of my reality, its problems, questions and answers about infinities do not seem particularly relevant.
Ad absurdum: if we postulated as an axiom that reality is based on the wiggling of big elephants standing on the back of an e...
...Except, you are anyway? After all, the utilities can grow as fast as or faster than the discounts shrink. Thus, if the pattern of utilities is just 2^(number of bits for the door number + 1), the discounted total is infinite (1+1+1+1…); and so, too, is it infinite in worlds where everyone has a million times the utility (1M + 1M + 1M…). Yet the second world seems better. Thus, we’ve lost Pareto (over whatever sort of location you like), and we’re back to obsessing about infinite worlds anyway, despite our discounts.
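The arithmetic here is easy to check. A minimal sketch (my own, purely illustrative): take the simplicity discount for door n to be 2^-(bits+1), where "bits" is the binary length of n. Each discounted term is then exactly 1, so partial sums grow without bound, and scaling every utility by a million gives a second, equally divergent total.

```python
# Simplicity discount ~ 2**-(bits+1) per door; utility 2**(bits+1) per door
# (illustrative assumption for the discount, matching the quoted pattern).
def bits(n):
    return n.bit_length()

def discounted_partial_sum(n_doors, scale=1):
    total = 0
    for n in range(1, n_doors + 1):
        discount = 2 ** -(bits(n) + 1)
        utility = scale * 2 ** (bits(n) + 1)
        total += discount * utility
    return total

# Each term is exactly 1, so partial sums grow linearly: 1 + 1 + 1 + ...
assert discounted_partial_sum(1000) == 1000
# Scaling all utilities by a million just gives another divergent series.
assert discounted_partial_sum(1000, scale=10**6) == 1000 * 10**6
```

Both worlds get a discounted total of ∞, so the discounting by itself cannot register that the second world is a million times better at every location.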
Maybe one wants to say: the utility at a given location isn’t allowed to take on any finite value (thanks to Paul Christiano for discussion). Sure, maybe agents can live for any finite length of time. But our UTM should be trying to specify momentary experiences (“observer-moments”) rather than e.g. lives, and experiences can’t get any finite amount of pleasure-able (or whatever you care about experiences being) – or perhaps, to the extent they can, they get correspondingly harder to specify.
Naively, though, this strikes me as a dodge (and one that the rest of the philosophical literature, which talks about worlds like <1, 2, 3…> all the time, doesn’t allow itself). It
If we accept that there's nonzero probabilities that our actions impact diverse infinite possibilities, and yet we still want to make decisions, it seems the entire infinitarian project is sunk already. Where then do you part ways with the completely bog-standard derivations (e.g. Savage) of expected utility theory that tell you you will act as if you assign regular old numbers to states of the universe?
In other words, great post, but I don't see why you're so convinced that better understanding of infinite ethics will require far-out math rather than prosaic, already-discovered math.
"Anyone got the gods' exact wording, there? For comparison, my home plane doesn't have a spatial boundary in any direction and doesn't spatially loop, but on the lower level of reality underneath that, there were limits on the size of structures that could exist and any two identical structures of entanglement were the same structure at that underlying level of reality. It didn't actually contain an infinite amount of stuff; it was repeating at a lower level than space looping around. If Elysium is infinite, nonrepeating, contains arbitrarily large entangled structures, and everywhere comprises a similar positive density of realityfluid, that literally breaks the Law of Probability I know."
" - oh dear. Uh, I don't know that, but I guess we can have someone look it up. ....why does it break the Law of Probability if there are planes that go on forever?" What a good thing for Keltham to be extremely worried about.
"Well, let's say you have an infinite number of INT 16 wizard students, of whom an infinite number become 5th-circle wizards, what's your chance of becoming a 5th-circle wizard given that you're an INT 16 wizard student?"
..."I don't think all infinities are th
I did not see this post when it was first put on the forum, but reading it now, my personal view of this post is that it continues a trend of wasting time on a topic that is already a focus of too much effort, with little relevance to actual decisions, and no real new claim that the problems were relevant or worth addressing.
I was even more frustrated that it didn't address most of the specific arguments put forward in our paper from a year earlier on why value for decisionmaking was finite, and then put forward several arguments we explicitly gave reasons ...
Hi David -- it's true that I don't engage your paper (there's a large literature on infinite ethics, and the piece leaves out a lot of it -- and I'm also not sure I had seen your paper at the time I was writing), but re: your comments here on the ethical relevance of infinities: I discuss the fact that the affectable universe is probably finite -- "current science suggests that our causal influence is made finite by things like lightspeed and entropy" -- in section 1 of the essay (paragraph 5), and argue that infinities are still
Some of the logic assumes that because a chance is positive, it must be finite. In the infinite context, infinitesimal chances might be relevant and can break this principle. For expected value calculations it helps that for any transfinite payoff there is a corresponding infinitesimal chance which would make that option indifferent with a certain finite payout. And, for example, for 4 times the lizards this threshold would be 4 times lower. Mere possibility giving a finite (even if small) chance seems overgenerous, although I would expect the theory on what kin...
Good post.
In some places, you seem to assume that infinity looks like ∞ rather than ω. (∞ is not a number and just means roughly bigger than all real numbers, while ω [in the hyperreal sense] is a particular number bigger than all real numbers.) For example:
consider an infinite world where everyone’s at 1. Suppose you can bump everyone up to 2. Shouldn’t you do it? But the “total welfare” is the same: ∞.
and
...if the total is infinite (whether positive or negative), then finite changes won’t make a difference. So the totalist in an infinite world starts
Curated.
I think the topic of infinite ethics is pretty confusing, and important.
I haven't actually read the entirety of this post (it sure is long). I've read large chunks of the beginning, and end, and spot-checked some of the arguments in the middle. The broad structure of Joe's arguments make sense to me, and seem important for bullet-biting utilitarians to engage with. I'm interested in seeing more engagement with individual points here.
Here and on the EA Forum, some commenters have suggested that (some of) our problems with infinities arise from ZF set theory (and others have expressed skepticism, which I tentatively share). I would love to see a post (high-effort or low-effort, long or short) on how an alternative mathematical foundation would not raise some issues Joe discusses.
I am confused how you got to the point of writing such a thoroughly detailed analysis of the application of the math of infinities to ethics while (from my perspective) strawmanning finitism by addressing only ultrafinitism. “Infinities aren’t a thing” is only a "dicey game" if the probability of finitism is less than 100% :). In particular, there's an important distinction between being able to reference the "largest number + 1" and write it down versus referencing it as a symbol as we do, because in our referencing of it as a symbol, in the original fram...
"An infinite line of immortal people, numbered starting at 1, who all start out happy (+1). " Are you allowed to do this? Say I am one of these people, how long is my number likely to be? Can my number be described with a finite number of symbols? Isn't my position determined by a draw from a uniform distribution over the positive integers, which I think isn't allowed? https://math.stackexchange.com/questions/14777/why-isnt-there-a-uniform-probability-distribution-over-the-positive-real-number
...But now, perhaps, we feel the rug slipping out from under us too easily. Don’t we have non-zero credences on coming to think any old stupid crazy thing – i.e., that the universe is already a square circle, that you yourself are a strongly Ramsey lizard twisted in a million-dimensional toenail beyond all space and time, that consciousness is actually cheesy-bread, and that before you were born, you killed your own great-grandfather? So how about a lottery with a 50% chance of that, a 20% chance of the absolute infinite getting its favorite ice cream, and a
I think part of the problem here comes when you consider an infinite number of people. Let's say that anything involving too many bits of memory is not a person. Yes, this implies we don't care about 3^^^^3-sized minds, but we can care about more reasonably sized subsections of those minds. So we have some finite list of computations we consider people. So what you care about is the amount of magic reality fluid that gets applied to each computation. (The total amount of magic reality fluid must add up to 1.)
In this view, there is no difference between a single person in cyclic space and an infinite row of identical people. They are both just one computation.
Thoughts while reading this, especially as they relate to realityfluid and diminishing-matteringness in the same vein as "weight by simplicity":
To extend Expansionism to worlds with less structure, you could try to come up with a very general kind of distance metric between locations of value (whatever they may be), and use that instead of distance in spacetime to define your "spheres". I'm not sure this can cover all possibilities we'd want to consider while also ensuring the spheres only contain finitely many locations of value with nonzero value at a time, for finite partial sums over each sphere.
Here's one idea, although I'm skeptical that it works. If there are only countably many binary pred...
For the hyperreal approaches, ultrafilters are basically just orders over the locations of value to take limits over.
It's worth noting that the better-behaved "finite-sum" version of the hyperreal approach outlined in Bostrom (2011) is just choosing an order to take partial sums and then limits over, and he describes Expansionism (over spacetime locations) as a special case. This is on pages 21 and 22.
You could do Expansionism over possible persons instead of spacetime locations to satisfy Pareto over persons/agents and avoid the weird issues with pulling ...
[ epistemic status: feeling ignorant - unsure if I'm misinterpreting the claims, or disagreeing, or just surprised that this is presented as more settled and agreed than I expected. ]
First point of confusion: literally infinite, in the sense of moral-patient experiences? This implies that everything anyone can imagine (in our finite brains) happens, and an uncountable number of unimaginable things also happens. Or do you mean more figuratively infinite - a very (VERY) large number of possible experiences, and a smaller-but-still-large number of actual...
You mention that there are finite fanaticism problems (in addition to infinite ones), but I don't think you illustrated this. So just in case a reader is inclined to think they can solve fanaticism by somehow ignoring infinity—which would make ignoring infinity more appealing—here's an example of how you're still left with fanaticism:
We should have at least some credence that long-term value is not linear in resources but exponential, and then this possibility dominates our expected utility, so that rather than maximizing expected resources we do somethi...
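The shape of that example can be made concrete (the numbers are mine, purely illustrative): even a small credence in the exponential hypothesis swamps the expected utility once resources are at all large.

```python
from fractions import Fraction

def expected_utility(resources, p_exponential=Fraction(1, 100)):
    """EV with 99% credence in linear value and 1% in exponential value
    (illustrative credences and value functions, not from the comment)."""
    linear = resources
    exponential = 2 ** resources
    return (1 - p_exponential) * linear + p_exponential * exponential

# At 100 units of resources, the 1%-credence exponential term dominates:
assert expected_utility(100) > Fraction(2 ** 100, 100)  # ~1.3e28
assert expected_utility(100) > 10 ** 25                 # linear part is only ~99
```

So the agent ends up steered almost entirely by the low-credence hypothesis, which is the finite analogue of the infinite fanaticism discussed in the post.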
Slightly orthogonal, but I think you are rating some of your infinite worlds really badly.
You say that you like the one you call "Zone of happiness" (you would rather live there than in some of the others). This strikes me as insane. To reiterate: in the "Zone of happiness" there are an infinite number of miserable people, sharing the world with a finite number of happy people (that finite number is, however, growing). If this doesn't already sound bad, we can put some flesh onto the example to make it less abstract. Let's assume a world where there is a single h...
I find myself wanting to reach for an asymptotic function and mapping most of these infinities back to finite values. I can't quite swallow assigning a non-finite value to infinite lizard. At some point, I'm not paying any more for more lizard no matter how infinite it gets (which probably means I'd need some super-asymptote that continues working even as infinities get progressively more insane).
I'm largely on board with more good things happening to more people is always better, but I think I'd give up the notion of computing utilions by simp...
Pareto: If two worlds (w1 and w2) contain the same people, and w1 is better for an infinite number of them, and at least as good for all of them, then w1 is better than w2.
As far as I can see, the Pareto principle is not just incompatible with the agent-neutrality principle, it's incompatible with set theory itself. (Unless we add an arbitrary ordering relation on the utilities or some other kind of structure.)
Let's take a look at, for instance, vs , where is the multiset containing and ...
I have not read through this in its entirety, but it strikes me that an article I wrote about how the mathematical definition of infinity doesn't match human intuitions might be useful for people to read who are also interested in this material. I'm also fairly new here, so if cross posting this isn't okay, please let me know.
https://london-lowmanstone.medium.com/comparing-infinities-e4a3d66c2b07
The literature calls this broad approach “expansionism” (see also Wilkinson (2021) for similar themes). I’ll note two major problems with it: that it leads to results that are unattractively sensitive to the spatio-temporal distribution of value, and that it fails to rank tons of stuff.
You can reduce the sensitivity to the spatio-temporal distribution of value at the cost of ranking less of the stuff that's intuitively ambiguous anyway by doing the comparison between two worlds over multiple expansions from a set of possible expansions (e.g. mu...
Excellent post. Thanks for writing this.
"...expansionism violates ... Pareto over agents..."
I don't think this statement makes sense:
Expansionism specifies no mapping between equivalent agents.
Pareto must specify a mapping identifying equivalent pairs of agents.
For a given pair of worlds, expansionism will usually violate Pareto for some mappings and not others - because it must: Pareto gives different answers with different mappings.
[I believe I'm disagreeing with Askell 2018 here; I'm genuinely confused that she seems to be making a simple error - so it'...
(Cross-posted from Hands and Cities)
Summary:
Thanks to Leopold Aschenbrenner, Amanda Askell, Paul Christiano, Katja Grace, Cate Hall, Evan Hubinger, Ketan Ramakrishnan, Carl Shulman, and Hayden Wilkinson for discussion. And thanks to Cate Hall for some poetry suggestions.
I. The importance of the infinite
Most of ethics ignores infinities. They’re confusing. They break stuff. Hopefully, they’re irrelevant. And anyway, finite ethics is hard enough.
Infinite ethics is just ethics without these blinders. And ditching the blinders is good. We have to deal with infinites in practice. And they are deeply revealing in theory.
Why do we have to deal with infinities in practice? Because maybe we can do infinite things.
More specifically, we might be able to influence what happens to an infinite number of “value-bearing locations” – for example, people. This could happen in two ways: causal, or acausal.
The causal way requires funkier science. It’s not that infinite universes are funky: to the contrary, the hypothesis that we share the universe with an infinite number of observers is very live, and various people seem to think it’s the leading cosmology on offer (see footnote).[1] But current science suggests that our causal influence is made finite by things like lightspeed and entropy (though see footnote for some subtlety).[2] So causing infinite stuff probably needs new science. Maybe we learn to make hypercomputers, or baby universes with infinite space-times.[3] Maybe we’re in a simulation housed in a more infinite-causal-influence-friendly universe. Maybe something about wormholes? You know, sci-fi stuff.
The acausal way can get away with more mainstream science. But it requires funkier decision theory. Suppose you’re deciding whether to make a $5000 donation that will save a life, or to spend the money on a vacation with your family. And suppose, per various respectable cosmologies, that the universe is filled with an infinite number of people very much like you, faced with choices very much like yours. If you donate, this is strong evidence that they all donate, too. So evidential decision theory treats your donation as saving an infinite number of lives, and as sacrificing an infinite number of family vacations (does one outweigh the other? on what grounds?). Other non-causal decision theories, like FDT, will do the same. The stakes are high.
Perhaps you say: Joe, I don’t like funky science or funky decision theory. And fair enough. But like a good Bayesian, you’ve got non-zero credence on them both (otherwise, you rule out ever getting evidence for them), and especially on the funky science one. And as I’ll discuss below, non-zero credence is enough.
And whatever our credences here, we should be clear-eyed about the fact that helping or harming an infinite number of people would be an extremely big deal. Saving a hundred lives, for example, is a deeply significant act. But saving a thousand lives is even more so; a million, even more so; and so on. For any finite number of lives, though, saving an infinite number would save more than that. So saving an infinite number of lives matters at least as much as saving any finite number – and very plausibly, it matters more (see Beckstead and Thomas (2021) for more).
And the point generalizes: for any way of helping/harming some finite set of people, doing that to an infinite number of people matters at least as much, and plausibly more. And if you’re the type of person who thinks that e.g. saving 10x the lives is 10x as important, it will be quite natural and tempting to say that the infinite version matters infinitely more.
Of course, accepting these sorts of stakes can lead to “fanaticism” about infinities, and neglect of merely finite concerns. I’ll touch on this below. For now, I mostly want to note that, just as you can recognize that humanity’s long-term future matters a lot, without becoming indifferent to the present, so too can you recognize that helping or harming an infinite number of people would matter a lot, without becoming indifferent to the merely finite. Perhaps you do not yet have a theory that justifies this practice; perhaps you’ll never find one. But in the meantime, you need not distort the stakes of infinite benefits and harms, and pretend that infinity is actively smaller than e.g. a trillion.
I emphasize these stakes partly because I’m going to be using the word “infinite” a lot, and casually, with reference to both wonderful and horrifying things. My examples will be math-y and cartoonish. Faced with such a discourse, it can be easy to start numbing out, or treating the topic like a joke, or a puzzle, or a wash of weirdness. But ultimately, we’re talking about situations that would involve actual, live human beings – the same human beings whose lives are at stake in genocides, mental hospitals, slums; human beings who fall in love, who feel the wind on their skin, who care for dying parents as they fade. In infinite ethics, the stakes are just: what they always are. Only: unendingly more.
Here I’m reminded of people who realize, after engaging with the terror and sublimity of very large finite numbers (e.g., Graham’s number), that “infinity,” in their heads, was actually quite small, such that e.g. living for eternity sounds good, but living a Graham’s number of years sounds horrifying (see Tim Urban’s “PS” at the bottom of this post). So it’s worth taking a second to remember just how non-small infinity really is. The stakes it implies are hard to fathom. But they’re crucial to remember – especially given that, in practice, they may be the stakes we face.
Even if you insist on ignoring infinities in practice, though, they still matter in theory. In particular: whatever our actual finitude, ethics shouldn’t fall silent in the face of the infinite. Nor does it. Suppose you were God, choosing whether to create an infinite heaven, or an infinite hell. Flip a coin? Definitely not. Ok then: that’s a data point. Let’s find others. Let’s get some principles. It’s a familiar game – and one we often use merely possible worlds to play.
Except: the infinite version is harder. Instructively so. In particular: it breaks tons of stuff developed for the finite version. Indeed, it can feel like staring into a void that swallows all sense-making. It’s painful. But it’s also good. In science, one often hopes to get new data that ruins an established theory. It’s a route to progress: breaking the breakable is often key to fixing it.
Let’s look into the void.
II. On “locations” of value
A quick note on set-up. The standard game in infinite ethics is to put finite utilities on an infinite set (specifically, a countably infinite set) of value-bearing “locations.” But it can make an important difference what sort of “locations” you have in mind.
Here’s a classic example (adapted from Cain (1995); see also here). Consider two worlds:
Zone of suffering: An infinite line of immortal people, numbered starting at 1, who all start out happy (+1). On day 1, person 1 becomes sad (-1), and stays that way forever. On day 2, person 2 becomes sad, and stays that way forever. And so on.
Zone of happiness: Same world, but the happiness and sadness are reversed: everyone starts out sad, and on day 1, person 1 becomes happy; day 2, person 2, and so on.
In zone of suffering, at any given time, the world has finite sadness, and infinite happiness. But any given person is finitely happy, and infinitely sad. In zone of happiness, it’s reversed. Which is better?
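A finite truncation makes the asymmetry vivid. A minimal sketch (my own; the welfare function just encodes the zone-of-suffering setup above):

```python
# Zone of suffering: person i is happy (+1) until day i, sad (-1) from day i on.
def welfare(person, day):
    return -1 if day >= person else 1

# Fix a day: among the first N people, only finitely many are sad so far.
def total_on_day(day, n_people):
    return sum(welfare(i, day) for i in range(1, n_people + 1))

# Fix a person: their running lifetime total heads toward minus infinity.
def lifetime_total(person, n_days):
    return sum(welfare(person, t) for t in range(1, n_days + 1))

# On day 10, persons 1..10 are sad and the other 990 are happy: 990 - 10.
assert total_on_day(10, 1000) == 980
# Person 10 is happy for 9 days, then sad on every day from day 10 onward.
assert lifetime_total(10, 1000) == 9 - 991
```

Summing over people at a fixed time, the world looks ever-happier as you truncate further out; summing over times for a fixed person, every life looks ever-sadder. The sign of the "total" depends on which direction you sum first.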
My take is that the zone of happiness is better. It’s where I’d rather live, and choosing it fits with principles like “if you can save everyone from infinite suffering and give them infinite happiness instead, do it,” which sound pretty solid. We can talk about analogous principles for “times,” but from a moral perspective, agents seem to me more fundamental.
My broader point, though, is that the choice of “location” matters. I’ll generally focus on “agents.”
III. Problems for totalism
OK, let’s start with easy stuff: namely, problems for a simple, total utilitarian principle that directs you to maximize the total welfare in the universe.
First off: “total welfare in the universe” gets weird in infinite worlds. Consider a world with infinite people at +2 welfare, and an infinite number at -1. What’s the total welfare? It depends on the order you add. If you go: +2, -1, -1, +2, -1, -1, then the total oscillates forever between 0 and 2 (if you prefer to hang out near a different number, just add or subtract the relevant amount at the beginning, then start oscillating). If you go: +2, -1, +2, -1, you get ∞. If you go: +2, -1, -1, -1, +2, -1, -1, -1, you get –∞. So which is it? If you’re God, and you can create this world, should you?
Or consider a world where the welfare levels are: 1, -1/2, 1/3, -1/4, 1/5, and so on. Depending on the order you use, these can sum to any welfare level you want (see the Riemann Rearrangement Theorem; and see the Pasadena Game for decision-theory problems this creates). Isn’t that messed up? Not the type of situation the totalist is used to. (Maybe you don’t like infinitely precise welfare levels. Fine, stick with the previous example.)
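The rearrangement is easy to exhibit. A sketch (my own) of the greedy construction behind the Riemann rearrangement theorem, applied to the series above:

```python
# Rearrange the conditionally convergent series 1 - 1/2 + 1/3 - 1/4 + ...
# to converge to any target: take positive terms while below the target,
# negative terms while above it. Both term streams diverge individually,
# so the greedy process can always cross the target again.
def rearranged_partial_sum(target, n_terms):
    total = 0.0
    pos, neg = 1, 2  # next odd denominator (positive) and even (negative)
    for _ in range(n_terms):
        if total < target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total

# The same terms, summed in different orders, approach different totals.
assert abs(rearranged_partial_sum(0.0, 200_000) - 0.0) < 1e-3
assert abs(rearranged_partial_sum(3.0, 200_000) - 3.0) < 1e-3
```

So the "total welfare" of this world can be steered to 0, to 3, or anywhere else, purely by reordering the same agents.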
Maybe we demand enough structure to fix a definite order (this already involves giving up some cherished principles – more below). But now consider an infinite world where everyone’s at 1. Suppose you can bump everyone up to 2. Shouldn’t you do it? But the “total welfare” is the same: ∞.
So “totals” get funky. But there’s also another problem: namely, that if the total is infinite (whether positive or negative), then finite changes won’t make a difference. So the totalist in an infinite world starts shrugging at genocides. And if they can only ever do finite stuff, they start treating all their possible actions as ethically indifferent. Very bad. As Bostrom puts it:
Strong words. Worrying.
But actually, even if I put a totalist hat on, I’m not too worried. If “how can finite changes matter in infinite worlds?” were the only problem we faced, I’d be inclined to ditch talk about maximizing total welfare, and to focus instead on maximizing the amount of welfare that you add on net. Thus, in a world of infinite 1s, bumping ten people up to 2 adds 10. Nice. Worth it. Size of drop, not size of bucket.[4]
But “for totalists in infinite worlds, are finite genocides still bad?” really, really isn’t the only problem that infinities create.
IV. Infinite fanatics
Another problem I want to note, but then mostly set aside, is fanaticism. Fanaticism, in ethics, means paying extreme costs with certainty, for the sake of tiny probabilities of sufficiently big-deal outcomes.
Thus, to take an infinite case: suppose that you live in a finite world, and everyone is miserable. You are given a one-time opportunity to choose between two buttons. The blue button is guaranteed to transform your world into a giant (but still finite) utopia that will last for trillions of years. The red button has a one-in-a-graham’s-number chance of creating a utopia that will last infinitely long. Which should you press?
Here the fanatic says: red. And naively, if an infinite utopia is infinitely valuable, then expected utility theory agrees: the EV of red is infinite (and positive), and the EV of blue, merely finite. But one might wonder. In particular: red seems like a loser’s game. You can press red over and over for a trillion^trillion years, and you just won’t win. And wasn’t rationality about winning?
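On "red is a loser's game": a quick upper bound (my own illustration; Graham's number is far too large for any computer, so 10**100 stands in for it, which only makes red look *better* than it is).

```python
from fractions import Fraction

# Stand-in probability: Graham's number replaced by the much smaller 10**100.
p_red = Fraction(1, 10 ** 100)

# Pressing red once per second for 10**24 years (vastly longer than the
# current age of the universe):
trials = 10 ** 24 * 365 * 24 * 3600

# Union bound: P(at least one win) <= trials * p_red.
p_ever_win = trials * p_red

assert p_ever_win < Fraction(1, 10 ** 60)
```

Even with this wildly generous stand-in, the chance of ever seeing red pay out over cosmological timescales is below 10^-60; with the actual one-in-a-graham's-number odds, no physically realizable number of presses helps.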
This isn’t a purely infinity problem. Verdicts like “red” are surprisingly hard to avoid, even for merely finite outcomes, without saying other very unattractive things (see Beckstead and Thomas (2021) and Wilkinson (2021) for discussion).
Plausibly, though, the infinite version is worse. The finite fanatic, at least, cares about how tiny the probability is, and about the finite costs of rolling the dice. But the infinite fanatic has no need for such details: she pays any finite cost for any probability of an infinite payoff. Suppose that: oops, we overestimated the probability of red paying out by a factor of a graham’s number. Oops: we forgot that red also tortures a zillion kittens with certainty. The infinite fanatic doesn’t even blink. The moment you said “infinity,” she tuned all that stuff out.
Note that varying the “quality” of the infinity (while keeping its sign the same) doesn’t matter either. Suppose that oops: actually, red’s payout is just a single, barely-conscious, slightly-happy lizard, floating for eternity in space. For a sufficiently utilitarian-ish infinite fanatic, it makes no difference. Burn the Utopia. Torture the kittens. I know the probability of creating that lizard is unthinkably negligible. But we have to try.
What’s more, the finite fanatic can reach for excuses that the infinite fanatic cannot. In particular, the finite fanatic can argue that, in her actual situation, she faces no choices with the relevantly problematic combination of payoffs and probabilities. Whether this argument works is another question (I’m skeptical). But the infinite fanatic can’t even voice it. After all, any non-zero credence on an infinite payoff is enough to bite her. And since it is always possible to get evidence that infinite payoffs are available (God could always appear before you with various multi-colored buttons), non-zero-credences seem mandatory. Thus, no matter where she is, no matter what she has seen, the infinite fanatic never gives finite things any intrinsic attention. When she kisses her children, or prevents a genocide, she does it for the lizard, or for something at least as large.
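The "tuned all that stuff out" behavior falls directly out of the arithmetic of ∞. A tiny sketch (my own, using IEEE floats as a stand-in for extended-real arithmetic):

```python
import math

tiny_credence = 1e-300   # an absurdly small but nonzero probability
kitten_cost = 10 ** 12   # any finite certain cost (illustrative number)

# Any nonzero credence in an infinite payoff yields an infinite EV...
assert tiny_credence * math.inf == math.inf
# ...so the EV is unmoved by shrinking the probability by any factor...
assert tiny_credence * math.inf == 0.5 * math.inf
# ...or by piling on any finite certain cost:
assert tiny_credence * math.inf - kitten_cost == math.inf
```

Once "infinity" enters the payoff column, probabilities and finite costs stop registering at all, which is exactly the fanatic's pathology.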
(This “non-zero credences on infinities” issue is also a problem for assigning expected sizes to empirical quantities. What’s your expected lifespan? Oops: it’s infinite. How long will humanity survive, in expectation? Oops: eternity. How tall, in expectation, is that tree? Oops: infinity tall. I guess we’ll just ignore this? Yep, I guess we will.)
But infinite fanaticism isn’t our biggest infinity problem either. Notably, for example, it seems structurally similar to finite fanaticism, and one expects a similar diagnosis. But also: it’s a type of bullet a certain sort of person has gotten used to biting (more below). And biting has a familiar logic: as I noted above, infinities really are quite a big-deal thing. Maybe we can live with obsession? There’s a grand tradition, for example, of treating God, heaven, hell, etc as lexically more important than the ephemera of this fallen world. And what is heaven but a gussied-up lizard? (Well, one hopes for distinctions.)
No, the biggest infinity problems are harder. They break our familiar logic. They serve up bullets no one dreamed of biting. They leave the “I’ll just be hardcore about it” train without tracks.[5]
V. The impossibility of what we want
In particular: whether you’re obsessed with infinities or not, you need to be able to choose between them. Notably, for example, you might (non-zero credences!) run into a situation where you need to create one infinite baby universe (hypercomputer, etc), vs. another. And as I noted above, we have views about this. Heaven > hell. Infinite utopia > infinite lizard (at least according to me).
And even absent baby-universe stuff, EDT-ish folks (and people with non-trivial credence on EDT-ish decision-theories) with mainstream credences on infinite cosmologies are already choosing between infinite worlds – and even, infinite differences between worlds – all the time. Whenever an EDT-ish person moves their arm, they see (with very substantive probability) an infinite number of arms, all across the universe, moving too. Every donation is an infinite donation. Every papercut is an infinity of pain. Yet: whatever your cosmology and decision theory, isn’t a life-saving donation worth a papercut? Aren’t two life-saving donations better than one?
Ok, then, let’s figure out the principles at work. And let’s start easy, with what’s called an “ordinal” ranking of infinite worlds: that is, a ranking that says which worlds are better than which others, but which doesn’t say how much better.
Suppose we want to endorse the following extremely plausible principle:

Pareto: If w1 and w2 contain the same agents, and w1 is at least as good as w2 for all of these agents, and better for some, then w1 is better than w2.
Pareto looks super solid. Basically it just says: if you can help an infinite number of people, without hurting anyone, do it. Sign me up.
But now we hit problems. Consider another very attractive principle:

Agent-Neutrality: If there is a welfare-preserving bijection between the agents in w1 and the agents in w2, then w1 and w2 are equally good.
By “welfare-preserving bijection,” I mean a mapping that pairs each agent in w1 with a single agent in w2, and each agent in w2 with a single agent in w1, such that both members of each pair have the same welfare level. The intuitive idea here is that we don’t have weird biases that make us care more about some agents than others for no good reason. A world with a hundred Alices, each at 1, has the same value as a world of a hundred Bobs, each at 1. And a world where Alice has 1, and Bob has 2, has the same value as a world where Alice has 2, and Bob has 1. We want the agents in a world to flourish; but we don’t care extra about e.g. Bob flourishing in particular. Once you’ve told me the welfare levels in a given world, I don’t need to check the names.
(Maybe you say: what if Alice and Bob differ in some intuitively relevant respect? Like maybe Bob has been a bad boy and deserves to suffer? Following common practice, I’m ignoring stuff like this. If you like, feel free to add further conditions like “provided that everyone is similar in XYZ respects.”)
The problem is that in infinite worlds, Pareto and Agent-Neutrality contradict each other. Consider the following example (adapted from Van Liedekerke (1995)). In w1, every fourth agent has a good life. In w2, every second agent has a good life. And the same agents exist in both worlds.
By Pareto, w2 is better than w1 (it’s better for a3, a7, and so on, and just as good for everyone else). But there is also a welfare-preserving bijection from w1 to w2: you just map the 1s in w1 to the 1s in w2, in order, and the same for the 0s. Thus: a1 goes to a1, a2 goes to a2, a3 goes to a4, a4 goes to a6, a5 goes to a3, and so on. So by Agent-Neutrality, w1 and w2 are equally good. Contradiction.
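To see that this pairing really is welfare-preserving, here’s a quick sketch (mine, not the post’s) that builds the mapping on a finite prefix of both worlds and checks it:

```python
from itertools import count, islice

# w1: every fourth agent has a good life (welfare 1); w2: every second agent does.
def w1(i): return 1 if i % 4 == 0 else 0
def w2(i): return 1 if i % 2 == 0 else 0

# Build the bijection by pairing the k-th 1 of w1 with the k-th 1 of w2,
# and the k-th 0 of w1 with the k-th 0 of w2, for the first n of each.
def pairs(n):
    ones_w1  = (i for i in count() if w1(i) == 1)
    ones_w2  = (i for i in count() if w2(i) == 1)
    zeros_w1 = (i for i in count() if w1(i) == 0)
    zeros_w2 = (i for i in count() if w2(i) == 0)
    yield from islice(zip(ones_w1, ones_w2), n)
    yield from islice(zip(zeros_w1, zeros_w2), n)

mapping = dict(pairs(10_000))
# Every pair matches in welfare, and no two w1-agents share a w2-agent:
assert all(w1(a) == w2(b) for a, b in mapping.items())
assert len(set(mapping.values())) == len(mapping)
```

On the full infinite domains this really is a bijection, even though w2 also Pareto-dominates w1 – which is exactly the contradiction.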
Here’s another example (adapted from Hamkins and Montero (1999)). Consider an infinite world where each agent is assigned to an integer, which determines their well-being, such that each agent i is at i welfare. And now suppose you could give each agent in this world +1 welfare. Should you do it? By Pareto, yes. But wait: have you actually improved anything? By Agent-Neutrality: no. There’s a welfare-preserving bijection from each agent i in the first world to agent i-1 in the second: agent i, at welfare i, maps to agent i-1, whose post-boost welfare is also i.
Indeed, Agent-neutrality mandates indifference to the addition or subtraction of any uniform level of well-being in w3. You could harm each agent by a million, or help them by a zillion, and Agent-neutrality will shrug: it’s the same distribution, dude.
Clearly, then, either Pareto or Agent-Neutrality has got to go. Which is it?
My impression is that ditching Agent-Neutrality is the more popular option. One argument for this is that Pareto just seems so right. If we’re not in favor of helping an infinite number of agents, or against harming an infinite number, then where on earth has our ethics landed us?
Plus: Agent-Neutrality causes problems for other attractive, not-quite-Pareto principles as well. Consider: adding extra agents with negative welfare to a world, while leaving everyone else unchanged, makes that world worse.
Seems right. Very right, in fact. But now consider an infinite world where everyone is at -1. And suppose you can add another infinity of people at -1.
Agent-neutrality is like: shrug, it’s the same distribution. But I feel like: tell that to the infinity of distinct suffering people you just created, dude. If there is a button on the wall that says “create an extra infinity of suffering people, once per second,” one does not lean casually against it, regardless of whether it’s already been pressed.
On the other hand, when I step back and look at these cases, my agent-neutrality intuitions kick in pretty hard. That is, pairs like w3 and w4, and w5 and w6, really start to look like the same distribution.
Here’s a way of pumping the intuition. Consider a world just like w3/w4, except with an entirely different set of people (call them the “b-people”); call this world w7.
Compared to w3, w7 really looks equally good: switching from a-people to b-people doesn’t change the value. But so, too, does w7 look equally good when compared to w4 (it doesn’t matter which b-person we call b0). But by Pareto, it can’t be both.
We can pump the same sort of intuition with w5, w6, and another infinite b-people world consisting of all -1s (call this w8). I feel disinclined to pay to move from w5 to w8: it’s just another infinite line of -1s. But I feel the same about w6 and w8. Yet I am very into paying to prevent the addition of an extra infinity of suffering people to a world. What gives?
What’s more, my understanding is that the default way to hold onto Pareto, in this sort of case, is to say that w7 is “incomparable” to w3 and w4 (e.g., it’s neither better, nor worse, nor equally good), even though w3 and w4 are comparable to each other. There’s a big literature on incomparability in philosophy, which I haven’t really engaged with. One immediate problem, though, has to do with money-pumps.
Suppose that I’m God, about to create w3. Someone offers me w4 instead, for $1, and I’m like: hell yeah, +1 to an infinite number of people. Now someone offers me w7 in exchange for w4. They’re incomparable, so I’m like … um, I think the thing people say here is that I’m “rationally permitted” to either trade or not? Ok, f*** it, let’s trade. Now someone else says: wait, how about w3 for w8? Another “whatever” choice: so again I shrug, and trade. But now I’m back to where I started, except with $1 less. Not good. Money-pumped.
Fans of incomparability will presumably have a lot to say about this kind of case. For now I’ll simply register a certain kind of “bleh, whatever we end up saying here is going to kind of suck” feeling. (For example: if in order to avoid money-pumping, the incomparabilist forces me to “complete” my preferences in a particular way once I make certain trades, such that I end up treating w7 as equal either to w3 or w4, but not both, I feel like: which one? Either choice seems arbitrary, and I don’t actually think that w7 is better/worse than one of w3 or w4. Why am I acting like I do?)
Overall, this looks like a bad situation to me. We have to start shrugging at infinities of benefit or harm, or we have to start being opinionated/weird about worlds that really look the same. I don’t like it at all.
And note that we can run analogous arguments for basic locations of value other than agents. Suppose, for example, that we replace each of the “agents” in the worlds above with spatio-temporal regions. We can then derive similar contradictions between e.g. “spatio-temporal Pareto” (if you make some spatio-temporal regions better, and none worse, that’s an improvement), and “spatio-temporal-neutrality” (e.g., it doesn’t matter in which spatio-temporal region a given unit of value occurs, as long as there’s a value-preserving bijection between them). And the same goes for person-moments, generations, and so forth.
This contradiction between something-Pareto and something-Neutrality is one relatively simple impossibility result in infinite ethics. The literature, though, contains a variety of others (see e.g. Zame (2007), Lauwers (2010), and Askell (2018)). I haven’t dug in on these much, but at a glance, they seem broadly similar in flavor.
And note that we can get contradictions between something-Pareto and something-else-Pareto as well: for example, Pareto over agents and Pareto over spatio-temporal locations. Thus, consider a single room where Alice will live, then Bob, then Cindy, and so forth, onwards for eternity. In w9, each of them lives for 100 happy years. In w10, each lives for 1000 slightly less happy years, such that each life is better overall. w10 is better for every agent. But w9 is better at every time (this example is adapted from Arntzenius (2014)). So which is better overall? Here, following my verdict about the zone of happiness, I’m inclined to go with w10: agents, I think, are the more fundamental unit of ethical concern. But one might’ve thought that making an infinite number of spatio-temporal locations worse would make the world worse, not better.
Pretty clearly, some stuff we liked from finite land is going to have to go.
VI. Ordinal rankings aren’t enough
Suppose we bite the bullet and ditch Pareto or Agent-Neutrality. We’re still nowhere close to generating an ordinal ranking over infinite worlds. Pareto, after all, is an extremely weak principle: it stops applying as soon as a given world is better for one agent, and worse for another (for example, donations vs. papercuts). And Agent-Neutrality stops applying without a welfare-preserving bijection. So even with a nasty bullet fresh in our teeth, a lot more work is in store.
Worse, though, ordinal rankings aren’t enough. They tell you how to choose between certainties of one outcome vs. another. But real choices afford no such certainty. Rather, we need to choose between probabilities of creating one outcome vs. another. Suppose, for example, that God offers you the following lotteries:
Which should you choose? Umm…
The classic thing to want here is some kind of “score” for each world, such that you can multiply this score by the probabilities at stake to get an expected value. But we’ll settle for principles that will just tell us how to choose between lotteries more generally.
Here I’ll look at a few candidates for principles like this. This isn’t an exhaustive survey; but my hope is that it can give a flavor for the challenge.
VII. Something about averages?
Could we say something about averages? Like <2, 2, 2, 2, …> is better than <1, 1, 1, 1, …>, right? So maybe we could base the value of an infinite world on something like the limit of (total welfare of the agents counted so far)/(number of agents counted so far). Thus, the 2s have a limiting average of 2; and the 1s, a limiting average of 1; etc.
This approach suffers from a myriad of problems. Here’s a sample:
One solution to order-dependence is to appeal to the limit of the utility per unit space-time volume, as you expand outward from some (all?) points. I cover principles with this flavor below. For now I’ll just note that many of the other problems I just listed will persist.
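To make the order-dependence worry concrete, here’s a small sketch (my own illustration, not from the post): the very same multiset of utilities – infinitely many +1s and -1s – has different limiting averages under different enumerations:

```python
from itertools import count, islice

def running_average(seq, n):
    """Average welfare of the first n agents in the enumeration."""
    return sum(islice(seq, n)) / n

# The same multiset of utilities (infinitely many +1s and -1s),
# enumerated in two different orders:
def alternating():    # +1, -1, +1, -1, ...         -> limiting average 0
    for i in count():
        yield 1 if i % 2 == 0 else -1

def two_for_one():    # +1, +1, -1, +1, +1, -1, ... -> limiting average 1/3
    for i in count():
        yield -1 if i % 3 == 2 else 1

print(running_average(alternating(), 300_000))  # 0.0
print(running_average(two_for_one(), 300_000))  # ~0.3333
```

Since nothing about the world privileges one enumeration of its agents over another, the “limiting average” isn’t well-defined without extra structure.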
VIII. New ways of representing infinite quantities?
Could we look for new ways of representing infinite quantities?
Bostrom (2011) suggests mapping infinite worlds (or more specifically: the sums of the utilities in an infinite sequence of value-bearing things) to “hyperreal numbers.” I won’t try to explain this proposal in full here (and I haven’t tried to understand it fully), but I’ll note one of the major problems: namely, that it’s sensitive to an arbitrary choice of “ultra-filter,” such that:
And once you’ve arbitrarily chosen your ultra-filter, Bostrom’s proposal is order-dependent as well. E.g., once you’ve decided that <1, -2, 1, 1, -2, 1, 1 …> is e.g. better than (or worse than, or equal to) an empty world, we can just re-arrange the terms to change your mind.
(Arntzenius also complains that Bostrom’s proposal gets him Dutch-booked. At a glance, though, this looks to me like an instance of the broader set of worries about “Satan’s Apple” type cases (see Arntzenius, Elga and Hawthorne (2004)), which I don’t feel very worried about.)
IX. Something about expanding regions of space-time?
Let’s turn to a more popular approach (e.g., an approach that has multiple adherents): one focused on the utility contained inside expanding bubbles of space-time.
Vallentyne and Kagan (1997) suggest that if we have two worlds with the same locations, and these locations have an “essential natural order,” we look at the differences between the utility contained in a “bounded uniform expansion” from any given location. In particular: if there is some positive number k such that, for any bounded uniform expansion, the utility inside the expansion eventually stays larger by more than k in worldi vs. worldj, then worldi is better.
Thus, for example, in a comparison of <1, 1, 1, 1, …> vs. <2, 2, 2, 2, …>, the utility inside any expansion is bigger in the 2 world. And similarly, in <1, 2, 3, 4, …> vs. <2, 3, 4, 5, …>, expansions of the latter will always be ahead (by at least 1, and eventually by more than any fixed k).
“Essential natural order” is a bit tricky to define, but the key upshot, as I understand it, is that things like agents and person-moments don’t have it (agents can be listed by their height, by their passion for Voltaire, etc), but space-timey-stuff plausibly does (there is a well-defined notion of a “bounded region of space-time,” and we can make sense of the idea that in order to get from a to b, you have to “go through” c). Exactly what counts as a “uniform expansion” also gets a bit tricky (see Arntzenius (2014) for discussion), but one gets the broad vibe: e.g., if I’ve got a growing bubble of space-time, it should be growing at the same rate in all directions (some of the trickiness comes from comparing “directions,” I think).
A major problem for Vallentyne and Kagan (1997) is that their principle only provides an ordinal ranking. But Arntzenius suggests a modification that generalizes to choices amongst lotteries: instead of looking at the actual value at each location, look at the expected value. Thus, if you’re choosing between:
Then you’d use the expected values of the locations to “make these lotteries into worlds.” E.g., l3 is equivalent to <1, 1.5, 2, 2.5 …>, and l4 is equivalent to <0, 2, 4, 8 …>; and the latter is better according to Vallentyne-Kagan, so Arntzenius says to choose it. Granted, this approach doesn’t give worlds cardinal scores to use in EV maximization; but hey, at least we can say something about lotteries.
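Here’s a rough sketch (mine; the helper, the finite horizon, and the exact growth of l4 are simplifying assumptions) of this Arntzenius-style comparison: check whether one expected-value-per-location sequence eventually stays ahead of another by more than k as the region expands:

```python
def eventually_better(ev_i, ev_j, k=1.0, horizon=300):
    """Finite-horizon check of the Vallentyne-Kagan-style condition: does
    the utility inside a growing region of world_i eventually stay ahead
    of world_j's by more than k (up to the horizon)?"""
    diff, ahead_since = 0.0, None
    for n in range(horizon):
        diff += ev_i(n) - ev_j(n)
        if diff > k:
            if ahead_since is None:
                ahead_since = n   # started staying ahead here (so far)
        else:
            ahead_since = None    # fell back below the margin; reset
    return ahead_since is not None

# Expected value per location for the two lotteries in the text:
l3 = lambda n: 1 + 0.5 * n               # <1, 1.5, 2, 2.5, ...>
l4 = lambda n: 0 if n == 0 else 2 ** n   # <0, 2, 4, 8, ...>

assert eventually_better(l4, l3)      # l4's expansions pull ahead for good
assert not eventually_better(l3, l4)
```

Of course, a finite horizon can only suggest, not prove, the limiting behavior; the point is just to show the shape of the comparison.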
The literature calls this broad approach “expansionism” (see also Wilkinson (2021) for similar themes). I’ll note two major problems with it: that it leads to results that are unattractively sensitive to the spatio-temporal distribution of value, and that it fails to rank tons of stuff.
Consider an infinite line of planets, each of which houses a Utopia, and none of which will ever interact with any of the others. On expansionism, it is extremely good to pull all these planets an inch closer together: so good, indeed, as to justify any finite addition of dystopias to the world (thanks to Amanda Askell, Hayden Wilkinson, and Ketan Ramakrishnan for discussion). After all, pulling on the planets so that there’s an extra Utopia every x inches will be enough for the eventual betterness of the uniform expansions to compensate for any finite number of hellscapes. But this looks pretty wrong to me. No one’s thanking you for pulling those planets closer together. In fact, no one noticed. But a lot of people are pissed about the whole “adding arbitrarily large (finite) numbers of hellscapes” thing: in particular, the people living there.
For closely related reasons, expansionism violates both Pareto over agents and Agent-neutrality. Consider the following example from Askell (2018), p. 83, in which three infinite sets of people (x-people, y-people, and z-people) live on an infinite sequence of islands, which are either “Balmy” (such that three out of four agents are happy) or “Blustery” (such that three out of four agents are sad). Happy agents are represented in black, and sad agents in white.
From Askell (2018), p. 83; reprinted with permission
Here, expansionism likes Balmy more than Blustery – and intuitively, we might agree. But Blustery is better for the y-people, and worse for no one: hence, goodbye Pareto. And there is a welfare-preserving bijection from Balmy to Blustery as well. So goodbye Agent-Neutrality, too. Can’t we at least have one?
The basic issue, here, is that expansionism’s moral focus is on space-time points (regions, whatever), rather than people, person-moments, and so on. In some cases (e.g. Balmy vs. Blustery), this actually does fit with our intuitions: we like it if the universe seems “dense” with value. But abstractly, it’s pretty alien; and when I reflect on questions like “how much do I want to pay to pull these planets closer together?”, the appeal from intuition starts to wane.
My other big issue with expansionism, at present, is that it fails to provide guidance in lots of cases. Some milder problems are sort of exotic and specific. Thus:
These are all cases in which the worlds being compared have the exact same locations. I expect bigger problems, though, with worlds that aren’t like that. Consider, for example, the choice between creating a spatially-finite world with an immortal dude trudging from hell to heaven, where each day looks like <…-2, -1, 0, 1, 2 …>, and a spatially-infinite universe that only lasts a day, with an infinite line of people whose days are <…-2, -1, 0, 1, 2 …>. How shall we match up the locations in these worlds? Depending on how we do it, we’ll get different expansionist verdicts. And we’ll hit even worse arbitrariness if we try to e.g. match up locations for worlds with different numbers of dimensions (e.g., pairing locations in a 2-d world with locations in a 4-d one), let alone worlds whose differences reflect the full range of logically-possible space-times.
Maybe you say: whatever, we’ll just go incomparable there. But note this incomparability infects our lotteries as well. Thus, for example, suppose that we get some space-times, A and B, that just can’t be matched up with each other in any reasonable and/or non-arbitrary way. And now suppose that I’m choosing between lotteries like:
The problem is that because these worlds can’t be matched up, we can’t turn these lotteries into single worlds we can compare with our expansionist paradigm. So even though it looks kind of plausible that we want l6 here, we can’t actually run the argument.
Maybe you say: Joe, this won’t happen often in practice (this is the vibe one gets from Arntzenius (2014) and Wilkinson (2021)). But I feel like: yes it will? We should already have non-zero credence on our living in different space-times that can’t be matched up, and it doesn’t matter how small the probability on the B-world is in the case above. What’s more, we should have non-zero credence that later, we’ll be able to create all sorts of crazy infinite baby-universes – including ones where their causal relationship to our universe doesn’t support a privileged mapping between their locations.
There are other possible expansionist-ish approaches to lotteries (see e.g. Wilkinson (2020)). But I expect them – and indeed, any approach that requires counterpart relations between spatio-temporal locations — to run into similar problems.
X. Weight people by simplicity?
Here’s an approach I’ve heard floating around amongst Bay Area folks, but which I can’t find written up anywhere (see here, though, for some similar vibes; and the literature on UDASSA for a closely-related anthropic view that I think some people use, perhaps together with updateless-ish decision theory, to reach similar conclusions). Let’s call it “simplicity weighted utilitarianism” (I’ve also heard “k-weighted,” for “Kolmogorov Complexity”). The basic idea, as I understand it, is to be a total utilitarian, but to weight locations in a world by how easily they can be specified by an arbitrarily-chosen Universal Turing Machine (see my post on the Universal Distribution for more on moves in this vicinity). The hope here is to do for people’s moral weight what UDASSA does for your prior over being a given person in an infinite world: namely, give an infinite set of people weights that sum to 1 (or less).
Thus, for example, suppose that I have an infinite line of rooms, each with numbers written in binary on the door, starting at 0. And let’s say we use simplicity-discounts that go in proportion to 1/(2^(number of bits for the door number+1)). Room 0 gets a 1/4 weighting, room 1 gets 1/4, room 10 gets 1/8, room 11 gets 1/8, room 100 gets 1/16th, and so on. (See here for more on this sort of set-up. Strictly, for the weights to sum to something finite, the door-number code needs to be prefix-free – with the naive scheme just sketched, each “generation” of rooms contributes a constant total weight, so the sum diverges – but grant the intended normalization.) The hope here is that if you fill the rooms with e.g. infinite 1s, you still get a finite total (in this case, 1). So you’ve got a nice cardinal score for infinite worlds, and you’re not obsessing about them.
Except, you are anyway? After all, the utilities can grow as fast or faster than the discounts shrink. Thus, if the pattern of utilities is just 2^(number of bits for the door number+1), the discounted total is infinite (1+1+1+1…); and so, too, is it infinite in worlds where everyone has a million times the utility (1M + 1M + 1M…). Yet the second world seems better. Thus, we’ve lost Pareto (over whatever sort of location you like), and we’re back to obsessing about infinite worlds anyway, despite our discounts.
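A quick sketch of the arithmetic (my own illustration, using the bits-based weights from the rooms example): a utility pattern that grows as 2^(number of bits + 1) contributes exactly 1 per room, so the discounted partial sums grow without bound:

```python
# Hypothetical weights from the rooms example: each room's discount is
# 1 / 2**(bits + 1), where bits is the length of its binary door number.
def bits(n):
    return max(n.bit_length(), 1)   # room 0's door reads "0": one bit

def weight(n):
    return 1 / 2 ** (bits(n) + 1)

assert weight(0) == weight(1) == 1 / 4
assert weight(2) == weight(3) == 1 / 8
assert weight(4) == 1 / 16

# Utilities that grow exactly as fast as the discounts shrink:
def utility(n):
    return 2 ** (bits(n) + 1)

# Each room contributes weight * utility = 1, so the discounted partial
# sums grow without bound: the total is infinite despite the discounts.
partial = sum(weight(n) * utility(n) for n in range(1_000))
assert partial == 1_000
```

And multiplying every utility by a million just multiplies each term by a million: both worlds score “infinity,” even though the second seems clearly better.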
Maybe one wants to say: the utility at a given location isn’t allowed to take on any finite value (thanks to Paul Christiano for discussion). Sure, maybe agents can live for any finite length of time. But our UTM should be trying to specify momentary experiences (“observer-moments”) rather than e.g. lives, and experiences can’t get any finite amount of pleasure-able (or whatever you care about experiences being) – or perhaps, to the extent they can, they get correspondingly harder to specify.
Naively, though, this strikes me as a dodge (and one that the rest of the philosophical literature, which talks about worlds like <1, 2, 3…> all the time, doesn’t allow itself). It feels like denying the hypothetical, rather than handling it. And are we really so confident about how much of what can be fit inside an “experience”?
Regardless, though, this view has other problems as well. Notably: like expansionism, this approach will also pay lots to re-arrange people, pull them closer together, etc (for example, moving from a “one person every million rooms” world to a “one person every room” world). But worse than expansionism, it will do this even in finite worlds. Thus, for example, it cares a lot about moving the happy people in rooms 100-103 to rooms 0-3, even if only four people exist.
Indeed, it’s willing to create infinite suffering for the sake of this trade. Thus, a world where the first four rooms are at 1 is worth 1/4 + 1/4 + 1/8 + 1/8 = 3/4. But if we fill the rest of the rooms with an infinite line of -1, we only take a -1/4 hit. Indeed, on this view, just the first room at 1 offsets an infinity of suffering in rooms four and up.
Maybe you say: “Joe, my discounts aren’t going to be so steep.” But it’s not clear to me how to tell which discounts are at stake, for a given UTM. And anyway, regardless of your discounts, the same arguments will hold, but with a different quantitative gloss.
Looks bad to me.
XI. What’s the most bullet-biting hedonistic utilitarian response we can think of?
As a final sample from the space of possible views, let’s consider the view that seems to me most continuous with the spirit of hardcore, bullet-biting hedonistic utilitarianism. (I’m not aware of anyone who endorses the view I’ll lay out, but Bostrom (2011, p. 29)’s “Extended Decision Rule” is in a similar ballpark). This view doesn’t care about people, or space-time points, or densities of utility per unit volume, or Pareto, or whatever. All it cares about is the amount of pleasure vs. pain in the universe. Pursuant to this single-minded focus, it groups worlds into four types:
This view’s decision procedure is just: maximize the probability of positive infinity minus the probability of negative infinity (call this quantity “the diff”). Maybe it allows finite worlds to serve as tie-breakers, but this doesn’t really come up in practice: in practice, it’s obsessed with maximizing the diff (see Bostrom (2011), p. 30-31). And it doesn’t have anything to say about comparisons between different mixed infinity worlds, or about trade-offs between mixed infinities and finite worlds.
Alternatively, if we don’t like all this faff about incomparability (my model of a bullet-biting utilitarian doesn’t), we can set the value of all mixed infinity worlds to 0 (i.e., the positive and negative infinities “cancel out”). Then we’d have a nice ranking with positive infinity infinitely far on the top, finite worlds in between (with mixed infinities sitting at zero), and negative infinities infinitely far at the bottom.
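Here’s a minimal sketch (mine, not Bostrom’s) of the diff-maximizing decision rule, with world types encoded as 1 (infinitely positive), 2 (infinitely negative), 3 (mixed), and 4 (finite):

```python
# The "four types" decision rule: pick the lottery that maximizes
# P(positive-infinity world) minus P(negative-infinity world) ("the diff").
def diff(lottery):
    """lottery: list of (probability, world_type) pairs."""
    p_pos = sum(p for p, t in lottery if t == 1)
    p_neg = sum(p for p, t in lottery if t == 2)
    return p_pos - p_neg

# A sure infinitely-good world...
heaven = [(1.0, 1)]
# ...vs. a near-certain infinitely-bad world with a tiny chance of a
# positive infinity (the lizard, say):
gamble = [(1 - 1e-9, 2), (1e-9, 1)]
assert diff(heaven) > diff(gamble)   # here the rule at least gets it right

# But mixed worlds count for nothing: a guaranteed mixed world (type 3)
# loses to any lottery with the slightest edge in the diff.
mixed = [(1.0, 3)]
tiny_edge = [(1e-30, 1), (1 - 1e-30, 4)]
assert diff(tiny_edge) > diff(mixed)
```

The second comparison is where the horror comes from: since mixed worlds contribute nothing to the diff, the rule will trade any guaranteed mixed world for an arbitrarily small chance of a positive infinity, whatever else that trade costs.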
Call this the “four types” view. To get a sense of this view’s verdicts, consider the following worlds: Infinite Lizard (a single, immortal, barely-conscious, slightly-happy lizard); Infinite Speck (an infinite number of people, each suffering a single dust speck); Heaven + Speck (an infinite number of people in heaven, plus an infinite number of dust-speck pains); and Hell + Lollypop (an infinite number of people in hell, plus an infinite number of lollypop-ish pleasures).

On the four types view: Infinite Lizard is a positive-infinity world; Infinite Speck is a negative-infinity world; and Heaven + Speck and Hell + Lollypop are both mixed – and hence either incomparable to everything, or (on the zeroed-out version) equally valuable. So any non-zero probability of Infinite Lizard outweighs any guarantee of Heaven + Speck.
We can see the four types view as continuous with a certain kind of “pleasure/pain-neutrality” principle. That is, if we assume that pleasure/pain come in units you can either “swap around” or render equivalent to each other (e.g., there is some amount of lizard time that outweighs a moment in heaven; some number of dust specks that outweigh a moment in hell, etc – a classic utilitarian thought), then in some sense you can build every positive infinity world (or the equivalent) by re-arranging Infinite Lizard, every negative infinity world by re-arranging Infinite Speck, and every type 3 world by re-arranging both in combination. It’s the same (quality-weighted) amount of pleasure and pain regardless, says this view, and amounts of pleasure and pain (as opposed to “densities,” or placements in different people’s lives, or whatever) were what utilitarianism was supposed to be all about.
There is, I think, a certain logic to it. But also: it’s horrifying. Trading a world where an infinite number of people have infinitely good lives, for a ~guarantee of a world where infinitely many people are eternally tortured, to get a one-in-a-graham’s-number chance of creating a single immortal, barely-conscious lizard? Fuuuuhck that. That’s way worse than paying to pull planets together, or not knowing what to say about worlds with non-matching space-times. It’s worse than the repugnant conclusion; worse than fanaticism; worse than … basically every bullet some philosopher has ever bitten? If this is where “bullet-biting utilitarianism” leads, it has entered a whole new phase of crazy. Just say no, people. Just say no.
But also: such a choice doesn’t really make sense on its own terms. Infinite Lizard is getting treated as lexically better than Heaven + Speck, because it’s possible to map all of Infinite Lizard’s barely conscious happiness onto something equivalent to all the happiness in Heaven+Speck, with the negative infinity of the dust specks left over. But so, equally, is it possible to map all of Infinite Lizard’s barely-conscious happiness onto everyone’s first nano-seconds in heaven, to map those nano-seconds onto each of their dust specks in a way that would more than outweigh the dust-specks in finite contexts, and to leave everyone with an infinity of fully-conscious happiness left over. That is, the “Infinite Lizard Has All of Heaven’s Happiness” and “No Amount Of Time In Heaven Can Outweigh The Dust Specks” mappings aren’t, actually, privileged here: one can just as easily interpret Heaven + Speck as ridiculously better than Infinite Lizard (indeed, this is my default stance). But the four types view has fixated on these particular mappings anyway, and condemned an infinity of people to eternal torture for their sake.
(Alternatively, on yet a third version of the four-types view, we can try to take the arbitrariness of these mappings more seriously, and say that all mixed worlds are incomparable to everything, including positive and negative infinities. This avoids mandating trades from Heaven + Speck to Hell + Lollypop for a tiny chance of the lizard (such a choice is now merely “permissible”), but it also makes an even larger set of choices rationally permissible: for example, choosing Hell + Lollypop over pure Heaven. And it permits money-pumps that lead you from Heaven, to Hell + Lollypop, and then to Hell.)
XII. Bigger infinities and other exotica
OK, we’ve now touched on five possible approaches to infinite ethics: averages, hyperreals, expansionism, simplicity weightings, and the four types view. There are others in the literature, too (see e.g. Wilkinson (2020) and Easwaran (2021) – though I believe that both of these proposals require that the two worlds have exactly the same locations (maybe Wilkinson’s can be rejiggered to avoid this?) – and Jonsson and Voorneveld (2018), which I haven’t really looked at). I also want to note, though, ways in which the discussion of all of these has been focused on a very narrow range of cases.
In particular: we’ve only ever been talking about the smallest possible infinities – i.e., “countable infinities.” This is the size of the set of the natural numbers (and the rationals, and the odd numbers, and so on), and it makes it possible to do things like list all the locations in some order. But there is an unending hierarchy of larger infinities, too, create-able by taking power-sets over and over forever (see Cantor’s theorem). Indeed, according to this video, some people even want to posit a size of infinity inaccessible via power-setting – an infinity whose role, with respect to taking power-sets, is analogous to the role of countable infinities, with respect to counting (i.e., you never get there). And some go beyond that, too: the video also contains the following diagram (see also here), which starts with the “can’t get there via power-setting” infinity at the bottom (“inaccessible”), and goes from there (centrally, according to the video, by just adding axioms declaring that you can).
(From here.)
I’m not a mathematician (as I expect this post has already made clear in various places), but at a glance, this looks pretty wild. “Almost huge?” “Superhuge?” Also, not sure where this fits with respect to the diagram, but Cantor was apparently into the idea of the “Absolute Infinite,” which I think is supposed to be just straight up bigger than everything period, and which Cantor “linked to the idea of God.”
Now, relative to countably infinite worlds, it’s quite a bit harder to imagine worlds with e.g. one person for every real number. And imagining worlds with a “strongly Ramsey” number of people seems likely to be a total non-starter, even if one knew what “strongly Ramsey” meant, which I don’t. Still, it seems like the infinite fanatic should be freaking out (drooling?). After all, what’s the use obsessing about the smallest possible infinities? What happened to scope-sensitivity? Maybe you can’t imagine bigger-infinity worlds; maybe the stuff on that chart is totally confused – but remember that thing about non-zero credences? The lizards could be so much larger, man. We have to try for an n-huge lizard at least. And really (wasn’t it obvious the whole time?), we should be trying to create God. (A friend comments, something like: “God seems too comprehensible, here. N-huge lizards seem bigger.”)
More importantly, though: whether we’re obsessing about infinities or not, it seems very likely that trying to incorporate merely uncountable infinities (let alone “supercompact” ones, or whatever) into our lotteries is going to break whatever ethical principles we worked so hard to construct for the countably infinite case. In this sense, focusing purely on countable infinities seems like a recipe for the same kind of rude awakening that countable infinities give to finite ethics. Perhaps we should try early to get hip to the pattern.
And we can imagine other exotica breaking our theories as well. Thus, for example, very few theories are equipped to handle worlds with infinite value at a single “location.” And expansionism relies on all the worlds we’re considering having something like a “space-time” (or at least, a “natural ordering” of locations). But do space-timey worlds, or worlds with any natural orderings of “locations,” exhaust the worlds of moral concern? I’m not sure. Admittedly, I have a tough time imagining persons, experience-like things, or other valuable stuff existing without something akin to space-time; but I haven’t spent much time on the project, and I have non-zero credence that if I spent more, I’d come up with something.
XIII. Maybe infinities are just not a thing?
But now, perhaps, we feel the rug slipping out from under us too easily. Don’t we have non-zero credences on coming to think any old stupid crazy thing – i.e., that the universe is already a square circle, that you yourself are a strongly Ramsey lizard twisted in a million-dimensional toenail beyond all space and time, that consciousness is actually cheesy-bread, and that before you were born, you killed your own great-grandfather? So how about a lottery with a 50% chance of that, a 20% chance of the absolute infinite getting its favorite ice cream, and a 30% chance that probabilities need not add up to 100%? What percent of your net worth should you pay for such a lottery, vs. a guaranteed avocado sandwich? Must you learn to answer, lest your ethics break, both in theory and in practice?
One feels like: no. Indeed, one senses that a certain type of plot has been lost, and that we should look for less demanding standards for our lottery-choosing – ones that need not accommodate literally every wacked-out, probably-non-sensical possibility we haven’t thought of yet.
With this in mind, though, perhaps one is tempted to give a similar response to countable infinities as well. “Look, dude, just like my ethics doesn’t need to be able to handle ‘the universe is a square circle,’ it doesn’t need to be able to handle infinite worlds, either.”
But this dismissal seems too quick. Infinite worlds seem eminently possible. Indeed, we have very credible scientific theories that say that our actual universe contains a countably infinite number of people, credible decision theories that say that we can have infinite influence on that universe, widely-accepted religions that posit infinite rewards and punishments, and a possibly very intense future ahead of us where baby-universes/wormholes/hyper-computers etc. appear much more credible, at least, than “consciousness = cheesy-bread.” What’s more, we have standard ethical theories that break quickly on encounter with readily-imaginable cases that we continue to have strong ethical intuitions about (e.g., Heaven + Speck vs. Hell + Lollypop). For these reasons, it seems to me that we have much more substantive need to deal with countable infinities in our ethics than we do with square-circle universes.
Still, my impression is that a relatively common response to infinite ethics is just: “maybe somehow infinities actually aren’t a thing? For example: they’re confusing, and they lead to weird paradoxes, like building the sun out of a pea (video), and messed up stuff with balls in boxes (video). Also: I don’t like some of these infinite ethics problems you’re talking about” (see here for some more considerations). And indeed, despite their role in e.g. cosmology (let alone the rest of math), some philosophers of math (e.g., “ultrafinitists”) deny the existence of infinities. Naively, this sort of position gets into trouble with claims like “there is a largest natural number” (a friend’s reaction: “what about that number plus one?”), but apparently there is ultrafinitist work trying to address this (something about “indefinitely large numbers”? hmm…).
My own take, though, is that resting the viability of your ethics on something like “infinities aren’t a thing” is a dicey game indeed, especially given that modern cosmology says that our actual concrete universe is very plausibly infinite. And as Bostrom (2011, p. 38) notes, conditioning on the non-thing-ness of infinities (or ignoring infinity-involving possibilities) leads to weird behavior in other contexts – e.g., refusing to fund scientific projects premised on infinity-involving hypotheses, insisting that the universe is actually finite even as more evidence comes in, etc. And more broadly, it just looks like denial. It looks like covering your ears and saying “la-la-la.”
XIV. The death of a utilitarian dream
The broad vibe I’m trying to convey, here, is that infinite ethics is a rough time. Even beyond “torturing any finite number of people for any probability of an infinite lizard,” we’ve got bad impossibility results even just for ordinal rankings; we’ve got a smattering of theories that are variously incomplete, order-dependent, Pareto-violating, and otherwise unattractive/horrifying; and we’ve got an infinite hierarchy of further infinities, waiting in the wings to break whatever theory we happen to settle on. It’s early days (there isn’t that much work on this topic, at least in analytic ethics), but things are looking bleak.
OK, but: why does this matter? I’ll mention a few reasons.
The first is that I think infinite ethics punctures a certain type of utilitarian dream. It’s a dream I associate with the utilitarian friend quoted above (though over time he’s become much more of a nihilist), and with various others. In my head (content warning: caricature), it’s the dream of hitching yourself to some simple ideas – e.g., expected utility theory, totalism in population ethics, maybe hedonism about well-being — and riding them wherever they lead, no matter the costs. Yes, you push fat men and harvest organs; yes, you destroy Utopias for tiny chances of creating zillions of evil, slightly-happy rats (plus some torture farms on the side). But you always “know what you’re getting” – e.g., more expected net pleasure. And because you “know what you’re getting,” you can say things like “I bite all the bullets,” confident that you’ll always get at least this one thing, whatever else must go.
Plus, other people have problems you don’t. They end up talking about vague and metaphysically suspicious things like “people,” whereas you only talk about “valenced experiences” which are definitely metaphysically fine and sharp and joint-carving. They end up writing papers entirely devoted to addressing a single category of counter-example – even while you can almost feel the presence of tons of others, just offscreen. And more generally, their theories are often “janky,” complicated, ad hoc, intransitive, or incomplete. Indeed, various theorems prove that non-you people will have problems like this (or so you’re told; did you actually read the theorems in question?). You, unlike others, have the courage to just do what the theorems say, ‘intuitions’ be damned. In this sense, you are hardcore. You are rigorous. You are on solid ground.
Indeed, even people who reject this dream can feel its allure. If you’re a deontologist, scrambling to add yet another epicycle to your already-complex and non-exhaustive principles, to handle yet another counter-example (e.g. the fat man lives in a heavy metal crate, such that his body itself won’t stop the trolley, but he’ll die if the crate moves), you might hear, sometimes, a still, small voice saying: “You know, the utilitarians don’t have this kind of problem. They’ve got a nice, simple, coherent theory that takes care of this case and a zillion others in one fell swoop, including all possible lotteries (something my deontologist friends barely ever talk about). And they always get more expected net pleasure in return. They sure have it easy…”[6] In this sense, “maximize expected net pleasure” can hover in the background as a kind of “default.” Maybe you don’t go for it. But it’s there, beckoning, and making a certain kind of sense. You could always fall back on it. Perhaps, indeed, you can feel it relentlessly pulling on you. Perhaps a part of you fears the force of its simplicity and coherence. Perhaps a part of you suspects that ultimately (horribly?), it’s the way to go.
But I think infinite ethics changes this picture. As I mentioned above: in the land of the infinite, the bullet-biting utilitarian train runs out of track. You have to get out and wander blindly. The issue isn’t that you’ve become fanatical about infinities: that’s a bullet, like the others, that you’re willing to bite. The issue is that once you’ve resolved to be 100% obsessed with infinities, you don’t know how to do it. Your old thing (e.g., “just sum up the pleasure vs. pain”) doesn’t make sense in infinite contexts, so your old trick – just biting whatever bullets your old thing says to bite – doesn’t work (or it leads to horrific bullets, like trading Heaven + Speck for Hell + Lollypop, plus a tiny chance of the lizard). And when you start trying to craft a new version of your old thing, you run headlong into Pareto-violations, incompleteness, order-dependence, spatio-temporal sensitivities, appeals to persons as fundamental units of concern, and the rest. In this sense, you start having problems you thought you transcended – problems like the problems the other people had. You start having to rebuild yourself on new and jankier foundations. You start writing whole papers about a few counterexamples, using principles that you know don’t cover all the choices you might need to make, even as you sense the presence of further problems and counterexamples just offscreen. Your world starts looking stranger, “patchier,” more complicated. You start to feel, for the first time, genuinely lost.
To be clear: I’m not saying that infinite ethics is hopeless. To the contrary, I think some theories are better than others (expansionism is probably my current favorite), and that further work on the topic is likely to lead to further clarity about the best overall response. My point is just that this response isn’t going to look like the simple, complete, neutrality-respecting, totalist, hedonistic, EV-maximizing utilitarianism that some hoped, back in the day, would answer every ethical question – and which it is possible to treat as a certain kind of “fallback” or “default.” Maybe the best view will look a lot like such a utilitarianism in finite contexts – or maybe it won’t. But regardless, a certain type of dream will have died. And the fact that it dies eventually should make it less appealing now.
XV. Everyone’s problem
That said, infinite ethics is a problem for everyone, not just utilitarians. Everyone (even the virtue ethicists) needs to know how to choose between Heaven + Speck vs. Hell + Lollypop, given the opportunity. Everyone needs decision procedures that can handle some probability of doing infinite things. Faced with impossibility results, everyone has to give something up. And sometimes that stuff you give up matters in finite contexts, too.
A salient example to me, here, is spatio-temporal neutrality. Utilitarian or no, most philosophers want to deny that a person’s location in space and time has intrinsic ethical significance. Indeed, claims in this vicinity play an important role in standard arguments against discounting the welfare of future people, and in support of “longtermism” more broadly (e.g., “location in time doesn’t matter, there could be a lot of people in the future, so the future matters a ton”). But notably, various prominent views in infinite ethics (notably, expansionist views; but also “simplicity-weightings”) reject spatio-temporal neutrality. On these views, locations in space and time matter a lot – enough, indeed, to make e.g. pulling infinite happy planets an inch closer together worth any finite amount of additional suffering. On its own, this isn’t enough to get conclusions like “people matter more if they’re nearer to me in space and time” (the thing that longtermism most needs to reject) – but it’s an interesting departure from “location in spacetime is nothing to me,” and one that, if accepted, might make us question other neutrality-flavored intuitions as well.
And the logic that leads to non-neutrality about space-time is understandable. In particular: infinite worlds look and behave very differently depending on how you order their “value-bearing locations,” so if your view focuses on a type of location that lacks a natural order (e.g., agents, experiences, etc), it often ends up indeterminate, incomplete, and/or in violation of Pareto for the locations in question. Space-time, by contrast, comes with a natural order, so focusing on it cuts down on arbitrariness, and gives us more structure to work with.
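This order-dependence has a concrete finite-math analogue: by Riemann’s rearrangement theorem, a conditionally convergent series can be reordered to sum to a different value. A minimal sketch in Python – the “local utilities” here are just the terms of the alternating harmonic series, chosen for illustration, not drawn from any particular infinite-ethics proposal:

```python
import math

# Same multiset of "local utilities" in both cases: the terms
# (-1)^(k+1) / k. Which total you get depends entirely on the
# order in which the locations are visited.

def natural_order(n):
    """Partial sum of the first n terms in the usual order:
    1 - 1/2 + 1/3 - 1/4 + ...  (converges to ln 2)."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def rearranged(n_blocks):
    """Same terms, reordered: two positive terms for every
    negative one.  Converges to (3/2) ln 2 instead of ln 2."""
    total = 0.0
    pos, neg = 1, 2  # next odd / next even denominator
    for _ in range(n_blocks):
        total += 1 / pos; pos += 2
        total += 1 / pos; pos += 2
        total -= 1 / neg; neg += 2
    return total

print(natural_order(10**6))  # close to ln 2   (about 0.6931)
print(rearranged(10**6))     # close to 1.5*ln 2 (about 1.0397)
```

No term was added or removed between the two runs; only the ordering changed. That is the sense in which views focused on unordered locations (agents, experiences) risk indeterminacy, and why a natural ordering like space-time buys so much structure.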
Something somewhat analogous happens, I think, with “persons” vs. “experiences” as units of concern. Some people (especially, in my experience, utilitarian-types) are tempted, in finite contexts, to treat experiences (or “person-moments”) as more fundamental, since persons can give rise to various Parfitian problems. But in infinite contexts, refusing to talk about persons makes it much harder to do things like distinguish between worlds like Heaven + Speck vs. Hell + Lollypop, where our intuition is centrally driven, I think, by thoughts like “In Heaven + Speck, everyone’s life is infinitely good; in Hell + Lollypop, everyone’s life is infinitely bad.” So it becomes tempting to bring persons back into the picture (see Askell (2018), p. 198, for more on this).
We can see the outlines of a broader pattern. Finite ethics (or at least, a certain reductionist kind) often tries to ignore structure. It calls more and more things (e.g., the location of people in space-time, the locations of experiences in lives) irrelevant, so that it can home in on the true, fundamental unit of ethical concern. But infinite ethics needs structure, or else everything dissolves into re-arrangeable nonsense. So it often starts adding back in what finite ethics threw out. One is left with a sense that perhaps, there is even more structure to be not-ignored. Perhaps, indeed, the game of deriving the value of the whole from the value of some privileged type of part is worse than one might’ve thought (see Chappell (2011) for some considerations, h/t Carl Shulman). Perhaps the whole is primary.
These are a few examples of finite-ethical impulses that infinities put pressure on. I expect there to be many others. Indeed, I think it’s good practice, in finite ethics, to make a habit of checking whether a given proposal breaks immediately upon encounter with the infinite. That doesn’t necessarily mean you need to throw it out. But it’s a clue about its scope and fundamentality.
XVI. Nihilism and responsibility
Perhaps one looks at infinite ethics and says: this is an argument for nihilism. In particular: perhaps one was up for some sort of meta-ethical realism, if the objectively true ethics was going to have certain properties that infinite ethics threatens to deny – properties like making a certain sort of intuitively resonant sense. Perhaps, indeed, one had (consciously or unconsciously) tied one’s meta-ethical realism to the viability of a certain specific normative ethical theory – for example, total hedonistic utilitarianism – which seemed sufficiently simple, natural, and coherent that you could (just barely) believe that it was written into the fabric of an otherwise inhuman universe. And perhaps that theory breaks on the rocks of the infinite.
Or perhaps, more generally, infinite ethics reminds us too hard of our cognitive limitations; of the ways in which our everyday morality, for all its pretension to objectivity, emerges from the needs and social dynamics of fleshy creatures on a finite planet; of how few possibilities we are in the habit of actually considering; of how big and strange the world can be. And perhaps this leaves us, if not with nihilism, then with some vague sense of confusion and despair (or perhaps, more concretely, it makes us think we’d have to learn more math to dig into this stuff properly, and we don’t like math).
I don’t think there’s a clean argument from “infinite ethics breaks lots of stuff I like” to “meta-ethical realism is false,” or to some vaguer sense that Cosmos of value hath been reduced to Chaos. But I feel some sympathy for the vibe.
I was already pretty off-board with meta-ethical realism, though (see here and here). And for anti-realists, despairing or giving up in the face of the infinite is less of an option. Anti-realists, after all, are much less prone to nihilism: they were never aiming to approximate, in their action, some ethereal standard that might or might not exist, and which infinities could refute. Rather, anti-realists (or at least, my favored variety) were always choosing how to respond to the world as it is (or might be), and they were turning to ethics centrally as a means of becoming more intentional, clear-eyed, and coherent in their choice-making. That project persists in its urgency, whatever the unboundedness of the world, and of our influence on it. We still need to take responsibility for what we do, and for what it creates. We still harm, or help – only, on larger scales. If we act incoherently, we still step on our own feet, burning what we care about for nothing – only, this time, the losses can be infinite. Perhaps coherence is harder to ensure. But the stakes are higher, too.
The realists might object: for the anti-realist, “we need to take responsibility for how we respond to infinite worlds” is too strong. And fair enough: at the deepest level, the anti-realist doesn’t “need” or “have” to do anything. We can ignore infinities if we want, in the same sense that we can let our muscles go limp, or stay home on election day. What we lose, when we do this, is simply the ability to intentionally steer the world, including the infinite world, in the directions we care about – and we do, I think, care about some infinite things, whatever the challenges this poses. That is: if, in response to the infinite, we simply shrug, or tune out, or wail that all is lost, then we become “passive” about infinite stuff. And to be passive with respect to X is just: to let what happens with X be determined by some set of factors other than our agency. Maybe that’ll work out fine with infinities; but maybe, actually, it won’t. Maybe, if we thought about it more, we’d see that infinities are actually, from our perspective, quite a big deal indeed – a sufficiently big deal that “whatever, this is hard, I’ll ignore it” no longer looks so appealing.
I’m hoping to write more about this distinction between “agency” and “passivity” at some point (see here for some vaguely similar themes). For now I’ll mostly leave it as a gesture. I want to add, though, that given how far away we are (in my opinion) from a satisfying and coherent theory of infinite ethics, I expect that a good amount of the agency we aim at the infinite will remain, for some time, pretty weak-sauce in terms of “steering stuff in consistent directions I’d endorse if I thought about it more.” That is, while I don’t think that we should give up on approaching infinities with intentional agency, I think we should acknowledge that for a while, we’re probably going to suck at it.
XVII. Infinities in practice
I’ll close with a few thoughts on practical implications.
Perhaps we suck at infinite ethics now, both in theory and in practice. Someday, though, we might get better. In particular: if humanity can survive long enough to grow profoundly in wisdom and power, we will be able to understand the ethics here fully – or at least, much more deeply. We’ll also know much more about what sort of infinite things we are able to do, and we’ll be much better able to execute on infinite projects we deem worthwhile (building hyper-computers, creating baby-universes, etc). Or, to the extent we were always doing infinite things (for example, acausally), we’ll be wiser, more skillful, and more empowered on that front, too.
And to be clear: I don’t think that understanding the ethics, here, is going to look like “patching a few counterexamples to expansionism” or “figuring out how to deal with lotteries involving incomparable outcomes.” I’m imagining something closer to: “understanding ~all the math you might ever need, including everything related to all the infinities on the completed version of that crazy chart above; solving all of cosmology, physics, metaphysics, epistemology, and so on, too; probably reconceptualizing everything in fundamentally new and more sophisticated terms — terms that creatures at our current level of cognitive capacity can’t grok; then building up a comprehensive ethics and decision theory (assuming those terms still make sense), informed by this understanding, and encompassing of all the infinities that this understanding makes relevant.” It may well make sense to get started on this project now (or it might not); but we’re not, as it were, a few papers away.
I don’t, though, expect the output of such a completed understanding to be something like: “eh, infinities are tricky, we decided to ignore them,” which as far as I can tell is our current default. To the contrary, I can readily imagine future people being horrified at the casual-ness of our orientation towards the possibility of infinite benefits and harms. “They knew that an infinite number of people is more than any finite number, right? Did they even stop to think about it?” This isn’t to say that future people will be fanatical about infinities (as I noted above, I expect that the right thing to say about fanaticism will emerge even just from considering the finite case). But the argument for taking infinite benefits and harms very seriously isn’t especially complex. It’s the type of thing you can imagine future people being pretty adamant about.
On the other hand, if someone comes to me now and says: “I’m doing X crazy-sounding thing (e.g., quitting my bio-risk job to help break us out of the simulation; converting to Catholicism because it seemed to me slightly more likely than all the other religions; following up on that one drug experience with those infinite spaghetti elves), because of something about infinite ethics,” I’m definitely feeling nervous and bad. As ever with the wackier stuff on this blog (and indeed, even with the less-wacky stuff), my default attitude is: OK (though not risk-free) to incorporate into your worldview in grounded and suitably humble ways; bad to do brittle and stupid stuff for the sake of. I trust a wise and empowered humanity to handle the wacky stuff well (or at least, much better). I trust present-day humans who’ve thought about it for a few hours/weeks/years (including myself) much less. So as a first pass, I think that what it looks like, now, to take infinite ethics seriously is: to help our species make it to a wise and empowered future, and to let our successors take it from there.
That said, I do think that reflection on infinite ethics can (very hazily) inform our backdrop sense of how strange and different a wise future’s priorities might be. In particular: of the options I’ve considered (and setting aside simulation shenanigans), to my mind the most plausible way of doing infinitely good stuff is via exerting optimally wise acausal influence on an infinitely large cosmology. That is, my current attitude towards things like baby-universes and hyper-computers is something like: “hard to totally rule out.” (And I’d say the same thing, in a more skeptical tone, about various religions.) But I’m told that my attitude towards infinitely large cosmologies should be somewhere between: “plausible” and “probably,” and my current attitude towards some sort of acausal decision theory is something like: “best guess view.” So this leaves me, already, with very macroscopic credences on all of my actions exerting infinite amounts of (acausal) influence. It’s hard to really absorb — and I haven’t, partly because I haven’t actually looked into the relevant cosmology. But if I had to guess about where the attention of future infinity-oriented ethical projects would turn, I’d start with this type of thing, rather than with hypercomputers, or Catholicism.
Does this sort of infinite influence, maybe, just add up to normality? Maybe, for example, we use some sort of expansionism to say that you should just make your local environment as good as possible, thereby acausally making an infinite number of other places in the universe better too, thereby improving the whole thing by expansionist lights? If so, then maybe we can just live our finite lives as usual, but in an infinite number of places at once? Our lives would simply carry, on this view, the weight of Nietzsche’s eternal return – only spread out across space-time, rather than in an endless loop. We’d have a chance to confront a version of Nietzsche’s demon in the real world – to find out if we rejoice, or if we gnash our teeth.
I do think we’d confront this demon in some form. But I’m skeptical it would leave our substantive priorities untouched (and anyway, we’d need to settle on a theory of infinite ethics to get this result). In particular, I expect this sort of “acausal influence across the universe” perspective to expand beyond very close copies of you, to include acausal interaction with other inhabitants of the universe (including, perhaps, ones very different from you) whose decisions are nevertheless correlated with yours (see e.g. Oesterheld (2017) for some discussion). And naively, I expect this sort of interaction to get pretty weird.
Even beyond this particular form of weirdness, though, I think visions of future civilizations that put substantive weight on infinity-focused projects are just different in flavor from the ones that emerge from naively extrapolating your favorite finite-ethical views (though even with infinities to the side, I expect such extrapolations to mislead). Thus, for example, total utilitarian types often think that the main game for a wise future is going to be “tiling the accessible universe” with some kind of intrinsically optimal value-structure (e.g., paperclips; oh wait, no…), the marginal value of which stays constant no matter how much you’ve already got. So this sort of view sees e.g. a one-in-a-billion chance of controlling a billion galaxies as equivalent in expected value to a guarantee of one galaxy. But even as infinities cause theoretical problems for total utilitarianism, they also complicate this sort of voracious appetite for resources: relative to “hedonium per unit galaxy,” it is less clear that the success and value of infinity-oriented projects scales linearly with the resources involved (h/t Nick Beckstead for suggesting this consideration) – though obviously, resources are still useful for tons of things (including, e.g., building hypercomputers, acausal bargaining with the aliens – you know, the usual).
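The expected-value equivalence in that total-utilitarian picture is just linearity, and the complication is just concavity. A toy calculation (all numbers invented for illustration; the square root stands in for any diminishing-returns value function):

```python
import math

# Under a linear value-of-resources function, a one-in-a-billion
# shot at a billion galaxies has the same expected value as one
# galaxy for sure.  Under a sublinear (diminishing-returns)
# function, the long-shot gamble is worth far less.

P = 1e-9   # probability of controlling the galaxies
N = 1e9    # number of galaxies at stake

def linear(g):
    """Value scales linearly with resources (hedonium-tiling picture)."""
    return g

def sublinear(g):
    """Illustrative diminishing returns: value ~ sqrt(resources)."""
    return math.sqrt(g)

ev_gamble_linear = P * linear(N)   # equals linear(1): gamble and sure thing tie
ev_gamble_sub = P * sublinear(N)   # tiny compared to sublinear(1) = 1.0

print(ev_gamble_linear, ev_gamble_sub)
```

The point isn’t that the square root is the right shape – only that once the success of infinity-oriented projects stops scaling linearly with resources, the “voracious appetite” conclusion no longer falls out automatically.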
All in all, I currently think of infinite ethics as a lesson in humility: humility about how far standard ethical theory extends; humility about what priorities a wise future might bring; humility about just how big the world (both the abstract world, and the concrete world) can be, and how little we might have seen or understood. We need not be pious about such humility. Nor need we preserve or sanctify the ignorance it reflects: to the contrary, we should strive to see further, and more clearly. Still, the puzzles and problems of the infinite can be evidence about brittleness, dogmatism, over-confidence, myopia. If infinities break our ethics, we should pause, and notice our confusion, rather than pushing it under the rug. Confusion, as ever, is a clue.
From Sean Carroll (13:01 here): “Yeah, I’ll just say very quickly, I think that, just so everyone knows, this is an open question in cosmology. … The possibility’s on the table, the universe is infinite, there’s an infinite number of observers of all different kinds, and there’s a possibility on the table that the universe is finite, and there’s not that many observers, we just don’t know right now.”
Bostrom (2011): “Recent cosmological evidence suggests that the world is probably infinite. [continued in footnote] In the standard Big Bang model, assuming the simplest topology (i.e., that space is singly connected), there are three basic possibilities: the universe can be open, flat, or closed. Current data suggests a flat or open universe, although the final verdict is pending. If the universe is either open or flat, then it is spatially infinite at every point in time and the model entails that it contains an infinite number of galaxies, stars, and planets. There exists a common misconception which confuses the universe with the (finite) “observable universe”. But the observable part—the part that could causally affect us— would be just an infinitesimal fraction of the whole. Statements about the “mass of the universe” or the “number of protons in the universe” generally refer to the content of this observable part; see e.g. [1]. Many cosmologists believe that our universe is just one in an infinite ensemble of universes (a multiverse), and this adds to the probability that the world is canonically infinite; for a popular review, see [2].”
Wilkinson (2021): “you might be disappointed to find that the world around you is infinite in the relevant sense. I am sorry to disappoint you, but contemporary physics suggests just that. The widely accepted flat-lambda model predicts that our universe will tend towards a stable state and will then remain in that state for infinite duration (Wald 1983; Carroll 2017). Also widely accepted, the inflationary view posits that our world is spatially infinite, containing infinitely many other ‘bubble’ universes beyond our cosmic horizon (Guth 2007). But that’s not all they predict. Take any small-scale phenomenon which is morally valuable e.g., perhaps a human brain experiencing the thrill of reading philosophy for a given duration. Each of the above physical views predicts that our universe, in its infinite volume, will contain infinitely many such thrills (Garriga and Vilenkin 2001; Linde 2007; de Simone 2010; Carroll 2017).”
I’m ignoring situations where e.g. if I eat a sandwich today, then this changes what happens to an infinite number of Boltzmann brains later, but in a manner I can’t ever predict. That said, this sort of scenario does raise problems: see e.g. Wilkinson (2021) for some discussion.
See also Dyson (1979, p. 455-456) for more on possibilities for infinite computation.
See MacAskill: “It’s not the size of the bucket that matters, but the size of the drop” (p. 25).
This image is partly inspired by Ajeya Cotra’s discussion of the “crazy train” here.
An example from an unpublished paper by Ketan Ramakrishnan: “If this is correct, some other account of suboptimal supererogatory harming is called for. But I have been unable to figure out how such an account would work. And our exhausting casuistical gymnastics suggest that, whatever the best such account turns out to be, its mechanics are likely to prove extremely intricate. Perhaps a satisfying account will eventually be found, of course. But an alternative diagnosis of our predicament is also available. The foundational elements of ordinary, deontological moral thought – stringent duties against harming and using other people without their consent, wide prerogatives to refrain from harming ourselves in order to aid other people – are highly compelling on first inspection. But they prove, on closer view, to be composed of byzantine causal structures whose moral significance is open to serious doubt. Our present difficulties may thus be symptomatic of wider instabilities in the deontological architecture. Perhaps we should renounce any moral view that is built on such intricate causal structures. Perhaps we should just accept, with consequentialists, that “well-being comes first. The weal and woe of human beings comes first.”’