I'll repeat that I don't believe in St. Petersburg lotteries:
my honest position on St. Petersburg lotteries is that they do not exist in "natural units", i.e., counts of objects in the physical world.
Reasoning: if you predict with probability p that you will encounter a St. Petersburg lottery that creates an infinite number of happy people in expectation (the total utilitarians' version of the St. Petersburg lottery), then you should already set your expected number of happy people to infinity, because E[number of happy people] = p * E[number of happy people due to the St. Petersburg lottery] + (1 - p) * E[number of happy people for all other reasons] = p * inf + (1 - p) * E[number of happy people for all other reasons] = inf.
Therefore, if you don't think right now that the expected number of future happy people is infinite, then you shouldn't expect a St. Petersburg lottery to happen at any point in the future.
Therefore, you should set your utility either in "natural units" or in some "nice" function of "natural units".
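To make the decomposition above concrete, here is a minimal Python sketch; the probability and the finite baseline are illustrative numbers I picked, not anything from the argument itself:

```python
# Illustrative numbers: any p > 0 and any finite baseline give the same result.
p = 1e-6                   # probability of encountering the St. Petersburg lottery
e_lottery = float("inf")   # E[happy people | lottery] is infinite by construction
e_otherwise = 1e10         # E[happy people | no lottery], some finite number

# Law of total expectation:
# E[N] = p * E[N | lottery] + (1 - p) * E[N | no lottery]
expected_happy_people = p * e_lottery + (1 - p) * e_otherwise

print(expected_happy_people)  # inf -- any nonzero p drags the whole expectation to infinity
```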
In this case, I do think that the number of happy people in expectation is infinite, both now and in the future, for one somewhat trivial reason and one somewhat more substantive reason.
The trivial reason is that I believe space is infinite with non-negligible probability, and that alone is enough to make the expected number of happy people infinite.
The somewhat more substantive reason has to do with the possibility of changing physics, as in Adam Brown's talk; more generally, any possibility that the rules are changeable also lets you introduce possible infinities into things.
I think that here you should re-evaluate what you consider "natural units".
Like, it's clear from Olbers's paradox and relativity that we live in a causally isolated pocket where the stuff we can interact with is certainly finite. If the universe is a set of causally isolated bubbles, all you have is anthropics over such bubbles.
Why would the value, to me personally, of the existence of happy people be linear in the number of them? Is creating happy person #10,000,001, [almost] identical to the previous 10,000,000, as joyous as when the first of them was created? I think value is necessarily limited. There are always diminishing returns from more of the same...
Most value functions that grow without bound, like log x or even log log x, still tend to infinity as the number of happy people does. For you personally the value of the existence of happy people might be bounded, but that isn't true for at least some people (I'm not including myself in that sentence), so the argument still doesn't work.
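To illustrate that point, here is a small Python sketch of a St. Petersburg-style lottery whose payoff schedule is chosen so that even a logarithmic value function diverges; the particular schedule (payoff 2**(2**k) with probability 2**-k) is my own illustrative choice, not something from the comment above:

```python
# A St. Petersburg-style lottery paying 2**(2**k) happy people with probability 2**-k.
# Under a logarithmic value function, round k contributes
#   2**-k * log2(2**(2**k)) = 2**-k * 2**k = 1,
# so the partial sums of the expected log-value grow without bound.
def expected_log_value(rounds: int) -> float:
    total = 0.0
    for k in range(1, rounds + 1):
        prob = 2.0 ** -k        # probability of round k
        log_payoff = 2.0 ** k   # log2 of the payoff 2**(2**k), kept in log form to avoid overflow
        total += prob * log_payoff
    return total

for n in (10, 100, 1000):
    print(n, expected_log_value(n))  # grows linearly with the number of rounds
```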
Upvoted for bringing useful terminology for that case to my attention; I wasn't aware of it.
That said, there is too much "true/false" and too much "should" in what is suggested, imho.
In reality, if I, say, choose not to drink the potion, I might still be quite utilitarian in ordinary decisions; it's just that I don't have the guts, or at this very moment I simply have a bit too little empathy with the trillion years of happiness for my future self, so it doesn't match up with my dread of almost certain death. All this without implying that I really think we ought to discount those trillion years. I am just an imperfect altruist toward my future self; I fear dying even if it's an imminent death, etc. So it's just a basic preference to reject it, not a grand non-utilitarian theory implied by it. I might in fact even prescribe that potion to others in some situations, and still not want to drink it myself.
So, I think it does NOT follow that I'd have to believe "what happens on faraway exoplanets or what happened thousands of years ago in history could influence what we ought to do here and now", at least not just from rejecting this particular potion.
There are mathematical arguments against Expected Value Fanaticism. They point out that a different ontology is required when considering successive decisions over unbounded time and unbounded payoffs. Hence the concepts of multiple bets over time, Kelly betting, and what is now the Standard Bad Example of someone deliberately betting the farm for a chance at the moon and losing. And once you start reasoning about divergent games like St. Petersburg, you can arrive at contradictions very easily unless you think carefully about the limiting processes involved. Axioms that sound reasonable when you are only imagining ordinary small bets can go wrong for astronomical bets. Inf + 0 = Inf + 1 in IEEE 754, but 0 < 1; Inf - Inf is Not a Number, and NaN is not even equal to itself.
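The IEEE 754 behaviour mentioned above is easy to check directly; for example, in Python (whose floats are IEEE 754 doubles):

```python
import math

inf = float("inf")
nan = float("nan")

print(inf + 0 == inf + 1)     # True: Inf + 0 and Inf + 1 are the same value, even though 0 < 1
print(math.isnan(inf - inf))  # True: Inf - Inf is Not a Number
print(nan == nan)             # False: NaN is not even equal to itself
```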
I wrote an introduction to Expected Value Fanaticism for Utilitarianism.net. Suppose there was a magical potion that almost certainly kills you immediately but offers you (and your family and friends) an extremely long, happy life with a tiny probability. If the probability of a happy life were one in a billion and the resulting life lasted one trillion years, would you drink this potion? According to Expected Value Fanaticism, you should accept gambles like that.
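For what it's worth, here is the back-of-the-envelope expected-value arithmetic behind that gamble in Python; the baseline of a few remaining decades is my own illustrative figure, not something from the article:

```python
p_survive = 1e-9                  # one-in-a-billion chance the potion works
happy_years_if_it_works = 1e12    # one trillion years of happy life
baseline_remaining_years = 50.0   # illustrative: a few ordinary decades left otherwise

ev_drink = p_survive * happy_years_if_it_works  # 1000.0 expected happy years
ev_decline = baseline_remaining_years           # 50.0 expected years

print(ev_drink, ev_decline)  # naive expected-value maximization says: drink the potion
```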
This view may seem, frankly, crazy - but there are some very good arguments in its favor. Basically, if you reject Expected Value Fanaticism, you'll end up violating some very plausible principles. You would have to believe, for example, that what happens on faraway exoplanets or what happened thousands of years ago in history could influence what we ought to do here and now, even when we cannot affect those distant events. This seems absurd - we don't need a telescope to decide what we morally ought to do.
However, the story is a bit more complicated than that... Well, read the article! Here's the link: https://utilitarianism.net/gue.../expected-value-fanaticism/