According to orthodox expected utility theory, the boundedness of the utility function follows from standard decision-theoretic assumptions, like Savage's ~~fairly weak~~ axioms or the von Neumann-Morgenstern continuity/Archimedean axiom. Unbounded expected utility maximization violates the sure-thing principle and is vulnerable to both Dutch books and money pumps, all plausibly irrational. See, for example, Paul Christiano's comment with St. Petersburg lotteries (and my response). So, it's pretty plausible that unbounded expected utility maximization is just inevitably formally irrational.
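To make the St. Petersburg issue concrete, here's a minimal numerical sketch (my own toy illustration, not taken from Christiano's comment; the specific bounded utility 1 − 2^−x is an arbitrary choice): for the lottery paying 2^n with probability 2^−n, an unbounded (linear-in-money) utility gives partial expected-utility sums that grow without bound, while the bounded utility's sums converge.

```python
# Toy illustration: partial sums of expected utility for the St. Petersburg lottery
# (pays 2**n with probability 2**-n), under an unbounded vs. a bounded utility.

def partial_expected_utility(utility, n_terms):
    """Partial sum of E[u(payoff)] over the first n_terms outcomes."""
    return sum(2.0**-n * utility(2.0**n) for n in range(1, n_terms + 1))

def unbounded_u(x):
    return x                 # utility linear in the payoff: unbounded above

def bounded_u(x):
    return 1 - 2.0 ** -x     # an arbitrary bounded utility, capped at 1

for n_terms in (10, 100, 1000):
    print(n_terms,
          partial_expected_utility(unbounded_u, n_terms),  # grows like n_terms
          partial_expected_utility(bounded_u, n_terms))    # stays below 1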
However, I'm not totally sure, since there are some parallels to Newcomb's problem and Parfit's hitchhiker: you'd like to precommit ahead of time to following a rule that leads to the best prospects, but once some event happens, you'd like to break the rule and greedily maximize local value instead. Breaking the rule, though, means you'll end up with worse prospects over the whole sequence of events than if you had followed it. The rules are:
- Newcomb's problem: taking only the one box
- Parfit's hitchhiker: paying the driver after being rescued
- Christiano's St. Petersburg lotteries: sticking with the best St. Petersburg lottery offered (see the sketch after this list)
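Here's that sketch (my own simplification, with an arbitrary swap fee and a finite cutoff standing in for the infinite sequence of offers; the function names and parameters are all my own invention, and the details in Christiano's thread differ): each round, the agent can pay a small fee to trade its current lottery for one with doubled payoffs. With an unbounded utility, the doubled lottery always looks better in expectation, so the greedy rule swaps every time and never stops to collect, while the precommitted rule resolves a lottery and walks away with a payoff.

```python
# Toy version of the lottery-switching trap. Payoffs are scale * 2**n with probability 2**-n.
import random

def resolve_st_petersburg(scale):
    """Resolve a St. Petersburg lottery: pays scale * 2**n with probability 2**-n."""
    n = 1
    while random.random() < 0.5:
        n += 1
    return scale * 2**n

def play(always_swap, n_offers=50, fee=1.0):
    """Net outcome after up to n_offers chances to swap for a doubled lottery at a small fee."""
    scale, fees_paid = 1.0, 0.0
    for _ in range(n_offers):
        if not always_swap:
            # Precommitted rule: refuse further swaps and resolve the current lottery.
            return resolve_st_petersburg(scale) - fees_paid
        # Greedy rule: the doubled lottery has higher expected utility, so swap again,
        # no matter how good the current lottery already is.
        scale *= 2
        fees_paid += fee
    # Finite cutoff standing in for the endless offers: the greedy agent never resolved anything.
    return -fees_paid

random.seed(0)
print("precommitted:", play(always_swap=False))  # a positive payoff (at least 2.0)
print("always swap: ", play(always_swap=True))   # -50.0: nothing but fees
```

The simulation obviously doesn't prove anything; it's just the shape of the case where each locally greedy choice looks fine, but the policy of always making them is dominated.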
So, rather than necessarily undermining unbounded expected utility maximization, maybe this is just a problem for "local" expected utility maximization: there are other reasons to want to be able to precommit to rules, even when you expect to want to break them later, and having to make precommitments shouldn't be decisive against a decision theory.
Still, it seems better to avoid precommitments when possible, because they're messy, risky and ad hoc. Bounded utility functions seem like a safer and cleaner solution here: we get a formal proof that they work in idealized scenarios. I also don't know whether precommitments generally resolve the apparent violations of decision-theoretic principles that unbounded utility functions face and bounded ones avoid; I may be generalizing too much from one case.
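For a bit of the intuition behind why boundedness helps (a standard observation, not the formal result I'm referring to): if $|u(x)| \le M$ for every outcome $x$, then for any prospect (probability measure) $P$,

$$\bigl|\mathbb{E}_P[u]\bigr| = \Bigl|\int u \,\mathrm{d}P\Bigr| \le \int |u| \,\mathrm{d}P \le M,$$

so every prospect has a finite, well-defined expected utility, and the divergences driving the St. Petersburg-style arguments above can't get started.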
Fair. I've struck out the "fairly weak". I think this is true of the vNM axioms, too. Still, "completely and extremely physically impossible" to me usually just means very, very low probability, not probability 0. We could be wrong about physics. See also Cromwell's rule. So, if you want your theory to cover everything that's extremely unlikely but not actually ruled out (probability 0), it really needs to cover a lot. There may be some things you can reasonably assign probability 0 to (other than individual events drawn from a continuum, say), or some probability assignments you aren't forced to consider (they are your subjective probabilities, after all), so Savage's axioms could be stronger than necessary.
I don't think it's reasonable to rule out all possible realizations of Christiano's St. Petersburg lotteries, though. You could still just ignore these possibilities, and I think that's basically okay too, but it seems hard to come up with a satisfactory principled reason to do so, so I'd guess ignoring them is incompatible with normative realism about decision theory (which I doubt, anyway).