Hm, to be honest, I can't quite wrap my head around the first version. Specifically, we're choosing any sequence of events whatsoever, then if the utilities of the sequence tend to infinity (presumably equivalent to "increase without bound", or maybe "increase monotonically without bound"?), then the expected utilities have to tend to zero? I feel like there's not enough description of the early parts of the sequence. E.g. if it starts off as "going for a walk in nice weather, reading a mediocre book, kissing someone you like, inheriting a lot of money from a relative you don't know or care about as you expected to do, accomplishing something really impressive...", are we supposed to reduce probabilities on this part too? And if not, then do we start when we're threatened with 3^^^3 disutilons, or only if it's 3^^^^3 or more, or something?
I don't think the second version works without setting further restrictions either, although I'm not entirely sure. E.g. choose u = (3^^^^3)^2/e; then clearly u is monotonically decreasing in e, so by the time we get to e = 3^^^^3, we get (approximately) that "an event with utility around 3^^^^3 can have utility at most 3^^^^3" with no further restrictions (since all previous e-u pairs have higher u's, and therefore do not apply to this particular event). So the constraint is vacuous exactly where we need it, and that doesn't actually help us any.
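To make the counterexample concrete, here's a small sketch with a hypothetical stand-in constant K in place of 3^^^^3 (the shape of the argument doesn't depend on the size of the number): the proposed bound u(e) = K^2/e is monotonically decreasing in e, yet at e = K it only says "utility at most K", which constrains nothing.

```python
# Counterexample sketch with a small stand-in K (the original uses 3^^^^3;
# any K exhibits the same behavior, so K = 1000 is used here for illustration).
K = 1000

def bound(e):
    """The proposed utility bound u(e) = K^2 / e."""
    return K**2 / e

# u is monotonically decreasing in e...
vals = [bound(e) for e in range(1, K + 1)]
assert all(a > b for a, b in zip(vals, vals[1:]))

# ...but by the time e reaches K, the bound on an event with utility
# around K is just K itself, so it imposes no real restriction.
assert bound(K) == K
```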
Anyway, it took me something like 20 minutes to decide on that, which mostly suggests that it's been too long since I did actual math. I think the most reasonable and simple solution is to just have a bounded utility function (with the main question of interest being what sort of bound is best). There are definitely some alternative, more complicated, solutions, but we'd have to figure out in what (if any) ways they are actually superior.
Related to: Some of the discussion going on here
In the LW version of Pascal's Mugging, a mugger threatens to simulate and torture people unless you hand over your wallet. Here, the problem is decision-theoretic: as long as you precommit to ignore all threats of blackmail and only accept positive-sum trades, the problem disappears.
However, in Nick Bostrom's version of the problem, the mugger claims to have magic powers and will give Pascal an enormous reward the following day if Pascal gives his money to the mugger. Because the utility promised by the mugger is so large, it outweighs the low probability Pascal assigns to the mugger telling the truth. From Bostrom's essay:
As a result, says Bostrom, there is nothing rationally preventing Pascal from taking the mugger's offer, even though it seems intuitively unwise. Unlike the LW version, in this version the problem is epistemic and cannot be solved as easily.
Peter Baumann suggests that this isn't really a problem, because Pascal's probability that the mugger is honest should scale with the amount of utility he is being promised. However, as we see in the excerpt above, this isn't always the case: the mugger uses the same mechanism to procure any amount of utility, and so our belief will be based on the probability that the mugger has access to this mechanism (in this case, magic), not on the amount of utility he promises to give. As a result, I believe Baumann's solution to be false.
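The objection to Baumann can be made concrete with a toy expected-value calculation (all numbers here are made up for illustration): if our credence that the mugger has magic is a fixed p_magic that does not shrink as the promise grows, then the expected value of handing over the wallet eventually turns positive, no matter how absurd the promised amount.

```python
# Toy illustration: a fixed, mechanism-based credence means the promised
# utility can always be inflated until the trade looks positive-sum.
# Both numbers below are hypothetical.
p_magic = 1e-12       # fixed credence that the mugger has access to magic
cost_of_wallet = 10   # utility lost by handing over the wallet

for promised in [10**3, 10**6, 10**18, 10**30]:
    ev = p_magic * promised - cost_of_wallet
    print(promised, ev)

# Once promised > cost_of_wallet / p_magic, expected value is positive,
# which is exactly the mugging.
```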
So, my question is this: is it possible to defuse Bostrom's formulation of Pascal's Mugging? That is, can we solve Pascal's Mugging as an epistemic problem?