you wouldn't actually be willing [I don't think?] to make such a trade.
Why shouldn't I be? A 10^(-500) chance of utility 10^(750) yields an expected utility of 10^(250). This sounds like a pretty good deal to me, especially when you consider that "expected utility" is the technical term for "how good the deal is".
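For what it's worth, the arithmetic in that reply can be checked exactly with Python's arbitrary-precision integers (this is my own sanity-check sketch; the numbers come from the exchange above):

```python
from fractions import Fraction  # exact rational arithmetic, no float overflow

# A 10^(-500) chance of utility 10^(750), as in the comment above.
p = Fraction(1, 10**500)   # probability of the payoff
u = 10**750                # utility if the payoff occurs
expected = p * u           # expected utility

assert expected == 10**250
```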
(I'll note at this point that we're no longer discussing Pascal's mugging, which is a problem in epistemology, about how we know the probability of the mugger's threat is so low; instead, we're discussing ordinary expected utility maximization.)
The mugger also doesn't have to do all the work of raising your probability by a factor of 10^(500); the universe can do most (or all) of it. Remember, your priors are fixed once and for all at the beginning of time.
You postulated that my prior was 10^(-1000), and that the mugger raised it to 10^(-500). If other forces in the universe cooperated with the mugger to accomplish this, I don't see how that changes the decision problem.
In the grand scheme of things, 10^(500) isn't all that much. It's just 1661 bits.
In which case, we can also say that a posterior probability of 10^(-500) is "just" 1661 bits away from even odds.
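The "1661 bits" figure quoted in this exchange checks out; a one-line verification in Python (my own illustration):

```python
import math

# Bits of evidence needed to shift odds by a factor of 10^500:
# log2(10^500) = 500 * log2(10) ~= 1660.96
bits = 500 * math.log2(10)
print(round(bits))  # 1661
```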
I know what the definition of utility is. My claim is that there is no event you would care about enough, at a probability of 10^(-500), to pay $5 for.
For background, see here.
In a comment on the original Pascal's mugging post, Nick Tarleton writes:
Coming across this again recently, it occurred to me that there might be a way to generalize Vassar's suggestion in such a way as to deal with Tarleton's more abstract formulation of the problem. I'm curious about the extent to which folks have thought about this. (Looking further through the comments on the original post, I found essentially the same idea in a comment by g, but it wasn't discussed further.)
The idea is that the Kolmogorov complexity of "3^^^^3 units of disutility" should be much higher than the Kolmogorov complexity of the number 3^^^^3. That is, the utility function should grow only according to the complexity of the scenario being evaluated, and not (say) linearly in the number of people involved. Furthermore, the domain of the utility function should consist of low-level descriptions of the state of the world, which won't refer directly to words uttered by muggers, in such a way that a mere discussion of "3^^^^3 units of disutility" by a mugger will not typically be (anywhere near) enough evidence to promote an actual "3^^^^3-disutilon" hypothesis to attention.
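One way to make the proposal concrete is a toy model in which the prior penalty and the utility bound scale with the same complexity K, so that no single hypothesis can dominate the expected-utility sum by sheer size of stakes. This sketch is my own illustration, not from the original discussion, and the specific 2^(-K) prior and 2^K utility cap are assumptions:

```python
# Toy model: a Solomonoff-style prior of ~2^(-K) for a hypothesis of
# complexity K bits, paired with the proposal above that |utility| is
# bounded by the complexity of the scenario (here capped at ~2^K).

def prior(k_bits):
    """Assumed complexity-penalized prior over hypotheses."""
    return 2.0 ** -k_bits

def utility_bound(k_bits):
    """Assumed cap on utility, growing only with scenario complexity."""
    return 2.0 ** k_bits

# The expected-utility contribution of any single hypothesis is then
# bounded by prior(K) * utility_bound(K) = 1, no matter how large K is:
for k in (10, 100, 1000):
    print(k, prior(k) * utility_bound(k))  # 1.0 in every case
```

Under these assumptions a mugger who merely *utters* "3^^^^3 disutilons" cannot blow up the sum, because describing a high-utility scenario raises its complexity penalty in lockstep with its stakes.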
This seems to imply that the intuition responsible for the problem is a kind of fake simplicity, ignoring the complexity of value (negative value in this case). A confusion of levels also appears implicated (talking about utility does not itself significantly affect utility; you don't suddenly make 3^^^^3-disutilon scenarios probable by talking about "3^^^^3 disutilons").
What do folks think of this? Any obvious problems?