I will argue that the prior for such a world should be of the order of 1/n or lower.
This class of argument has been made before. The standard counterargument is that whatever argument you have for this conclusion, you cannot be 100% certain of its correctness. You should assign some nonzero probability to the hypothesis that the probability does not decrease fast enough for the correct expected utilities to be bounded. Then, taking this uncertainty into account, your expected utilities are unbounded.
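The divergence can be seen in a toy calculation (all numbers here are invented for illustration): suppose a payoff of size 2^k gets prior 4^-k under your "fast decay" argument (expected utility converges), but you assign meta-probability eps to a "slow decay" prior of 2^-k, under which every term contributes a constant. The mixture's partial sums then grow without bound:

```python
# Toy sketch (hypothetical numbers): meta-uncertainty about how fast the
# prior decays can make the mixed expected utility diverge.
eps = 1e-6  # probability that the fast-decay argument is wrong

def partial_eu(n):
    """Partial sum of expected utility over the first n payoff sizes."""
    total = 0.0
    for k in range(1, n + 1):
        # utility 2**k times prior: the fast prior 4**-k contributes
        # 2**-k per term (convergent); the slow prior 2**-k contributes
        # exactly 1 per term (divergent).
        total += (1 - eps) * 2.0 ** -k + eps * 1.0
    return total

print(partial_eu(100))    # ~1.0001: fast part near 1, plus 100 * eps
print(partial_eu(10**5))  # ~1.1: the eps-weighted tail keeps growing
```

However small eps is, the eps-weighted component adds a constant per term, so the sum diverges; only eps = 0 restores boundedness, which is exactly the certainty the counterargument says you cannot have.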
The arguments relating to the bandwidth of our sensory system fail to account for (inefficient) encodings of that information which may have some configurations with arbitrarily low likelihood.
There is a positive lower bound on the probability of observing any given data (given a bound on the description length of the data), because you might just be getting random input. Any observation that could be the result of some 1/3^^^^3 event could also just randomly pop into your brain for no reason, with probability far greater than that. If you see a mechanism that outputs a random integer from 1 to 3^^^^3, and its output appears to be 7, you should be almost 100% confident that there was an error in your senses, or your memory, or the reasoning that convinced you the mechanism works as described, etc. (where "etc." means "anything other than that you observed the output of a mechanism that generates a random integer from 1 to 3^^^^3, and it was 7").
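A quick numeric sketch of the point, using the far smaller tower 3^^3 = 3**27 (3^^^^3 is not representable, and the assumed error rate is a made-up figure): even a very generous fixed probability of perceptual or memory error dwarfs the probability of the event itself.

```python
# Toy comparison (the error rate is an invented assumption): any fixed
# chance that your senses/memory/reasoning misled you swamps the
# probability of a 1-in-3^^3 event, let alone 1-in-3^^^^3.
p_error = 1e-9                 # assumed chance of a sensory/memory error
tower = 3 ** (3 ** 3)          # 3^^3 = 3**27 = 7625597484987
p_event = 1.0 / tower          # ~1.3e-13

print(p_error / p_event)       # ~7600: the error hypothesis dominates
```

For 3^^^^3 the ratio is astronomically larger still, which is why the error hypothesis should absorb essentially all of your posterior.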
The point is that in this situation, just paying the mugger and carrying on cannot be the best course of action, because it's not the right choice if they're lying, and if they're not then it's dominated by other much larger considerations. Thus the mugging still fails, not necessarily because of the implausibility of their threat but because of the utter irrelevance of it in the face of unboundedly more important other considerations.
This totally fails to resolve the paradox. The conclusion that you should drop everything else and go all in on pursuing arbitrarily small probabilities of even more vast outcomes is, if anything, even more counter-intuitive than the conclusion that you should give the mugger $5.
Of course this doesn't really resolve the mugging itself. You could modify the scenario so that, instead of my having to pay, the mugger makes a small, plausible threat whose stakes are purely moral (e.g. "I'll punch that guy in the face"). I would then be motivated to make the correct moral decision regardless of any bound on my utility (though I suppose my motivation to be correct is itself bounded).
There is no reason that the "moral component" of your utility function must be linear. In fact, the boundedness of your utility function is the correct solution to Pascal's mugging.
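A minimal sketch of how boundedness defuses the mugging, with invented numbers (the cap, the saturation scale, and the probabilities are all my own assumptions): if utility saturates at some U_MAX, the expected gain from paying can never exceed p * U_MAX, no matter how large the promised payoff.

```python
import math

# Sketch with made-up numbers: a bounded (saturating) utility function
# caps the expected gain from the mugger's offer at p * U_MAX.
U_MAX = 100.0

def utility(x):
    # saturating utility, bounded above by U_MAX
    return U_MAX * (1 - math.exp(-x / 1e6))

p_mugger_honest = 1e-20
cost_of_paying = utility(5.0)  # utility forgone by handing over $5

for promised in [1e9, 1e30, 1e100]:
    expected_gain = p_mugger_honest * utility(promised)
    # expected_gain <= p * U_MAX = 1e-18, far below cost_of_paying
    print(promised, expected_gain < cost_of_paying)
```

With an unbounded linear utility the mugger wins by quoting a bigger number; with any bounded utility the promised payoff saturates, so a sufficiently implausible threat is always dominated by the certain cost.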
The problem with Pascal's mugging is that it IS a fully general counterargument under classical decision theory. That's why it's a paradox right now. But saying "there's a problem with this paradox, therefore I'll just ignore the problem" is not a solution.
I'm not trying to ignore the problem; I'm trying to make progress on it. If, for example, I reduce the mugging to a non-special instance of another problem, then I've reduced the number of distinct problems that need solving by one. Surely that's useful?