Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring some probabilities, namely that they violate an implicit assumption behind expected utility theory. This suggests a possible approach for formally defining a "probability small enough to ignore", though there's still a bit of arbitrariness in it.
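As a very rough illustration of what such a definition might look like in practice, here is a minimal sketch of an expected-utility calculation that simply drops outcomes below a cutoff. The particular cutoff rule (one over the number of comparable decisions you expect to face) and all the numbers are my own assumptions for illustration, not the post's formal proposal:

```python
# Minimal sketch of "ignore probabilities below a cutoff" in an expected
# utility calculation. The cutoff rule (1 / expected number of comparable
# decisions) and all names/numbers here are illustrative assumptions, not
# the post's formal proposal.

def truncated_expected_utility(outcomes, decisions_per_lifetime=1_000_000):
    """outcomes: iterable of (probability, utility) pairs."""
    cutoff = 1.0 / decisions_per_lifetime
    return sum(p * u for p, u in outcomes if p >= cutoff)

# A Pascal's-Mugging-style offer: astronomical payoff at a vanishingly small
# probability, a small cost otherwise. The tiny branch is simply dropped.
mugging_offer = [(1e-20, 1e30), (1.0 - 1e-20, -5.0)]
print(truncated_expected_utility(mugging_offer))  # ~ -5.0, not ~ +1e10
```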
Thanks!
I understand that line of reasoning, but to me it feels similar to the philosophy on which the principles of rationality should be totally objective and shouldn't involve things like subjective probabilities, so one settles on a frequentist interpretation of probability and tries to get rid of subjective (Bayesian) probabilities entirely, which doesn't really work in the real world.
But most people already base their reasoning on an assumption of being the same person tomorrow; if you seriously start making your EU calculations based on the assumption that you're only going to live for one day or for an even shorter period, lots of things are going to get weird and broken, even without my approach.
It doesn't seem all that weird to me; rationality has always been a tool for us to best achieve the things we care about, so its exact form will always be dependent on the things that we care about. The kinds of deals we're willing to consider already depend on how long we expect to live. For example, if you offered me a deal that had a 99% chance of killing me on the spot and a 1% chance of giving me an extra 20 years of healthy life, the rational answer would be to say "no" if it was offered to me now, but "yes" if it was offered to me when I was on my deathbed.
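To spell out the arithmetic behind that comparison, here's a minimal sketch in expected remaining life-years, assuming roughly 50 years left if the deal is offered now and essentially none on the deathbed (the baseline figures are assumptions, just for illustration):

```python
# Rough sketch of the 99%/1% deal in expected remaining life-years.
# The baseline figures (about 50 years left "now", essentially none on the
# deathbed) are assumptions for illustration only.

def expected_years_if_accepted(baseline_years):
    # 99% chance of dying on the spot, 1% chance of 20 extra healthy years.
    return 0.99 * 0.0 + 0.01 * (baseline_years + 20.0)

for label, baseline in [("now", 50.0), ("deathbed", 0.0)]:
    accept = expected_years_if_accepted(baseline)
    refuse = baseline
    print(f"{label}: accept -> {accept:.2f} years, refuse -> {refuse:.2f} years")

# now:      accept -> 0.70, refuse -> 50.00  (refusing is better)
# deathbed: accept -> 0.20, refuse ->  0.00  (accepting is better)
```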
If you say "rationality is dependent on how we formalize the philosophy of identity in the real world", it does sound counter-intuitive, but if you say "you shouldn't make deals that you never expect to be around to benefit from", it doesn't sound quite so weird anymore. If you expected to die in 10 years, you wouldn't make a deal that would give you lots of money in 30. (Of course it could still be rational if someone else you cared about would get the money after your death, but let's assume that you could only collect the payoff personally.)
Using subjective information within a decision-making framework seems fine. The troublesome part is that the idea of 'lifespan' is being used to create the framework.
Making practical decisions based on how long I expect to live seems fine and normal currently. If I want an ice cream tomorrow, that's not contingent on whether 'tomorrow-me' is the same person as I was today or a different one. My lifespan is uncertain, and a lot of my values might be fulfilled after it ends. Weirdnesses like the possibility of being a Boltzmann brain are tricky, but at least they...