I don't know if this solves very much. As you say, if we use the number 1, then we shouldn't wear seatbelts, get fire insurance, or eat healthy to avoid getting cancer, since all of those can be classified as Pascal's Muggings. But if we start going for less than one, then we're just defining away Pascal's Mugging by fiat, saying "this is the level at which I am willing to stop worrying about this".
Also, as some people elsewhere in the comments have pointed out, this makes probability non-additive in an awkward sort of way. Suppose that if you eat unhealthy, you increase your risk of each of one million different diseases by one in a million. Suppose also that eating healthy is a mildly unpleasant sacrifice, but getting a disease is much worse. If we calculate this out disease by disease, each disease is a Pascal's Mugging and we should choose to eat unhealthy. But if we calculate it out for the broad category of "getting some disease or other", then our chances are quite high and we should eat healthy. It's very strange, though, that our ontology/categorization scheme should affect our decision-making. This becomes much more dangerous when we st...
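A quick numerical sketch of that aggregation point, assuming the million disease risks are independent (illustrative numbers only, not real epidemiology):

```python
# Sketch of the aggregation point above, assuming one million independent
# disease risks of one-in-a-million each (illustrative numbers).
per_disease_p = 1e-6
n_diseases = 10**6

# Evaluated disease-by-disease, each risk looks negligible:
print(per_disease_p)               # 1e-06 -- "too small to care about"

# Evaluated as the single category "getting some disease or other":
p_any_disease = 1 - (1 - per_disease_p) ** n_diseases
print(round(p_any_disease, 3))     # ~0.632 -- clearly worth avoiding
```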
A rule-of-thumb I've found use for in similar situations: There are approximately ten billion people alive, of whom it's a safe conclusion that at least one is having a subjective experience that is completely disconnected from objective reality. There is no way to tell that I'm not that one-in-ten-billion. Thus, I can never be more than one minus one-in-ten-billion sure that my sensory experience is even roughly correlated with reality. Thus, it would require extraordinary circumstances for me to have any reason to worry about any probability of less than...
Thanks for these thoughts, Kaj.
It's a worthwhile effort to overcome this problem, but let me offer a mode of criticising it. A lot of people are not going to want the principles of rationality to be contingent on how long you expect to live. There are a bunch of reasons for this. One is that how long you expect to live might not be well-defined. In particular, some people will want to say that there's no right answer to the question of whether you become a new person each time you wake up in the morning, or each time some of your brain cells die. On the o...
I think this simplifies. Not sure, but here's the reasoning:
L (or expected L) is a consequence of S, so not an independent parameter. If R=1, then is this median maximalisation? http://lesswrong.com/r/discussion/lw/mqa/median_utility_rather_than_mean/ It feels close to that, anyway.
I'll think some more...
This is just a complicated way of saying, "Let's use bounded utility." In other words, the fact that people don't want to take deals from which they expect, in fact, to get nothing overall means that they don't value bets of that kind enough to take them. Which means they have bounded utility. Bounded utility is the correct response to PM.
I'll need some background here. Why aren't bounded utilities the default assumption? You'd need some extraordinary arguments to convince me that anyone has an unbounded utility function. Yet this post and many others on LW seem to implicitly assume unbounded utility functions.
I like Scott Aaronson's approach for resolving paradoxes that seemingly violate intuitions -- see if the situation makes physical sense.
Like people bring up "Blockhead," a big lookup table that can hold an intelligent conversation with you for [length of time], and wonder whether this has ramifications for the Turing test. But Blockhead is not really physically realizable for reasonable lengths.
Similarly for creating 10^100 happy lives, how exactly would you go about doing that in our Universe?
By some alternative theory of physics that has a, say, .000000000000000000001 probability of being true.
What if one considers the following approach: Let e be a probability small enough that if I were to accept all bets offered to me with probability p <= e, then the expected number of such bets that I win is less than one. The approach is to ignore any bet where p <= e.
This solves Yvain's problem with wearing seatbelts or eating unhealthy, for example. It also means that "sub-dividing" a risk no longer changes whether you ignore it.
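A minimal sketch of that rule, assuming you can enumerate the win probabilities of the bets you expect to be offered (the numbers below are hypothetical):

```python
def pest_threshold(bet_probabilities):
    """Return the largest e such that, if you accepted every offered bet whose
    win probability is <= e, you would still expect to win fewer than one of
    them. Bets at or below this threshold get ignored under the rule above."""
    best = 0.0
    for e in sorted(set(bet_probabilities)):
        expected_wins = sum(p for p in bet_probabilities if p <= e)
        if expected_wins < 1:
            best = e
    return best

# Hypothetical mix of win probabilities for the bets one expects over a lifetime:
bets = [1e-9] * 1000 + [1e-6] * 500 + [1e-3] * 2000 + [0.3] * 10
print(pest_threshold(bets))   # 1e-06 -- the 1e-9 and 1e-6 bets fall at or below the cutoff
```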
Rolling all 60 years of bets up into one probability distribution as in your example, we get:
I think what this shows is that the aggregating technique you propose is no different than just dealing with a 1-shot bet. So if you can't solve the one-shot Pascal's mugging, aggregating it won't help in general.
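A rough illustration of why aggregating doesn't change the shape of the problem (hypothetical numbers: one mugging-style offer per day for 60 years, each with a one-in-a-trillion chance of paying off):

```python
# One mugging-style offer per day for 60 years, each paying off independently
# with probability 1e-12 (hypothetical numbers).
n_offers = 60 * 365
p_each = 1e-12

# Probability that at least one offer ever pays off across the whole 60 years:
p_any = 1 - (1 - p_each) ** n_offers
print(p_any)   # ~2.2e-08 -- the aggregate is still a single tiny-probability bet
```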
Would that mean that if I expect to use transport n times over the next m years, with probability p of dying during each commute, and I want to calculate the PEST of, for example, fatal poisoning from canned food f, which I estimate could happen about t times during the same m years, then I have to lump the two dangers together and see if the total is still < 1? I mean, I can work from home and never eat canned food... But this doesn't seem to be what you write about when you talk about different deals.
(Sorry for possibly stupid question.)
I'm probably way late to this thread, but I was thinking about this the other day in response to a different thread, and considered using the Kelly Criterion to address something like Pascal's Mugging.
Trying to figure out your current 'bankroll' in terms of utility is probably open to interpretation, but for some broad estimates, you could probably use your assets, or your expected free time, or some utility function that includes those plus whatever else.
When calculating optimal bet size using the Kelly criterion, you end up with a percentage of yo...
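For reference, a minimal sketch of the Kelly sizing being gestured at here (the standard formula; reading the "bankroll" as utility or assets is the commenter's interpretation, and the numbers below are hypothetical):

```python
def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Kelly criterion: fraction of bankroll to stake on a bet paying
    net_odds-to-1 with win probability p_win. Negative means don't bet."""
    q = 1 - p_win
    return (net_odds * p_win - q) / net_odds

# A Pascal's-Mugging-shaped offer: astronomical payoff, tiny probability.
f = kelly_fraction(p_win=1e-21, net_odds=1e100)
print(f)   # ~1e-21 -- Kelly says stake only a vanishing sliver of the bankroll
```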
I think the mugger can modify their offer to include "...and I will offer you this deal X times today, so it's in your interest to take the deal every time," where X is sufficiently large, and the amount requested in each individual offer is tiny but calibrated to add up to the amount that the mugger wants. If the odds are a million to one, then to gain $1000, the mugger can request $0.001 a million times.
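A quick sketch of that splitting trick, using the comment's own numbers (with an aside on how it interacts with the "expected wins below one" rule proposed above):

```python
# The mugger splits one $1000 request into a million micro-requests of $0.001,
# so that each individual offer looks negligible (numbers from the comment).
per_offer_cost = 0.001
n_offers = 10**6
print(per_offer_cost * n_offers)   # 1000.0 -- the micro-offers add back up

# Aside: if each micro-offer carries the same one-in-a-million probability,
# the expected number of "winning" offers is 1e-6 * 1e6 = 1, so the offers no
# longer sit strictly below the "expected wins < 1" threshold discussed above.
print(1e-6 * n_offers)             # 1.0
```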
The way I see it, if one believes that the range of possible extreme positive or negative values is greater than the range of possible probabilities, then one would have reason to treat rare high- or low-value events as more important than more frequent events.
Some more extreme possibilities on the lifespan problem: Should you figure in the possibility of life extension? The possibility of immortality?
What about Many Worlds? If you count alternate versions of yourself as you, then low probability bets make more sense.
This is an interesting heuristic, but I don't believe that it answers the question of, "What should a rational agent do here?"
The reasoning for why one should rely on expected value even for one-offs can be used to circumvent this argument. It is mentioned in the article, but I would like to raise it explicitly.
If I personally have a 0.1 chance of getting a high reward within my lifetime, then 10 people like me would on average hit the jackpot once.
Or, in reverse, if one takes the conclusion seriously, one needs to start rejecting one-offs because there isn't sufficient repetition to tend to the mean. Well, you could say that value is personal, and thus the relevant repetition class is lifetime decisions. But if we take the relevant value to be "human value", then the relevant repetition class is choices made by Homo sapiens (and possibly beyond).
Say someone offers to create 10^100 happy lives in exchange for something, and you assign a 0.000000000000000000001 probability to them being capable and willing to carry through their promise. Naively, this has an overwhelmingly positive expected value.
If the stated probability is what you really assign then yes, positive expected value.
I see the key flaw as this: the more exceptional the promise, the lower the probability you must assign to it.
Would you give more credibility to someone offering you 10^2 US$ or 10^7 US$?
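One way to make that intuition concrete (purely illustrative: the inverse-square credence below is an assumption, not anything from the post): if credence falls off fast enough in the size of the promise, the expected value of the offer shrinks as the promise gets more exceptional.

```python
# Illustrative only: suppose credence in a promised payout of size x falls off
# like c / x**2 (an assumed penalty for exceptional claims, not a derived prior).
def expected_value(promised_amount: float, c: float = 1.0) -> float:
    credence = c / promised_amount**2     # assumed to shrink with the claim size
    return credence * promised_amount     # = c / promised_amount

print(expected_value(10**2))   # 0.01
print(expected_value(10**7))   # 1e-07 -- the bigger promise is worth *less* here
```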
As an alternative to probability thresholds and bounded utilities (already mentioned in the comments), you could constrain the epistemic model such that for any state and any candidate action, the probability distribution of utility is light-tailed.
The effect is similar to a probability threshold: the tails of the distribution don't dominate the expectation, but this way it is "softer" and more theoretically principled, since light-tailed distributions, like those in the exponential family, are, in a certain sense, "natural".
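A small numerical illustration of why the tail shape matters (the two distributions below are illustrative stand-ins, not a claim about the right epistemic model): with an exponential (light) tail the extreme outcomes contribute almost nothing to the expectation, while with a Pareto-style heavy tail they can carry a large share of it.

```python
import math

# Contribution of the tail beyond t to the expectation, E[X; X > t], in closed
# form for two illustrative utility distributions:
#   Exponential(rate=1), mean 1:        E[X; X > t] = (t + 1) * exp(-t)
#   Pareto(x_m=1, alpha=1.5), mean 3:   E[X; X > t] = 3 / sqrt(t)
t = 20.0
print((t + 1) * math.exp(-t))   # ~4.3e-08 -- light tail: extremes are negligible
print(3 / math.sqrt(t))         # ~0.67    -- heavy tail: a big chunk of the mean (3)
```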
Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring some probabilities, namely that they violate an implicit assumption behind expected utility theory. This suggests a possible approach for formally defining a "probability small enough to ignore", though there's still a bit of arbitrariness in it.