V_V comments on Probabilities Small Enough To Ignore: An attack on Pascal's Mugging - Less Wrong
I'd say that if you assign a 10^-22 probability to a theory of physics that allows somebody to create 10^100 happy lives depending on your action, then you are doing physics wrong.
If you assign probability 10^-(10^100) to 10^100 lives, 10^-(10^1000) to 10^1000 lives, 10^-(10^10000) to 10^10000 lives, and so on, then you are doing physics right and you will not fall for Pascal's Mugging.
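A quick sanity check on that prior (my own sketch, not part of the original comment; the choice of tiers is arbitrary): each tier contributes 10^k × 10^-(10^k) expected lives, a product far too small to represent as a float, so compute its base-10 logarithm with exact integers.

```python
# Sketch: under the prior P(10**k lives) = 10**-(10**k), how much does each
# tier contribute to the expected number of lives?  The product underflows
# any float, so work with its base-10 logarithm using exact integer math.

def log10_contribution(k: int) -> int:
    """log10 of (10**k lives * 10**-(10**k) probability)."""
    return k - 10 ** k

for k in (2, 100):
    print(f"k={k}: log10(lives * probability) =", log10_contribution(k))
# k=100 gives 100 - 10**100, i.e. the mugger's 10**100-lives offer adds
# roughly 10**-(10**100) expected lives: it can never dominate the decision.
```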
There seems to be no obvious reason to assume that the probability falls exactly in proportion to the number of lives saved.
If GiveWell told me they thought that real-life intervention A could save one life with probability P_A and real-life intervention B could save a hundred lives with probability P_B, I'm pretty sure that dividing P_B by 100 would be the wrong move to make.
It is an assumption to make asymptotically (that is, for the tails of the distribution), and it is reasonable there because of the nice properties of exponential-family distributions.
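To make the asymptotic point concrete with the simplest exponential-family example (a Gaussian tail; this illustration is mine, not the commenter's): a light tail beats any linearly growing payoff.

```latex
% Gaussian-tail sketch: for a utility U with a normal-like right tail,
P(U \ge u) \;\le\; e^{-u^{2}/(2\sigma^{2})}
\qquad\Longrightarrow\qquad
u \cdot P(U \ge u) \;\le\; u\, e^{-u^{2}/(2\sigma^{2})}
\;\xrightarrow{\;u \to \infty\;}\; 0.
% A mugging needs u * P(U >= u) to stay large as the promised payoff u
% grows; under any light-tailed model it goes to zero instead.
```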
I'm not implying that.
EDIT:
As a simple example, if you model the number of lives saved by each intervention as a normal distribution, you are immune to Pascal's Muggings. In fact, if your utility is linear in the number of lives saved, you'll just need to compare the means of these distributions and take the maximum. Black swan events at the tails don't affect your decision process.
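A minimal sketch of that decision rule (all the numbers are made up for illustration): with utility linear in lives saved, the expected utility of a normal model is just its mean, so variances and tail events drop out of the comparison entirely.

```python
import numpy as np

# Hypothetical interventions, each modeled as Normal(mean, sd) lives saved.
# With utility linear in lives saved, E[utility] equals the mean, so the
# decision reduces to comparing means; tails never enter the picture.
interventions = {
    "A": {"mean": 1.0,   "sd": 0.3},    # ~1 life, fairly certain
    "B": {"mean": 120.0, "sd": 40.0},   # ~120 lives, much less certain
}

best = max(interventions, key=lambda name: interventions[name]["mean"])
print("choose:", best)  # -> B; a mugger inflating the tail changes nothing

# Monte Carlo sanity check: sample means converge to the model means, so
# simulation and the analytic rule agree.
rng = np.random.default_rng(0)
for name, p in interventions.items():
    draws = rng.normal(p["mean"], p["sd"], size=100_000)
    print(name, "empirical E[lives saved] ~", round(draws.mean(), 2))
```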
Using normal distributions may be appropriate when evaluating GiveWell interventions, but for a general-purpose decision process you will have, for each action, a probability distribution over possible future world-state trajectories, which, when combined with a utility function, will yield a generally complicated and multimodal distribution over utility. But as long as the shape of the distribution at the tails is normal-like, you won't be affected by Pascal's Muggings.
But it looks like the shape of the distributions isn't normal-like? In fact, that's one of the standard EA arguments for why it's important to spend energy on finding the most effective thing you can do: if possible intervention outcomes really were approximately normally distributed, then your exact choice of an intervention wouldn't matter all that much. But actually the distribution of outcomes looks very skewed; to quote "The moral imperative towards cost-effectiveness":
I think you misunderstood what I said or I didn't explain myself well: I'm not assuming that the DALY distribution obtained if you choose interventions at random is normal. I'm assuming that for each intervention, the DALY distribution it produces is normal, with an intervention-dependent mean and variance.
I think that for the kind of interventions that GiveWell considers, this is a reasonable assumption: if the number of DALYs produced by each intervention is the result of a sum of many roughly independent variables (e.g. DALYs gained by helping Alice, DALYs gained by helping Bob, etc.), the total should be approximately normally distributed, due to the central limit theorem.
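Here is a hedged simulation of that CLT argument (the per-person numbers are invented for illustration): even when each beneficiary's DALY gain is drawn from a deliberately skewed, non-normal distribution, the intervention's total comes out close to normal.

```python
import numpy as np

# Each beneficiary contributes a small, roughly independent DALY gain; the
# intervention's total is their sum, which the CLT makes approximately normal.
rng = np.random.default_rng(1)
n_beneficiaries = 10_000

def one_rollout() -> float:
    # Per-person gains drawn from a skewed distribution on purpose.
    per_person = rng.exponential(scale=0.05, size=n_beneficiaries)
    return per_person.sum()

totals = np.array([one_rollout() for _ in range(2_000)])
mean, sd = totals.mean(), totals.std()

# Despite the skewed per-person distribution, totals behave like N(mean, sd):
within_2sd = np.mean(np.abs(totals - mean) < 2 * sd)
print(f"total DALYs ~ N({mean:.0f}, {sd:.1f}); {within_2sd:.1%} within 2 sd")
# Expect roughly 95%, matching a normal distribution.
```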
For other types of interventions, e.g. whether to fund a research project, you may want to use a more general family of distributions that allows non-zero skewness (e.g. skew-normal distributions), but as long as the distribution is light-tailed and you don't use extreme values for the parameters, you would not run into Pascal's Mugging issues.
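As a sketch of what that could look like with SciPy's skew-normal (all parameter values here are invented): the distribution is right-skewed, so it can represent a research project whose upside is larger than its downside, but it remains light-tailed, so extreme payoffs still get vanishingly small probability.

```python
from scipy.stats import skewnorm

# A right-skewed but light-tailed model for, say, a research project's payoff
# in DALYs.  The shape parameter a > 0 pushes the mass to the right.
dist = skewnorm(a=5, loc=0, scale=100)

print("mean payoff:", round(dist.mean(), 1))

# Light tails in action: a payoff 10x the scale already has an astronomically
# small survival probability, so a mugger's huge claimed payoff receives a
# probability far too low to dominate the expectation.
print("P(payoff > 1_000):", dist.sf(1_000))
```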