syllogism comments on Is GiveWell.org the best charity (excluding SIAI)? - Less Wrong
Currently I don't think existential risk charities are very appropriate for small-scale individual donations, because of the difficulty of evaluating them. I feel that donating to a long-term research charity is a recipe for either analysis-paralysis or a decision that's ultimately arbitrary. I'll definitely continue gathering information, and see whether I can raise my confidence in an existential risk charity enough to consider donating. I think it will take a lot of research.
For any existential risk charity, you can write down a kind of "Drake equation" that arrives at an estimated dollars-per-life figure from a sequence of probability estimates. Off the top of my head, I think the global population estimate for 2050 is around 8 billion, assuming the current trend of reducing the number of people in extreme poverty continues (reducing extreme poverty reduces population growth). That means you have to assign your donation a probability better than 1 in 8,000,000 of averting extinction to get a cost-per-life estimate of under $1,000.
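The back-of-the-envelope arithmetic can be sketched as follows; the population figure and the probabilities are the illustrative numbers from this thread, not real estimates:

```python
# Back-of-the-envelope cost-per-life for an existential-risk donation.
# Figures are the thread's illustrative numbers, not actual estimates.

population_2050 = 8e9   # projected people alive in 2050 (round figure)
donation = 1_000.0      # dollars donated

def cost_per_life(p_avert: float) -> float:
    """Expected dollars per life saved, if the donation averts
    extinction with probability p_avert."""
    expected_lives_saved = population_2050 * p_avert
    return donation / expected_lives_saved

# A 1-in-8-million chance of averting extinction already works out to
# about a dollar per life; the break-even point for $1,000 per life
# is a 1-in-8-billion chance.
print(cost_per_life(1 / 8e6))  # ≈ 1.0
print(cost_per_life(1 / 8e9))  # ≈ 1000.0
```

Note how sensitive the result is to the probability estimate: every order of magnitude of doubt about p_avert shifts the cost-per-life by the same order of magnitude.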
At first glance that odds ratio looks pretty generous. But it's very difficult to have any kind of confidence in the calculation that leads to it. How do I decide between estimates of 10^-3 and 10^-5 likelihood? They're both too small for me to evaluate informally, and there are two orders of magnitude between them. Is there a page where you lay out these estimates? I've kind of assumed that one exists, but haven't seen it yet.
The above calculation considers only people currently alive, and places little value on additional years of happy life for them, or on lives better than current Western standards. Nick Bostrom's "Astronomical Waste" paper discusses those issues. Time-discounting isn't enough to wipe out the effect either, since populations may expand very quickly (e.g. via brain emulations/artificial wombs and AI teachers).
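The discounting point can be made concrete with a toy model (assumption: simple exponential growth and exponential discounting; the rates below are illustrative, not estimates from the thread):

```python
# Toy model: time-discounting vs. rapid population growth.
# Rates are purely illustrative continuous annual rates.

import math

def discounted_population_value(growth_rate: float, discount_rate: float,
                                years: int) -> float:
    """Sum of discounted population size over `years`, with population
    growing exponentially at `growth_rate`, value discounted at
    `discount_rate`, and the initial population normalized to 1."""
    return sum(math.exp((growth_rate - discount_rate) * t)
               for t in range(years))

# If growth outpaces the discount rate, the discounted total keeps
# growing with the horizon; if the discount rate wins, it converges.
print(discounted_population_value(0.10, 0.03, 200))  # keeps growing with horizon
print(discounted_population_value(0.01, 0.03, 200))  # bounded
```

The qualitative point is just that whenever growth exceeds the discount rate, extending the time horizon keeps adding discounted value, so discounting alone doesn't make the future negligible.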
Gaverick Matheny's paper "Reducing the Risk of Human Extinction" is also relevant, although it arbitrarily caps various things (like the rate of population growth) to limit the dominance of the future.
If you care about bringing future people into being, then the expected future population if we avoid existential risk is many, many orders of magnitude greater than the current population of the world and looms very large.
If you don't care about future people, then you have to grapple with the Nonidentity Problem.
Separately, there seems to be a typo in this paragraph of your post:
If you mean "what reduction in the probability of (immediate) extinction is equivalent, in expected lives of currently-living people, to saving one life today?" then the answer will be near 1 in 8 billion, not 1 in 8 million. That figure is also a slight underestimate if you only care about current people, because medium-term catastrophes would kill future people who don't yet exist, and many current people may have died by then.
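The equivalence can be checked directly; the 8-billion population is the thread's round figure:

```python
# What reduction in extinction probability is worth one expected life,
# counting only currently-living people? (8 billion is a round figure.)

current_population = 8e9

# A risk reduction of delta_p saves delta_p * current_population
# expected lives, so one expected life corresponds to:
delta_p = 1 / current_population
print(delta_p)  # 1.25e-10, i.e. about 1 in 8 billion -- not 1 in 8 million
```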
Also, if you're looking for easier-to-evaluate charities, or bigger, higher-status ones endorsed by folks such as Warren Buffett and foreign policy elites, I suggest the Nuclear Threat Initiative as an existence proof of the possibility of spending on x-risk reduction. I wouldn't recommend giving to it in particular, but it does point to the feasibility of meaningful action. Also see Martin Hellman's work on reducing nuclear risk.