I attended an Effective Altruism club today where someone had this to say about longtermism.

I have an intuitive feeling that ethical arguments about small probabilities of helping out extremely large numbers (like ) of people are flawed, but I can't construct a good argument for why this is.

The flaw is uncertainty.

In the early 20th century, many intellectuals were worried about overpopulation. The math was simple. People reproduce at an exponential rate. The amount of food we can create is finite. Population growth will eventually outstrip production. Humanity will starve unless population control is implemented by governments.

What actually happened was as surprising as it was counterintuitive. People in rich, industrial countries with access to birth control voluntarily restricted the number of kids they had. Birthrates fell below replacement-level fertility. This process is called the demographic transition.

We now know that if you want to reduce population growth, the best way to do so is to make everyone rich and then provide free birth control. The side effects of this are mostly beneficial too.

China didn't know about the demographic transition when they implemented the one-child policy (一孩政策). The one-child policy wasn't just a human rights disaster involving tens of thousands of forced abortions for the greater good. It was totally unnecessary. The one-child policy was implemented at a time when China was rapidly industrializing. The Chinese birthrate would have naturally dropped below replacement level without government intervention. Chinese birthrates are still below replacement-level fertility even now that the one-child policy has been lifted. China didn't just pay a huge cost to get zero benefit. They paid a huge cost to gain negative benefit. Their age pyramid and sex ratios are extra messed up now. This is the opposite of what effective population control should have accomplished.

China utterly failed to predict its own demographic transition, even though demographic change over a time horizon of a few decades is an unusually easy trend to predict. The UN makes remarkably accurate predictions of population growth; most trends are harder to predict than that. If you're making ethical decisions involving the distant future, then you need to make predictions about the distant future, and predictions about the distant future necessarily involve high uncertainty.

In theory, a 10% chance of helping 10 people equals a 0.001% chance of helping out 100,000 people: both work out to an expected one person helped. In practice, they are very different because of uncertainty. In the 10% situation, a 0.1% uncertainty is ignorably small. In the 0.001% situation, a 0.1% uncertainty dominates the calculation. You have a 0.051% chance of doing good and a 0.049% chance of doing harm once uncertainty is factored in. It's statistical malpractice to even write the probabilities as "0.051%" and "0.049%". They both round to 0.05%.
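
Here is a minimal sketch, in Python, of one way to read that arithmetic. The bookkeeping (treating the 0.1% uncertainty as "the chance the action has any real effect at all") is my own gloss, not something the post specifies:

```python
# A minimal sketch (my own bookkeeping, not from the post): the action has
# a 0.1% chance of having any real effect, and your analysis only gives you
# a 0.001% edge toward that effect being good rather than bad.

p_effect = 0.001    # 0.1%: chance the action has any real effect at all
edge     = 0.00001  # 0.001%: your edge toward that effect being good

p_good    = p_effect / 2 + edge  # 0.051%
p_harm    = p_effect / 2 - edge  # 0.049%
p_nothing = 1 - p_effect         # 99.900%

print(f"nothing happens:      {p_nothing:.3%}")   # 99.900%
print(f"chance of doing good: {p_good:.3%}")      # 0.051%
print(f"chance of doing harm: {p_harm:.3%}")      # 0.049%
print(f"rounded: {p_good:.2%} vs {p_harm:.2%}")   # both 0.05%
```

Under that reading, the entire difference between "good" and "harm" is the 0.001% signal, which is exactly why the two figures round to the same 0.05%.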

Is it worth acting when you're comparing a 0.051% chance of doing good to a 0.049% chance of doing harm? Maybe, but it's far from a clean argument. Primum non nocere (first, do no harm) matters too. When the success probability of an altruistic action is lower than my baseline uncertainty about reality itself, I let epistemic humility take over by prioritizing more proximate objectives.

9 comments

I don't exactly disagree with anything you wrote but would add:

First, things like "voting for the better candidate in a national election" (assuming you know who that is) have a very small probability (e.g. 1 in a million) of having a big positive counterfactual impact (if the election gets decided by that one vote). Or suppose you donate $1 to a criminal justice reform advocacy charity; what are the odds that the law gets changed because of that extra $1? The original quote was "small probabilities of helping out extremely large numbers" but then you snuck in sign uncertainty in your later discussion ("0.051% chance of doing good and a 0.049% chance of doing harm"). Without the sign uncertainty I think the story would feel quite different.

Second, I think that if you look at the list of interventions that self-described longtermists are actually funding and pursuing right now, the vast majority (weighted by $) would be not only absolutely well worth doing, but even in the running for the best possible philanthropic thing to do for the common good, even if you only care about people alive today (including children) having good lives. (E.g. the top two longtermist things are, I think, pandemic prevention and AGI-apocalypse prevention.) I know people make weird-sounding philosophical cases for these things, but I think that's just because EA is full of philosophers who find it fun to talk about that kind of stuff. I think it's not decision-relevant on the current margin whether the number of future humans could be 1e58 vs merely 1e11 or whatever.

I agree that longtermist priorities tend to also be beneficial in the near-term, and that sign uncertainty is perhaps a more central consideration than the initial post lets on.

However, I do want to push back on the voting example. I think the point about small probabilities mattering in an election holds if, as you say, we assume we know who the best candidate is. But it seems unlikely to me that we can ever have such sign certainty on a longtermist time-horizon. 

To illustrate this, I'd like to reconsider the voting example in the context of a long time-horizon. Can we ever know which candidate is best for the longterm future? Even if we imagine a highly incompetent or malicious leader, the flow-through effects of that person's tenure in office are highly unpredictable over the longterm. For any bad leader you identify from the past, a case could be made that the counterfactual where they weren't in power would have been worse. And that's only over years, decades, or centuries. If humanity has a very long future, the longterm impacts are much, much more uncertain than that.

I think we can say some things with reasonable certainty about the long term future. Two examples:

First, if humans go extinct in the next couple decades, they will probably remain extinct ever after.

Second, it's at least possible for a powerful AGI to become a singleton, wipe out or disempower other intelligent life, and remain stably in control of the future for the next bajillion years, including colonizing the galaxy or whatever. After all, AGIs can make perfect copies of themselves, AGIs don't age like humans do, etc. And this hypothetical future singleton AGI is something that might potentially be programmed by humans who are already alive today, as far as anyone knows.

(My point in the second case is not "making a singleton AGI is something we should be trying to do, as a way to influence the long term future". Instead, my point is "making a singleton AGI is something that people might do, whether we want them to or not … and moreover those people might do it really crappily, like without knowing how to control the motivations of the AGI they're making". And if that happens, that could be an extremely negative influence on the very long term future. So that means that one way to have an extremely positive influence on the very long term future is to prevent that bad thing from happening.)

I agree with this, but I think I'm making a somewhat different point. 

An extinction event tomorrow would create significant certainty, in the sense that it determines the future outcome. But its value is still highly uncertain, because the sign of the curtailed future is unknown. A bajillion years is a long time, and I don't see any reason to presume that a bajillion years of increasing technological power and divergence from the 21st-century human experience will be positive on net. I hope it is, but I don't think my hope resolves the sign uncertainty.

About this:

People reproduce at an exponential rate. The amount of food we can create is finite. Population growth will eventually outstrip production. Humanity will starve unless population control is implemented by governments.

The calculation and the predictions were correct until the 1960s, including very gloomy predictions that wars over food would begin by the 1980s. What changed things was the Green Revolution. Were it not for this technological breakthrough, which no one could actually have predicted, right now we might be looking back at 40 years of food wars, plenty more dictatorships and authoritarian regimes all around, some waging multiple wars against their neighbors, others with long-running one-child policies of their own.

So, in addition to the points you made, I'd add that uncertainty often comes from "unknown unknowns", such as not knowing what technologies will be developed, while at other times it comes from hoping certain technologies will be developed, betting on them, and then having them fail to materialize.

Is it worth acting when you're comparing a 0.051% chance of doing good to a 0.049% chance of doing harm?

I'd say Chesterton's Fence provides a reasonable heuristic for such cases.

Is it worth acting when you're comparing a 0.051% chance of doing good to a 0.049% chance of doing harm? Maybe, but it's far from a clean argument. Primum non nocere (first, do no harm) matters too.

I would like to see a utilitarian argument for why that is the case. To me it seems like you could completely change the best course of action by simply changing your definition of the default action to take.
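
For what it's worth, here is a toy sketch of that instability. The numbers and the harm penalty are invented for illustration (they're not from the post or the comment); the point is only that a "do no harm" rule which weighs harm from *acting* more heavily than harm under the default flips its recommendation as soon as you relabel which option counts as the default:

```python
# Toy illustration (my own numbers): an asymmetric "first, do no harm" rule
# whose verdict depends on which option is labeled the default.

def choose(options, default, harm_penalty=2.0):
    """Pick the highest-scoring option, penalizing harm from non-default
    (i.e. active) choices more heavily than harm under the default."""
    def score(name, good, harm):
        weight = harm_penalty if name != default else 1.0
        return good - weight * harm
    return max(options, key=lambda o: score(*o))

# (name, expected good, expected harm) for two courses of action
options = [("fund the intervention", 1.0, 0.6),
           ("don't fund it", 0.5, 0.2)]

print(choose(options, default="don't fund it")[0])          # don't fund it
print(choose(options, default="fund the intervention")[0])  # fund the intervention
```

The flip comes entirely from the asymmetric penalty, not from any change in the expected outcomes, which is the instability the parent comment is pointing at.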

See also a mathematical result of some relevance: "Convergence of Expected Utilities with Algorithmic Probability Distributions" by Peter de Blanc. He proves that in a certain setting, all expected utility calculations diverge.

There is also Nick Bostrom's paper "Infinite Ethics", analysing the problems of aggregative consequentialism when there is a finite probability of infinite value being at stake, and his "Astronomical Waste", where there is a merely astronomical number of lives at stake, with a vast number of them going to waste every second we delay our conquest of the universe.

In 2007 Eliezer declared himself confused on the subject. I don't know if he has since found an answer that satisfies him.

However, none of these address the problem of uncertainty, which appears in them only in the probabilities that go into the expected utility calculations. Uncertainty about the probabilities is just folded into the probabilities. In these settings there is no such thing as uncertainty that cannot be expressed as probability.
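
To give a sense of the flavor of divergence involved, here is a generic St. Petersburg-style toy example of my own, not de Blanc's actual construction:

```python
# Toy example (mine, not de Blanc's construction) of a divergent expected
# utility: outcome n has probability 2**-n (a proper distribution summing
# to 1) but utility 4**n, so each successive term contributes more than the
# last and the partial sums grow without bound.

def partial_expected_utility(n_terms: int) -> float:
    return sum((2.0 ** -n) * (4.0 ** n) for n in range(1, n_terms + 1))

for n in (5, 10, 20):
    print(n, partial_expected_utility(n))
# 5 62.0
# 10 2046.0
# 20 2097150.0  -- no limit exists, so "the" expected utility is undefined
```

The structural issue is the same in spirit: the sum that defines the expectation has no limit, so there is nothing for a decision procedure to maximize.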

a 0.501% chance of doing good to a 0.499% chance of doing harm

Nit: You probably mean either "0.501 chance" or "50.1% chance", and likewise for nearby percentages.

Thank you. The numbers were indeed wrong. I have fixed them. What I meant was that there is a 99.900% chance of doing nothing, a 0.051% chance of doing good, and a 0.049% chance of doing harm.