In an unrelated thread, one thing led to another and we got onto the subject of overpopulation and carrying capacity. I think this topic needs a post of its own.
TLDR mathy version:
let f(m,t) be the population that can be supported using the fraction of Earth's theoretical resource limit m that we can exploit at technology level t
let t = k(x) be the technology level at year x
let p(x) be population at year x
What conditions must the constant m and the functions f(m,k(x)), k(x), and p(x) satisfy in order to ensure that f(m,k(x)) - p(x) > 0 for all x > today()? What empirical data are relevant to estimating the probability that these conditions are all satisfied?
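For concreteness, here is a minimal sketch of that condition in Python. Every functional form and constant below (2% compound technology growth, an exploitable fraction that saturates with technology, 1% population growth, the resource limit M) is an illustrative assumption of mine, not a claim from the argument itself:

```python
# Minimal sketch of the safety condition f(m, k(x)) - p(x) > 0.
# All functional forms and constants are illustrative assumptions.

import math

M = 1e11  # assumed theoretical resource limit, in "supportable persons"

def k(x):
    """Assumed technology level at year x: 2% annual compound growth."""
    return 1.02 ** (x - 2024)

def f(m, t):
    """Supportable population: the exploitable fraction of m rises with
    technology level t but saturates at the theoretical limit."""
    exploitable_fraction = 1 - math.exp(-0.1 * t)
    return m * exploitable_fraction

def p(x):
    """Assumed population at year x: 1% annual growth from 8 billion."""
    return 8e9 * 1.01 ** (x - 2024)

# Track the margin f(m, k(x)) - p(x); a negative margin is the Crunch.
for year in range(2024, 2375, 50):
    margin = f(M, k(year)) - p(year)
    print(f"{year}: margin = {margin:+.2e} ({'safe' if margin > 0 else 'CRUNCH'})")
```

In this toy version the exploitable fraction saturates at the limit M while population compounds without bound, so the margin eventually flips negative no matter how the constants are tuned; whether reality has that shape is exactly the empirical question being asked.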
Long version:
Here I would like to explore the evidence for and against the possibility that the following assertions are true:
- Without human intervention, the carrying capacity of our environment (broadly defined [1]) is finite, while there are no *intrinsic* limits on population growth.
- Therefore, if the carrying capacity of our environment is not expanded at a rate sufficient to outpace population growth, and population growth does not slow enough for carrying capacity to keep up, carrying capacity will eventually become the limit on population growth.
- Abundant data from zoology show that the mechanisms by which carrying capacity limits population growth include starvation, epidemics, and violent competition for resources. If the momentum of population growth carries it past the carrying capacity, an overshoot occurs: the population doesn't simply settle at a sustainable level but plummets drastically, sometimes to the point of extinction (a toy simulation follows this list).
- The above three assertions imply that human intervention (expanding the carrying capacity of our environment in various ways and limiting our birth rates in various ways) is what we have to rely on to prevent the above scenario; let's call it the Malthusian Crunch.
- Just as the Nazis have discredited eugenics, mainstream environmentalists have discredited (at least among rationalists) the concept of finite carrying capacity by giving it a cultish stigma. Moreover, solutions that rely on sweeping, heavy-handed regulation have received so much attention (perhaps because the chain of causality is easier to understand) that to many people they seem like the *only* solutions. Finding these solutions unpalatable, they instead reject the problem itself. And by they, I mean us.
- The alternative most environmentalists either ignore or outright oppose is deliberately trying to accelerate the rate of technological advancement to increase the "safety zone" between expansion of carrying capacity and population growth. Moreover, we are close to a level of technology that would allow us to start colonizing the rest of the solar system. Obviously any given niche within the solar system will have its own finite carrying capacity, but it will be many orders of magnitude higher than that of Earth alone. Expanding into those niches won't prevent die-offs on Earth, but will at least be a partial hedge against total extinction and a necessary step toward eventual expansion to other star systems.
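As promised above, here is a toy simulation of overshoot. The model (a discrete logistic where the density feedback lags by a few steps, in the spirit of Hutchinson's delayed logistic) and all of its parameters are arbitrary choices of mine, meant only to exhibit the qualitative dynamic:

```python
# Toy overshoot model: discrete logistic growth where the density
# feedback lags by `delay` steps (a Hutchinson-style delayed logistic).
# All parameters are arbitrary; only the qualitative shape matters.

K = 1000.0    # carrying capacity
r = 0.5       # per-step growth rate
delay = 4     # feedback lag, e.g. maturation time of a generation

pop = [10.0] * (delay + 1)  # start well below K

for _ in range(35):
    lagged = pop[-1 - delay]                     # growth responds to *past* density
    nxt = pop[-1] + r * pop[-1] * (1 - lagged / K)
    pop.append(max(nxt, 0.0))                    # clamp: no negative populations

peak = max(pop)
trough = min(pop[pop.index(peak):])
print(f"peak: {peak:.0f} ({100 * (peak / K - 1):.0f}% above K)")
print(f"post-peak trough: {trough:.1f}")
```

With these numbers the population sails to roughly 2.7x K before the lagged feedback catches up, then collapses to a handful of individuals; lengthen the lag or raise the growth rate and the same model crashes all the way to zero, i.e., extinction.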
Please note: I'm not proposing that the above assertions must be true, only that they have a high enough probability of being correct that they should be taken as seriously as, for example, grey goo:
Predictions about the dangers of nanotech made in the 1980s have shown no signs of coming true. Yet there is no known logical or physical reason why they can't come true, so we don't ignore the risk. We calibrate how much effort should be put into mitigating the risks of nanotechnology by asking what observations should make us update the likelihood we assign to a grey-goo scenario. We approach mitigation strategies from an engineering mindset rather than a political one.
Shouldn't we hold ourselves to the same standard when discussing population growth and overshoot? Substitute in some other existential risks you take seriously. Which of them have an expectation [2] of occurring before a Malthusian Crunch? Which of them have an expectation of occurring after?
Footnotes:
1: By carrying capacity, I mean finite resources such as easily extractable ores, water, air, EM spectrum, and land area. Certain very slowly replenishing resources such as fossil fuels and biodiversity also behave like finite resources on a human timescale. I also include non-finite resources that expand or replenish at a finite rate, such as useful plants and animals, potable water, arable land, and breathable air. Technology expands carrying capacity by allowing us to exploit all resources more efficiently (paperless offices, telecommuting, fuel efficiency), open up reserves that were previously not economically feasible to exploit (shale oil, methane clathrates, high-rise buildings, seasteading), and accelerate the renewal of non-finite resources (agriculture, land reclamation projects, toxic waste remediation, desalinization plants).
2: This is a hard question. I'm not asking which catastrophe is the most likely to happen ever while holding everything else constant (the possible ones will be tied for 1 and the impossible ones will be tied for 0). I'm asking you to mentally (or physically) draw a set of survival curves, one for each catastrophe, with the x-axis representing time and the y-axis representing the fraction of Everett branches where that catastrophe has not yet occurred. Now, which curves are the upper bound on the curve representing the Malthusian Crunch, and which curves are the lower bound? This is how, in my opinion (as an aging researcher and biostatistician, for whatever that's worth), you think about hazard functions, including those for existential hazards. Keep in mind that some hazard functions change over time because they are conditioned on other events or because they are cyclic in nature. This means that the thing most likely to wipe us out in the next 50 years is not necessarily the same as the thing most likely to wipe us out in the 50 years after that. I don't have a formal answer for how to transform that into an optimal allocation of resources between mitigation efforts, but that would be the next step.
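A small sketch of that framing, since it's easy to code up: for each catastrophe i with hazard function h_i(t), the survival curve is S_i(t) = exp(-Integral_0^t h_i(u) du). The two hazard functions below are invented numbers, chosen only to show how curves can cross:

```python
# Sketch of the survival-curve framing: S_i(t) = exp(-Integral_0^t h_i(u) du)
# for each catastrophe's hazard function h_i. The hazards below are
# invented numbers, chosen only to show how survival curves can cross.

import math

def survival(hazard, t, dt=0.1):
    """Left Riemann sum of the hazard, exponentiated into S(t)."""
    cumulative = sum(hazard(i * dt) * dt for i in range(int(t / dt)))
    return math.exp(-cumulative)

hazards = {
    "flat-rate risk": lambda t: 0.004,       # constant 0.4% per year
    "rising risk":    lambda t: 0.0001 * t,  # hazard grows with time
}

for year in (50, 100, 150):
    row = ", ".join(f"{name}: {survival(h, year):.3f}"
                    for name, h in hazards.items())
    print(f"S({year}) -> {row}")
```

In this toy, the flat-rate risk is the bigger threat early and the rising risk the bigger threat later (the curves cross between year 50 and year 100), which is precisely the sense in which the biggest threat over the next 50 years need not be the biggest threat over the 50 years after that.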
This looks incoherent. You call overshoot "very likely" and "near extinction due to climate change conditional on overshoot: somewhat likely". Even if I interpret those as .7 and .2 respectively, we wind up with an unconditional probability of at least .14, which I hope is not what you mean by "theoretically possible but unlikely". If that is what you meant then I do not understand how the world looks to you, or why you're not spending this time fundraising for CSER / taking heroin.
Classy.
I have only 5 bins here with which to span everything in (0,1): theoretically possible but unlikely, somewhat likely, likely, very likely, and inevitable. The goal is a rough ranking; at this point, I don't have enough information to meaningfully estimate actual probabilities. You have a good point, though: it would be more self-consistent to say conditional on no overshoot for the first set.
If flaming me is what it takes for you to think seriously about this, then maybe it's worth it.