In an unrelated thread, one thing led to another and we got onto the subject of overpopulation and carrying capacity. I think this topic needs a post of its own.
TLDR mathy version:
let f(m, t) be the population that can be supported using the fraction m of Earth's theoretical resource limit that we can exploit at technology level t
let t = k(x) be the technology level at year x
let p(x) be population at year x
What conditions must the constant m and the functions f(m, k(x)), k(x), and p(x) satisfy in order to ensure that f(m, k(x)) - p(x) > 0 for all x > today()? What empirical data are relevant to estimating the probability that these conditions are all satisfied?
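As a concrete illustration of the question (not an answer to it), here is a minimal sketch in Python. The functional forms chosen for k(x), f(m, t), and p(x) are assumptions made up for the example, not empirical estimates; the only point is how one would check whether capacity stays ahead of population over a horizon.

```python
# Illustrative sketch only: every growth rate and functional form below
# is an assumption, not an empirical claim.

M = 1.0                      # fraction of Earth's theoretical resource limit (normalized)
YEARS = range(2025, 2125)    # horizon over which to check the condition

def k(x):
    """Technology level at year x (assumed ~2% annual improvement)."""
    return 1.02 ** (x - 2025)

def f(m, t):
    """Population supportable using fraction m of the resource limit
    at technology level t (assumed linear in t, ~10 billion baseline)."""
    return 10e9 * m * t

def p(x):
    """Population at year x (assumed ~1% annual growth from 8 billion)."""
    return 8e9 * 1.01 ** (x - 2025)

# The question above: does f(M, k(x)) - p(x) stay positive for every year
# in the horizon, i.e. does carrying capacity stay ahead of population?
safe = all(f(M, k(x)) - p(x) > 0 for x in YEARS)
first_breach = next((x for x in YEARS if p(x) >= f(M, k(x))), None)
print("capacity stays ahead of population:", safe)
print("first year of overshoot (None if never):", first_breach)
```

With these made-up numbers, technology outpaces population growth and no overshoot occurs; the empirical question is which set of curves the real data support.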
Long version:
Here I would like to explore the evidence for and against the possibility that the following assertions are true:
- Without human intervention, the carrying capacity of our environment (broadly defined[1]) is finite, while there are no *intrinsic* limits on population growth.
- Therefore, if the carrying capacity of our environment is not extended quickly enough to outpace population growth, and/or population growth does not slow enough for carrying capacity to keep up, carrying capacity will eventually become the limit on population growth.
- Abundant data from zoology show that the mechanisms by which carrying capacity limits population growth include starvation, epidemics, and violent competition for resources. If the momentum of population growth carries a population past the carrying capacity, an overshoot occurs: the population doesn't just fall back to a sustainable level but plummets drastically, sometimes to the point of extinction.
- The above three assertions imply that human intervention (by expanding the carrying capacity of our environment in various ways and by limiting our birth rates in various ways) is what we have to rely on to prevent the above scenario; let's call it the Malthusian Crunch.
- Just as the Nazis have discredited eugenics, mainstream environmentalists have discredited (at least among rationalists) the concept of finite carrying capacity by giving it a cultish stigma. Moreover, solutions that rely on sweeping, heavy-handed regulation have received so much attention (perhaps because the chain of causality is easier to understand) that to many people they seem like the *only* solutions. Finding these solutions unpalatable, they instead reject the problem itself. And by they, I mean us.
- The alternative most environmentalists either ignore or outright oppose is deliberately trying to accelerate the rate of technological advancement to increase the "safety zone" between expansion of carrying capacity and population growth. Moreover, we are close to a level of technology that would allow us to start colonizing the rest of the solar system. Obviously any given niche within the solar system will have its own finite carrying capacity, but it will be many orders of magnitude higher than that of Earth alone. Expanding into those niches won't prevent die-offs on Earth, but will at least be a partial hedge against total extinction and a necessary step toward eventual expansion to other star systems.
Please note: I'm not proposing that the above assertions must be true, only that they have a high enough probability of being correct that they should be taken as seriously as, for example, grey goo:
Predictions about the dangers of nanotech made in the 1980s have shown no signs of coming true. Yet there is no known logical or physical reason why they can't come true, so we don't ignore them. We calibrate how much effort should be put into mitigating the risks of nanotechnology by asking what observations should make us update the likelihood we assign to a grey-goo scenario. We approach mitigation strategies from an engineering mindset rather than a political one.
Shouldn't we hold ourselves to the same standard when discussing population growth and overshoot? Substitute in some other existential risks you take seriously. Which of them have an expectation[2] of occurring before a Malthusian Crunch? Which of them have an expectation of occurring after?
Footnotes:
1: By carrying capacity, I mean finite resources such as easily extractable ores, water, air, EM spectrum, and land area. Certain very slowly replenishing resources such as fossil fuels and biodiversity also behave like finite resources on a human timescale. I also include non-finite resources that expand or replenish at a finite rate, such as useful plants and animals, potable water, arable land, and breathable air. Technology expands carrying capacity by allowing us to exploit all resources more efficiently (paperless offices, telecommuting, fuel efficiency), open up reserves that were previously not economically feasible to exploit (shale oil, methane clathrates, high-rise buildings, seasteading), and accelerate the renewal of non-finite resources (agriculture, land reclamation projects, toxic waste remediation, desalination plants).
2: This is a hard question. I'm not asking which catastrophe is the most likely to happen eventually, holding everything else constant (the possible ones would all be tied at 1 and the impossible ones tied at 0). I'm asking you to mentally (or physically) draw a set of survival curves, one for each catastrophe, with the x-axis representing time and the y-axis representing the fraction of Everett branches where that catastrophe has not yet occurred. Now, which curves are an upper bound on the curve representing the Malthusian Crunch, and which curves are a lower bound? This is how, in my opinion (as an aging researcher and biostatistician, for whatever that's worth), you think about hazard functions, including those for existential hazards. Keep in mind that some hazard functions change over time because they are conditioned on other events or because they are cyclic in nature. This means that the thing most likely to wipe us out in the next 50 years is not necessarily the same as the thing most likely to wipe us out in the 50 years after that. I don't have a formal answer for how to transform that into an optimal allocation of resources between mitigation efforts, but that would be the next step.
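To make the survival-curve comparison concrete, here is a small sketch of the bookkeeping. The hazard functions are invented placeholders (including the names "crunch", "ai", and "pandemic"); the point is only how a per-year hazard turns into a survival curve and how one curve can bound another over a horizon.

```python
# Sketch of the survival-curve comparison described above. The hazard
# rates are made up for illustration; only the bookkeeping is real.
import math

def survival_curve(hazard, years):
    """S(t) = exp(-cumulative hazard), discretized one step per year."""
    s, curve = 1.0, []
    for t in range(years):
        s *= math.exp(-hazard(t))
        curve.append(s)
    return curve

# Hypothetical per-year hazards (entirely invented numbers):
crunch   = lambda t: 0.0005 * (1 + t / 50)          # slowly rising hazard
ai       = lambda t: 0.002 if t > 20 else 0.0001     # conditioned on a later threshold
pandemic = lambda t: 0.001 * (1 + math.sin(t / 10))  # roughly cyclic hazard

horizon = 100
curves = {name: survival_curve(h, horizon)
          for name, h in [("crunch", crunch), ("ai", ai), ("pandemic", pandemic)]}

# A curve that sits below the crunch curve everywhere is a lower bound on it:
# that catastrophe has already occurred in more branches at every point in time.
for name, c in curves.items():
    if name != "crunch":
        below = all(ci <= cr for ci, cr in zip(c, curves["crunch"]))
        print(f"{name} survival curve lower-bounds the crunch curve everywhere: {below}")
```

Time-varying hazards like the second and third examples are exactly why the ranking over the next 50 years need not match the ranking over the 50 years after that.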
This actionable advice is also 100% justifiable without recourse to claims of superior perception simply by the high value of diversification. Keeping a large sum of money in a single stock's options is really risky, even if you think it's +EV, and even if you think some EMH conditions don't apply (you had insider knowledge the market didn't, the market was not deep or liquid, you had special circumstances, etc). Same reason I keep telling kiba to cash out some of his bitcoins and diversify - I am bullish on Bitcoin, but he should not keep so much of his net worth in a single volatile & risky asset.
MacKay is not the most reliable authority on these matters, you know. The book I mention punctures a few of the myths MacKay peddles.
An anecdote, as you well realize. You recall the hits and forget the misses. How many other bubbles did Jim call over the years? Did his clients on net outperform indices?
And would have grown by how much if they had been in REITs in 2008?
It's not just that you're betting that you can stay solvent longer, you're betting that you have correctly spotted a bubble. There was a guy on the Bitcoin forums who entered into a short contract targeting Bitcoin at $30. Last I heard, he was upside-down by $100k and it was assumed he would not be paying out.
As a matter of fact, someone a while ago emailed me that to try to argue that EMH was false. This is what I said to them:
Speaking of Buffett's magical returns, I found http://www.prospectmagazine.co.uk/economics/secrets-of-warren-buffett/ interesting although I'm not competent to evaluate the research claims.
Pretty much. I believe in inefficiencies in small or niche markets like Bitcoin or prediction markets, but in big bonds or stocks? No way.
I have watched countless people, from Paulson to Spitznagel to Dr Doom to Thiel, lose billions or sell their companies or get out of finance due to failed bets they made on 'obvious' predictions like hyperinflation and 'bubbles' in US Treasuries since that housing bubble which they supposedly called based on their superior rationality & investing skills. It certainly seems like it's harder to exploit. As I said, when you look at complete track records and not isolated examples - do they look like luck & selection effects, or skill & sustained inefficiencies?
I heartily endorse this analysis. I would recommend actually the original paper rather than the review of that paper cited by gwern.
At no point that I could find in this paper did they find that they needed to appeal to luck or random outlier quality to explain Buffett's performance. Indeed, except that it is decades after the fact, it seemed fairly simple for the...