[ epistemic status: first Less Wrong post, developing hypothesis, seeking feedback and help fleshing out the hypothesis into something that could be researched and about which a discussion paper can be written. A comment/contribution to Eliezer Yudkowsky's "Cognitive biases potentially affecting judgment of global risks" in Bostrom & Cirkovic's "Global Catastrophic Risks" (2008) ]

Most of the Global Catastrophic Risks we face in the 21st century, such as anthropogenic climate change, comet and asteroid impacts, pandemics, and uncontrolled artificial intelligence, are high impact (affecting a majority or all of humanity), of terminal intensity (producing mass death, economic and social disruption, and in some cases potential human extinction), and of highly uncertain probability [1]. This uncertainty is a major part of what makes it difficult to bring public attention and political will to bear on mitigating them. That matters because all of our work and research on AI safety and other issues will be for naught if there is no understanding of, or will to implement, the resulting measures. Implementation may not require public involvement in every case (AI safety may be manageable by consensus among AI researchers, for example), but other cases, like the detection of Earth-orbit-crossing asteroids and comets, may require significant public expenditure on detectors and related infrastructure.
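To illustrate why the uncertain probabilities are such an obstacle, here is a toy expected-loss calculation (every probability and death toll below is an invented placeholder, not a real estimate): when per-century probability estimates span orders of magnitude, the expected-loss ranges of different risks overlap heavily, so the ranking depends on which end of each range you happen to believe, and prioritization debates stall.

```python
# Toy expected-loss comparison. All numbers are invented placeholders
# for illustration, not actual risk estimates.
risks = {
    # name: (low per-century probability, high per-century probability, deaths)
    "asteroid/comet impact": (1e-5, 1e-3, 7e9),
    "engineered pandemic":   (1e-3, 1e-1, 1e9),
    "uncontrolled AI":       (1e-3, 5e-1, 7e9),
}

for name, (p_low, p_high, deaths) in risks.items():
    # Expected deaths at the optimistic and pessimistic ends of the range.
    print(f"{name:22s} expected deaths: "
          f"{p_low * deaths:.1e} to {p_high * deaths:.1e}")
```

With these placeholder numbers the pessimistic end of one risk overlaps the optimistic end of another, so two honest observers can disagree about which risk dominates without either making an arithmetic error.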

My interest at present is in the additional factors that make mustering political and public will even more difficult: given that these are hard problems to interest people in to begin with, what makes doing so harder still? I believe that the aging of populations in the developed world may be a critical factor, progressively redirecting societal resources away from long-term projects, like advanced infrastructure or foundational basic science research (which AI Safety arguably counts as), toward the provision of health care and pensions.

Several factors make an aging developed-world population likely to blunt long-term planning:

(1) Older people (age 65+), across the developed world, vote more often than younger people

(2) Voters are more readily mobilized to vote to protect entitlements than to make investments for the future

(3) Older voters have greater access to, and are more aware of, entitlements than younger people

(4) Expanding on (3), benefits and entitlements are especially salient to the aged because of their failure to save adequately for retirement. This undersaving trend is long-standing and seems unlikely to be due to cognitive biases surrounding future planning.

(5) Long-term investments, research, and other protections and mitigations against Global Catastrophic Risks will require a tradeoff with providing benefits to present people

(6) Older people have more present focus and less future focus than younger people (to the extent that younger people are future-focused at all; my anecdotal impression is that most people interested in the far future of humanity are under 50, and only a small subset of that under-50 population at that). Strangely, even people with grandchildren and great-grandchildren express limited interest in how their descendants will live and how safe their futures will be.

#5 is the point on which I am most uncertain (though I welcome questions and challenges suggesting I should be more uncertain still). Unless artificial intelligence and automation deliver really substantial economic benefits in the near term (15-30 years), enough that adequate Global Catastrophic Risk mitigation could be funded without anyone noticing too much (and even then it may be a hard sell), it seems likely that future economic growth will be slower. Older workers, on average (this is a hunch), are harder to retrain, and harder to motivate to retrain for new positions, especially if the alternative is state-funded retirement. In a diminished economic future, one not as rich as it would have been with a more stable population pyramid, politics seems likely to focus on zero-sum games of robbing (young) Peter to pay (old) Paul, whether directly through higher taxation or indirectly through under-investment in the future.
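To make the zero-sum worry in #5 concrete, here is a minimal back-of-the-envelope sketch (every parameter is an assumption chosen for illustration, not calibrated to any real budget or pension system): hold tax revenue per worker fixed, pay a fixed pension per retiree, and watch what remains for everything else, including long-term investment, as the old-age dependency ratio climbs.

```python
# Minimal crowding-out sketch. All parameters are illustrative assumptions,
# not calibrated to any real economy or pension system.
revenue_per_worker = 1.0    # normalized tax revenue collected per worker
pension_per_retiree = 0.6   # normalized pension paid out per retiree

for retirees_per_worker in (0.2, 0.3, 0.4, 0.5, 0.6):
    pensions = pension_per_retiree * retirees_per_worker  # pension bill per worker
    residual = revenue_per_worker - pensions              # left for everything else
    print(f"dependency ratio {retirees_per_worker:.1f}: "
          f"{pensions / revenue_per_worker:4.0%} of revenue to pensions, "
          f"{residual / revenue_per_worker:4.0%} remaining")
```

Even this crude version has the pension share of the budget tripling as the dependency ratio triples, and any growth slowdown (a smaller revenue_per_worker) sharpens the squeeze on whatever long-term spending must come out of the residual.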

Am I jumping ahead of the problem here? Do we not know enough about what it would take to address the different classes of Global Catastrophic and Existential Risk, or is there a reason to focus now on the factors that could prevent us from 'doing something about it'?

Comments (2):

It feels to me like the topics are very different.

When it comes to global warming, the kind of warming that the IPCC projects is inconvenient, but it's not a global catastrophic risk. It seems to me that the actual global catastrophic risk in climate change lies in attempts at geoengineering going horribly wrong. I'm highly uncertain about whether having more public attention on the topic would be helpful.

When it comes to asteroid detection, it's a topic rich people are currently willing to invest money in. At current tech levels the funding isn't enough, but if Elon is successful in building the BFR, the money would be enough to fund adequate detection. I would expect us to have good detection capabilities within twenty years.

When it comes to biorisk, you already have a large part of the population that's interested in it, as evidenced by GMO opposition. The problem is that the opposition is largely ill-informed and not targeted at the actual dangers. What we need is money spent on well-targeted research, but the sums needed to produce a radical improvement over the status quo aren't that large, and competition with health care costs and pensions doesn't matter much at that scale.

Similarly, people who don't have children and don't realistically hope for extreme longevity have a counterproductive voice in politics. I'd trust intelligent and loving (high-investment) grandparents to invest wisely in the future before I'd trust an environmentalist (by way of illustrating the progeny-dependent, long-term 'stakes in the game' criterion, not to derail us into boring political territory; similarly, uninvolved sires don't get any credibility).