The finance professor John Cochrane recently posted an interesting blog post. The piece is about existential risk in the context of global warming, but it is really a discussion of existential risk generally; many of his points are highly relevant to AI risk.
If we [respond strongly to all low-probability threats], we spend 10 times GDP.
It's an interesting case of framing bias. If you worry only about climate, it seems sensible to pay a pretty stiff price to avoid a small uncertain catastrophe. But if you worry about small uncertain catastrophes, you spend all you have and more, and it's not clear that climate is the highest on the list...
All in all, I'm not convinced our political system is ready to do a very good job of prioritizing outsize expenditures on small ambiguous-probability events.
He also points out that the threat from global warming has a negative beta: higher future growth rates are likely to be associated with a greater risk of global warming, but they also mean our descendants will be richer. This implies both that they will be better able to cope with the threat and that the damage matters less from a utilitarian point of view. Attempting to stop global warming therefore has a positive beta, and so should be discounted at a higher rate than simple time-discounting would imply.
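To see why a positive beta raises the hurdle rate, here is a minimal sketch in the spirit of CAPM; the risk-free rate, market premium, and betas below are illustrative assumptions of mine, not figures from Cochrane's post:

```python
# Hypothetical CAPM-style illustration: required return r = r_f + beta * premium.
# All parameter values are made up for illustration; none come from Cochrane's post.

def required_return(beta, risk_free=0.02, market_premium=0.05):
    """Beta-adjusted discount rate for an investment."""
    return risk_free + beta * market_premium

def present_value(payoff, beta, years=100):
    """Discount a payoff received `years` from now at the beta-adjusted rate."""
    return payoff / (1 + required_return(beta)) ** years

# A $1 payoff in 100 years, discounted with beta = 0 versus beta = 1:
print(round(present_value(1.0, beta=0.0), 3))  # ~0.138: pure time-discounting
print(round(present_value(1.0, beta=1.0), 3))  # ~0.001: positive beta cuts the value ~100x
```

If carbon abatement pays off mainly in high-growth (and therefore already-rich) states of the world, it should be discounted at the higher, beta-adjusted rate; that is the sense in which it "requires higher rates of return."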
It strikes me that this argument applies equally to AI risk, as fruitful artificial intelligence research is likely to be associated with higher economic growth. Moreover:
The economic case for cutting carbon emissions now is that by paying a bit now, we will make our descendants better off in 100 years.
Once stated this way, carbon taxes are just an investment. But is investing in carbon reduction the most profitable way to transfer wealth to our descendants? Instead of spending say $1 trillion in carbon abatement costs, why don't we invest $1 trillion in stocks? If the 100 year rate of return on stocks is higher than the 100 year rate of return on carbon abatement -- likely -- they come out better off. With a gazillion dollars or so, they can rebuild Manhattan on higher ground. They can afford whatever carbon capture or geoengineering technology crops up to clean up our messes.
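As a back-of-the-envelope check on the compounding claim (my numbers, not Cochrane's: say a 5% real return on stocks versus a 2% effective return on abatement), the gap over 100 years is enormous:

```python
# Rough 100-year compounding comparison; the 5% and 2% real rates of return
# are illustrative assumptions, not figures from Cochrane's post.
principal = 1e12                       # $1 trillion today
stocks = principal * 1.05 ** 100       # ~ $131 trillion
abatement = principal * 1.02 ** 100    # ~ $7 trillion of avoided damages
print(f"stocks:    ${stocks / 1e12:.1f} trillion")
print(f"abatement: ${abatement / 1e12:.1f} trillion")
```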
So should we close down MIRI and invest the funds in an index tracker?
The full post can be found here.
I think you're misreading Cochrane. He approvingly quotes Pindyck, who says "society cannot afford to respond strongly to all those threats", and points out that picking which ones to respond to is hard. Notably, Cochrane says "I'm not convinced our political system is ready to do a very good job of prioritizing outsize expenditures on small ambiguous-probability events."
All that doesn't necessarily imply that you should do nothing -- just that selecting which low-probability threats to respond to is not trivial, and that our current sociopolitical system is likely to make a mess of it. Both of these assertions sound true to me.