The finance professor John Cochrane recently posted an interesting blog post. The piece is about existential risk in the context of global warming, but it is really a discussion of existential risk generally; many of his points are highly relevant to AI risk.
If we [respond strongly to all low-probability threats], we spend 10 times GDP.
It's an interesting case of framing bias. If you worry only about climate, it seems sensible to pay a pretty stiff price to avoid a small uncertain catastrophe. But if you worry about small uncertain catastrophes, you spend all you have and more, and it's not clear that climate is the highest on the list...
All in all, I'm not convinced our political system is ready to do a very good job of prioritizing outsize expenditures on small ambiguous-probability events.
He also points out that the threat from global warming has a negative beta: higher future growth rates are likely to be associated with greater risk of global warming, but also with richer descendants. This means both that they will be better able to cope with the threat, and that the damage matters less from a utilitarian point of view. Attempting to stop global warming therefore has a positive beta, and so requires a higher rate of return than simple time-discounting would imply.
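In CAPM terms (my gloss, not notation from the post), the discount rate appropriate for a project with beta $\beta$ is roughly

$$r = r_f + \beta\,(E[r_m] - r_f),$$

so a positive-beta project such as carbon abatement has to clear a hurdle rate above the risk-free rate $r_f$ that simple time-discounting would use.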
It strikes me that this argument applies equally to AI risk, as fruitful artificial intelligence research is likely to be associated with higher economic growth. Moreover:
The economic case for cutting carbon emissions now is that by paying a bit now, we will make our descendants better off in 100 years.
Once stated this way, carbon taxes are just an investment. But is investing in carbon reduction the most profitable way to transfer wealth to our descendants? Instead of spending say $1 trillion in carbon abatement costs, why don't we invest $1 trillion in stocks? If the 100 year rate of return on stocks is higher than the 100 year rate of return on carbon abatement -- likely -- they come out better off. With a gazillion dollars or so, they can rebuild Manhattan on higher ground. They can afford whatever carbon capture or geoengineering technology crops up to clean up our messes.
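As a rough illustration of the compounding point (the 5% and 2% real returns are my own hypothetical figures, not numbers from the post):

$$\$1\text{T} \times 1.05^{100} \approx \$131\text{T}, \qquad \$1\text{T} \times 1.02^{100} \approx \$7\text{T},$$

so over a century even a few percentage points of extra annual return dwarf the sum initially transferred.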
So should we close down MIRI and invest the funds in an index tracker?
The full post can be found here.
Correct. So should that work be done, or should the resources be put to alternative uses?
In other words, would you like to engage with Professor Cochrane's arguments?
Cochrane's arguments don't amount to much. There are two. One is that BIG1 × LOTS > BIG2, the unspecified quantities being, respectively, the cost of addressing global warming, the number of similarly expensive major threats, and total human resources. No numbers are attached, nor is any argument given to establish the inequality. An argument that consists of saying "look how big (or small) this number is!" is worthless unless some effort is made to say specifically why the number is in fact big (or small) enough to do the work demanded of it.
His other...