no sense to try to reduce any of them!
I think you're misreading Cochrane. He approvingly quotes Pindyck, who says "society cannot afford to respond strongly to all those threats", and points out that picking which ones to respond to is hard. Notably, Cochrane says "I'm not convinced our political system is ready to do a very good job of prioritizing outsize expenditures on small ambiguous-probability events."
All that doesn't necessarily imply that you should do nothing -- just that selecting the low-probability threats to respond to is not trivial and that our current sociopolitical system is likely to make a mess of it. Both of these assertions sound true to me.
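To see concretely why the prioritization problem bites, here is a toy expected-value triage in Python; every threat, probability, damage figure, and cost below is hypothetical, invented purely for illustration. The naive ranking is driven entirely by the probability estimates, which are exactly the "ambiguous" inputs Pindyck worries about: nudging one estimate by a factor of ten, well within the ambiguity, flips which threat gets funded first.

```python
# Toy expected-value triage of small-probability threats.
# All probabilities, damages, and costs are hypothetical.

def averted_per_dollar(p: float, damage: float, cost: float) -> float:
    """Expected damages averted per dollar spent on mitigation."""
    return p * damage / cost

# Threat A: point estimates we hold fixed.
score_a = averted_per_dollar(p=1e-4, damage=1e12, cost=1e9)

# Threat B: vary its ambiguous probability estimate by one order of magnitude.
for p_b in (1e-6, 1e-5):
    score_b = averted_per_dollar(p=p_b, damage=5e14, cost=1e10)
    best = "threat A" if score_a >= score_b else "threat B"
    print(f"P(B)={p_b:g}: A={score_a:.2f}, B={score_b:.2f} per dollar -> fund {best} first")
```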
The finance professor John Cochrane recently wrote an interesting blog post. The piece is about existential risk in the context of global warming, but it is really a discussion of existential risk generally; many of his points are highly relevant to AI risk.
He also points out that the threat from global warming has a negative beta: higher future growth rates are likely to be associated with greater risk from global warming, but also with richer descendants. This means both that those descendants will be better able to cope with the threat, and that the damage matters less from a utilitarian point of view. Spending to stop global warming is therefore a positive-beta asset -- it pays off precisely in the high-growth states of the world -- and so requires a higher rate of return than simple time-discounting would suggest.
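To make the discounting point concrete, here is a minimal numerical sketch using the textbook CAPM relation r = r_f + beta * (E[r_m] - r_f). All the parameter values below (risk-free rate, equity premium, the betas, and the size and timing of the avoided damages) are illustrative assumptions, not figures from Cochrane's post.

```python
# Minimal sketch: how a positive beta raises the discount rate applied to
# mitigation benefits, and hence shrinks their present value.
# All parameters are illustrative assumptions.

def capm_rate(risk_free: float, market_premium: float, beta: float) -> float:
    """Required rate of return under CAPM: r = r_f + beta * (E[r_m] - r_f)."""
    return risk_free + beta * market_premium

def present_value(payoff: float, rate: float, years: int) -> float:
    """Value today of a payoff received `years` from now."""
    return payoff / (1 + rate) ** years

risk_free = 0.02       # assumed risk-free rate
premium = 0.05         # assumed equity risk premium
payoff = 1_000_000     # assumed damages avoided by mitigation, a century out
years = 100

# beta = 0 recovers simple time-discounting; mitigation that pays off in
# high-growth states of the world behaves like a positive-beta asset.
for beta in (0.0, 0.5, 1.0):
    r = capm_rate(risk_free, premium, beta)
    print(f"beta={beta:.1f}  required return={r:.1%}  PV=${present_value(payoff, r, years):,.0f}")
```

Over a century, even a modest positive beta cuts the present value of the same avoided damages by orders of magnitude, which is the whole force of the argument.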
It strikes me that this argument applies equally to AI risk, as fruitful artificial intelligence research is likely to be associated with higher economic growth.
So should we close down MIRI and invest the funds in an index tracker?
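That question is really about compounding, so here is a toy comparison under invented parameters (the budget, the expected market return, and the horizon are all assumptions for illustration): money not spent on risk reduction today could, in expectation, buy far more mitigation later -- provided the risk has not been realized in the meantime.

```python
# Toy version of the "index tracker" question: a safety budget invested at
# the market return instead of spent today. All numbers are illustrative.

budget = 10_000_000    # hypothetical safety budget today
market_return = 0.07   # assumed long-run expected market return
horizon = 50           # assumed years until the funds are redeployed

future_value = budget * (1 + market_return) ** horizon
print(f"${budget:,.0f} invested for {horizon} years -> ${future_value:,.0f}")
# Roughly a 29x multiple at these assumed parameters.
```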
The full post can be found here.