If global warming gets worse, but people get enough richer, then they could end up better off.
Tautologically, yes.
This is not tautological. Wealth is highly correlated with wellbeing but not logically equivalent.
Global warming is predicted to destroy wealth -- that is the only reason we care about it.
It seems like you have redefined the meaning of some terms here.
The tautology lies in the word "enough".
The finance professor John Cochrane recently published an interesting blog post. Ostensibly it is about existential risk in the context of global warming, but it is really a discussion of existential risk in general, and many of his points are highly relevant to AI risk.
He also points out that the threat from global warming has a negative beta: higher future growth rates are likely to be associated with a greater risk of global warming, but also with richer descendants. This means both that our descendants will be better able to cope with the threat, and that the damage matters less from a utilitarian point of view. An investment in stopping global warming therefore has a positive beta -- it pays off most in exactly the futures where we are richest -- and so it should be discounted at a higher rate of return than simple time-discounting would imply.
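The discounting logic can be made concrete with a minimal sketch. The numbers and function names below are mine, purely illustrative, and not taken from Cochrane's post; the point is only that applying a CAPM-style required return (r = r_f + beta * premium) to a positive-beta payoff shrinks its present value relative to naive time-discounting at the risk-free rate.

```python
def discount_rate(risk_free, beta, market_premium):
    """CAPM-style required return: r = r_f + beta * (E[r_m] - r_f)."""
    return risk_free + beta * market_premium

def present_value(payoff, rate, years):
    """Discount a single payoff received `years` from now."""
    return payoff / (1 + rate) ** years

# Hypothetical inputs, chosen only for illustration.
risk_free = 0.02       # risk-free rate
market_premium = 0.05  # equity risk premium
payoff = 100.0         # damages avoided by mitigation (arbitrary units)
years = 50

# Mitigation pays off most in high-growth (already rich) futures: positive beta.
pv_positive_beta = present_value(payoff, discount_rate(risk_free, 1.0, market_premium), years)

# Naive time-discounting treats the payoff as riskless (beta = 0).
pv_time_discounted = present_value(payoff, discount_rate(risk_free, 0.0, market_premium), years)

print(pv_positive_beta < pv_time_discounted)  # True: positive beta lowers present value
```

With these (made-up) numbers the positive-beta valuation comes out roughly an order of magnitude below the risk-free valuation, which is the sense in which a positive-beta investment "requires higher rates of return".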
It strikes me that this argument applies equally to AI risk, since fruitful artificial intelligence research is likely to be associated with higher economic growth.
So should we close down MIRI and invest the funds in an index tracker?
The full post can be found here.