(admittedly, I just skimmed the blog post, so I can be easily convinced my tentative position here is wrong)
I'm not sure I see any difference between your proposed isomorphic argument and his argument.
Assuming our level of certainty about risks we can insure against is the same as our level of (un)certainty about existential risks, and assuming the "spending 10 times our annual income" figure is accurate for both... the arguments sound exactly as "clever" as each other.
I'm also not sure I agree with the "boringly obvious and not insightful at all" part. Or rather, I agree that it should be boringly obvious, but given our current obsession with climate change, is it boringly obvious to most people? I suppose the real question is: do most people need the question phrased to them this way to see it?
I guess what I'm saying is that it doesn't seem implausible to me that if you asked a representative sample of people whether climate change protection was important to invest in, they would say yes and vote for it. And if you then made the boringly obvious argument about determining where it belongs on the list of important things, they'd also say yes and vote for that.
I'm not sure I see any difference between your proposed isomorphic argument and his argument.
Good, then my isomorphism succeeded. Typically, people try to deny that the underlying logic is the same.
the arguments sound exactly as "clever" as each other.
They do? So if you agree that things like car or health or house insurance are irrational, did you run out and cancel every form of insurance you have and advise your family and friends to cancel their insurance too?
The finance professor John Cochrane recently posted an interesting blog post. The piece is about existential risk in the context of global warming, but it is really a discussion of existential risk generally; many of his points are highly relevant to AI risk.
He also points out that the threat from global warming has a negative beta: higher future growth rates are likely to be associated with greater risk of global warming, but they also mean our descendants will be richer. This implies both that they will be better able to cope with the threat, and that the damage matters less from a utilitarian point of view. Attempting to stop global warming therefore has a positive beta, and so requires a higher rate of return than simple time-discounting would suggest.
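To make the discounting point concrete, here is a minimal sketch using the standard CAPM relation (the formula is textbook finance; the specific numbers are purely illustrative and not taken from Cochrane's post):

$$r \;=\; r_f + \beta\,\bigl(E[r_m] - r_f\bigr)$$

With, say, a risk-free rate $r_f = 2\%$ and an equity premium $E[r_m] - r_f = 5\%$, a mitigation project with $\beta = 0.5$ should be discounted at $r = 2\% + 0.5 \times 5\% = 4.5\%$, above the risk-free rate, while a genuine hedge with $\beta < 0$ would be discounted below it.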
It strikes me that this argument applies equally to AI risk, as fruitful artificial intelligence research is likely to be associated with higher economic growth. Moreover:
So should we close down MIRI and invest the funds in an index tracker?
The full post can be found here.