The assertion that an AI would make everything hunky-dory is not falsifiable.
Huh? Of course it's falsifiable. The entire premise of MIRI and CFAR is that this assertion is going to be falsified unless we take action.
The entire premise of Xyrik's scenario is that everything will be hunky-dory. Xyrik is just making a wish, not thinking about how anything will actually work. He might as well attribute the outcome to elven magic instead of an AGI, or to "everyone decides to do the right thing". There are no moving parts in his conception. It is like trying to solve a problem by suggesting that one should solve the problem.
I tried to ask him about the mechanism here, but the only resp...
How much money would it take to engineer biological immortality for at least half of the world's population, within 20 years, with 99% confidence?