I agree with the idea that AI would help with existential risk.
First, a superintelligence could create a better utopia.
What I'm asking is: "What, in particular, would this utopia have that dath ilan wouldn't?" The next question is how much better a society with those things would be than a dath ilan-like society. I'm having trouble imagining an answer to the first question, so I can't even begin on the second.
Dath ilan would refrain from optimizing humanity (making people happier, using fewer resources, etc.) for fear of optimizing away their humanity. An FAI would know exactly what a person is, and would be able to optimize them much more effectively.