I guess there’s maybe a 10-20% chance of AI causing human extinction in the coming decades, but I feel more distressed about it than even that suggests—I think because in the case where it doesn’t cause human extinction, I find it hard to imagine life not going kind of off the rails. So many things I like about the world seem likely to be over or badly disrupted with superhuman AI (writing, explaining things to people, friendships where you can be of any use to one another, taking pride in skills, thinking, learning, figuring out how to achieve things, making things, easy tracking of what is and isn’t conscious), and I don’t trust that the replacements will be actually good, or good for us, or that anything will be reversible.
Even if we don’t die, it still feels like everything is coming to an end.
I realise this is a few months old, but personally my vision for utopia looks something like the Culture in Iain M. Banks's Culture novels. There's a high degree of individual autonomy, and people create their own societies organically according to their needs and values. They still have interpersonal struggles and personal danger (if that's the life they want to lead), but in general, if they are uncomfortable with their situation, they have the option to change it. AI agents are common, but most are limited to approximately human level or below. Some superhuman AIs exist, but they are normally involved in larger civilisational manoeuvring rather than the nitty-gritty of individual human lives. I recommend reading the series.
Caveats:
1: Yes, this is a fictional example, so I'm definitely in danger of generalising from fictional evidence. I mostly think of it as a broad template, or cluster of attributes, that society might potentially be able to achieve.
2: I don't think this level of "good" AI is likely.