Good question!
The answer is that the people on this site should stop doing the thing you are curious about.
It is entirely possible (and I am tempted to say probable) under the many-worlds model of physics for every single one of us to be killed by AI research in every single one of the (billions of) branches that will evolve (or descend) from the present moment. Most writers on this site do not realize that, and the habit people around here have of using "worlds in which humanity emerges relatively unscathed from the crisis caused by AI research" to mean the possibility that humanity emerges relatively unscathed (or the set of outcomes in which it does) keeps the site stuck in that error.
Specifically: although it is certainly possible that we will emerge relatively unscathed from the present dangerous situation caused by AI research, that does not mean that, if things go badly for us, there will be any descendant branches of our branch in which even a single human survives.
Yes, AFAICT there is a decent chance that people very similar to us will survive for many millennia in branches that split off from ours centuries ago, and yes, those people are people too, but personally that decent chance does not significantly reduce my sadness about the possibility that we will all be killed by AI research in our branch.
You ask, "what to read?" I got most of my knowledge of the many-worlds model from Sean Carroll's excellent 2019 book Something Deeply Hidden, the Kindle version of which is only 6 dollars on Amazon.
Bayesian probability (which is the kind Yudkowsky is using when he gives the probability of AI doom) is subjective: it refers to one's degree of belief in a proposition, and it cannot be 0% or 100%. If you are using "probability" to refer to the objective proportion of future Everett branches in which something occurs, you are using the word in a very different way from most people, and probabilities in that system cannot be compared to Yudkowsky's probabilities.
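One rough way to see why a Bayesian avoids the endpoints, using Bayes' rule in odds form (a standard identity, just offered here as a sketch):

$$\frac{P(H \mid E)}{P(\lnot H \mid E)} \;=\; \frac{P(H)}{P(\lnot H)} \times \frac{P(E \mid H)}{P(E \mid \lnot H)}$$

If your prior odds $P(H)/P(\lnot H)$ are exactly 0 (i.e., $P(H) = 0\%$) or infinite (i.e., $P(H) = 100\%$), then no finite likelihood ratio on the right can ever move them, so no amount of evidence could change your mind. A Bayesian who wants to stay responsive to evidence therefore never assigns exactly 0% or 100%, which is why Yudkowsky-style doom probabilities live strictly between the two.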