I'm not going to give a very full explanation here, but since nobody else has explained at all [EDIT: actually apparently Matt explained correctly below, just not as a top-level answer] I'll at least give some short notes.
Good question!
The answer is that the people on this site should stop doing the thing you are curious about.
It is entirely possible -- and I am tempted to say probable -- under the many-worlds model of physics for every single one of us to be killed by AI research in every single one of the (billions of) branches that will evolve (or descend) from the present moment. Most writers on this site do not realize that, and the habit people around here have of using "worlds in which humanity emerges relatively unscathed from the crisis caused by AI research" to mean the possibility that humanity emerges relatively unscathed (or the set of outcomes in which it does) causes the site to persist in that error.
Specifically: although it is certainly possible that we will emerge relatively unscathed from the present dangerous situation caused by AI research, that does not mean that if things go badly for us, there will be any descendant-branches of our branch in which even a single human survives.
Yes, there is a decent chance AFAICT that people very similar to us will survive for many millennia in branches that branched off from our branch centuries ago, and yes, those people are people, too, but personally that decent chance does not significantly reduce my sadness about the possibility that we will all be killed by AI research in our branch.
You ask, "what to read?" I got most of my knowledge of the many-worlds model from Sean Carroll's excellent 2019 book Something Deeply Hidden, the Kindle version of which is only 6 dollars on Amazon.
Despite the similar terminology, people on this site usually aren't talking about the many worlds interpretation of quantum mechanics when they say things like "in 50% of worlds the coin comes up heads".
The overwhelmingly dominant use of probabilities on this website is the subjective Bayesian one, i.e., using probabilities to report degrees of belief. You can think of your beliefs about how the coin will turn out as a distribution over possible worlds, and the result of the coin flip as giving you information about which world you inhabit. This turns out to be a nice intuitive way to think about things, especially when it comes to doing an informal version of Bayesian updating in your head.
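For concreteness, here is a minimal sketch of what I mean (a toy example of my own, not anything standard on the site): beliefs represented as a distribution over hypothetical "worlds", updated by Bayes' rule after observing a coin flip.

```python
# Toy illustration: subjective "worlds" as hypotheses about a coin's bias.
# Observing a flip shifts belief about which world we inhabit (Bayes' rule).

# Prior: three hypothetical worlds, each with a different coin bias.
worlds = {
    "fair coin (p_heads = 0.5)": {"prior": 1 / 3, "p_heads": 0.5},
    "heads-biased (p_heads = 0.9)": {"prior": 1 / 3, "p_heads": 0.9},
    "tails-biased (p_heads = 0.1)": {"prior": 1 / 3, "p_heads": 0.1},
}

def update(worlds, observed_heads: bool):
    """Return posterior P(world | observation) via Bayes' rule."""
    likelihoods = {
        name: (w["p_heads"] if observed_heads else 1 - w["p_heads"])
        for name, w in worlds.items()
    }
    evidence = sum(likelihoods[name] * w["prior"] for name, w in worlds.items())
    return {
        name: likelihoods[name] * w["prior"] / evidence
        for name, w in worlds.items()
    }

posterior = update(worlds, observed_heads=True)
for name, p in posterior.items():
    print(f"P({name} | heads) = {p:.3f}")
```

The "worlds" here are just labels for hypotheses; they carry no physical commitment at all.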
This has nothing really to do with quantum mechanics. The worlds don't need to have any correspondence to the worlds of the many-worlds interpretation, and I would still think and talk like this regardless of what I believed about QM.
It probably comes from modal logic, where it's standard terminology to talk about worlds in which some proposition is true. From a quick google this goes back to at least CI Lewis (1943), which predates the many-worlds interpretation of quantum mechanics, and probably fu...
It is entirely possible—and I am tempted to say probable—under the many-worlds model of physics for every single one of us to be killed by AI research in every single one of the (billions of) branches that will evolve (or descend) from the present moment.
I really doubt this is the case. "Every single one of the branches" is a huge amount of selection power - "billions" is massively underselling it. "Every single one" gives you an essentially unlimited number of coincidences to roll in our favor. So if there is any way in which we can solve alignment at the last minute, or any way to pull off global coordination, or any way in which the chaotic process of AI motivation-formation leads to us not dying, or any way civilization can be derailed before we develop AI, it's highly likely there is at least one future branch in which one of those things comes to pass.
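To make the selection-power point concrete with a toy calculation (my own illustration, with made-up numbers, and treating branches as independent trials, which real Everett branches are not): any nonzero per-branch chance of survival, compounded over an enormous number of branches, makes "every single branch ends in extinction" astronomically unlikely.

```python
import math

# Toy model with hypothetical numbers, NOT real quantum mechanics: treat N
# future branches as independent trials, each with a tiny chance eps that
# humanity survives in that branch.
eps = 1e-9   # hypothetical per-branch survival chance
N = 1e11     # hypothetical number of branches ("billions" undersells it)

# P(literally every branch ends in extinction) = (1 - eps)^N ≈ exp(-eps * N)
p_no_survivors_anywhere = math.exp(-eps * N)
print(f"P(no branch has survivors) ≈ {p_no_survivors_anywhere:.2e}")  # ≈ 3.7e-44
```

On this toy picture, "no surviving branch at all" effectively requires the per-branch survival chance to be exactly zero, which is a much stronger claim than a high subjective probability of doom.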
I predict that you will say that the hope comes from diversity in future Everett branches.
Nope. I believe (a) model uncertainty dominates quantum uncertainty regarding the outcome of AGI, but also (b) it is overwhelmingly likely that there are some future Everett branches where humanity survives. (b) certainly does not imply that these highly-likely-to-exist Everett branches comprise the majority of the probability mass I place on AGI going well.
what I really want out of this conversation: for people on this site to stop conflating the notion of a future Everett branch with the notion of a possible world or a possible future
I agree that these things shouldn't be conflated. I just think "it is entirely possible that AGI will kill every single one of us in every single future Everett branch" is not a good example to illustrate this, since it is almost certainly false.
Specifically: although it is certainly possible that we will emerge relatively unscathed from the present dangerous situation caused by AI research, that does not mean that if things go badly for us, there will be any descendant-branches of our branch in which even a single human survives.
There won't be any iff there is a 100.0000% probability of annihilation. That is higher than EY's estimate. Note that if there is a 99% chance of annihilation, then 1% of worlds are guaranteed to have survivors.
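Spelling out the arithmetic on that reading (i.e. taking the subjective probability as the measure over branches, which is exactly the identification being questioned elsewhere in this thread): the measure of branches with survivors is 1 − P(annihilation) = 1 − 0.99 = 0.01 > 0, and it reaches zero only in the degenerate case where P(annihilation) = 1 exactly.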
People on LessWrong often talk in terms of an "informal many-world model". For example they talk about the worlds where the alignment problem is relatively easy vs the worlds where the alignment problem is really hard.
I wonder what good references there are for this "informal multiverse model of reality", with its multiverse of worlds with different properties? What's the history of this line of thought?
(And I also wonder about possible mathematical models of that: how much are people talking about those, and how many people keep some of those mathematical models in mind? I've seen some sheaf-based models which seemed to be like that.)