SforSingularity comments on Optimal Strategies for Reducing Existential Risk - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think there is some genuine confusion here, caused by our naive, false ideas about the forward-in-time continuity of human consciousness. Naively, we assume that at any future time there is a well-defined, unique person who is "me", and that this "future me" determines what I will experience; so we imagine an existential catastrophe as leaving the "me" after that event as some kind of tortured, disembodied soul.
In reality, after the catastrophe there is no unique "me".
If we take a many-worlds stance on QM, then given a catastrophe that kills everyone with probability p, the post-catastrophe multiverse will contain surviving and non-surviving branches in a ratio of (1-p):p. If p is close to 1, most branches do not contain a "me". However, the surviving branches still contain 10^LOTS copies of me, because QM branches so prolifically that even for small 1-p, (1-p)*(total number of branches) is a huge number.
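The branch-counting arithmetic can be made concrete with a toy calculation. The specific numbers (p = 0.999, 10^100 total branches as a stand-in for "10^LOTS") are purely illustrative assumptions:

```python
# Toy illustration of the point above: even when the survival probability
# 1 - p is tiny, the absolute number of surviving branches (1 - p) * N is
# astronomical when the total branch count N is itself astronomical.
# All numbers here are hypothetical.

from fractions import Fraction

p = Fraction(999, 1000)            # probability the catastrophe kills everyone
total_branches = 10**100           # stand-in for "10^LOTS" QM branches

surviving = (1 - p) * total_branches
print(surviving)                   # 10**97 branches still contain a "me"
print(surviving / total_branches)  # the surviving fraction, 1 - p = 1/1000
```

The surviving fraction 1-p is negligible, but the surviving count is not, which is what drives the axiological question below.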
But now we have an axiological decision to make: how are we to evaluate the goodness of the outcome? Intuitively, one wants to ask what I will experience, and optimize that. But there are two distinct ways to formalize this intuition: one is to minimize the probability p of death; the other is to optimize quality of life in the branches that survive, irrespective of p.
Personally, I like the idea of pursuing a mixed strategy that assigns some weight to the number of surviving branches and some weight to the quality of life in each; in my case, I place a premium on quality.
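One way to sketch such a mixed strategy is as a weighted score over outcomes. The weight w and the example numbers below are hypothetical, chosen only to show how a low w encodes a "premium on quality":

```python
# Minimal sketch of the mixed strategy above: score an outcome by a weighted
# combination of survival probability (1 - p) and quality of life q in the
# surviving branches. The weight w and all example numbers are hypothetical.

def outcome_score(p, q, w=0.3):
    """w weights survival probability; (1 - w) weights quality.

    A low w (here 0.3) places a premium on quality: improving life in the
    surviving branches counts for more than shaving a little off p.
    """
    return w * (1 - p) + (1 - w) * q

# Two hypothetical interventions to compare:
safer  = outcome_score(p=0.50, q=0.60)  # cuts extinction risk, mediocre lives
richer = outcome_score(p=0.70, q=0.90)  # riskier, but survivors flourish

print(safer, richer)  # with w = 0.3, the quality-focused option scores higher
```

Raising w toward 1 recovers the pure minimize-p strategy; lowering it toward 0 recovers the pure quality-of-survivors strategy.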