SforSingularity comments on Optimal Strategies for Reducing Existential Risk - Less Wrong

3 Post author: FrankAdamek 31 August 2009 03:52PM


Comment author: SforSingularity 01 September 2009 02:40:19PM *  0 points [-]

Consider embarking on an exceptionally high-risk, high-reward moneymaking strategy and precommitting most of the profit to a well-chosen collection of existential risk mitigation efforts, such as pledging stock options to SIAI and/or FHI. For anthropic reasons, this could raise your subjective probability that the high-risk strategy succeeds from close to zero to close to certainty.

Specifically, suppose you are the only person in the world at all likely to donate $1 billion to existential risk mitigation, and that the survival probability for planet Earth this century is 1% conditional on current meagre funding levels but 60% conditional on $1 billion in well-targeted, dedicated risk-mitigation funding. Then, conditioning on your own continued existence, the probability that your company/investment succeeds can increase by a large factor, depending on the various conditional probabilities.
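A minimal sketch of this anthropic update via Bayes' rule, using the 1% and 60% survival figures from the comment. The prior probability that the venture succeeds is an illustrative assumption, not a number from the thread:

```python
# Anthropic update: P(venture succeeds | you survive the century).
# All numbers are illustrative; only the 1% and 60% survival figures
# come from the comment itself.

prior_success = 0.001            # assumed prior that the venture succeeds
p_survive_if_funded = 0.60       # survival odds with $1B of mitigation funding
p_survive_if_unfunded = 0.01     # survival odds at current funding levels

# Total survival probability, summing over both venture outcomes
p_survive = (p_survive_if_funded * prior_success
             + p_survive_if_unfunded * (1 - prior_success))

# Condition on your own continued existence
posterior_success = p_survive_if_funded * prior_success / p_survive

print(round(posterior_success, 4))                   # 0.0567
print(round(posterior_success / prior_success, 1))   # 56.7 -- a large boost
```

Under these assumed numbers, conditioning on survival multiplies the success probability by roughly 57, which is the "large factor" the comment gestures at.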

Comment author: Vladimir_Nesov 01 September 2009 07:45:09PM 2 points [-]

Why not condition directly on the successful outcome then? I'm fairly certain it's a confusion to take the above reasoning as an argument for decision-making.

Comment author: SforSingularity 01 September 2009 11:32:40PM *  -2 points [-]

I'm fairly certain it's a confusion to take the above reasoning as an argument for decision-making.

I think there is some genuine confusion here, caused by our false naive ideas about the forward-in-time continuity of human consciousness. Naively, we assume there is always a well-defined unique person at any future time who is "me", and that this "future me" determines what I will experience; so we imagine an existential catastrophe as leaving the "me" after that event as some kind of tortured, disembodied soul.

In reality, after the catastrophe there is no unique "me".

If we take a many-worlds stance on QM, then if there is a catastrophe with probability p that kills everyone, the multiverse after the catastrophe will contain branches that still contain me and branches that do not, in a ratio of (1-p):p. If p is close to 1, most branches do not contain a "me". However, the surviving branches still contain 10^LOTS copies of me: even if 1-p is small, QM branches so prolifically that (1-p)*(total number of branches) remains a huge number.
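The arithmetic in that last sentence can be made concrete with a toy branch count. Both numbers below are illustrative assumptions standing in for "10^LOTS", not claims about actual branch multiplicity:

```python
from fractions import Fraction

# Toy branch-counting for the (1-p)*(total branches) argument.
# Fraction keeps the arithmetic exact at these magnitudes.
survival_prob = Fraction(1, 10**30)   # 1-p: assumed one-in-10^30 survival
total_branches = 10**100              # assumed stand-in for "10^LOTS"

surviving = survival_prob * total_branches
print(surviving)   # 10**70 -- a vanishing fraction, yet a vast absolute count
```

Even when survival is a one-in-10^30 event, an astronomically large branch count leaves an astronomically large absolute number of surviving copies.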

But now we have an axiological decision to make: how are we to evaluate the goodness of the outcome? Intuitively, one wants to ask what I will experience, and optimize that. But there are two distinct ways we can formalize this intuition: one is to minimize the probability p of death, the other is to optimize quality of life in those branches that survive, irrespective of p.

Personally, I like the idea of pursuing an averaging strategy that assigns some importance to the number of survivors and some to the quality of life of each survivor; in my case, I place a premium on quality.
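One way to sketch such an averaging rule, purely as an illustration: a convex combination of survival probability and quality of life in the surviving branches. The weight alpha, the 0-to-1 quality scale, and every number below are hypothetical, not a formula from the thread:

```python
# Hypothetical weighted objective mixing survival probability with
# quality of life in surviving branches. alpha and all inputs are
# illustrative assumptions.

def mixed_value(survival_prob, avg_quality, alpha=0.3):
    """Score an outcome: alpha weights survival odds,
    (1 - alpha) weights quality in the branches that survive."""
    return alpha * survival_prob + (1 - alpha) * avg_quality

# Placing a premium on quality (low alpha) can favor a riskier
# world whose surviving branches are better off:
risky = mixed_value(survival_prob=0.01, avg_quality=0.9)   # 0.633
safe  = mixed_value(survival_prob=0.60, avg_quality=0.2)   # 0.320
print(risky > safe)   # True under these assumed numbers
```

Raising alpha toward 1 recovers the "minimize p of death" formalization; lowering it toward 0 recovers "optimize quality irrespective of p", matching the two options laid out above.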