In his interview on the 80,000 Hours podcast after the publication of The Precipice, Toby Ord claimed that x-risk reduction was worth pursuing regardless of whether x-risk was high or low. The claim is that when x-risk is low, it's worth working on x-risk because a small decline in the probability of x-risk has very large returns.
This is because by reducing x-risk you're increasing the value of the future, and the value of the future is greatest when x-risk is smallest. On the other hand, when x-risk is large, the return is also high because x-risk is likely to be neglected, and therefore the marginal impact of work on x-risk is likely to be very high. This post aims to show the cases in which this does and doesn't hold true.
I'll first lay out intuitively the cases where it does and doesn't hold, then formalise these notions, and then give some numerical examples.
Intuition
There are two parts to the claim: the value-of-the-future part and the neglectedness part. The value of the future is the value of humanity per year, divided by the probability of x-risk per year. I'm assuming that the probability of x-risk is constant every year, and I'll assume that throughout the post. This has some quite counter-intuitive effects. It means that decreasing the probability of x-risk per year from 1/10 to 1/100 makes the future 10 times more valuable in expectation. However, the much larger percentage-point decline in the yearly probability of x-risk from 1/2 to 1/10 only increases the expected value of the future 5-fold.
In general, a decline in x-risk from x to y increases the value of the future by a factor of x/y. This does indeed mean that the value of decreasing x-risk is very large when x-risk is very small; in fact, the marginal value of a reduction grows with the inverse square of the level of x-risk. However, if you think that x-risk is very high, the value of x-risk reduction is proportionally as small.
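To make this concrete, writing $u$ for the value of humanity per year and $r$ for the yearly probability of x-risk (as formalised later in the post), the value of the future is $u/r$ and the marginal value of a small reduction in $r$ is

$$\frac{\partial}{\partial r}\left(\frac{u}{r}\right) = -\frac{u}{r^2}$$

So halving $r$ from 1/100 to 1/200 adds $100u$ of expected value, while halving it from 1/2 to 1/4 adds only $2u$, even though the latter is a much larger percentage-point change.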
There's also a second factor to consider. We can think of the probability of an existential catastrophe in a given year as being dependent on all of the inputs going into x-risk reduction. This means that when we change an input we get two effects: the effect on the probability of x-risk in that year, multiplied by everything else the effect of the input is proportional to. For instance, the amount of x-risk per year could depend on the product of the amount of research being done and the amount of money being put into the researchers' ideas.
In that case, the effect of a small increase in research will be proportional to the product of the amount of money and how sensitive the probability of x-risk is at the current level of inputs. The upshot of this consideration is that the effect of increasing the inputs into x-risk reduction is highest when the probability of x-risk is around 50-50. However, at those values the value of reducing x-risk is very low, because we're almost certainly going to all die if x-risk is anywhere close to 50-50.
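To illustrate (a minimal sketch, assuming, as in the numerical example later in the post, that the probability of x-risk is the standard normal CDF of the production function's output):

```python
from scipy.stats import norm

# The marginal effect of an input on the yearly probability of x-risk
# scales with the normal pdf, evaluated at the production function's
# current output x.
for x in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    print(f"P(x-risk) = {norm.cdf(x):.3f}, marginal effect ∝ {norm.pdf(x):.3f}")
```

The pdf peaks where the probability of x-risk is exactly 0.5, so that's where extra inputs move the yearly probability the most.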
The second part of the claim is that x-risk will be more neglected if it's large. There are two conditions that need to be met for neglectedness to increase the value of working on x-risk. The first is that x-risk must be neglected in a variable that we can actually increase. For instance, if preventing x-risk is proportional to the amount of political capital invested in x-risk, or, even worse, requires some minimum amount of political capital, then increasing the amount of labour going into x-risk will have a small effect.
The second condition is diminishing returns to scale. This means that if all of the inputs into the x-risk reduction production function were scaled by the same amount, the output of that production function would be scaled by less than that amount. If diminishing returns to scale holds then, all else equal, the return to putting resources into x-risk reduction is higher the fewer resources are already in x-risk reduction.
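One standard way to state this: a production function $f$ has diminishing returns to scale when, for any $\lambda > 1$,

$$f(\lambda K, \lambda L) < \lambda \cdot f(K, L)$$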
But it's not at all obvious that this is the case! It could easily be that we need to reach a threshold of high-quality research before we can start deploying capital in a productive way. In this case, scaling everything up by an amount that didn't get the research over that threshold would be useless, and then the scaling that just got us over the threshold would be incredibly valuable!
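As a toy illustration (purely hypothetical, reusing the functional form from the numerical example below), suppose the production function only becomes effective once labour passes some threshold $\bar{L}$:

$$f(K, L) = \begin{cases} 0 & \text{if } L < \bar{L} \\ -K^{0.5} L^{0.5} & \text{if } L \ge \bar{L} \end{cases}$$

Scaling inputs from just below the threshold to just above it produces a discontinuous jump in the value of working on x-risk, the opposite of diminishing returns to scale in that region.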
Formalisation
From a total utilitarian perspective, the return to working on existential risk (x-risk) is a function of three variables: the productivity of the marginal dollar put into x-risk, the current level of existential risk, and the expected value of the future.
We can formalise this with a utility function and a production function. The utility function represents the total value of the future. The production function represents how the risk of existential catastrophe depends on various inputs. We can represent the production function with the following expression:
$$A \cdot f(K, L)$$
where $K$ is capital, $L$ is labour and $A$ is a productivity parameter.
And a utility function
$$U = \int_0^\infty u \cdot e^{-rt} \, dt = \frac{u}{r}$$
where $u$ is the utility per year we get from a civilisation that's able to become space-faring. For the purposes of this post, I'm going to assume that utility is binary, dependent only on whether an x-risk has occurred: it's 0 when an x-risk has occurred and a constant otherwise.
Utility over time is discounted at rate $r$, which in this model is the yearly probability that an x-risk occurs.
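To spell out the step from a constant yearly probability of extinction to exponential discounting (a standard approximation for small $r$): the probability of surviving to year $t$ is

$$(1 - r)^t \approx e^{-rt}$$

which is what justifies treating the extinction probability as a discount rate in the integral above.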
We can write the probability that an x-risk occurs in a given year as
$$G(A \cdot f(K, L))$$
where $G$ is a CDF. In this model, the probability of extinction depends on the output of the production function.
The effect of increasing the amount of, for instance, labour working to reduce x-risk is given by
$$\frac{\partial U}{\partial L}$$
This is equal to
$$\frac{\partial U}{\partial r} \cdot \frac{\partial G}{\partial L}$$
The partial derivative of the probability of x-risk per year with respect to labour is given by the product of the partial derivative of the production function with respect to labour and the derivative of the CDF, i.e. the pdf, evaluated at the current level of inputs:
$$\frac{\partial G}{\partial L} = g(A \cdot f(K, L)) \cdot \frac{\partial \left(A \cdot f(K, L)\right)}{\partial L}$$
This shows how the value of working on x-risk depends both on the current level of x-risk and on the levels of the other inputs going into x-risk reduction.
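Putting the pieces together, using $r = G(A \cdot f(K, L))$ and $\partial U / \partial r = -u/r^2$ from the utility function above, the full marginal value of labour is

$$\frac{\partial U}{\partial L} = -\frac{u}{r^2} \cdot g(A \cdot f(K, L)) \cdot \frac{\partial \left(A \cdot f(K, L)\right)}{\partial L}$$

which makes both effects explicit: the $1/r^2$ term rewards a low current level of risk, while the density $g$ is largest when risk is near 50%.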
Numerical example
Let the production function be
$$-K^{0.5} L^{0.5}$$
and let the probability of x-risk be given by the standard normal CDF of the production function.
Let
$$K = L = \frac{1}{2}$$
This gives a probability of x-risk of roughly 0.3. If we set utility per year equal to 1, this gives an expected value of the future of 3.24.
Doubling the amount of labour going into x-risk reduction increases the expected value of the future to 4.17.
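These figures are easy to reproduce; here's a minimal sketch using scipy (the function name expected_value is mine, not from the post):

```python
from scipy.stats import norm

def expected_value(K, L, u=1.0):
    """Expected value of the future u/r, where the yearly probability
    of x-risk is r = Phi(-sqrt(K * L))."""
    r = norm.cdf(-(K * L) ** 0.5)  # probability of x-risk per year
    return u / r

print(expected_value(0.5, 0.5))  # ~3.24 (probability of x-risk ~0.31)
print(expected_value(0.5, 1.0))  # ~4.17 after doubling labour
```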
Now, if we imagine that we start with half a unit of capital and 20 units of labour, we get an initial value of the future of 1268. Increasing the amount of capital to 1 increases the expected value of the future to 255,754. As you can see, this is a dramatically larger increase in the value of the future for the same increase in resources.

This post has assumed a constant probability of extinction throughout. It's very unclear whether this is the case, and it seems extremely worthwhile to do the analysis for a variable rate of extinction, especially if a time of perils model is much better than a constant x-risk model.