I wrote about this in Appendix A of this post.
------
One might look at the rough 50/50 chance at immortality given surviving AGI and think "Wow, I should really speed up AGI so I can make it in time!" But the action space is more like:
As I said in another comment:
I totally buy that we'll see some life expectancy gains before AGI, especially if AGI is more than 10 years away. I mostly just didn't want to make my model more complex, and if we did see life expectancy gains, the main effect this would have is to take probability away from "die before AGI".
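To make the quoted model concrete, here is a minimal sketch of the three-outcome decomposition it implies. The ~10% chance of dying before AGI and the rough 50/50 on immortality come from this thread; the probability of surviving AGI itself is a placeholder assumption, not an estimate from the post:

```python
# Minimal sketch of the simple model discussed above: three mutually
# exclusive outcomes for an individual. Numbers are illustrative only.

p_die_before_agi = 0.10          # ~10% for people in their 20s/30s (from the comment)
p_survive_agi = 0.80             # P(surviving AGI | alive at AGI) -- placeholder assumption
p_immortal_given_survive = 0.50  # the rough 50/50 from the comment

p_immortality = (1 - p_die_before_agi) * p_survive_agi * p_immortal_given_survive
print(f"P(immortality) = {p_immortality:.0%}")  # 36% with these placeholders

# Life-expectancy gains before AGI would shrink p_die_before_agi,
# shifting that probability mass to the post-AGI outcomes.
```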
For a perfectly selfish actor, I think avoiding death pre-AGI makes sense (as long as the expected value of a post-AGI life is positive, which it might not be if one has a lot of probability mass on s-risks). Like, every micromort of risk you induce (for example, by skiing for one day) would decrease the probability you live in a post-AGI world by roughly 1/1,000,000. So, one can ask oneself, "would I trade this (micromort-inducing) experience for one millionth of my post-AGI life?", and I think the answer a reasonable person would give in most cases would be no.
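As a toy illustration of that trade-off (all numbers are made-up placeholders, not real estimates):

```python
# Toy version of "would I trade this experience for one millionth of my
# post-AGI life?" -- all values are illustrative assumptions.

MICROMORT = 1e-6           # one-in-a-million chance of death
p_reach_post_agi = 0.45    # P(you'd otherwise live to see a post-AGI life)
value_post_agi_life = 1e6  # value of that life, in arbitrary "experience units"
value_of_ski_day = 0.1     # value of the micromort-inducing experience, same units

# Each micromort forfeits, in expectation, a millionth of the post-AGI life
# you would otherwise have had:
expected_cost = MICROMORT * p_reach_post_agi * value_post_agi_life  # = 0.45 here

print("worth it" if value_of_ski_day > expected_cost else "not worth it")
```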
For most people in their 20s or 30s, it is quite unlikely (around 10%) that they die before AGI. And if you place basically any value on the lives of people other than yourself, then the positive externalities of working on AI safety probably strongly outweigh anything else you could be doing.
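For intuition on where a number like 10% could come from, here is a back-of-the-envelope sketch using a Gompertz-style mortality curve; the baseline rate and doubling time are rough stylized facts, and the AGI timelines are placeholders:

```python
# Back-of-the-envelope P(die before AGI) for a healthy adult.
# Assumes ~0.1% annual mortality at age 30, doubling every ~8 years
# (a rough Gompertz approximation) -- illustrative, not actuarial data.

def p_die_before_agi(age: int, years_to_agi: int) -> float:
    p_survive = 1.0
    for t in range(years_to_agi):
        annual_mortality = 0.001 * 2 ** ((age + t - 30) / 8)
        p_survive *= 1 - annual_mortality
    return 1 - p_survive

print(f"{p_die_before_agi(30, 20):.0%}")  # ~5% if AGI is 20 years away
print(f"{p_die_before_agi(30, 30):.0%}")  # ~13% if AGI is 30 years away
```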
Acceleration probably only makes sense for people who are (1) extremely selfish (value their life more than everything else combined) and (2) likely to die before AGI unless it's accelerated.