For most people in their 20s or 30s, it is quite unlikely (around 10%) that they will die before AGI. And if you place basically any value on the lives of people other than yourself, then the positive externalities of working on AI safety probably strongly outweigh anything else you could be doing.
Acceleration probably only makes sense for people who are (1) extremely selfish (value their life more than everything else combined) and (2) likely to die before AGI unless it's accelerated.
I wrote about this in Appendix A of this post. ------ One might look at the rough 50/50 chance at immortality given surviving AGI and think "Wow, I should really speed up AGI so I can make it in time!". But the action space is more something like:
Work on AI safety (transfers probability mass from "die from AGI" to "survive AGI")
The amount of probability transferred is probably at least a few microdooms per person.
Live healthy and don't do dangerous things (transfers probability mass from "die before AGI" to "survive until AGI")
Intuitively, I'm guessing one can transfer around 1 percentage point of probability by doing this.
Do nothing (leaves probability distribution the same)
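The action space above can be sketched as moving probability mass between end states. This is a minimal illustration, not the post's actual model: the 10% "die before AGI" figure and the "few microdooms" / "1 percentage point" transfers come from the post, while the other baseline numbers and the `shift` helper are made up for the example.

```python
# Illustrative baseline over the three end states.
# 0.10 is from the post; the 0.35 / 0.55 split is an assumption.
baseline = {"die_before_agi": 0.10, "die_from_agi": 0.35, "survive_agi": 0.55}

def shift(dist, src, dst, mass):
    """Move probability mass from one end state to another."""
    mass = min(mass, dist[src])  # can't move more mass than exists
    out = dict(dist)
    out[src] -= mass
    out[dst] += mass
    return out

# Work on AI safety: transfers a few microdooms (1e-6 each) per person.
after_safety = shift(baseline, "die_from_agi", "survive_agi", 3e-6)

# Live healthily: transfers roughly 1 percentage point.
after_healthy = shift(baseline, "die_before_agi", "survive_agi", 0.01)

# Do nothing: distribution is unchanged.
after_nothing = dict(baseline)
```

The point of the framing: both useful actions move mass *into* "survive AGI", but from different source buckets, so they aren't substitutes for each other.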
I totally buy that we'll see some life expectancy gains before AGI, especially if AGI is more than 10 years away. I mostly just didn't want to make my model more complex, and if we did see life expectancy gains, the main effect this would have is to take probability away from "die before AGI".
For a perfectly selfish actor, I think avoiding death pre-AGI makes sense (as long as the expected value of a post-AGI life is positive, which it might not be if one has a lot of probability mass on s-risks). Every micromort of risk you take on (for example, by skiing for one day) decreases the probability that you live in a post-AGI world by roughly 1/1,000,000. So one can ask oneself, "would I trade this (micromort-inducing) experience for one millionth of my post-AGI life?", and I think the answer a reasonable person would give in most cases would be no. The biggest crux is just how much one values one millionth of their...
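The micromort trade can be made concrete with back-of-the-envelope numbers. Everything here is an assumption for illustration: the 50% chance of surviving AGI conditional on reaching it echoes the post's rough 50/50 framing, and the post-AGI lifespan is an arbitrary placeholder.

```python
# One micromort = a one-in-a-million chance of death.
MICROMORT = 1e-6

p_survive_agi = 0.5           # assumed P(survive AGI | alive at AGI), per the rough 50/50
post_agi_life_years = 10_000  # assumed expected post-AGI lifespan (placeholder)

# Expected post-AGI years forgone by taking on one micromort
# (e.g. one day of skiing): roughly 0.005 years, i.e. about 1.8 days.
cost_years = MICROMORT * p_survive_agi * post_agi_life_years
```

Under these assumptions, a day of skiing costs about two expected days of post-AGI life, which is why the answer hinges almost entirely on how much one values that marginal sliver of post-AGI time.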
AKA My Most Likely Reason to Die Young is AI X-Risk
TL;DR: I made a model that takes into account AI timelines, the probability of AI going wrong, and the probability of dying from other causes. The main "end states" for my life come out as either dying from AGI due to a lack of AI safety (at 35%), or surviving AGI and living to see aging solved (at 43%).
Meta: I'm posting this under a pseudonym because many people I trust had a strong intuition that I shouldn't post under my real name, and I didn't feel like investing the energy to resolve the disagreement. I'd rather people didn't de-anonymize me.