All of ImmortalityOrDeathByAGI's Comments + Replies

For most people in their 20s or 30s, it is quite unlikely (around 10%) that they will die before AGI. And if you place basically any value on the lives of people other than yourself, then the positive externalities of working on AI safety probably strongly outweigh anything else you could be doing.

Acceleration probably only makes sense for people who are (1) extremely selfish (value their life more than everything else combined) and (2) likely to die before AGI unless it's accelerated.
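
A minimal sketch of where a figure like "around 10%" can come from, assuming a flat annual all-cause mortality rate and a made-up AGI-timeline distribution (neither number is taken from the post; they are placeholders for one's own inputs):

```python
# Minimal sketch (illustrative only): P(die before AGI) for a healthy ~30-year-old,
# combining an assumed flat ~0.2%/year all-cause mortality rate with an assumed
# AGI-timeline distribution. Both inputs are placeholders, not the post's numbers.

annual_mortality = 0.002  # assumed ~0.2%/year for a healthy 30-year-old

# Assumed P(AGI arrives in about t years); probabilities sum to 1 here for simplicity.
agi_timeline = {10: 0.3, 20: 0.4, 30: 0.2, 50: 0.1}

p_die_before_agi = 0.0
for years_until_agi, p_agi in agi_timeline.items():
    p_dead_by_then = 1 - (1 - annual_mortality) ** years_until_agi
    p_die_before_agi += p_agi * p_dead_by_then

print(f"P(die before AGI) ≈ {p_die_before_agi:.1%}")  # ≈ 4% with these toy inputs
```

Plugging in real actuarial tables (mortality rising with age) and one's own timeline shifts the output; the point is only that the order of magnitude falls out of a few-line calculation.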

1Alex K. Chen (parrot)
"10% is overconfident", given huge uncertainty over AGI takeoff (especially the geopolitical landscape of it), and especially given the probability that AGI development may be somehow slowed (https://twitter.com/jachaseyoung/status/1723325057056010680 ) Most longevity researchers will still be super-skeptical if you say AGI is going to solve LEV in our lifetimes (one could say - a la Structure of Scientific Revolutions logic - that most of them have a blindspot for recent AGI progress - but AGI=>LEV is still handwavy logic) Last year's developments were fast enough for me to be somewhat more relaxed on this issue... (however, there is still slowing core aging rate/neuroplasticity loss down, which acts on shorter timelines, and still important if you want to do your best work) https://twitter.com/search?q=from%3A%40RokoMijic%20immortality&src=typed_query I don't know whether to believe, but it's a reasonable take...

I wrote about this in Appendix A of this post.
------
One might look at the rough 50/50 chance at immortality given surviving AGI and think, "Wow, I should really speed up AGI so I can make it in time!" But the action space looks more like the following (a toy numerical sketch follows the list):

  1. Work on AI safety (transfers probability mass from "die from AGI" to "survive AGI")
    1. The amount of probability transferred is probably at least a few microdooms per person.
  2. Live healthy and don't do dangerous things (transfers probability mass from "die before AGI" to "survive until AGI")
    1. Intuitively, I'm guessing one c
... (read more)
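
The probability-mass transfers in the quoted list can be made concrete with a toy three-outcome model. The baseline split, the 5 microdooms, and the 1,000 micromorts below are placeholder numbers chosen for illustration, not the post's actual figures:

```python
# Toy version (made-up numbers) of the three-outcome model sketched in the list above:
# "die before AGI", "die from AGI", "survive AGI".

baseline = {"die before AGI": 0.10, "die from AGI": 0.45, "survive AGI": 0.45}

def work_on_ai_safety(p, microdooms):
    """Item 1: transfer mass from 'die from AGI' to 'survive AGI'.
    One microdoom = a one-in-a-million reduction in P(die from AGI)."""
    delta = microdooms * 1e-6
    return {"die before AGI": p["die before AGI"],
            "die from AGI": p["die from AGI"] - delta,
            "survive AGI": p["survive AGI"] + delta}

def avoid_micromorts(p, micromorts):
    """Item 2: transfer mass from 'die before AGI' to 'survive until AGI'.
    The mass that now reaches AGI splits across the two remaining buckets
    according to the conditional P(survive AGI | reach AGI)."""
    delta = micromorts * 1e-6
    reach_agi = p["die from AGI"] + p["survive AGI"]
    p_survive_given_reach = p["survive AGI"] / reach_agi
    return {"die before AGI": p["die before AGI"] - delta,
            "die from AGI": p["die from AGI"] + delta * (1 - p_survive_given_reach),
            "survive AGI": p["survive AGI"] + delta * p_survive_given_reach}

p = work_on_ai_safety(baseline, microdooms=5)   # "at least a few microdooms per person"
p = avoid_micromorts(p, micromorts=1000)        # e.g. skip ~1,000 micromorts of risky activities
print(p)
```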

As I said in another comment:

I totally buy that we'll see some life expectancy gains before AGI, especially if AGI is more than 10 years away. I mostly just didn't want to make my model more complex, and if we did see life expectancy gains, the main effect this would have is to take probability away from "die before AGI".

For a perfectly selfish actor, I think avoiding death pre-AGI makes sense (as long as the expected value of a post-AGI life is positive, which it might not be if one has a lot of probability mass on s-risks). Like, every micromort of risk you induce (for example, by skiing for one day) would decrease the probability that you live in a post-AGI world by roughly 1/1,000,000. So one can ask oneself, "would I trade this (micromort-inducing) experience for one millionth of my post-AGI life?", and I think the answer a reasonable person would give in most cases would be n... (read more)
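
A back-of-the-envelope version of that trade. All three inputs below are assumptions for illustration (the conditional survival probability and the million-year post-AGI life are not claims from the comment):

```python
# Back-of-the-envelope version of the micromort trade described above.
# All three inputs are assumptions for illustration, not the comment's numbers.

p_survive_agi_if_alive_then = 0.5   # assumed P(survive AGI | alive when it arrives)
post_agi_life_years = 1e6           # assumed value of a post-AGI life, in years
micromorts = 1                      # roughly the order of magnitude of a day of skiing

# Each micromort removes ~1e-6 from the probability of being alive at AGI,
# so the expected cost in post-AGI life-years is roughly:
expected_cost_years = micromorts * 1e-6 * p_survive_agi_if_alive_then * post_agi_life_years
print(f"≈ {expected_cost_years:.1f} expected post-AGI years traded away")  # ≈ 0.5 with these inputs
```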

4Vladimir_Nesov
I think a million years is a weird anchor; starting at 10^20 to 10^40 might be closer to the mark. Also, there is a multiplier from thinking faster as an upload, so that a million physical years becomes something like 10^12 subjective years.
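
For concreteness, the arithmetic behind that last multiplier, with the ~10^6x subjective speedup treated as an assumed figure (it is implied rather than stated in the comment):

```python
# Implied arithmetic only; the 1e6x speedup is an assumption inferred from
# "a million physical years becomes something like 10^12 subjective years".
physical_years = 1e6        # a million physical years
subjective_speedup = 1e6    # assumed: an upload thinking ~1,000,000x faster
print(f"{physical_years * subjective_speedup:.0e} subjective years")  # 1e+12
```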