I think it's fairly unlikely that suicide becomes impossible in AI catastrophes. The AI would have to be anti-aligned, which means creating such an AI would require hitting a precise target in AI design space, just as creating a Friendly AI does. However, given the extreme disvalue a hyperexistential catastrophe produces, such scenarios are perhaps still worth considering, especially for negative utilitarians.
I think so. By symmetry, imperfect anti-alignment will destroy almost all the disvalue the same way imperfect alignment will destroy almost all the value. Thus, the overwhelming majority of alignment problems are solved by default with regard to hyperexistential risks.
More intuitively, problems become much easier when there isn't a powerful optimization process to push against. E.g. computer security is hard because there are intelligent agents out there trying to break your system, not because cosmic rays will randomly flip some bits in your memory.
Huh, good question. Initially I assumed the answer was "yes, basically" and thought the probability was high enough that it wasn't worth getting into. But the scenarios you mention are making me less sure of that.
I'd love to get input from others on this. It's actually a question I plan on investigating further anyway as I do some research and decide whether or not I want to sign up for cryonics.
Thank you for the post, it was quite a nostalgia trip back to 2015 for me because of all the Wait But Why references. However, my impression is that the Kurzweilian Accelerationism school of thought has largely fallen out of favor in transhumanist circles since that time, with prominent figures like Peter Thiel and Scott Alexander arguing that not only are we not accelerating, we can barely even keep up with 19th century humanity in terms of growth rate. Life expectancy in the US has actually gone down in recent years for the first time.
An important consideration that was left out is temporal discounting. Since you assumed linear scaling of value with post-Singularity QALYs, your result is extremely sensitive to your choice of post-Singularity life expectancy. I felt like it was moot to go into such detailed analysis of the other factors when this one alone could easily vary by ten orders of magnitude. By choosing a sufficiently large yet physically plausible number (such as 100 trillion years), you could justify almost any measure to reduce your risk of dying before Singularity and unambiguously resolve e.g. the question of driving risk.
But I doubt that's a good representation of your actual values. I think you're much more likely to do exponential discounting of future value, such that the integral of value over time remains finite even in the limit of infinite time. This should lead to much more stable results.
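To spell out what I mean (a quick sketch; here v is a constant value per year and r > 0 is an assumed constant discount rate):

```latex
% Undiscounted, total value grows without bound as the horizon T grows:
\int_0^T v \, dt = vT \;\longrightarrow\; \infty \quad \text{as } T \to \infty

% With exponential discounting at rate r > 0, it stays finite:
\int_0^\infty v \, e^{-rt} \, dt = \frac{v}{r}
```

In the undiscounted case the answer is dominated by whichever horizon you pick, which is exactly the sensitivity I'm pointing at.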
I predict that a lot of people will interpret the claim of "you should expect to live for 10k years" as wacky, and not take it seriously.
Really? This is LessWrong after all^^
Thank you for the post, it was quite a nostalgia trip back to 2015 for me because of all the Wait But Why references.
Sure thing! Yeah I felt similar nostalgia. I love and miss Wait But Why.
However, my impression is that the Kurzweilian Accelerationism school of thought has largely fallen out of favor in transhumanist circles since that time, with prominent figures like Peter Thiel and Scott Alexander arguing that not only are we not accelerating, we can barely even keep up with 19th century humanity in terms of growth rate. Life expectancy in the US has actually gone down in recent years for the first time.
To continue the nostalgia:
- The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn’t totally smooth and uniform. Kurzweil explains that progress happens in “S-curves”:
An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:
- Slow growth (the early phase of exponential growth)
- Rapid growth (the late, explosive phase of exponential growth)
- A leveling off as the particular paradigm matures
If you look only at very recent history, the part of the S-curve you’re on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones. That was Phase 2: the growth spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that’s missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.
— https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Maybe you're right though. I don't have a good enough understanding to say. Thanks for contributing this thought.
I felt like it was moot to go into such detailed analysis of the other factors when this one alone could easily vary by ten orders of magnitude.
Yeah, in retrospect that does make sense. I think I got lost in the weeds.
An important consideration that was left out is temporal discounting.
...
By choosing a sufficiently large yet physically plausible number (such as 100 trillion years), you could justify almost any measure to reduce your risk of dying before Singularity and unambiguously resolve e.g. the question of driving risk.
But I doubt that's a good representation of your actual values. I think you're much more likely to do exponential discounting of future value, making the integral of value over time finite even in the limit of infinite time. This should lead to much more stable results.
Sigh. Yeah, this was a crucial thing that I left out. It might even be the crux. My mistake for not talking about it in the OP.
Here are my thoughts. Descriptively, I see that temporal discounting is something that people do. But prescriptively, I don't see why it's something that we should do. Maybe I am just different, but when I think about, say, 100 year old me vs current 28 year old me, I don't feel like I should prioritize that version less. Like everyone else, there is a big part of me that thinks "Ugh, let me just eat the pizza instead of the salad, forget about future me". But when I think about what I should do, and how I should prioritize future me vs present me, I don't really feel like there should be discounting.
That was for 100 year old me. What about 1k? 10k? 100k? Eh, maybe. The feeling of "screw it, forget that future self" is stronger. But I still question whether that is how I should weigh things. After all, 100k year old me is still just as conscious and real as 28 year old me.
Regret minimization seems relevant, although I've only ever taken a cursory look at the concept. If I didn't weigh 100k year old me enough, it could be something I end up regretting.
Another perspective is the one I take at the end in the Assume The Hypothetical section. If you were going to live to 100k years old, how would you feel? For me it makes me feel like I should really value that future self.
However, once we start talking about stuff like a trillion years and 3^^^3, I start to think, "This is getting crazy, maybe we should be temporally discounting." Why at that point and not at 100k? Idk. Because that's what my intuition is? I guess I don't have a great grasp on this question.
Really? This is LessWrong after all^^
Yeah. I have a lot of respect for LessWrongers in general but I do think that there are some common failure modes, and not Taking Ideas Seriously enough is one of them. (I mean that criticism constructively not contentiously, of course.)
And I think that the reception this post has gotten is decently strong evidence that I was correct (not perfect of course; there are alternative explanations). I see this as a very important topic that doesn't have obvious conclusions, and thus should receive more attention. Even if you are correct about temporal discounting, it is not at all obvious to me how much we should discount and how that affects the final conclusion regarding how much we should value life.
To be sure, I don't actually think whether Accelerationism is right has any effect on the validity of your points. Indeed, there is no telling whether the AI experts from the surveys even believe in Accelerationism. A fast-takeoff model where the world experiences zero growth from now to Singularity, followed by an explosion of productivity, would yield essentially the same conclusions as long as the date is the same, and so would any model in between. But I'd still like to take apart the arguments from Wait But Why just for fun:
First, exponential curves are continuous; they don't produce singularities. This is what has always confused me about Ray Kurzweil: he likes to point to the smooth exponential improvement in computing, yet in the next breath predicts the Singularity in 2029. You only get discontinuities when your model predicts superexponential growth, and Moore's law is no evidence for that.
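To spell out the math behind that distinction (my own sketch, not anything from Kurzweil):

```latex
% Exponential growth stays finite at every finite time:
\dot{x} = kx \;\Rightarrow\; x(t) = x_0 e^{kt}, \quad \text{finite for all finite } t

% Superexponential (here hyperbolic) growth can diverge at a finite time:
\dot{x} = kx^2 \;\Rightarrow\; x(t) = \frac{x_0}{1 - k x_0 t}, \quad \text{which blows up as } t \to \frac{1}{k x_0}
```

Only the second kind of curve yields a literal mathematical singularity; a plain exponential just keeps compounding smoothly.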
Second, while temporary deviations from the curve can be explained by noise for exponential growth, the same can't be said so easily for superexponential growth. Here, doubling time scales with the countdown to Singularity, and what can be considered "temporary" is highly dependent on how long we have left to go. If we were in 10,000 BC, a slowing growth rate over half a century could indeed be seen as noise. But if we posit the Singularity at 2060, then we have less than 40 years left. As per Scott Alexander, world GDP doubling time has been increasing since 1960. However you look at it, the trend has been deviating from the ideal curve for far, far too long to be a mere fluke.
The most prominent example of many small S-curves adding up to an overall exponential trend line is, again, Moore's law. From the inside view, proponents argue that doomsayers are short sighted because they only see the limits of current techniques, but such limits have appeared many times before since the dawn of computing and each time it was overcome by the introduction of a new technique. For instance, most recently, chip manufacturers have been using increasingly complex photolithography masks to print ever smaller features onto microchips using the same wavelengths of UV light, which isn't sustainable. Then came the crucial breakthrough last year with the introduction of EUV, a novel technique that uses shorter wavelengths and allows the printing of even smaller features with a simple mask, and the refining process can start all over again.
But from the outside view, Moore's law has buckled (notice the past tense). One by one, the trend lines have flattened out, starting with processor frequency in 2006, and most recently with transistors per dollar (Kurzweil's favorite metric) in 2018. Proponents of Moore's law's validity had to keep switching metrics for 1.5 decades, and they have a few left - transistor density for instance, or TOP500 performance. But the noose is tightening, and some truly fundamental limitations such as the Landauer limit are on the horizon. As I often like to say, when straight lines run into physical limitations, physics wins.
Keep in mind that as far as Moore's law goes, this is what death looks like. A trend line never halts abruptly, it's always going to peter out gradually at the end.
By the way, the reason I keep heckling Moore's law is because Moore's law itself is the last remnant of the age of accelerating technological progress. Outside the computing industry, things are looking much more dire.
Here are my thoughts. Descriptively, I see that temporal discounting is something that people do. But prescriptively, I don't see why it's something that we should do. Maybe I am just different, but when I think about, say, 100 year old me vs current 28 year old me, I don't feel like I should prioritize that version less. Like everyone else, there is a big part of me that thinks "Ugh, let me just eat the pizza instead of the salad, forget about future me". But when I think about what I should do, and how I should prioritize future me vs present me, I don't really feel like there should be discounting.
I'm not sure the prescriptive context is meaningful with regard to values. It's like having a preference over preferences. You want whatever you want, and what you should want doesn't matter because you don't actually want that, wherever that should came from. A useful framework to think about this problem is to model your future self as other people and reduce it to the classic egoism-altruism balance. Would you say perfect altruism is the correct position to adopt? Are you therefore a perfect altruist?
You could make up philosophical thought experiments and such to discover how much you actually care about others, but I bet you can't just decide to become a perfect altruist no matter how loudly a philosophy professor might scream at you. Similarly, whether you believe temporal discounting to be the right call or not in the abstract, you can't actually stop doing it; you're not a perfect altruist with respect to your future selves and to dismiss it would only lead to confusion in my opinion.
To be sure, I don't actually think whether Accelerationism is right has any effect on the validity of your points.
Yeah I agree. Seems like it just gives a small bump in the likelihood of living to the singularity, but that has a very small impact on the larger question, because the larger question is a lot more sensitive to other variables like how long a lifespan you'd expect post-singularity, and how much if any temporal discounting should be done.
As for the rest of your points about exponential growth, I unfortunately don't understand the subject well enough to really be able to respond, sorry.
I'm not sure the prescriptive context is meaningful with regard to values. It's like having a preference over preferences. You want whatever you want, and what you should want doesn't matter because you don't actually want that, wherever that should came from.
First of all, thank you very much for engaging with me here. This is exactly the sort of thing I was hoping to get in the comments. A good critique that I hadn't thought (enough) about, and one that hits my central point hard, rather than just hitting tangential points that don't have much influence on the central point (although I appreciate those too). I also think you're expressing yourself very clearly, which makes it pleasant to engage with.
The more I think about it, the more I think I should apply some temporal discounting. However, I still lean towards not doing it too much.
In some theoretical sense, I agree that rationality can't tell you what to value, only how to achieve your values (as well as how to figure out what is true). But in a more practical sense, I think that oftentimes you can examine your values and realize that, well, I shouldn't say "there is good reason to change them", but I guess I could say "you find yourself inspired to change them" or "they've just been changed". Like you mention, thought experiments can be a great tool, but I think it's more than just that they help you discover things. I think they can inspire you to change your values. I do agree that it isn't really something that you can just decide to change though.
As an example, consider an immature teenager who doesn't care at all about his future self and just wants to have fun right now. Would you say, "Well, he values what he values."? Probably not.
So then, I think this question of temporal discounting is really one that needs to be explored. It's not enough to just say, "10k years from now? I don't care about that." Maybe we're being immature teenagers.
I think they can inspire you to change your values.
Taken at face value, this statement doesn't make much sense because it immediately raises the question of change according to what, and in what sense isn't that change part of your value already. My guess here is that your mental model says something like "there's a set of primal drives inside my head like eating pizza that I call 'values', and then there are my 'true' values like a healthy lifestyle which my conscious, rational mind posits, and I should change my primal drives to match my 'true' values" (pardon me for straw-manning your position, but I need it to make my point).
A much better model in my opinion would be that all these values belong to the same exact category. These "values" or "drives" then duke it out amongst each other, and your conscious mind merely observes and makes up a plausible-sounding socially-acceptable story about your motivations (this is, after all, the evolutionary function of human intelligence in the first place as far as I know), like a press secretary sitting silently in the corner while generals are having a heated debate.
At best, your conscious mind might act as a mediator between these generals, coming up with clever ideas that push the Pareto boundary of these competing values so that they can all be satisfied to a greater degree at the same time. Things like "let's try e-cigarettes instead of regular tobacco - maybe it satisfies both our craving for nicotine and our long-term health!".
Even high-falutin values like altruism or long-term health are induced by basic drives like empathy and social status. They are no different to, say, food cravings, not even in terms of inferential distance. Compare for instance "I ate pizza, it was tasty and I felt good" with "I was chastised for eating unhealthily, it felt bad". Is there really any important difference here?
You could of course deny this categorization and insist that only a part of this value set represents your true values. The danger here isn't that you'll end up optimizing for the wrong set of values, since who's to tell you what "wrong" is; it's that you'll be perpetually confused about why you keep failing to act upon your declared "true" values - why your revealed preferences through behavior keep diverging from your stated preferences - and you end up making bad decisions. Decisions that are suboptimal even when judged only against your "true" values, because you have been feeding your conscious, rational mind bad epistemics instead of leveraging it properly.
As an example, consider an immature teenager who doesn't care at all about his future self and just wants to have fun right now. Would you say, "Well, he values what he values."?
Haha, unfortunately you posed the question to the one guy out of 100 who would gladly answer "Absolutely", followed by "What's wrong with being an immature teenager?"
On a more serious note, it is true that our values often shift over time, but it's unclear to me why that makes regret minimization the correct heuristic. Regret can occur in two ways: One is that we have better information later in life, along the lines of "Oh I should have picked these numbers in last week's lottery instead of the numbers I actually picked". But this is just hindsight and useless to your current self because you don't have access to that knowledge.
The other is through value shift, along the lines of "I just ate a whole pizza and now that my food-craving brain-subassembly has shut up my value function consists mostly of concerns for my long-term health". Even setting temporal discounting aside, I fail to see why your post-dinner-values should take precedence over your pre-dinner-values, or for that matter why deathbed-values should take precedence over teenage-values. They are both equally real moments of conscious experience.
But, since we only ever live and make decisions in the present moment, if you happen to have just finished a pizza, you now have the opportunity to manipulate your future values to match your current values by taking actions that make the salad option more available the next time the pizza craving comes around, e.g. by shopping for ingredients. In AI lingo, you've just made yourself subagent-stable.
My personal anecdote is that as a teenager I did listen to the "mature adults" telling me to study more and spend less time having fun. It was a bad decision according to both my current values and teenage-values, made out of ignorance about how the world operates.
As a final thought, I would give the meta-advice of not trying to think too deeply about normative ethics. Take AlphaGo as a cautionary tale: after 2000 years of pondering, the deepest truths of Go are revealed to be just a linear combination of a bunch of feature vectors. Quite poetic, if you ask me.
I think I may have led us down the wrong path here. The ultimate question is the one of temporal discounting, and that question depends on how much we do/should value those post-singularity life years. If values can't shift, then there isn't really anything to talk about; you just ask yourself how much you value those years, and then move on. But if they can shift, and you acknowledge that they can, then we can discuss some thought experiments and stuff. It doesn't seem important to discuss whether those shifts are due to discovering more about your pre-existing values, or due to actually changing those pre-existing values.
Haha, unfortunately you posed the question to the one guy out of 100 who would gladly answer "Absolutely", followed by "What's wrong with being an immature teenager?"
Ah, I see. You and I probably just have very different intuitions regarding what to value then, and I sense that thought experiments won't bring us much closer.
Actually, I wonder what you think of this. Are you someone who sees death as a wildly terrible thing (I am)? If so, isn't it because you place a correspondingly high value on the years of life you'd be losing?
The other is through value shift, along the lines of "I just ate a whole pizza and now that my food-craving brain-subassembly has shut up my value function consists mostly of concerns for my long-term health". Even setting temporal discounting aside, I fail to see why your post-dinner-values should take precedence over your pre-dinner-values, or for that matter why deathbed-values should take precedence over teenage-values. They are both equally real moments of conscious experience.
In the pizza example, I think the value shift would be more along the lines of "I was prioritizing my current self too much relative to my future selves". Presumably, post-dinner-values would be incorporating pre-dinner-self. Eg. it wouldn't just say, "Screw my past self, my values are only about the present moment and onwards." So I see your current set of values as being the most "accurate", in which case regret minimization seems like it makes sense.
The ultimate question is the one of temporal discounting, and that question depends on how much we do/should value those post-singularity life years. If values can't shift, then there isn't really anything to talk about; you just ask yourself how much you value those years, and then move on. But if they can shift, and you acknowledge that they can, then we can discuss some thought experiments and stuff.
I think we're getting closer to agreement as I'm starting to see what you're getting at. My comment here would be that yes, your values can shift, and they have shifted after thinking hard about what post-Singularity life will be like and getting all excited. But the shift it has caused is a larger multiplier in front of the temporal discounted integral, not the disabling of temporal discounting altogether.
Actually, I wonder what you think of this. Are you someone who sees death as a wildly terrible thing (I am)?
Yes, but I don't think there is any layer of reasoning beneath that preference. Evading death is just something that is very much hard-coded into us by evolution.
In the pizza example, I think the value shift would moreso be along the lines of "I was prioritizing my current self too much relative to my future selves". Presumably, post-dinner-values would be incorporating pre-dinner-self.
I don't think that's true. Crucially, there is no knowledge being gained over the course of dinner, only value shift. It's not like you didn't know beforehand that pizza was unhealthy, or that you would regret your decision. And if post-dinner self does not take explicit steps to manipulate future values, the situation will repeat itself the next day, and the day after, and so on, hundreds of times.
I think we're getting closer to agreement as I'm starting to see what you're getting at. My comment here would be that yes, your values can shift, and they have shifted after thinking hard about what post-Singularity life will be like and getting all excited. But the shift it has caused is a larger multiplier in front of the temporal discounted integral, not the disabling of temporal discounting altogether.
I'm in agreement here! Some follow up questions: what are your thoughts on how much discounting should be done? Relatedly, what are your thoughts on how much we should value life? Is it obvious that past eg. 500 years, it's far enough into the future that it becomes negligible? If not, why aren't these things discussed? Also, do you share my impression that people (on LW) largely assume that life expectancy is something like 80 years and life is valued at something like $10M?
Yes, but I don't think there is any layer of reasoning beneath that preference. Evading death is just something that is very much hard-coded into us by evolution.
Regardless of whether it stems from a layer of reasoning or whether it is hard-coded, doesn't it imply that you aren't doing too much temporal discounting? If you did a lot of temporal discounting and didn't value the years beyond eg. 250 years old very much, then death wouldn't be that bad, right?
If an aligned superintelligence can access most of the information I’ve uploaded to the internet, then that should be more than enough data to create a human brain that acts more or less indistinguishably from me - it wouldn’t be exact, but the losses would be in memories more than in personality. Thus, I’m almost certainly going to be revived in any scenario where there is indefinite life extension.
This line of reasoning relies on a notion of identity that is at least somewhat debatable, and I’m skeptical of it because it feels like rationalizing so I don’t need to change my behavior. But nonetheless it feels plausible enough to be worth considering.
If I may recommend a book that might make you shift your non-AI related life expectancy: Lifespan by Sinclair.
Quite the fascinating read. My takeaway would be: we might very well not need ASI to reach nigh-indefinite life extension. Accidents of course still happen, so in a non-ASI branch of this world I currently estimate my life expectancy at around 300-5000 years, provided this tech happens in my lifetime (which I think is likely) and given no cryonics/backups/...
(I would like to make it clear that the author barely talks about immortality, more about health and life span, but I suspect that this has to do with decreasing the risk of not being taken seriously. He mentions e.g. millennia-old organisms as ones to "learn" from.)
Interestingly, the increased probability estimate for non-ASI-dependent immortality automatically and drastically impacts the importance of AI safety, since a) you are way more likely to be around when it hits (a bit selfish, but whatever), b) we may actually have the opportunity to take our time (not saying we should drag our feet), so the benefit from taking risks shrinks even further, and c) if we get an ASI that is not perfectly aligned, we actually risk our immortality, instead of standing to gain it.
All the best to you, looking forward to meeting you all some time down the line.
(I am certain that the times and locations mentioned by HJPEV will be realized for meet-ups, provided we make it that far.)
The dollar value that I treat my life as being worth is heavily influenced by my wealth.
If I'm currently willing to pay $100k to avoid a 1% chance of dying, that doesn't mean that a 100x increase in my estimate of life expectancy will convince me to pay $100k to avoid a 0.01% chance of dying - that change might bankrupt me.
If I'm currently willing to pay $100k to avoid a 1% chance of dying, that doesn't mean that a 100x increase in my estimate of life expectancy will convince me to pay $100k to avoid a 0.01% chance of dying - that change might bankrupt me.
I'm not following how this example is influenced by your wealth. In both scenarios, you are paying $100k. If it was $100k to avoid a 1% chance vs a $1k to avoid a 0.01% chance, then I see how wealth matters. If you have $105k in savings, paying $100k would bring you down to $5k in savings which is a big deal, whereas paying $1k would bring you down to $104k which isn't too big a deal.
I think this is due to (something like) diminishing marginal utility. But even with that factored in, my sense is that the tremendous value of post-singularity life overwhelms it.
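To illustrate the diminishing-marginal-utility point, here's a minimal sketch assuming log utility of wealth (purely illustrative; the $1M savings case is a made-up comparison):

```python
import math

def utility(wealth):
    # Log utility: each extra dollar matters less the more you already have.
    # An illustrative assumption, not a claim about anyone's actual utility function.
    return math.log(wealth)

# Paying $100k out of $105k in savings vs. out of a hypothetical $1M in savings
for savings in (105_000, 1_000_000):
    utility_cost = utility(savings) - utility(savings - 100_000)
    print(f"savings ${savings:,}: paying $100k costs {utility_cost:.2f} utils")
```

The same $100k payment is a far bigger hit for the poorer agent, which is why a fixed dollar value of life can't simply be scaled up 100x without regard to wealth.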
I expect there are a lot more ways to buy a 0.01% risk reduction for $100k.
Let me approach this a different way. Is there anything deterring you from valuing your life at $3^^^3? What behaviors would such a person have that differ from a person who values their life at $1 billion?
What behaviors would such a person have that differ from a person who values their life at $1 billion?
Driving, perhaps. I arrived at something like $2.50/mile at a $10B valuation of life. So for a $1B valuation, that'd be $0.25/mile, which seems reasonable to pay in various situations. But at a $3^^^3 valuation it would no longer be worth it.
Is there anything deterring you from valuing your life at $3^^^3?
With the estimates I made in this post, it doesn't seem reasonable to value it at something crazily high like that.
What, exactly, does it even mean for "you" to exist for 100k years?
Is the "you" from yesterday "you"? Would you be comfortable with your conscious mind being replaced with the conscious mind of that entity? What about the "you" from tomorrow"? What about the "you" from 100k years in the future? If that's still "you", should it be a problem for your mind to be erased, and that mind to be written in its place?
I don't have a great grasp on the question of "what makes you you". However, I do feel solid about "yesterday you" = "present moment you" = "100k years from now you". In which case, if you live for eg. 100k years, there isn't an issue of it not being you who is alive 100k years from now.
If that's still "you", should it be a problem for your mind to be erased, and that mind to be written in its place?
Yes, I see that as a problem because it'd still be a short lifespan. You wouldn't be alive and conscious from years 30 through 100k. I would like to maximize the amount of years that I am alive and conscious (and happy).
Would you willingly go back in time and re-live your life from the beginning, with all the knowledge you have now? Say, knowing what stocks to purchase, what cryptocurrencies are worth buying and when, being able to breeze through education and skip ahead in life, and all the other advantages you would have?
If the answer to that is yes, then observe that this is exactly the same thing.
The point of this being that you don't actually think of past-you, present-you, and future-you as you in the same sense. You'll happily overwrite past-you with present-you, but you'd see it as a problem if future-you overwrote present-you, so far as to be equatable to dying.
You'll happily overwrite past-you with present-you
Why do you say that? I don't see it as overwriting. I am 28 years old. The way I see it is, I live 28 years, then I go back to the time I was born, then I re-live those 28 years, and so I get to be alive for 56 years.
Until recently, I've largely ignored the question of how long I expect to live, and similarly, how much value I place on my life. Why? For starters, thinking about mortality is scary, and I don't like to do things that are scary unless the benefits outweigh the cost.
So, do they? My answer has always been, "Nah. At least not in the short-to-mid term. I'll probably have to think hard about it at some point in the future though." Whatever the conclusion I draw — high value on life, low value on life — I didn't see how it would change my behavior right now. How the conclusions would be actionable.
Recently I've been changing my mind. There have been various things I've been thinking about that all seem to hinge on this question of how much we should value life.
If you expect (as in expected value) to live eg. 50 years, and thus value life at a typical value of eg. $10M:
But... if you expect (as in expected value) to live something like 10k years:
What else would the implications be of expecting (I'll stop saying "as in expected value" moving forward) to live 10k years? I'm not sure. I'll think more about it another time. For now, Covid, cryonics and driving are more than enough to make this a question that I am interested in exploring.
Focusing on the parts that matter
There are a lot of parameters you could play around with when exploring these questions. Some are way more important than others though. For example, in looking at how risky Covid is, you could spend time exploring whether a QALY should be valued at $200k or $175k, but that won't really impact your final conclusion too much. It won't bring you from "I shouldn't go to that restaurant" to "I should go to that restaurant". On the other hand, moving from an expectation of a 100 year lifespan to a 1,000 year lifespan could very well end up changing your final conclusion, so I think that those are the types of questions we should focus on.
Why not focus on both? The big, and the small. Well, limited time and energy is part of the answer, but it's not the main part. The main part is that the small questions are distracting. They consume cognitive resources. When you have 100 small questions that you are juggling, it is difficult to maintain a view of the bigger picture. But when you reduce something to the cruxiest four big questions, I think it becomes much easier to think about, so that's what I will be trying to do in this post.
Where on earth do you get this 10k number from?
Earth? I didn't get it from earth. I got it from dath ilan.
Just kidding.
The short answer is as follows.
To be honest, it actually surprises me that, from what I can tell, very few other people think like this.
And now for the long answer.
Taking ideas seriously
Before getting into all of this, I want to anticipate something and address it in advance. I predict that a lot of people will interpret the claim of "you should expect to live for 10k years" as wacky, and not take it seriously.
I think this takes various forms. It's a spectrum. On one end of the spectrum are people who dismiss it upfront without ever giving it a chance. On the other end are people who are just the slightest bit biased against it. Who have ventured slightly away from "Do I believe this?" and towards "Must I believe this?".
To move towards the middle on this spectrum, the rationalist skill of Taking Ideas Seriously is necessary (amongst other things). Consider whether you are mentally prepared to take my wacky sounding idea seriously before continuing to read.
(I hope that doesn't sound rude. I lean away from saying things that might sound rude. Probably too far. But here I think it was a pretty important thing to say.)
To some extent (perhaps a large extent), all that I do in this post is attempt to take the singularity seriously. I'm not really providing any unique thoughts or novel insights. I'm just gluing together the work that others have done. Isn't that what all progress is? Standing on the shoulders of giants and stuff? Yes and no. It's one thing to be provided with puzzle pieces. It's another thing for those puzzle pieces to be mostly already connected for you. Where all you need to do is give them a nudge and actually put them together. That's all I feel like I am doing. Nudging those pieces together.
Edit: Shut Up And Multiply is another one that is worth mentioning.
Will the singularity really happen? If so, when?
I don't think this is the type of question that I should try to reason about from first principles. I'm not qualified. I'm just a guy who writes bad JavaScript code for a living. On the other hand, there are experts in the field who have made predictions. Let's hear what they have to say.
Wait But Why
I'll start with some excerpts from the Wait But Why post The Artificial Intelligence Revolution. I think Tim Urban does a great job at breaking things down and that this is a good starting place.
Grace et al
The Wait But Why post largely focused on that Mueller and Bostrom survey. That's just one survey though. And it took place between 2012 and 2014. Can this survey be corroborated with a different survey? Is there anything more recent?
Yes, and yes. In 2016-2017, Grace et al surveyed 1634 experts.
That seems close enough to the Mueller and Bostrom survey, which surveyed 550 people, to count as corroboration.
Astral Codex Ten
There's an Astral Codex Ten post called Updated Look At Long-Term AI Risks that discusses a recent 2021 survey. That survey was 1) smaller and 2) only surveyed people in AI safety related fields rather than people who work in AI more broadly. But more to the point, 3) it didn't explicitly ask about timelines, from what I could tell. Still, I get the vibe from this survey that there haven't been any drastic changes in opinions on timelines since the Bostrom and Grace surveys.
Less Wrong
I also get that vibe from keeping an eye on LessWrong posts over time. If opinions on timelines changed drastically, or even moderately, I'd expect to be able to recall reading about it on LessWrong, and I do not. Absence of evidence is evidence of absence. Perhaps it isn't strong evidence, but it seems worth mentioning. Actually, it seems moderately strong.
Luke Muehlhauser/MIRI
There is a blog post I really liked called When Will AI Be Created? that was published on MIRI's website in 2013, and authored by Luke Muehlhauser. I take it to be representative of Luke's views of course, but also reasonably representative of the views of MIRI more broadly. Which is pretty cool. I like MIRI a lot. If there was even moderate disagreement from others at MIRI, I would expect the post to have been either altered or not published.
In the first footnote, Luke talks about lots of surveys that have been done on AI timelines. He's a thorough guy and this is a MIRI blog post rather than a personal blog post, so I expect that this footnote is a good overview of what existed at the time. And it seems pretty similar to the Bostrom and the Grace surveys. So then, I suppose at this point we have a pretty solid grasp on what the AI experts think.
But can we trust them? Good question! Luke asks and answers it.
Damn, that's disappointing to hear.
I wonder whether we should expect expert predictions to be underconfident or overconfident here. On the one hand, people tend to be overconfident due to the planning fallacy, and I sense that experts fall for this about as badly as normal people do. On the other hand, people underestimate the power of exponential growth. I think experts probably do a much better job at avoiding this than the rest of us, but still, exponential growth is so hard to take seriously.
So, we've got Planning Fallacy vs Exponential Growth. Who wins? I don't know. I lean slightly towards Planning Fallacy, but it's tough.
Anyway, what else can we do aside from surveying experts? Luke proposes trend extrapolation. But this is also tough to do.
Well, this sucks. Still, we can't just throw our arms up in despair. We have to work with what we've got. And Luke does just that.
That makes enough sense to me. I think I will adopt those beliefs myself. I trust Luke. I trust MIRI. The thought process seems good. I lean towards thinking timelines are longer than what the experts predict. I like that he was thorough in his survey of surveys of experts, and that he considered the question of whether surveying experts is even the right move in the first place. This is fine for now.
Will you be alive for the singularity?
It doesn't actually matter all that much for our purposes when exactly the singularity happens. The real question is whether or not you will be alive for it. If you are alive, you get to benefit from the drastic increases in lifespan that will follow. If not, you won't.
Warning: Very handwavy math is forthcoming.
Baseline
Let's say you are 30 years old. From the Wait But Why article, the median expert prediction for ASI is 2060. Which is roughly 40 years from now. Let's assume there is a 50% chance we get ASI at or before then. A 30 year old will be 70 in 2060. Let's assume they are still alive at the age of 70. With this logic, a 30 year old has at least a 50% chance of being alive for the singularity. Let's use this as a starting point and then make some adjustments.
Life expectancy of 85 years
Suppose that we don't reach ASI by 2060. Suppose it takes longer. Well, you'll only be 70 years old in 2060. People are currently living to ~85 years old, so let's say you have another 15 years to wait on ASI. I'll eyeball that at bumping us up to a 60% chance at being alive for the singularity.
Modest increases in life expectancy
There's something I've never understood about life expectancy. Let's say that people right now, in the year of 2021, are dying around age 85. That means that people who were born in 1936 can expect to live 85 years. But if you're 30 years old in the year 2021, that means you were born in 1991. Shouldn't someone who was born in the year 1991 have a longer life expectancy than someone who was born in the year 1936?
I think the answer has got to be "yes". Let's assume it's an extra 15 years. That a 30 year old today can expect to live to be 100. And let's say this gives us another 10% boost for our chances of being alive for the singularity. From 60% to 70%.
Does this 70% number hold water? Let's do a little sanity check.
If we're expecting to live another 70 years, that'll be the year 2090 (to use a round number). From Bostrom's survey:
From there it'll take some time to reach ASI. I'm just going to wave my hands and say that yeah, the 70% number passes the sanity check. Onwards.
Exponential increases in life expectancy
In the previous section, we realized that someone born in the year 1991 will probably live longer than someone born in the year 1936. Duh.
But how much longer? In the previous section, I assumed a modest increase of 15 years. I think that is too conservative though. Check this out:
Technology progresses exponentially, not linearly. Tim Urban, as always, gives us a nice, intuitive explanation of this.
Bostrom gives us a similar explanation in his book Superintelligence:
So, let's say you buy this idea of exponential increases in technology. How does that affect life expectancy? With the modest increases in the previous section we went from 85 years to 100 years. How much should we bump it up once we factor in this exponential stuff?
I don't know. Remember, artificial general intelligence comes before ASI, and AGI has got to mean lots of cool bumps in life expectancy. So... 150 years? I really don't know. That's probably undershooting it. I'm ok with eyeballing it at 150 years though. Let's do that.
A 150 year life expectancy would mean living to the year 2140. If we look at the expert surveys and then add some buffer, it seems like there'd be at least a 90% chance of living to ASI.
Didn't you say that you expect expert surveys on AI timelines to be overconfident?
Yes, I did. And I know that I referenced those same surveys a lot in this section. I just found it easier that way, and I don't think it really changes things.
Here's what I mean. I think my discussion above already has a decent amount of buffer. I think it undersells the increases in life expectancy that will happen. I think this underselling more than makes up for the fact that I was using AI timelines that were too optimistic.
Also, I'm skipping ahead here, but as we'll see later on in the post, even if the real number is something like 60% instead of 90%, it doesn't actually change things much. Orders of magnitude are what will ultimately matter.
Assuming you live to the singularity, how long would you expect to live?
Friendly reminder: Taking Ideas Seriously.
Unfortunately, this is a question that is both 1) really important and 2) really difficult to answer. I spent some time googling around and didn't really find anything. I wish there were lots of expert surveys available for this like there are for AI timelines.
Again, we can't just throw our arms up in despair at this situation. We have to make our best guesses and work with them. That's how Bayesian probability works. That's how expected value works.
For starters, let's remind ourselves of how totally insane ASI is.
It's CRAZY powerful.
Because of this power, Bostrom as well as other scientists believe that it could very well lead to our immortality. How's that for an answer to the question of life extension?
And here's Feynman:
And Kurzweil:
Remember when I said this?
Doesn't sound so crazy now does it?
So, what value should we use? Bostrom and others say infinity. But there's some probability that they are wrong. But some percent of infinity is still infinity! But that's an idea even I am not ready to take seriously.
Let's ask the question again: what value should we use? A trillion? A billion? A million? 100k? 10k? 1k? 100? Let's just say 100k for now, and then revisit this question later. I want to be conservative, and it's a question I'm really struggling to answer.
What will the singularity be like?
So far we've asked "Will you be alive for the singularity?" and "Assuming you live to the singularity, how long would you expect to live?". We have preliminary answers of "90% chance" and "100k years". This gives us a rough life expectancy of 90k years, which sounds great! But there are still more questions to ask.
Utopia or dystopia?
What if we knew for a fact that the singularity would be a dystopia? A shitty place to be. An evil robot just tortures everyone all day long. You'd rather be dead than endure it.
Well, in that case, that 90% chance at living an extra 100k years doesn't sound so good. You'd choose to end your life instead of living in that post-singularity world, so you're not actually going to live those 100k years. You're just going to live the 100 years or whatever until we have a singularity, and then commit suicide. So your life expectancy would just be a normal 100 years.
Now let's suppose that there is a 20% chance that the post-singularity world is a good place to be. Well, now we can say that there is a:
- 90% * 20% = 18% chance that you will live another 100k pleasant years
- 90% * 80% = 72% chance that you live to the singularity but it sucks and you commit suicide and thus live your normal 100 year lifespan
- 10% chance that you don't make it to the singularity at all

The last two outcomes are insignificant enough that we can forget about them. The main thing is that 18% chance of living 100k pleasant years. That is an expectation of 18k years. A 100 year lifespan is small potatoes compared to that.
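To make that expectation explicit, here's a minimal sketch using the same numbers (I'm approximating the non-utopia branches as ~100 years each, which is close enough for the point being made):

```python
# Scenario breakdown using the numbers above.
scenarios = [
    # (probability, years lived)
    (0.90 * 0.20, 100_000),  # alive for the singularity and it's good: ~100k pleasant years
    (0.90 * 0.80, 100),      # alive for the singularity but it sucks: back to a ~100 year lifespan
    (0.10,        100),      # never make it to the singularity at all: roughly a normal lifespan
]

expected_years = sum(p * years for p, years in scenarios)
print(f"Expected years: {expected_years:,.0f}")  # ~18,082 -- dominated by the 18% branch
```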
The point of this is to demonstrate that we have to ask this question of utopia or dystopia. How likely is it that we actually want to live in the post-singularity world?
Well, in reality, there are more than two possible outcomes. Maybe there's a:
There's a lot more possibilities than that actually, and a more thorough analysis would incorporate all of them, but here I think it makes sense as a simplification to assume that there are only two possible outcomes.
So then, what is the probability that we get the good outcome?
I have no clue. Again, I'm just a guy who writes repetitive CRUD code for web apps, and they never taught me any of this in my coding bootcamp. So let's turn to the experts again.
Fortunately, Bostrom asked this question in his survey of AI experts. Let's let Tim Urban break it down for us again.
I agree with Urban about the neutral percentage being lower if it were asking about ASI instead of AGI. Since we're being handwavy with our math, let's just assume that the respondents would assign a 52 / (52 + 31) = 63% ~ 60% chance of ASI leading to a good outcome. With that, we can use 60% instead of 20%, and say that we can expect to live 90% * 60% * 100k years = 54k years. Pretty good!

How much is a SALY worth?
There is this term that people use called a QALY. It is pronounced "qually" and stands for "quality adjusted life year". The idea is that life years aren't created equally. Losing the years from 85 to 90 years old when you have cancer isn't as bad as losing the years from 25 to 30 when you're in the prime of your life. Maybe the latter years are worth $200k each to you while the former are only worth $30k.
I want to use a different term: SALY. Singularity adjusted life year. How much would a post-singularity year be worth?
Well, it is common nowadays to assign a value of $200k to a year of life. What about after the singularity? If we get the good outcome instead of the bad outcome, the singularity seems like it'll be totally fucking awesome. I'll hand it back over to Tim Urban again to explain.
How much more awesome is that world than today's world? I don't know. It sounds pretty awesome to me. I could see there being legitimate arguments for it being 10x or 100x more awesome, and thus SALYs would be worth $2M or $20M respectively (since we're saying QALYs are worth $200k). I could even see arguments that push things up a few more orders of magnitude. Remember, this is a world where a god-like superintelligence has fine-grained control over the world at the nanometer scale.
Personally, I suspect something like 100x or more, but it's a hard idea to take seriously. Let's just say that a SALY is worth a humble $500k and move forward. We could revisit this assumption in the future if we want to.
Why would anyone want to live for that long?
Check out "Skeptic Type 5: The person who, regardless of whether cryonics can work or not, thinks it’s a bad thing" in Why Cryonics Makes Sense.
Piecing things together
We've done a lot of work so far. Let's zoom out and get a feel for the big picture.
We are trying to see how much value we should place on life. Answering that question depends on a lot of stuff. You could approach it from various angles. I like to look at the following parameters:
Here are my preliminary answers:
Here is what those answers imply:
- 90% * 60% = 54% chance that you find yourself alive in an awesome post-singularity world.
- 54% * 100k years = 54k (post-singularity) years.
- 54k * $500k = $27B.

There's also those meager ~100 pre-singularity years worth ~$200k each, so your pre-singularity years are worth something like $20M, but that is pretty negligible next to the $27B value of your post-singularity years, right? Right. So let's not think about that pre-singularity stuff.
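For anyone who wants to plug in their own numbers, here's that multiplication as a tiny sketch (the function name and structure are mine, not from any of the sources above):

```python
def value_of_life(p_live_to_singularity, p_good_outcome, post_singularity_years, dollars_per_saly):
    # Expected dollar value of the post-singularity years (ignores the ~$20M of pre-singularity years).
    return p_live_to_singularity * p_good_outcome * post_singularity_years * dollars_per_saly

# Preliminary answers from above: 90%, 60%, 100k years, $500k per SALY
print(f"${value_of_life(0.90, 0.60, 100_000, 500_000):,.0f}")  # $27,000,000,000
```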
Wiggle room
Note that this $27B figure gives us a lot of wiggle room if we're making the argument that life is crazy valuable.
Let's say it is $10B to make the math easier. We normally value life at about $10M. That is three orders of magnitude less. So then, our $27B could be off by a full two orders of magnitude, and life would still be a full order of magnitude more valuable than we currently value it. Eg. maybe you only think there's a 10% chance of living to the singularity, and an expectation of living 10k years post-singularity instead of 100k. Those seem like pretty conservative estimates, but even if you use them, the normal values that we place on life would still be off by an order of magnitude.
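To make the orders-of-magnitude claim concrete with the rounded $10B figure:

```latex
\frac{\$10\text{B}}{\$10\text{M}} = 10^{3}
\qquad\Longrightarrow\qquad
\frac{\$10\text{B}}{10^{2}} = \$100\text{M} = 10 \times \$10\text{M}
```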
On top of this, I was trying to be conservative in my initial assumptions.
Professor Quirrell
Professor Quirrell from HPMoR is one of my favorite people in the world. When I'm lucky, he makes his way into my head and reminds me that people suck and that I should be pessimistic. Let's see what happens if I listen to him here.
Let's try out some Professor Quirrell adjusted values here:
This gives us a value of 10% * 10% * 10k years * $1M/year = $100M. So, one order of magnitude larger than the typical value of life.

Responding to Professor Quirrell
Thank you for your input.
How likely is it that you live to the singularity?
Damn, I totally failed to think about the possibility that you die due to some existential risk type of thing before the singularity. My initial analysis was just focused on dying of natural causes. I'm glad I talked to you.
Let's start by looking at expert surveys on how likely it is that you die of existential risk types of stuff. I spent some time googling, and it wasn't too fruitful. In Bostrom's Existential Risk Prevention as Global Priority paper from 2012, he opens with the following:
Which points to the following footnote:
This is only talking about existential risks though. It's also possible that I die eg. if Russia bombs the US or something. Or if there is a new pandemic that I'm not able to protect myself from. Things that wouldn't necessarily be considered existential risks. So that pushes the probability of me dying before the singularity up. On the other hand, those estimates presumably factor in existential risk from unfriendly AI, so that pushes the probability of me dying before the singularity down. Let's just call those two considerations a wash.
Let's look at another piece of evidence. In Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards, Bostrom says the following:
Hm, that's not good to hear. What does "considerably higher" mean? 80%?
What proportion of this risk comes from UFAI? It sounds like it's not that much. Bostrom says:
And then he ranks UFAI fourth on the list. So I don't think we have to filter this out much. Let's just suppose that Bostrom's best personal estimate of the risk from something other than UFAI is 70%.
Let's look at The Case For Reducing Existential Risks from 80,000 Hours now. Bingo! This has some good information.
It says:
And provides the following table:
It looks like it might be referencing that same survey Bostrom mentioned though, and we can't double count evidence. However, the article also gives us these data points:
That's good... but weird. Why are their estimates all so low when Bostrom said anything under 25% is misguided? I feel like I am missing something. I think highly of the people at 80,000 Hours, but I also think highly of Bostrom.
Let's just call it 30%, acknowledge that it might be more like 80%, and move on.
So what does that mean overall for this parameter of "How likely is it that you live to the singularity?"? Well, we said before that there was a 10% chance that you don't make it there due to dying of something "normal", like natural causes. Now we're saying that there's a 30% chance or so of dying of something "crazy" like nuclear war. I think we're fine adding those two numbers up to get a 40% chance of dying before the singularity, and thus a 60% chance of living to the singularity. Less than our initial 90%, but more than Professor Quirrell's pessimistic 10%.
How likely is it that the singularity will be good rather than bad?
Professor Quirrell says the following:
So then, the question is whether you think the experts underestimated the difficulty. If you think they didn't factor it in enough, you can lower your confidence accordingly.
(Warning: Spoilers for HPMoR in the following quote)
I could definitely see the experts underestimating the difficulty and would be happy to adjust downwards in response. It's hard to say how much though. I spent some time looking through Eliezer's twitter and stuff. I have a lot of respect for his opinions, so hearing from him might cause me to adjust downwards, and I could have sworn there was a thread from him somewhere expressing pessimism. I couldn't find it though, and his pessimism doesn't seem too different from the mainstream, so I don't know how much I can adjust downwards from the 60% number we had from the surveys. I'll eyeball it at 40%.
How many years do you expect to live post-singularity?
I think even Professor Quirrell can be convinced that we should assign some probability to having a life expectancy of something like 10B years. Even a 1 in 1000 chance of living that long is an expectation of 10M years. Remember, immortality was the answer that Nick Bostrom gave here. He is a very prominent and trustworthy figure here, and we're merely giving a 0.1% chance of him being right.
But we don't have to go that far. Again, maybe we should go that far, but 100k years seems fine as a figure to use for this parameter, despite Professor Quirrell's pessimism.
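To make the expected-value point from the 1-in-1000 scenario concrete, here's a minimal sketch; the two-outcome split and the 10k-year fallback are my own simplifications for illustration:

```python
# Toy two-outcome model of post-singularity lifespan
p_bostrom_right = 0.001          # 1 in 1000 chance of a ~10B year lifespan
years_if_right = 10_000_000_000  # 10B years
years_otherwise = 10_000         # a much more modest lifespan (illustrative)

expected_years = (p_bostrom_right * years_if_right
                  + (1 - p_bostrom_right) * years_otherwise)
print(f"{expected_years:,.0f} years")  # ~10,009,990 -- the tiny-probability branch dominates
```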
How valuable are each of those years?
I appreciate the respect. Let's just return to $500k/year though.
New result
Our new values are a 60% chance of living to the singularity, a 40% chance that the singularity is good rather than bad, 100k post-singularity years, and $500k of value per year.
That gives us a total value of $12B this time. In contrast to $27B the first time, $100M for Professor Quirrell, and $10M as the default we use today. So this time we're about three orders of magnitude higher than the baseline.
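For transparency, here's the arithmetic behind the $12B as a minimal sketch, using the four parameters settled on above:

```python
# Parameters settled on above
p_live_to_singularity = 0.60     # chance of making it there
p_singularity_good = 0.40        # chance it goes well rather than badly
post_singularity_years = 100_000
value_per_year = 500_000         # dollars

value_of_life = (p_live_to_singularity * p_singularity_good
                 * post_singularity_years * value_per_year)
print(f"${value_of_life:,.0f}")  # $12,000,000,000
```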
Adjusting for cryonics
All of this assumes that you aren't signed up for cryonics. Unfortunately, very few people are actually signed up, so this assumption is usually going to be true. But if you are signed up, it does change things.
We're talking about the possibility of a utopian singularity here. If you die before then and are cryonically frozen, it seems pretty likely that when the singularity comes, you'll be revived.
Suppose that you die in the year 2020, are frozen, the singularity happens in the year 2220, they reach the ability to revive you in 2230, and then you live an extra 100k years after that. In that scenario, you only lose out on those 210 years from 2020 to 2230. 210 years is negligible compared to 100k, so dying, being cryonically frozen, and then being revived is basically the same thing as not dying at all.
When I first realized this, I was very excited. For a few moments I felt a huge feeling of relief. I felt like I could finally approach things the way a normal human does, and stop this extreme caution regarding death. Then I realized that cryonics might not work. Duh.
How likely is it to succeed? I don't know. Again, let's look at some data points.
It seems like we're at something like a 30% chance it succeeds, let's say. So it's like there's a 30% chance we get those 100k years that we lost back. Or a 70% chance that we actually lose the years. So using our updated values, the cost of dying comes out to $8.4B.
Wait, that's just 70% of the $12B we had before. I'm bad at math.
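Spelled out, the adjustment is just this (a sketch assuming a 30% chance that cryonics works, and ignoring the ~210 "frozen" years since they're negligible next to 100k):

```python
value_of_life = 12_000_000_000  # from the calculation above
p_cryonics_works = 0.30         # rough estimate from the data points above

# If cryonics works, dying now costs you almost nothing (you get revived later);
# if it fails, you lose the full value.
cost_of_dying_with_cryonics = (1 - p_cryonics_works) * value_of_life
print(f"${cost_of_dying_with_cryonics:,.0f}")  # $8,400,000,000
```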
So ultimately, cryonics helps but it is not enough to significantly change the conclusion.
Implications
Let's say we use $10B as the value of life. It's a round number and it's roughly in between our $12B and $8.4B numbers. What are the implications of that? I'll list two.
Covid
For a healthy, vaccinated, 30 year old, what are the chances that you die after being infected with covid? Personally, I refer to this post for an answer. It says 0.004%, but acknowledges that it is very tough to calculate. Let's go with 0.004%.
How much does a microcovid cost? Well, since we're using 0.004%, a microcovid is equal to a 0.000001 * 0.00004 = 4 * 10^-11 chance of dying. And since we're valuing life at $10B, that is a 4 * 10^-11 chance of losing $10B. So a microcovid costs 4 * 10^-11 * $10B = $0.40.

You can go on https://microcovid.org to see what that means for various activities in your area, but to use an example, for me, it means that eating indoors costs $400, which clearly is not worth it. Even if we went down an order of magnitude, paying an extra $40 to eat indoors is something I'd almost always avoid. On top of that, there are wide error bars everywhere. Personally, that makes me even more hesitant.
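If you want to plug in your own numbers, the microcovid math is easy to parameterize. A minimal sketch; the ~1000 microcovid figure for indoor dining is just the rough number implied by my own microcovid.org estimate above:

```python
value_of_life = 10_000_000_000   # the $10B round number
p_death_given_covid = 0.00004    # 0.004% for a healthy, vaccinated 30 year old

# One microcovid = a one-in-a-million chance of catching covid
cost_per_microcovid = 1e-6 * p_death_given_covid * value_of_life
print(f"${cost_per_microcovid:.2f}")  # $0.40

# Example: an activity worth ~1000 microcovids, roughly what indoor dining
# came out to for me
print(f"${1000 * cost_per_microcovid:,.0f}")  # $400
```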
Driving
I wrote a post about this. Check it out. Let's reason through it a little bit differently here though.
In that post I looked at the cost per year of driving. Since writing it, I realized that looking at something like the cost per mile is more useful, so let's try that.
In the US, there is about one fatality per 100M vehicle miles traveled (VMT). So by traveling one mile, that's like a 1 in 100M chance of dying. Or a 1 in 100M chance of losing $10B, since that's what we valued life at. So then, driving a mile costs 1/100M * $10B = $10. I have a hard time imagining realistic scenarios where that would be worth it.

But you are a safer driver than average, right? How much does that reduce your risk? In the post, I observe the fact that there is a 2.5-to-1 ratio of non-alcohol to alcohol related fatalities, wave my hand, and say that maybe as a safer driver you only take 1/4 of the risk. So call it $2.50/mile. That's starting to seem reasonable. It's right in line with what an Uber would cost, and a 10 mile trip would be $25. But still, it's not something you'd want to do willy nilly.

Since writing the post, I realized something important that is worth noting. I was in NYC and took an Uber up to Central Park. It might have been the craziest ride of my life. The driver was swerving, accelerating, stopping short, getting really close to other cars. However, the speeds were only like 10-20mph. I can't see an accident in that context causing death. And I think the same thing is true in various other contexts of lower speed traffic.
A cursory google search seems to support that. The risk of a given collision resulting in death is definitely higher at higher speeds. But collisions seem to happen more frequently at lower speeds, and people still die in those. I'm not clear on how these factors balance each other out, and I didn't spend that much time looking into it. Eyeballing it, maybe taking a local trip at slow speeds is $1.00/mile and taking a trip on the highway is $5.00/mile.

Hypocrisy
I anticipate something like this being a highly upvoted comment, so I may as well respond to it now.
Salads are better for you than pizza. You know you should be eating salads, but sometimes, perhaps often times, you find yourself eating pizza. Oops.
Imagine that someone came up to you and said:
I think the logical response is:
Also, I haven't had the chance to research what the common death-related risks are in everyday life, other than cars and covid. If someone can inform me, that would be great.
What if life is even more valuable?
I've been trying to be conservative in my assumptions. But what if that conservatism means I'm off by orders of magnitude? What if life should be valued at ten trillion dollars instead of ten billion?
That could very well be the case if we bump up our estimates of how many post-singularity years we'd live, and of how valuable each of those years are. If life is actually this valuable, perhaps we should do even more to avoid death than avoiding cars and covid.
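As one hypothetical combination (these bumped-up numbers are just for illustration, not estimates I'm defending): 10M post-singularity years at $5M per year, with the same probabilities as before, already lands in ten-trillion territory.

```python
# Same formula as before, with hypothetical bumped-up estimates
p_live_to_singularity = 0.60
p_singularity_good = 0.40
post_singularity_years = 10_000_000   # 10M instead of 100k (hypothetical)
value_per_year = 5_000_000            # $5M instead of $500k (hypothetical)

value_of_life = (p_live_to_singularity * p_singularity_good
                 * post_singularity_years * value_per_year)
print(f"${value_of_life:,.0f}")  # $12,000,000,000,000 -- roughly ten trillion
```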
Facing reality
It is one thing to have your Logical Self decide that your life is worth e.g. $10B. It's another thing for your Emotional Self to feel this. And it is yet another thing for your actual self to act on it.
I don't think I can help much in dealing with the difficulties in keeping these selves aligned. However, I do have a perspective I'd like to share that I think has some chance at being helpful.
Piggy bank
Imagine that you had a piggy bank with $21.47 of coins inside. Would you care a lot about it getting stolen? Nah.
Now imagine that you had a piggy bank with 21.47 million dollars inside of it. Now you'd be pretty protective of it, huh? What if it was a billion instead of a million? A trillion instead of a billion?
How protective you are of a thing depends on how valuable the thing is. If your piggy bank exploded in value, you'd become much more protective of it. Similarly, if your life exploded in value, you should become much more protective of it as well.
Assume the hypothetical
Here's another perspective. Assume that your life expectancy is 100k years instead of 80. With such a long time left to live, how would you feel about getting into a two ton metal object moving at 70mph at 8am, driven by a guy who is still drowsy because he didn't get enough sleep last night, amongst dozens of other similar objects in the same situation?
Really try to assume that you would otherwise live 100k years. No illness is going to kill you until then. No war. No natural disaster. No existential risk. The only things that can kill you are things you have control over.
Would you feel good about getting in the car?