This is a special post for quick takes by Martin Randall.

Cryonics support is a cached thought?

Back in 2010, Yudkowsky wrote posts like Normal Cryonics, arguing that "If you can afford kids at all, you can afford to sign up your kids for cryonics, and if you don't, you are a lousy parent". Later, Yudkowsky's P(Doom) rose, and he became quieter about cryonics. In recent examples he claims that signing up for cryonics is better than immanentizing the eschaton. Valid.

I get the sense that some rationalists haven't made the update. If AI timelines are short and AI risk is high, cryonics is less attractive. It's still the correct choice under some preferences and beliefs, but I expected it to become rarer and for some people to publicly change their minds. If that happened, I missed it.

Good question!

Seems like you're right: if I run my script for calculating the costs & benefits of signing up for cryonics, but change the year for LEV (longevity escape velocity) to 2030, this indeed makes the expected value negative for people of any age. Increasing the existential risk to 40% before 2035 doesn't make the value net-positive either.

Assuming LEV happens in 2040 or 2050, does the expected value become net-positive or net-negative?

The output of the script tells the user at which age to sign up, so I'll report for which ages (and corresponding years) it's rational to sign up.

  • For LEV 2030, person is now 30 years old: Not rational to sign up at any point in time
  • For LEV 2040, person is now 30 years old: Rational to sign up in 11-15 years (i.e. age 41-45, or from 2036 to 2040, with the value of signing up being <$10k).
  • For LEV 2050, person is now 30 years old: Rational to sign up now and stay signed up until 2050, value is maximized by signing up in 13 years, when it yields ~$45k.

All of this is based on fairly conservative assumptions about how good the future will be: e.g., the value of a life-year in the future is assumed to be no greater than the value of a life-year in 2025 in a Western country, and it's assumed that while aging will be eliminated, people will still die from accidents and suicide, driving expected lifespan down to ~4k years. Additionally, I haven't changed the 5% probability of resuscitation to account for the fact that TAI might arrive soon and be fairly powerful.
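To make the shape of this concrete, here is a minimal, hypothetical sketch of this kind of expected-value calculation. It is not the script referenced above: the parameter names and numbers are illustrative placeholders, and the real calculation presumably also handles age-dependent mortality, discounting of future life-years, and more.

```python
# Minimal, hypothetical sketch of a cryonics expected-value calculation.
# Not the script referenced above; every number here is an illustrative placeholder.

ANNUAL_DEATH_RATE = 0.01     # rough yearly chance of dying before LEV (placeholder)
P_RESUSCITATION = 0.05       # probability a preserved person is ever revived
P_XRISK = 0.20               # probability of existential catastrophe before revival (placeholder)
VALUE_IF_REVIVED = 500_000   # discounted $ value of the post-revival life-years (placeholder)
ANNUAL_COST = 1_000          # membership plus amortized insurance, $/year (placeholder)

def signup_value(signup_year: int, lev_year: int) -> float:
    """Net expected $ value of signing up in `signup_year` and staying signed up
    until LEV (longevity escape velocity) arrives in `lev_year`."""
    years_covered = lev_year - signup_year
    if years_covered <= 0:
        return 0.0  # LEV arrives before coverage starts, so cryonics adds nothing
    # Cryonics only pays off if you die while covered but before LEV.
    p_die_before_lev = 1 - (1 - ANNUAL_DEATH_RATE) ** years_covered
    benefit = p_die_before_lev * (1 - P_XRISK) * P_RESUSCITATION * VALUE_IF_REVIVED
    cost = ANNUAL_COST * years_covered
    return benefit - cost

# Example usage: compare signing up now versus waiting, for two LEV dates.
for lev_year in (2030, 2050):
    print(lev_year, signup_value(2025, lev_year), signup_value(2035, lev_year))
```

Whatever the exact numbers, the qualitative effect is the one discussed above: earlier LEV or higher existential risk shrinks the benefit term while the cost of staying signed up is unchanged.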

While the object-level calculation is central, of course, I want to note that there's also a symbolic value to cryonics. (Symbolic action is tricky, and I agree with not straightforwardly taking symbolic action for the sake of the symbolism, but anyway.) If we (broadly) were more committed to Life, then maybe some preconditions for AGI researchers racing to destroy the world would be removed.

Check the comments Yudkowsky is responding to on Twitter:

Ok, I hear you, but I really want to live forever. And the way I see it is: Chances of AGI not killing us and helping us cure aging and disease: small. Chances of us curing aging and disease without AGI within our lifetime: even smaller.

And:

For every day AGI is delayed, there occurs an immense amount of pain and death that could have been prevented by AGI abundance. Anyone who unnecessarily delays AI progress has an enormous amount of blood on their hands.

Cryonics can have a symbolism of "I really want to live forever" or "every death is blood on our hands" that is very compatible with racing to AGI.

(I agree with all your disclaimers about symbolic action)

Good point... Still unsure; I suspect it would still tilt people toward not having the missing mood about AGI x-risk.

AI x-risk is high, which makes cryonics less attractive (because cryonics doesn't protect you from AI-takeover-mediated human extinction). But on the flip side, timelines are short, which makes cryonics more attractive (because one of the major risks of cryonics is that society won't persist stably enough to keep you preserved until revival is possible, and near-term AGI means that that period of time is short).

Cryonics is more likely to work given a positive AI trajectory, and less likely to work given a negative one.

I agree that it seems less likely to work, overall, than it seemed to me a few years ago.

Makes sense. Short timelines mean faster societal changes and so less stability. But I could see factoring societal instability risk into time-based risk and tech-based risk. If so, short timelines are net positive for the question "I'm going to die tomorrow, should I get frozen?".

On the other hand, if you have shorter timelines and higher P(Doom), the value of saving for retirement becomes much lower. That means that if you earn an income notably higher than your needs, and you don't otherwise have valuable things to spend money on that get you value right now, the effective cost of cryonics is much lower.

This might hold for someone who is already retired. If not, both retirement and cryonics look lower value if there are short timelines and higher P(Doom). In this model, instead of redirecting retirement to cryonics it makes more sense to redirect retirement (and cryonics) to vacation/sabbatical and other things that have value in the present.

Idk, I personally feel near maxed out on spending money to increase my short term happiness (or at least, any ways coming to mind seem like a bunch of effort, like hiring a great personal assistant), and so the only reason to care about keeping it around is saving it for future use. I would totally be spending more money on myself now if I thought it would actually improve my life

I’m not trying to say that any of this applies in your case per se. But when someone in a leadership position hires a personal assistant, their goal may not necessarily be to increase their short term happiness, even if this is a side effect. The main benefit is to reduce load on their team.

If there isn’t a clear owner for ops-adjacent stuff, people in high-performance environments will randomly pick up ad-hoc tasks that need to get done, sometimes without clearly reporting this out to anyone, which is often societally inefficient relative to their skillset and a bad allocation of bandwidth given the organization’s priorities.

A great personal assistant wouldn’t just help you get more done and focus on what matters; they would also handle various things that may currently be spilling over onto whoever is paying attention to your needs and acting to ensure they are met, without you noticing or explicitly delegating.

Oh sure, an executive assistant (i.e. a personal assistant in a work context) can be super valuable just from an impact-maximisation perspective, but generally they need to be hired by your employer, not by you in your personal capacity (unless you have a much more permissive/low-security employer than Google).

I expected it to become rarer

Only a vanishingly small number of people sign up for cryonics - I think it would be just a few thousand people, out of the entirety of humanity. Even among Less Wrong rationalists, it's never been that common or prominent a topic I think? - perhaps because most of them are relatively young, so death feels far away. 

Overall, cryonics, like radical life extension in general, is one of the many possibilities of existence that the human race has neglected via indifference. It's popular as a science fiction theme but very few people choose to live it in reality. 

Because I think the self is possibly based on quantum entanglement among neurons, I am personally skeptical of certain cryonic paradigms, especially those based on digital reconstruction rather than physical reanimation. Nonetheless, I think that in a sane society with a developed economy, cryonic suspension would be a common and normal thing by now. Instead we have our insane and tragic world where people are so beaten down by life that, e.g. the idea of making radical rejuvenation a national health research priority sounds like complete fantasy. 

I sometimes blame myself as part of the problem, in that I knew about cryonics, transhumanism, etc., 35 years ago. And I had skills, I can write, I can speak in front of a crowd - yet what difference did I ever make? I did try a few times, but whether it's because I was underresourced, drawn to too many other purposes at once, insufficiently machiavellian for the real world of backstabbing competition, or because the psychological inertia of collective indifference is genuinely hard to move, I didn't even graduate to the world of pundit-influencers with books and websites and social media followers. Instead I'm just one more name in a few forum comment sections. 

Nonetheless, the human race has in the 2020s stumbled its way to a new era of technological promise, to the point that just an hour ago, the world's richest man was telling us all, on the social network that he owns, that he plans to have his AI-powered humanoid robots accompanying human expeditions to Mars a few years from now. And more broadly speaking, AI-driven cures for everything are part of the official sales pitch for AI now, along with rapid scientific and technological progress on every front, and leisure and self-actualization for all. 

So even if I personally feel left out and my potential contributions wasted, objectively, the prospects of success for cryonics and life extension and other such dreams are probably better than they've ever been - except for that little worry that "the future doesn't need us", and that AI might develop an agenda of its own that's orthogonal to the needs of the human race. 

Thank you for asking, Martin. The quickest thing I use to get a general idea of how popular something is, is Google Trends. It looks like people search for cryonics more or less as they always have. I think the idea makes sense: the more we save, the higher the probability of restoring it better and earlier. I think we should also make a "Cryonic" copy of our whole planet, by making a digital copy, to at least back it up in this way. I wrote a lot about this recently (and about the thing I call "static place intelligence": a place of eventual all-knowing that is completely non-agentic, where we'll be the only agents).

https://trends.google.com/trends/explore?date=all&q=Cryonics&hl=en

High expectation of x-risk and having lots to work on are why I have not signed up for cryonics personally. I don't think it's a bad idea, but it has never risen up my personal stack of things worth spending tens of hours on.

Bullying Awareness Week is a Coordination Point for kids to overthrow the classroom bully.

This makes it more productive than some other awareness weeks.

Calibration is for forecasters, not for proposed theories.

If a candidate theory is valuable, it must have some chance of being true, some chance of being false, and it should be falsifiable. A falsifiable theory makes sharp predictions even though the theory itself may be wrong, so compared to a forecaster its predictions will look "overconfident" and will not be calibrated.
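A toy simulation, with made-up numbers purely for illustration, of why sharp predictions from candidate theories come out miscalibrated:

```python
import random

random.seed(0)

# Hypothetical toy model: each candidate theory predicts an outcome with 99%
# confidence. Suppose a candidate theory is actually true 30% of the time; a true
# theory's predictions come true 95% of the time, a false theory's only 20%.
N = 10_000
hits = 0
for _ in range(N):
    theory_is_true = random.random() < 0.30
    p_correct = 0.95 if theory_is_true else 0.20
    hits += random.random() < p_correct

print(f"Stated confidence: 0.99, observed hit rate: {hits / N:.2f}")
```

Under these made-up numbers the bold 99% predictions come true only ~43% of the time, so candidate theories are badly calibrated; a forecaster who simply reported the ~0.43 base rate would be well calibrated but would say nothing about which theory is right. The value of a theory comes from being sharp and falsifiable, not from being calibrated.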