I'm writing to recommend something awesome to anyone who's recently signed up for cryonics (and to the future self of anyone who's about to do so). Robin Hanson has a longstanding offer that anyone who's newly signed up for cryonics can have an hour's discussion with him on any topic, and I took him up on that last week.

I expected to have a fascinating and wide-ranging discussion on various facets of futurism. My expectations were exceeded. Even if you've been reading Overcoming Bias for a long time, talking with Robin is an order of magnitude more stimulating/persuasive/informative than reading OB or even watching him debate someone else, and I'm now reconsidering my thinking on a number of topics as a result.

So if you've recently signed up, email Robin; and if you're intending to sign up, let this be one more incentive to quit procrastinating!

Relevant links:

The LessWrong Wiki article on cryonics is a good place to start if you have a bunch of questions about the topic.

If you want to argue about whether signing up for cryonics is a good idea, two good and relatively recent threads on that subject are under the posts on A survey of anti-cryonics writing and More Cryonics Probability Estimates.

And if you are cryocrastinating (you've decided that you should sign up for cryonics, but you haven't yet), here's a LW thread about taking the first step.


I don't think the following belonged in the OP, but it's worth saying:

Why was there such a difference for me between a conversation with RH and his more public outputs? My opinion is that he's very good at pointing out specific gaps in reasoning, which is extremely productive when it's your own reasoning. But when you're reading or watching Robin's exchange with someone else, it's all too tempting to think that he's picking nits and that the other person is just failing to respond in the correct way (i.e. the exact way that you'd respond, to which you don't see a counterargument from RH).

There are argumentative devices to circumvent this problem and make oneself more persuasive to an audience, but Robin doesn't seem to employ those as much as the norm.

My experience is exactly the opposite.

Thanks for the data point. If you want to give some more detail, that might be helpful.

This certainly accords with my experience. I didn't find his posts on FOOM persuasive, but after speaking to him in person I've shifted significantly towards the idea that his side of the debate is closer to the truth.

Was it a matter of him explaining points he had made publicly in a different way, or did he provide an entirely new approach when talking with you?

Also, I know a few people who are devastatingly persuasive in a one-on-one conversation, regardless of whether they are right, who can't necessarily write or publicly debate as well as they speak in a private, relaxed context. Maybe Hanson is more charismatic in person and so you are giving him more credit?

It's not the usual kind of charisma—I didn't feel a strong need to win his approval, relative to how much I do with other smart people. It's rather that he was extremely quick to understand my arguments and point out important aspects I hadn't considered, which makes it easier for me to consider that my argument might be flawed. So that's an aptitude, but it's one better correlated with good argument than the aptitude of charisma is.

I don't think he's publicly made the argument he made with me - it feels like until I spoke to him, I couldn't see a way that his broad "outside view" predictions could translate into any specific outcome you might predict with an inside view. Now I can see how it might work.

FWIW, while I've never talked to Robin in person, my experience with talking to Eliezer was pretty similar.

The very first LessWrong meetup I ever went to (in Orange County) was attended by Yvain, Anna Salamon, and Luke (before he worked for whatever the institute is called these days). It was significantly more awesome than reading them on the blogs.

Well, while we're trading personal evaluations... when I met Yvain in person, I found him to be not quite as awesome as his writings. I suspect I come off the same way (although I have a good excuse).

I'd still really look forward to spending some time with you in a small group or alone. Maybe it is a kind-of-person thing, but I have NEVER been disappointed meeting someone in person whom I admire from the printed word. At some level, I think I am at least as fascinated trying to understand what kind of person produces ideas like that, and the in-person meetings are just chock full of information that I will never get no matter how much I read.

I strongly second this. I recently had the chance to have a drink with Robin and Katja Grace in London, and it is a candidate for the most interesting conversation I have had in my entire life.

Given that you've had 7 more years of life, with many more conversations with rationalists, would you say that conversation is still a candidate for the most interesting one you've had?


Would you feel comfortable with sharing some of the things you talked about, and/or some of the topics you're now reconsidering? I think they might be pretty interesting.

We also talked about the relative likelihood of burning the cosmic commons, what would be required for a stable singleton in the future, mangled worlds and the Born probabilities, cryonics trusts and other incentives for revival, and some particulars of his projections about an em-driven world; but the topic that I'm most reconsidering afterward is the best approach to working on existential risk.

Essentially, Robin made the case that it's much more marginally useful now to work on analyzing the potentially long tail of x-risks than to focus on one very salient scenario—kind of like the way Bruce Schneier talks about better security via building resilient structures rather than concentrating on foiling specific "Hollywood" terror scenarios.

Robin made the case that it's much more marginally useful now to work on analyzing the potentially long tail of x-risks than to focus on one very salient scenario

Seems worth its own post from him or you, IMO.

(Kneejerk response: If only we could engineer some kind of intelligence that could analyze the potentially long tail of x-risk, or could prudentially decide how to make trade offs between that and other ways of reducing x-risk, or could prudentially reconsider all the considerations that went into focusing on x-risk in the first place instead of some other focus of moral significance, or...)

Yes, one of the nice features of FAI is that success there helps immensely with all other x-risks. However, it's an open question whether creating FAI is possible before other x-risks become critical.

That is, the kneejerk response has the same template as saying, "if only we could engineer cold fusion, our other energy worries would be moot, so clearly we should devote most of the energy budget to cold fusion research". Some such arguments carry through on expected utility, while others don't; so I actually need to sit down and do my best reckoning.


Am I right in thinking this is the answer given by Bostrom, Baum, and others? i.e. something like "Research a broad range and their inter-relationships rather than focusing on one (or engaging in policy advocacy)"

That viewpoint seems very different from MIRI's. I guess in practice there's less of a gap - Bostrom's writing an AI book, and LW and MIRI people are interested in other x-risks. Nevertheless, that's a fundamental difference between MIRI and FHI or CSER.

Edit: Also, thank you for sharing, that sounds fascinating - in particular I've never come across 'mangled worlds', how interesting.

Having looked through the cryonics insurance options, I am having trouble justifying one versus a regular life insurance policy.

I think that an accidental death makes cryo insurance a loss to both the insured and the estate, as the odds of both the brain remaining intact and a timely freezing are quite low. So all you have left is the altruistic feeling of financing a cryo organization. If that's what you are after, donate explicitly.

If you get too demented or brain-damaged to handle your affairs, getting frozen is probably not a good idea anyway, since most of your personality is gone by then, and the odds of recovery are almost non-existent.

If you have a life insurance policy and become terminally ill, there are several ways to draw cash against the policy's value while you are still alive and fund your cryosuspension that way.

If you want to guard against greedy relatives (Rudi Hoffman's example), then drawing cash from your life insurance policy while still alive seems like a way to do it.

In summary, I am hard-pressed to find a probable situation where cryo insurance is preferable to a general whole-life or universal-life policy, unless you have no one but yourself to care about. What am I missing here?

After the failure of the Cryonics Society of New York (CSNY, not to be confused with this or this), due in part to their acceptance of cases whose families promised to pay in installments but later reneged (causing them to run out of money for keeping their other patients cryopreserved), the remaining cryonics organizations require ironclad assurance of payment for suspension. That's really hard to arrange if you die without a few months' notice, even if you have an insurance policy, since your beneficiaries won't have the money to give to the organization until a few weeks or months after your death (during which time you'd be on dry ice, undergoing a small but worrisome amount of degradation). Naming the organization as a beneficiary gives them 100% assurance that the suspension will be paid for, and without that they won't send out the suspension team.

(Someone correct me if I'm mistaken in this account.)

Cryonics insurance is regular life insurance. What makes it cryonics insurance is that the beneficiary is a cryonics organization. You can give instructions to your cryonics organization about what to do with excess funds (or the entire amount, if you are not preserved), and you can also give instructions about the conditions under which you should be preserved.

What makes it cryonics insurance is that the beneficiary is a cryonics organization.

Right, but my question is, why bother?

What am I missing here?

Think about who benefits from such a precommitment.

(Doesn't imply it's a scam, it's allowed to provide valuable services and to try to maximize income at the same time.)

What I'd really like is a YouTube video of Robin Hanson singing a particular Gilbert and Sullivan song. ;)

Yet everybody says I'm such a disagreeable man
And I can't think why!

Robin is an order of magnitude more stimulating/persuasive/informative than reading OB or even watching him debate someone else, and I'm now reconsidering my thinking on a number of topics as a result.

Would you let him out of the box, were he an AI?

He's not?

AI DESTROYED. (Sorry, Robin.)