When I've brought up cryonics on LessWrong [1][2], most commenters have said I'm being too pessimistic. When I brought it up yesterday at the Cambridge, MA meetup, most people thought I was too optimistic. (I think it could work, but there are enough things that could go wrong that it's ~1000:1 against, i.e. roughly a 0.1% chance of success.) What makes the groups so different on this?

[1] Brain Preservation

[2] How Likely is Cryonics to Work

1) Sample size.

2) Anti-weird biases are likely to be stronger in person.

3) People who find cryonics attractive are more likely to learn more about it and to seek out discussions of it. In a small group environment, people instead just comment on whatever is being discussed. So most LW types may be uninterested in cryonics, either because they find it facially implausible or immortality unappealing, even while most LW types who actively discuss it online find it plausible and appealing.

With regard to sample size, have people seen similar things at other meetups? (2) and (3) would make me expect yes.

First, assume that the "LessWrong.com gestalt" (hereafter "LessWrong") had a collective positive bias early on.

Then, it's reasonable to assume LessWrong downvoted early pessimistic posts.

From there, it's reasonable to assume that the initial dissenters would have largely dropped the topic due to negative reinforcement - even if they're still pessimistic, they'll have concluded that pessimistic posts will just hurt their social standing and not change many minds.

As new people join, the initial dissenters fail to rally to pessimistic posts, but the optimists still rally to optimistic posts.

The net effect is to perpetuate that initial optimistic bias.

Finally: Your local meetup doesn't come with that optimistic bias pre-installed, so it can develop its own bias! (At least, once the members realize there's no disincentive against pessimistic comments.)

Yes - more generally, this is the difference between a community norm and the surveyed opinion of members of the community. Awareness of this can separate a community into core and non-core as the early tone-setters get defensive of what they see as the fundaments of the community, and this has the danger of evaporative cooling of group beliefs.

Shmi

Probably a version of selection bias. The cryoskeptics could be unwilling to be drawn into a hostile online discussion, or even be wary of getting downvoted.

This seems like an information cascade, and hence like an epistemic tragedy if true. Can you think of things that are uniquely predicted by this theory, or an observation you could make which would falsify the theory if the observation came out in a particular way?

Shmi

Not sure what you are replying to. What information cascade? What theory? What tragedy? I'm lost.

(For reference, I upvoted you because it was an interesting insight and I was just trying to draw out more content. But since you offered three question marks to my single question mark it looks like maybe I should spell out what I meant explicitly...)

An information cascade occurs when evidence is counted twice. One person sees something and tells two other people. A fourth person hears the story from each without realizing it was the same story told second hand and counts it as having happened twice. A fifth person hears the second and fourth without realizing the backstory and counts it as happening three times. The first person hears about three events from the fifth person and decides that their original observation must happen all the time. A sixth person figures out the problem but is shouted down by the others because they've got a stake in the observation being common. This would be a tragedy, like something out of a play, having to do with knowledge. Hence an epistemic tragedy.
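To make the double-counting concrete, here is a toy sketch (the names and data structure are invented for illustration): counting every retelling as fresh evidence inflates the apparent event count, while tracing each report back to its original observer deflates it again.

```python
# Toy model of the cascade story above; all names are invented.
# "told_by" records who heard the story from whom; A saw the event firsthand.
told_by = {"B": "A", "C": "A", "D": "B", "E": "C"}

def original_observer(person: str) -> str:
    """Walk the chain of retellings back to whoever saw it firsthand."""
    while person in told_by:
        person = told_by[person]
    return person

hearers = ["B", "C", "D", "E"]
naive_count = len(hearers)                                 # 4 apparent events
true_count = len({original_observer(h) for h in hearers})  # 1 actual event
print(naive_count, true_count)  # -> 4 1
```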

If cryoskeptics are afraid of voicing real objections for fear of downvoting, and thereby deprive people invested in (or pondering investment in) a cryo-policy of that signal (presuming the skeptics are a source of signal rather than noise), then the cryo-investors should feel regret at this, because they'd be losing the opportunity to escape a costly delusion. The cryo-investors' own voting would be causing them to live in a kind of bubble, and it would imply that the aggregate voting policies of the forum are harming the quality of content available on the site and making it a magnifier of confusion rather than a condenser for truth.

This would mean that lesswrong was just fundamentally broken on a subject many people consider to be really important, and it would represent a severe indictment of the entire "lesswrong" project. If it were true, it would suggest that perhaps lesswrong should be fixed... or abandoned as a lost cause... or something? Like maybe voting patterns could be examined and automated weighting systems could fix things... or whatever.

However, a distinct hypothesis (made up on the spot for the sake of example) might be that cryoskeptics are just people who haven't thought about this stuff very much, who will tend to raise the same old objections that normally well up from specific ignorance plus background common knowledge, leaving the "advocates" to (for example) trot out the application of the Arrhenius equation all over again, educating one more person on one more component in the larger intellectual structure.
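(For anyone who hasn't seen that argument: the Arrhenius equation says reaction rates fall exponentially as temperature drops, which is why decomposition chemistry effectively stops in liquid nitrogen. A minimal sketch, using an assumed, merely typical activation energy for illustration:)

```python
import math

# Arrhenius equation: k = A * exp(-Ea / (R * T)). Comparing rates at two
# temperatures, the pre-exponential factor A cancels out. The activation
# energy below is an assumed, typical value chosen only for illustration.
R = 8.314        # gas constant, J/(mol*K)
Ea = 50_000.0    # assumed activation energy, J/mol
T_body = 310.0   # roughly body temperature, K
T_ln2 = 77.0     # liquid nitrogen, K

ratio = math.exp(-Ea / R * (1 / T_body - 1 / T_ln2))
print(f"reactions run ~{ratio:.0e}x faster at {T_body:.0f} K than at {T_ln2:.0f} K")
```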

Under this hypothesis, the difference between meetups and online discussion might be (again making something up off the top of my head) that in a meetup people feel more comfy expressing ignorance so that it can be rectified because fewer people will hear about it, it won't become part of the history of the internet, and the people who hear will be able to see their face and do friendly-monkey things based on non-verbal channels unavailable to us in this medium. If this were true, it would suggest that lesswrong voting habits aren't nearly as bad: they aren't really causing much of an information cascade, but are instead detecting and promoting high quality comments. If this second hypothesis were true, then any real failure would perhaps(?) be a failure of active content generation and promotion aimed at filling in the predictable gaps in knowledge, so that new people don't have to pipe up and risk looking unknowledgeable in order to gain that knowledge.

So the thing I was asking was: Given the implicit indictment of LW if an information cascade in voting is causing selection bias in content generation, can you think of an alternative hypothesis and a way to test between the theories, so we can figure out whether there is something worth fixing, and if so what? Like maybe the people who express skepticism at meetups have less general scientific background and tend to be emotionally sensitive, while people who express skepticism online have more education, more brashness, and are saying what they say for other idiosyncratic reasons.

Basically I was just asking: "What else does your interesting and potentially important hypothesis predict? Can it be falsified? How? Say more!" :-)

Spawning new groups may help. For example, the Center for Modern Rationality is unlikely to be quite so strongly seeded by transhumanist subcultural tropes, unless someone deliberately decides doing that would be a good idea.

Shmi

I see what you mean now, thanks.

However, a distinct hypothesis (made up on the spot for the sake of example) might be that cryoskeptics are just people who haven't thought about this stuff very much, who will tend to raise the same old objections that normally well up from specific ignorance plus background common knowledge, leaving the "advocates" to (for example) trot out the application of the Arrhenius equation all over again, educating one more person on one more component in the larger intellectual structure.

Under this hypothesis, the difference between meetups and online discussion might be (again making something up off the top of my head) that in a meetup people feel more comfy expressing ignorance so that it can be rectified because fewer people will hear about it, it won't become part of the history of the internet, and the people who hear will be able to see their face and do friendly-monkey things based on non-verbal channels unavailable to us in this medium.

This still looks like the same selection bias to me, cryoskeptics not speaking up online, though potentially for a different reason. I suppose that a well-constructed poll can help clarify the issue somewhat (people participating in a poll are subject to some selection bias, but likely a different one).

1000:1 was considered over-optimistic? Oh come on. The odds of whole-brain digitization (a precursor to emulation, and good enough that it can be essentially considered a win on the survival front) being developed in 20 years are better than that, and that'd greatly attenuate the main failure mode of cryonics - the organizations failing before WBE is developed.

Maybe they were thinking of the odds of being brought back to life in that same cluster of matter?

People motivated to comment on LW cryonics posts are more likely to be cryonics optimists. People motivated to vote on LW cryonics posts are probably representative of LW as a whole. So you'll find a lot of upvoted pessimistic posts, but the comments overrepresent how optimistic LW is. In person, there's only one conversation going on at a time, and you're more likely to get a representative sample of opinions, since a broader swath of LW has an incentive to weigh in.

"Pessimism" or "optimism" is a wrong way of describing the disagreement, there are important details missing in such broad a categorization, and if we don't agree on methodology of making the estimates, comparing the resulting numbers is useless. For example, How Likely is Cryonics to Work employed a mistaken independence assumption that makes the quantitative conclusion of the analysis meaningless (even as some of the intermediate steps are more informative).

"How Likely is Cryonics to Work" employed a mistaken independence assumption

It didn't:

All my probability guesses are conditional on everything above them not happening. For example, society collapsing and my cryonics organization going out of business are very much not independent. So the probability assigned to the latter is the chance that society won't collapse, but my organization goes out of business anyway.

There was some discussion of this on the post.
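Concretely, chaining the estimates that way multiplies per-step conditional survival probabilities, so no independence assumption is required. A minimal sketch with made-up numbers (these are not the post's actual figures):

```python
# Combine a chain of failure estimates where each probability is conditional
# on none of the earlier failures having happened. All numbers are invented
# for illustration only.
conditional_failure_probs = [
    0.30,  # P(die in a way that prevents preservation)
    0.50,  # P(preservation loses the mind | preserved in time)
    0.40,  # P(organization fails first | preservation worked)
    0.50,  # P(revival never happens | organization survives)
]

p_success = 1.0
for p_fail in conditional_failure_probs:
    p_success *= 1.0 - p_fail  # survive this step, given all earlier steps

print(f"P(success) = {p_success:.3f}")  # 0.7 * 0.5 * 0.6 * 0.5 = 0.105
```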

If you asked me about cryonics in person I'd tell you that I agree with your pessimism. But only because you'd ask me. Here on Less Wrong however I didn't do so until now because I just don't care at all about that topic. I don't care whether people sign up for cryonics or not.

Which doesn't explain why all those people who care about it are missing at meetups. The data from the latest survey suggest that more people care about cryonics than not:

Only 49 people (4.5%) have never considered cryonics or don't know what it is. 388 (35.6%) of the remainder reject it, 583 (53.5%) are considering it, and 47 (4.3%) are already signed up for it.

But this doesn't mean that those who consider cryonics are optimistic about it. Maybe they are just more optimistic about it than to be rotting six feet under.

Maybe they are just more optimistic about it than to be rotting six feet under.

My feelings exactly.

More specifically:

Average person cryonically frozen today will be successfully revived: [mean] 21.1 [percent], [quartiles] (1, 10, 30)

Compare people (1) in person, (2) on the post that has a Fermi calculation, and (3) on the post that does not have a Fermi calculation. Which people are more willing to engage with the structure or details of the calculation? Are they more optimistic or pessimistic?

Incidentally, I think the item "some law is passed that prohibits cryonics (before you're even dead)" is irrelevant, and there are several other items that are structurally similar. It is relevant to predicting the future, to whether you are actually preserved, but it is not relevant to the value of the option or to your decision to take the option. Only if it occurs pretty much simultaneously with your death does it ruin your plans. Otherwise, it just takes away the future option of getting frozen in later years. If you have a fund of money planned for freezing and the option is removed, you haven't lost the money. It's more complicated if you use insurance rather than a lump sum to fund the freezing, but to first order, you lose nothing. Aside from the freezing cost, there are the annual fees you paid up until the law passed; but those were not wasted: they were unexercised insurance, in case you died that year.
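A minimal sketch of that first-order accounting, with invented numbers (none of these figures come from the post):

```python
# Invented numbers to illustrate the first-order accounting above.
annual_premium = 600.0   # hypothetical yearly premium, if insurance-funded
years_before_ban = 10    # the ban passes this many years before you die

# Lump-sum funding: the ban removes the option, but an earmarked fund is
# simply still yours, so the first-order loss is zero.
loss_lump_sum = 0.0

# Insurance funding: premiums paid before the ban bought real coverage
# against dying in each of those years (unexercised insurance), so to
# first order nothing is lost there either.
premiums_paid = annual_premium * years_before_ban
loss_insurance_first_order = 0.0

print(f"lump sum lost: {loss_lump_sum}, "
      f"premiums paid (not wasted): {premiums_paid}, "
      f"first-order insurance loss: {loss_insurance_first_order}")
```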

Do you have any information about the relative level of detail both groups went into in their arguments? There is little to go on here.

I'm an atheist, and believe that my mind can be seen as simply "software" running on my brain. However that "software" also believes that "I" is not just the software, but the brain and perhaps even the rest of the body.

If someone cloned my body atom for atom, "I" feel like it wouldn't really be me, just an illusion fooling outside observers. Same for mind uploads.

Do any other atheists feel the same way?

As to cryonics, that's obviously not quite the same as a mind upload, but it feels like a greyish area, if the original cells are destroyed.

Another thing: if my world is just a simulation (even the NYT wrote about this theory), which I have no way of knowing, then cloning myself and killing the original is still suicide, with a very negative utility.

What do others think? I know that Kurzweil can't wait to upload his mind, and Goertzel wants multiple copies of himself to hedge his bets.

Larks

If someone cloned my body atom for atom, "I" feel like it wouldn't really be me

It would be you as much as you are the you of a second ago.

I had a hidden ugh-field about that one. It took quite a few repetitions of the Litany of Gendlin to grok it.

So, if I could copy you, then kill your old body (painlessly) and give the new body $20, would you take the offer? yes/no?

I reserve some uncertainty that I'm fundamentally wrong about how physics and anthropics work, so I'd treat the question the same as "how much money would I have to pay you to accept a ~10% chance of instant painless death"?

In other words, $20 definitely ain't going to cut it, but a hundred million would. (I have a hard time estimating exactly where the cutoff point would be.)

I'm confident, but not that confident.

So, if I could copy you, then kill your old body (painlessly) and give the new body $20, would you take the offer? yes/no?

Depends. Is the new body going to be here or 'up' in the ship? And is there a beam of some kind involved?

[anonymous]

Do any other atheists feel the same way?

"Look at any photograph or work of art. If you could duplicate exactly the first tiny dot of color, and then the next and the next, you would end with a perfect copy of the whole, indistinguishable from the original in every way, including the so-called 'moral value' of the art itself. Nothing can transcend its smallest elements" - CEO Nwabudike Morgan, "The Ethics of Greed", Sid Meier's Alpha Centauri

[anonymous]

If that hasn't yet been brought up in a rationality quotes thread, it really should be.

It hasn't been, but I plan to post a whole slew of Alpha Centauri quotes in next month's thread. There are so many good ones. I just started playing it again.

[anonymous]

I briefly thought that way, thought about it more, and realized that that view of identity was incoherent. There are lots of thought experiments you can pose in which that picture of identity produces ridiculous, unphysical, or contradictory results. I decided it was simpler to conclude that it was the information, and not the meat, that dictated who I was. So long as the computational function being enacted is equivalent, it's me. Period. No ifs, ands, or buts.

If someone cloned my body atom for atom, "I" feel like it wouldn't really be me, just an illusion fooling outside observers. Same for mind uploads.

Do any other atheists feel the same way?

Yes, many do. A part of me does. However I'm pretty sure that part of me is wrong (i.e. falling for an intuitive trap) because it doesn't make sense with my other, more powerful intuitions of identity.

For example, there is the manner in which I anticipate my decisions today impacting my actions tomorrow. This feels identity-critical, yet the effect they have would not be any different on a materially continuous future self than on a cloned or simulated future self.

As to cryonics, that's obviously not quite the same as a mind upload, but it feels like a greyish area, if the original cells are destroyed.

The cells might be repaired instead of being destroyed and replaced. It depends on what is ultimately feasible / comes soonest in the tech tree. Many cryonicists have expressed a preference for this, some saying that uploading has equal value to death for them.

Also if we reach the point of perfect brain preservation in your lifetime it could be implanted into a cloned body (perhaps a patchwork of printed organs) without requiring repairs. This would be the least death-like version of cryonics short of actually keeping the entire body from experiencing damage.

Note that some cell loss and replacement is going on already in the ordinary course of biology. Presumably one of the future enhancements available would be to make your brain more solid-state so that you wouldn't be "dying and getting replaced" every few months.

Another thing: if my world is just a simulation (even the NYT wrote about this theory), which I have no way of knowing, then cloning myself and killing the original is still suicide, with a very negative utility.

I'm not sure I follow. If the world is a simulation, there are probably all kinds of copy-paste relationships between your past and future self-moments, this would just be one more to add to the pile.

However it is a good point that if you believe your identity is conserved in the original, and you want to survive and don't value the clone's life above your own, you should precommit not to kill the original if you should ever happen to wake up as the clone (you should kill yourself as the clone instead if it comes up as an either/or option).

But at the same time as you are anticipating this decision, you would be rejecting the notion that the clone is going to be really you, and the clone would also reject that it is really you.

I'm an atheist, and believe that my mind can be seen as simply "software" running on my brain. However that "software" also believes that "I" is not just the software, but the brain and perhaps even the rest of the body.

And also, among other things, software outside your brain (including in other brains). My brain might be the main part of my identity, but not its only part: other parts of it are in the rest of my body, in other people's brains, in my wardrobe, in my laptop's hard disk, in Facebook's servers, in my university's database, in my wallet, etc. etc. etc.

Many of those things cryonics couldn't preserve.

"What makes the groups so different on this?"

First thing that comes to mind: Cambridge, MA is a city with one of the highest average IQs in the world.