
I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription

32 Post author: ChrisHallquist 11 January 2014 10:39AM

Background:

On the most recent LessWrong readership survey, I assigned a probability of 0.30 on the cryonics question. I had previously been persuaded to sign up for cryonics by reading the sequences, but this thread and particularly this comment lowered my estimate of the chances of cryonics working considerably. Also relevant from the same thread was ciphergoth's comment:

By and large cryonics critics don't make clear exactly what part of the cryonics argument they mean to target, so it's hard to say exactly whether it covers an area of their expertise, but it's at least plausible to read them as asserting that cryopreserved people are information-theoretically dead, which is not guesswork about future technology and would fall under their area of expertise.

Based on this, I think there's a substantial chance that there's information out there that would convince me that the folks who dismiss cryonics as pseudoscience are essentially correct, that the right answer to the survey question was epsilon. I've seen what seem like convincing objections to cryonics, and it seems possible that an expanded version of those arguments, with full references and replies to pro-cryonics arguments, would convince me. Or someone could just go to the trouble of showing that a large majority of cryobiologists really do think cryopreserved people are information-theoretically dead.

However, it's not clear to me how worthwhile it is to spend my time seeking out such information. It seems coming up with decisive information would be hard, especially since e.g. ciphergoth has put a lot of energy into trying to figure out what the experts think about cryonics and has come away without a clear answer. And part of the reason I signed up for cryonics in the first place is that it doesn't cost me much: the largest component is the life insurance for funding, only $50 / month.

So I've decided to put a bounty on being persuaded to cancel my cryonics subscription. If no one succeeds in convincing me, it costs me nothing, and if someone does succeed in convincing me the cost is less than the cost of being signed up for cryonics for a year. And yes, I'm aware that providing one-sided financial incentives like this requires me to take the fact that I've done this into account when evaluating anti-cryonics arguments, and apply extra scrutiny to them.

Note that while there are several issues that ultimately go into whether you should sign up for cryonics (the neuroscience / evaluation of current technology, the estimated probability of a "good" future, various philosophical issues), I anticipate the greatest chance of being persuaded by scientific arguments. In particular, I find questions about personal identity and the consciousness of uploads made from preserved brains confusing, but I think there are very few people in the world, if any, who are likely to have much chance of getting me un-confused about those issues. The offer is blind to the exact nature of the arguments given, but I mostly foresee being persuaded by the neuroscience arguments.

And of course, I'm happy to listen to people tell me why the anti-cryonics arguments are wrong and I should stay signed up for cryonics. There's just no prize for doing so.

Comments (181)

Comment author: V_V 11 January 2014 04:22:17PM *  29 points [-]

Cryonics success is a highly conjunctive event, depending on a number of different, roughly independent events all happening.

Consider this list:

  • The cryopreservation process as performed by current cryo companies, when executed perfectly, preserves enough information to reconstruct your personal identity. Neurobiologists and cryobiologists generally believe this is improbable, for the reasons explained in the links you cited.
  • Cryocompanies actually implement the cryopreservation process substantially as advertised, without botching or faking it, or generally behaving incompetently. I think there is a significant (>= 50%) probability that they don't: there have been anecdotal allegations of misbehavior, at least one company (the Cryonics Institute) has policies that betray gross incompetence or disregard for the success of the procedure (such as keeping certain cryopatients on dry ice for two weeks), and more generally, since cryocompanies operate without public oversight and without any means to assess the quality of their work, they have every incentive to hide mistakes, take cost-saving shortcuts, use sub-par materials, equipment, and unqualified staff, or even outright defraud you.

  • Assuming that the process has actually preserved the relevant information, technology to recover it and revive you in some way must be developed. Guessing about future technology is difficult. Historically, predicted technological advances that seemed quite obvious at some point (AGI, nuclear fusion power, space colonization, or even flying cars and jetpacks) failed to materialize, while actual technological improvements were often not widely predicted many years in advance (personal computers, cellphones, the Internet, etc.). The probability that technology many years from now goes along a trajectory we can predict is low.

  • Assuming that the tech is eventually developed, it must be sufficiently cheap, and future people must have an incentive to use it to revive you. It's unclear what such an incentive could be. Revival of a few people for scientific purposes, even at a considerable cost, seems plausible, but mass revival of many thousands of frozen primitives?

  • Your cryocompany must not suffer financial failure, or some other significant local disruption, before the tech becomes available and economically affordable. Very few organizations survive more than one century, and those which do, often radically alter their mission. Even worse, it is plausible that before revival tech becomes available, radical life extension becomes available, and therefore people stop signing up for cryonics. Cryocompanies might be required to go on for many decades or centuries without new customers. It's unclear that they could remain financially viable and motivated in this condition. The further in the future revival tech becomes available, the lower the chances that your cryocompany will still exist.

  • Regional or planetary disasters, either natural (earthquake, flood, hurricane, volcanic eruption, asteroid strike, etc.) or human-made (war, economic crisis, demographic crisis due to environmental collapse, etc.) must not disrupt your preservation. Some of these disasters are exceptional; others hit with a certain regularity over the course of a few centuries. Again, the further in the future revival tech becomes available, the higher the chances that a disaster will destroy your frozen remains before then.

You can play with assigning probabilities to these events and multiplying them. I don't recommend putting too much trust in any such estimate, since it is easy to fool yourself into a sense of false precision while picking numbers that suit whatever you already wanted to believe.
But the takeaway point is that in order for cryonics to succeed, many things have to happen or be true in succession, and the failure of just one of them would make cryonics ultimately fail at reviving you. Therefore, I think, cryonics success is so improbable that it is not worth the cost.
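
For readers who want to play with the arithmetic, here is a minimal sketch of that multiplication. The individual probabilities are made-up placeholders, not estimates endorsed by anyone in this thread; the point is only how quickly a conjunction of plausible-looking steps shrinks.

```python
# Toy illustration of the conjunctive-probability argument above.
# All numbers are placeholders for illustration, not anyone's actual estimates.
steps = {
    "preservation retains identity-relevant information": 0.3,
    "provider executes the procedure competently":        0.5,
    "revival technology is eventually developed":         0.3,
    "future people are willing and able to revive you":   0.5,
    "provider survives until revival is affordable":      0.3,
    "no disaster destroys the preserved remains":         0.8,
}

p_success = 1.0
for event, p in steps.items():
    p_success *= p

print(f"Joint probability of success: {p_success:.4f}")  # 0.0054 with these numbers
```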

Comment author: gothgirl420666 11 January 2014 07:01:19PM 22 points [-]

You forgot "You will die in a way that keeps your brain intact and allows you to be cryopreserved".

Comment author: Mark_Friedenbach 11 January 2014 07:11:48PM *  17 points [-]

"... by an expert team with specialized equipment within hours (minutes?) of your death."

Comment author: jaibot 13 January 2014 01:52:08PM *  2 points [-]

"...a death which left you with a functional-enough circulatory system for cryoprotectants to get to your brain, didn't involve major cranial trauma, and didn't leave you exposed to extreme heat or other conditions which could irretrievably destroy large amounts of brain information. Also the 'expert' team, which probably consists of hobbyists or technicians who have done this at best a few times and with informal training, does everything right."

(This is not meant as a knock against the expert teams in question, but against civilization for not making an effort to get something better together. The people involved seem to be doing the best they can with the resources they have.)

Comment author: khafra 13 January 2014 08:07:49PM 5 points [-]

...Which pretty much rules out anything but death from chronic disease; which mostly happens when you get quite old; which means funding your cryo with term insurance is useless and you need to spring for the much more expensive whole life.

Comment author: gjm 11 January 2014 08:15:01PM 6 points [-]

(My version of) the above is essentially my reason for thinking cryonics is unlikely to have much value.

There's a slightly subtle point in this area that I think often gets missed. The relevant question is not "how likely is it that cryonics will work?" but "how likely is it that cryonics will both work and be needed?". A substantial amount of the probability that cryonics does something useful, I think, comes from scenarios where there's huge technological progress within the next century or thereabouts (because if it takes longer then there's much less chance that the cryonics companies are still around and haven't lost their patients in accidents, wars, etc.) -- but conditional on that it's quite likely that the huge technological progress actually happens fast enough that someone reasonably young (like Chris) ends up getting magical life extension without needing to die and be revived first.

So the window within which there's value in signing up for cryonics is where huge progress happens soon but not too soon. You're betting on an upper as well as a lower bound to the rate of progress.

Comment author: CarlShulman 11 January 2014 09:20:58PM *  10 points [-]

There's a slightly subtle point in this area that I think often gets missed.

I have seen a number of people make (and withdraw) this point, but it doesn't make sense, since both the costs and benefits change (you stop buying life insurance when you no longer need it, so costs decline in the same ballpark as benefits).

Contrast with the following question:

"Why buy fire insurance for 2014, if in 2075 anti-fire technology will be so advanced that fire losses are negligible?"

You pay for fire insurance this year to guard against the chance of fire this year. If fire risk goes down, the price of fire insurance goes down too, and you can cancel your insurance at will.

Comment author: NoSuchPlace 11 January 2014 11:43:46PM *  2 points [-]

I don't think that this is meant as a complete counter-argument against cryonics, but rather a point which needs to be considered when calculating the expected benefit of cryonics. For a very hypothetical example (which doesn't reflect my beliefs) where this sort of consideration makes a big difference:

Say I'm young and healthy, so that I can be 90% confident to still be alive in 40 years time and I also believe that immortality and reanimation will become available at roughly the same time. Then the expected benefit of signing up for cryonics, all else being equal, would be about 10 times lower if I expected the relevant technologies to go online either very soon (next 40 years) or very late (longer than I would expect cryonics companies to last) than if I expected them to go online some time after I very likely died but before cryonics companies disappeared.

Edit: Fixed silly typo.

Comment author: CarlShulman 12 January 2014 01:31:53AM *  9 points [-]

That would make sense if you were doing something like buying a lifetime cryonics subscription upfront that could not be refunded even in part. But it doesn't make sense with actual insurance, where you stop buying it if is no longer useful, so costs are matched to benefits.

  • Life insurance, and cryonics membership fees, are paid on an annual basis
  • The price of life insurance is set largely based on your annual risk of death: if your risk of death is low (young, healthy, etc) then the cost of coverage will be low; if your risk of death is high the cost will be high
  • You can terminate both the life insurance and the cryonics membership whenever you choose, ending coverage
  • If you die in a year before 'immortality' becomes available, then it does not help you

So, in your scenario:

  • You have a 10% chance of dying before 40 years have passed
  • During the first 40 years you pay on the order of 10% of the cost of lifetime cryonics coverage (higher because of membership fees not being scaled to mortality risk)
  • After 40 years 'immortality' becomes available, so you cancel your cryonics membership and insurance after only paying for life insurance priced for a 10% risk of death
  • In this world the potential benefits are cut by a factor of 10, but so are the costs (roughly); so the cost-benefit ratio does not change by a factor of 10
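
To make the arithmetic concrete, here is a toy version of the scenario above. The dollar figures are placeholders of my own, not actual cryonics or insurance prices; the point is only that when costs are paid annually, they shrink along with the benefits.

```python
# Toy version of the scenario above: 10% chance of dying before 'immortality'
# arrives, after which you cancel coverage. All figures are placeholders.
p_need          = 0.10        # chance cryonics is ever actually needed
value_if_needed = 1_000_000   # placeholder value of a successful revival
lifetime_cost   = 100_000     # placeholder cost of lifetime coverage

# Non-refundable upfront lifetime purchase: full cost, only 10% of the benefit.
ratio_upfront = (p_need * value_if_needed) / lifetime_cost

# Annual insurance + membership: you stop paying once coverage is useless,
# so expected costs also scale (roughly) with the chance of needing it.
ratio_annual = (p_need * value_if_needed) / (p_need * lifetime_cost)

print(ratio_upfront)  # 1.0  -- benefit/cost looks 10x worse
print(ratio_annual)   # 10.0 -- benefit/cost roughly unchanged
```
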
Comment author: NoSuchPlace 12 January 2014 02:31:19AM 7 points [-]

True. While the effect would still exist due to front-loading it would be smaller than I assumed . Thank you for pointing this out to me.

Comment author: private_messaging 12 January 2014 10:56:55AM *  3 points [-]

Except people usually compare the spending on the insurance, which takes the low probability of need into account, to the benefits of cryonics, which are calculated without taking the probability of need into account.

The issue is that it is not "cryonics or nothing". There's many possible actions. For example you can put money or time into better healthcare, to have a better chance of surviving until better brain preservation (at which point you may re-decide and sign up for it).

The probability of cryonics actually working is, frankly, negligible - you cannot expect people to do something like this right without any testing, even if the general approach is right and it is workable in principle* (especially not in the alternative universe where people are crazy and you're one of the very few sane ones) - and it is easily outweighed even by minor improvements in your general health. Go subscribe to a gym: for a young person offering $500 for changing his mind, that'll probably blow cryonics out of the water by orders of magnitude, cost-benefit wise. Already subscribed to a gym? Work on other personal risks.

  * I'm assuming that cryonics proponents do agree that some level of damage - cryonics too late, for example - would result in information loss that likely cannot be recovered even in principle.

Comment author: Adele_L 12 January 2014 12:53:14AM 0 points [-]

When immortality is at stake, a 91% chance is much much better than a 90% chance.

Comment author: private_messaging 12 January 2014 11:04:51AM *  1 point [-]

Not if that 1% (which seems way over-optimistic to me) is more expensive than other ways to gain 1%, such as by spending money or time on better health. Really, you guys are way over-awed by the multiplication of made-up probabilities by made-up benefits, forgetting that all you did was make an utterly lopsided, extremely biased pros and cons list, which is a far cry from actually finding the optimum action.

Comment author: Dentin 12 January 2014 02:26:17PM 2 points [-]

I signed up for cryonics precisely because I'm effectively out of lower cost options, and most of the other cryonicists are in a similar situation.

Comment author: private_messaging 12 January 2014 09:33:32PM 1 point [-]

I wonder how good of an idea is a yearly full body MRI for early cancer detection...

Comment author: CellBioGuy 13 January 2014 12:53:41AM 1 point [-]

There are those that argue that it's more likely to find something benign you've always had and that wouldn't hurt you but that you never knew about - seeing as we all have weird things in us - leading to unnecessary treatments which have risks.

Comment author: private_messaging 13 January 2014 08:10:56PM 4 points [-]

What about growing weird things?

Here we very often use ultrasound (and the ultrasound is done by the medical doctor rather than by a technician); it finds weird things very well, and the solution is simply to follow up later and see if it's growing.

Comment author: bogus 13 January 2014 03:07:39AM 0 points [-]

There are those that argue that it's more likely to find something benign you've always had

This can only decrease the amount of useful information you'd get from the MRI, though - it can't convert a benefit into a cost. After all, if the MRI doesn't show more than the expected amount of weirdness, you should avoid costly treatments.

Comment author: [deleted] 12 January 2014 10:19:06AM 1 point [-]

but after cryonics companies disappeared.

ITYM “before”.

Comment author: Zaine 12 January 2014 12:25:23AM *  2 points [-]

To keep the information all in one place, I'll reply here.

Proof that cryogenic preservation is possible exists in tardigrades - also called waterbears - which can reanimate from temperatures as low as 0.15 K, and which have sufficient neurophysiological complexity to enable analysis of neuronal structural damage.

We don't know if the identity of a given waterbear pre-cryobiosis is preserved post-reanimation. For that we'd need a more complex organism. However, the waterbear is idiosyncratic in its capacity for preservation; while it proves the possibility of cryogenic preservation exists, we ourselves do not have the traits of the waterbear that facilitate its capacity for preservation.

In the human brain there are billions of neurons and trillions of synapses - the map of which neurones connect to which other neurones we call the connectome: this informs who you are. According to our current theoretical and practical understanding of how memories work, if synapses degrade even the slightest amount your connectome will change dramatically, and will thus represent a different person - perhaps even a lesser human (fewer memories, etcetera).

Now, let's assume uploading becomes commonplace and you mainly care about preserving your genetic self rather than your developed self (you without most of your memories and different thought processes vs. the person you've endeavoured to become), so any synaptic degradation of subsistence brain areas becomes irrelevant. What will the computer upload? Into what kind of person will your synapses reorganise? Even assuming they will reorganise might ask too much of the hypothetical.

Ask yourself who - or what - you would like to cryopreserve; the more particular your answer, the more science is needed to accommodate the possibility.

Comment author: Mark_Friedenbach 12 January 2014 06:54:25AM 1 point [-]

We don't know if the identity of a given waterbear pre-cryobiosis is preserved post-reanimation. For that we'd need a more complex organism.

How would you design that experiment? I would think all you'd need is a better understanding of what identity is. But maybe we mean different things by identity.

Comment author: Zaine 12 January 2014 07:50:44AM *  0 points [-]

We'd need to have a means of differentiating the subject waterbear's behaviour from other waterbears; while not exhaustive, classically conditioning a modified reflexive reaction to stimuli (desensitisation, sensitisation) or inducing LTP or LTD on a synapse, then testing whether the adaptations were retained post-reanimation, would be a starting point.

The problem comes when you try to extrapolate success in the above experiment to mean potential for more complex organisms to survive the same procedure given x. Ideally you would image all of the subject's synapses pre-freeze or pre-cryobiosis (depending on what x turns out to be), then image them again post-reanimation, and have a program search for discrepancies. Unfortunately, the closest we are to whole-brain imaging is neuronal fluorescence imaging, which doesn't light up every synapse. Perhaps it might if we used transcranial DC or magnetic stimulation to activate every cell in the brain; doing so may explode a bunch of cells, too. I've just about bent over the conjecture tree by this point.

Comment author: Mark_Friedenbach 12 January 2014 08:27:09AM 0 points [-]

Does the waterbear experience verification and then wake up again after being thawed, or does subjective experience terminate with vitrification - subjective experience of death / oblivion - and a new waterbear with identical memories begin living?

Comment author: Zaine 13 January 2014 12:05:04AM *  0 points [-]

We need to stop and (biologically) define life and death for a moment. A human can be cryogenically frozen before or after their brain shuts down; in either case, their metabolism will cease all function. This is typically a criterion of death. However if, when reanimated, the human carries on as they would from a wee kip, does this mean they have begun a new life? resumed their old life after a sojourn to the Underworld?

You see the quandary our scenario puts to this definition of life, for the waterbear does the exact above. They will suspend their metabolism, which can be considered death, reanimate when harsh environmental conditions subside, and go about their waterbearing ways. Again, do the waterbears live a subset of multiple lives within the set of one life? Quite confusing to think about, yes?

Now let's redefine life.

A waterbear ceases all metabolic activity, resumes it, then lumbers away. In sleep, one's state pre- and post-sleep will differ; one wakes up with changed neuronal connections, yet considers themselves the same person - or not, but let's presume they do. Take, then, the scenario in which one's state pre- and post-sleep does not differ; indeed, neurophysiologically speaking, it appears they've merely paused then recommenced their brain's processes, just as the time 1:31:00 follows 1:30:59.

This suggests that biological life depends not on metabolic function, but on the presence of an organised system of (metabolic) processes. If the system maintains a pristine state, then it matters not how much time has passed since it last operated; the life of the system's organism will end only when that system becomes so corrupted as to lose the capacity for function. Sufficient corruption might amount to a single degraded synapse; it might amount to a missing ganglion. Thus cryonics' knottiness.

As to whether they experience verification, you'll have to query a waterbear yourself. More seriously, for any questions on waterbear experience I refer you to a waterbear, or a waterbear philosopher. As to whether and to what degree they experience sensation when undergoing cryptobiosis, we can test to find out, but any results will be interpreted through layers of extrapolation: "Ganglion A was observed inhibiting Ganglion B via neurotransmitter D binding postsynaptic alpha receptors upon tickling the waterbear's belly; based on the conclusions of Researchers et al., this suggests the waterbear experienced either mildly positive or extremely negative sensation."

Comment author: Benquo 13 January 2014 07:13:07PM 0 points [-]

I think the question was a practical one and "verification" should have been "vitrification."

Comment author: Zaine 13 January 2014 07:59:30PM *  0 points [-]

I considered that, but the words seemed too different to result from a typo; I'm interested to learn the fact of the matter.

I've edited the grandparent to accommodate your interpretation.

Comment author: adbge 12 January 2014 06:36:52PM *  0 points [-]

Going under anesthesia is a similar discontinuity in subjective experience, along with sleep, situations where people are technically dead for a few moments and then brought back to life, coma patients, and so on.

I don't personally regard any of these as the death of one person followed by the resurrection of a new person with identical memories, so I also reject the sort of reasoning that says cryogenic resurrection, mind uploading, and Star Trek-style transportation is death.

Eliezer has a post here about similar concerns. It's perhaps of interest to note that the PhilPapers survey revealed a fairly even split on the teletransporter problem among philosophers, with the breakdown being 36.2%/32.7%/31.1% as survive/other/die respectively.

ETA: Ah, nevermind, I see you've already considered this.

Comment author: Mark_Friedenbach 12 January 2014 07:33:58PM 2 points [-]

Yes, that post still reflects my views. I should point out again that sleep and many forms of anesthesia don't stop operation of the brain, they just halt the creation of new memories so people don't remember. That's why, for example, some surgery patients end up with PTSD from waking up on the table, even if they don't remember.

Other cases like temporary (clinical) death and revival also aren't useful comparisons. Even if the body is dying, the heart and breathing stops, etc., there are still neural computations going on from which identity is derived. The irrecoverable disassociation of the particle interactions underlying consciousness probably takes a while - hours or more, unless there is violent physical damage to the brain. Eventually the brain state fully reverts to random interactions and identity is destroyed, but clinical revival becomes impossible well before then.

Cryonics is more of a weird edge case ... we don't know enough now to say with any certainty whether cryonics patients have crossed that red line or not with respect to destruction of identity.

Comment author: ChrisHallquist 13 January 2014 02:46:31AM *  1 point [-]

Most of these issues I was already aware of, though I did have a brief "holy crap" moment when I read this parenthetical statement:

such as keeping certain cryopatients on dry ice for two weeks

But following the links to the explanation, I don't think this considerably impacts my estimate of CI's competence / trustworthiness. This specific issue only affects people who didn't sign up for cryonics in advance, comes with an understandable (if not correct) rationale, and comes with an acknowledgement that it's less likely to work than the approach they use for people who were signed up for cryonics before their deaths.

Their position may not be entirely rational, but I didn't previously have any illusions about cryonics organizations being entirely rational (it seems to me cryonics literature has too much emphasis on the possibility of reviving the original meat as opposed to uploading.)

Comment author: V_V 13 January 2014 04:10:06PM *  1 point [-]

But following the links to the explanation, I don't think this considerably impacts my estimate of CI's competence / trustworthiness. This specific issue only affects people who didn't sign up for cryonics in advance, comes with an understandable (if not correct) rationale, and comes with an acknowledgement that it's less likely to work than the approach they use for people who were signed up for cryonics before their deaths.

"less likely to work" seems a bit of an euphemism. I think that the chances that this works are essentially negligible even if cryopreservation under best condition did work (which is already unlikely).

My point is that even if they don't apply this procedure to all their patients, the fact that CI is offering it means that they are either interested in maximizing profit instead of success probability, or they don't know what they are doing (or both), which is consistent with some claims by Mike Darwin (who, however, might have had an axe to grind).

Signing up for cryonics is always buying a pig in a poke because you have no way of directly evaluating the quality of the provider work within your lifetime, therefore the reputation of the provider is paramount. If the provider behaves in a way which is consistent with greed or incompetence, it is an extremely bad sign.

Comment author: ChrisHallquist 13 January 2014 09:17:37PM 2 points [-]

I've read a bit of Mike Darwin's complaints; those look more serious. I will have to look into that further. Can you give me a better sense of your true (not just lower-bound) estimate of the chances there's something wrong with cryonics orgs on an institutional level that would lead to inadequate preservation even if they had a working procedure in theory?

Comment author: V_V 13 January 2014 10:27:15PM 0 points [-]

I'm not sure how to condense my informal intuition into a single number. I would say > 0.5 and < 0.9, closer to the upper bound (and even closer for the Cryonics Institute than for Alcor).

Comment author: Gunnar_Zarncke 12 January 2014 08:24:41PM 1 point [-]

For a formula see http://www.alcor.org/Library/html/WillCryonicsWork.html (I do find the given probabilities significantly too optimistic, though, and lacking references).

Comment author: MugaSofer 17 January 2014 02:07:26AM *  -1 points [-]

I think there is a significant (>= 50%) probability that they don't: there have been anecdotal allegations of misbehavior, at least one company (the Cryonics Institute) has policies that betray gross incompetence or disregard for the success of the procedure (such as keeping certain cryopatients on dry ice for two weeks), and more generally, since cryocompanies operate without public oversight and without any means to assess the quality of their work, they have every incentive to hide mistakes, take cost-saving shortcuts, use sub-par materials, equipment, and unqualified staff, or even outright defraud you.

Woah, really? This seems ... somewhat worse than my estimation. (Note that I am not signed up, for reasons that have nothing to do with this.)

it is plausible that before revival tech becomes available, radical life extension becomes available, and therefore people stop signing up for cryonics. Cryocompanies might be required to go on for many decades or centuries without new customers. It's unclear that they could remain financially viable and motivated in this condition.

This is a good point that I hadn't heard before.

Comment author: handoflixue 19 January 2014 09:26:35AM 0 points [-]

http://www.alcor.org/cases.html A loooot of them include things going wrong, pretty clear signs that this is a novice operation with minimal experience, and so forth. Also notice that they don't even HAVE case reports for half the patients admitted prior to ~2008.

It's worth noting that pretty much all of these have a delay of at LEAST a day. There's one example where they "cryopreserved" someone who had been buried for over a year, against the wishes of the family, because "that is what the member requested." (It even includes notes that they don't expect it to work, but the family is still $50K poorer!)

I'm not saying they're horrible, but they really come off as enthusiastic amateurs, NOT professionals. Cryonics might work, but the modern approach is ... shoddy at best, and really doesn't strike me as matching the optimistic assumptions of people who advocate for it.

Comment author: MugaSofer 20 January 2014 08:48:08PM -1 points [-]

Yikes. Yeah, that seems like a serious problem that needs more publicity in cryonics circles.

Comment author: V_V 18 January 2014 11:43:03AM 0 points [-]

I think it's also worth considering that a society of people who rarely die would probably have population issues, as there is a limited carrying capacity.
That's most obvious in the case of biological humans, where even with our normal lifespan we are already close to or even above carrying capacity. In more exotic (and thus less probable, IMHO) scenarios such as Hansonian brain emulations, the carrying capacity might perhaps be higher, but it would still be fixed, or at least it would increase slowly once all the easily reachable resources on earth have been put to use (barring, of course, extreme singularity scenarios where nanomagicbots turn Jupiter into "computronium" or something, which I consider highly improbable).

Thus, if the long-lived future people are to avoid continuous cycles of population overshoot and crash, they must have some way of enforcing a population cap, whether by market forces or government regulation. This implies that reviving cryopreserved people would probably have costs other than those of the revival tech. Whoever revives you would have to split in some way their share of resources with you (or maybe in the extreme case, commit suicide to make room for you).
Hanson, for instance, predicts that his brain emulation society would be a Malthusian subsistence economy. I don't think that such a society could afford to ever revive any significant number of cryopatients, even if they had the technology (how Hanson can believe that society is likely and still be signed up for cryonics is beyond my understanding).
Even if you don't think that a Malthusian scenario is likely, it is still likely that the future will be an approximately steady-state economy, which means there would be strong disincentives against adding more people.

Comment author: MugaSofer 18 January 2014 03:03:58PM *  -2 points [-]

Even if you don't think that a Malthusian scenario is likely, it is still likely that the future will be an approximately steady-state economy, which means there would be strong disincentives against adding more people.

I'm inclined to agree, actually, but I would expect a post-scarcity "steady-state economy" large enough that absorbing such a tiny number of people is negligible.

With that said:

  • Honestly, it doesn't sound all that implausible that humans will find ways to expand - if nothing else, without FTL (I infer you don't anticipate FTL) there's pretty much always going to be a lot of unused universe out there for many billions of years to come (until the universe expands enough we can't reach anything, I guess.)

  • Brain emulations sound extremely plausible. In fact, the notion that we will never get them seems ... somewhat artificial in its constraints. Are you sure you aren't penalizing them merely for sounding "exotic"?

  • I can't really comment on turning Jupiter into processing substrate and living there, but ... could you maybe throw out some numbers regarding the amounts of processing power and population numbers you're imagining? I think I have a higher credence for "extreme singularity scenarios" than you do, so I'd like to know where you're coming from better.

Hanson, for instance, predicts that his brain emulation society would be a Malthusian subsistence economy. I don't think that such a society could afford to ever revive any significant number of cryopatients, even if they had the technology (how Hanson can believe that society is likely and still be signed up for cryonics is beyond my understanding).

That ... is strange. Actually, has he talked anywhere about his views on cryonics?

Comment author: V_V 18 January 2014 10:39:13PM *  1 point [-]

Honestly, it doesn't sound all that implausible that humans will find ways to expand - if nothing else, without FTL (I infer you don't anticipate FTL)

Obviously I don't anticipate FTL. Do you?

there's pretty much always going to be a lot of unused universe out there for many billions of years to come (until the universe expands enough we can't reach anything, I guess.)

Yes, but exploiting resources in our solar system is already difficult and costly. Currently there is nothing in space worth the cost of going there or bringing it back; maybe in the future it will be different, but I expect progress to be relatively slow.
Interstellar colonization might be forever physically impossible or economically unfeasible. Even if it is feasible, I expect it to be very, very slow. I think that's the best solution to Fermi's paradox.

Tom Murphy discussed these issues here and here. He focused on proven space technology (rockets) and didn't analyze more speculative stuff like mass drivers, but it seems to me that his whole analysis is reasonable.

Brain emulations sound extremely plausible. In fact, the notion that we will never get them seems ... somewhat artificial in its constraints. Are you sure you aren't penalizing them merely for sounding "exotic"?

I'm penalizing them because they seem to be far away from what current technology allows (consider the current status of the Blue Brain Project or the Human Brain Project).
It's unclear how many hidden hurdles there are, and how long Moore's law will continue to hold. Even if the emulation of a few human brains becomes possible, it's unclear that the technology would scale to allow a population of billions, or trillions as Hanson predicts. Keep in mind that biological brains are much more energy efficient than modern computers.

Conditionally on radical life extension technology being available, brain emulation is more probable, since it seems to be an obvious avenue to radical life extension. But it's not obvious that it would be cheap and scalable.

I can't really comment on turning Jupiter into processing substrate and living there, but ... could you maybe throw out some numbers regarding the amounts of processing power and population numbers you're imagining? I think I have a higher credence for "extreme singularity scenarios" than you do, so I'd like to know where you're coming from better.

I think the most likely scenario, at least for a few centuries, is that human will still be essentially biological and will only inhabit the Earth (except possibly for a few Earth-dependent outposts in the solar system). Realistic population sizes will be between 2 and 10 billions.

Total processing power is more difficult to estimate: it depends on how long Moore's law (and related trends such as Koomey's law) will continue to hold. Since there seem to be physical limits that would be hit in 30-40 years of continued exponential growth, I would estimate that 20 years is a realistic time frame. Then there is the question of how much energy and other resources people will invest into computation.
I'd say that a growth of total computing power to between 10,000x and 10,000,000x of the current one in 20-30 years, followed by stagnation or perhaps a slow growth, seems reasonable. Novel hardware technologies might change that, but as usual probabilities on speculative future tech should be discounted.
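
As a sanity check on that range: 10,000x is about 13 doublings and 10,000,000x about 23, so the estimate is roughly consistent with Moore's-law-style doubling continuing for 20-30 years. The doubling times below are my own illustrative assumptions, not figures from the comment above.

```python
# Back-of-the-envelope check of the 10,000x - 10,000,000x range.
# Doubling times are illustrative assumptions (Moore's law is usually
# quoted as a doubling every 1.5 - 2 years).
def growth_factor(years: float, doubling_time_years: float) -> float:
    return 2 ** (years / doubling_time_years)

print(f"{growth_factor(20, 1.5):,.0f}x")  # ~10,000x    (about 13 doublings)
print(f"{growth_factor(30, 1.3):,.0f}x")  # ~8,800,000x (about 23 doublings)
```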

Comment author: private_messaging 28 January 2014 12:26:44AM *  0 points [-]

I'd say that a growth of total computing power to between 10,000x and 10,000,000x of the current one in 20-30 years, followed by stagnation or perhaps a slow growth, seems reasonable

From Wikipedia:

Although this trend has continued for more than half a century, Moore's law should be considered an observation or conjecture and not a physical or natural law. Sources in 2005 expected it to continue until at least 2015 or 2020.[note 1][11] However, the 2010 update to the International Technology Roadmap for Semiconductors predicts that growth will slow at the end of 2013,[12] when transistor counts and densities are to double only every three years.

It's already happening.

Current process size is ~22nm, silicon lattice size is ~0.5nm. Something around 5-10 nm is the limit for photolithography, and we don't have any other methods of bulk manufacturing in sight. The problem with individual atoms is that you can't place them in bulk because of the stochastic nature of the interactions.
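
Putting rough numbers on how little headroom that leaves, using the figures above and assuming (my assumption, not private_messaging's) that transistor density scales with the inverse square of the feature size:

```python
# Rough remaining headroom for photolithographic scaling, given ~22 nm today
# and a 5-10 nm practical limit. Assumes density ~ 1 / (feature size)^2.
import math

current_nm = 22.0
for limit_nm in (10.0, 5.0):
    density_gain = (current_nm / limit_nm) ** 2
    doublings = math.log2(density_gain)
    print(f"limit {limit_nm:>4} nm: ~{density_gain:.0f}x density, ~{doublings:.1f} more doublings")

# limit 10.0 nm: ~5x density,  ~2.3 more doublings
# limit  5.0 nm: ~19x density, ~4.3 more doublings
```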

Comment author: MugaSofer 20 January 2014 09:23:37PM *  0 points [-]

I don't anticipate FTL.

Prediction confirmed, then. I think you might be surprised how common it is - in sciencey circles, anyway - to anticipate that we will eventually "solve FTL" using "wormholes", some sort of Alcubierre variant, or plain old Clarke-esque New Discoveries.

I'm penalizing them because they seem to be far away from what current technology allows

I ... see. OK then.

Keep in mind that biological brains are much more energy efficient than modern computers.

That seems like a more plausible objection.

Total processing power is more difficult to estimate: it depends on how long Moore's law (and related trends such as Koomey's law) will continue to hold. Since there seem to be physical limits that would be hit in 30-40 years of continued exponential growth, I would estimate that 20 years is a realistic time frame. Then there is the question of how much energy and other resources people will invest into computation. I'd say that a growth of total computing power to between 10,000x and 10,000,000x of the current one in 20-30 years, followed by stagnation or perhaps a slow growth, seems reasonable. Novel hardware technologies might change that, but as usual probabilities on speculative future tech should be discounted.

Hmm. I started to calculate out some stuff, but I just realized: all that really matters is how the amount of humans we can support compares to available human-supporting resources, be they virtual, biological or, I don't know, some sort of posthuman cyborg.

So: how on earth can we calculate this?

We could use population projections - I understand the projected peak is around 2100 at 9 billion or so - but those are infamously unhelpful for futurists and, obviously, may not hold when some technology or another is introduced.

So ... what about wildly irresponsible economic speculation? What's your opinion of the idea we'll end up in a "post-scarcity economy", due to widespread automation etc.

Alternatively, do you think the population controls malthusians have been predicting since forever will finally materialize?

Or ... basically I'm curious as to the sociological landscape you anticipate here.

Comment author: V_V 23 January 2014 05:50:59PM *  0 points [-]

So ... what about wildly irresponsible economic speculation? What's your opinion of the idea we'll end up in a "post-scarcity economy", due to widespread automation etc.

As long as we are talking about biological humans (I don't think anything else is likely, at least for a few centuries), then carrying capacity is most likely on the order of billions: each human requires a certain amount of food, water, clothing, housing, healthcare, etc. The technologies we use to provide these things are already highly efficient, hence their efficiency will probably not grow much, at least not by incremental improvement.
Groundbreaking developments comparable to the invention of agriculture might make a difference, but there doesn't seem to be any obvious candidate for that which we can foresee, hence I wouldn't consider that likely.

In optimistic scenarios, we get an approximately steady state (or slowly growing) economy with high per capita wealth, with high automation relieving many people from the necessity of working long hours, or perhaps even of working at all.
In pessimistic scenarios, Malthusian predictions come true, and we get either steady state economy at subsistence level, or growth-collapse oscillations with permanent destruction of carrying capacity due to resource depletion, climate change, nuclear war, etc. up to the most extreme scenarios of total civilization breakdown or human extinction.

Comment author: Lumifer 23 January 2014 06:32:21PM 3 points [-]

The technologies we use to provide these things are already highly efficient

This is certainly not true for healthcare.

Groundbreaking developments comparable to the invention of agriculture might make a difference, but there doesn't seem to be any obvious candidate for that which we can foresee

I think that making energy really cheap ("too cheap to meter") is foreseeable and that would count as a groundbreaking development.

Comment author: V_V 23 January 2014 08:44:28PM 0 points [-]

This is certainly not true for healthcare.

Do you think that modern healthcare is inefficient in energy and resource usage? Why?

I think that making energy really cheap ("too cheap to meter") is foreseeable and that would count as a groundbreaking development.

What energy source do you have in mind?

Comment author: Lumifer 23 January 2014 08:53:38PM 2 points [-]

Do you think that modern healthcare is inefficient in energy and resource usage? Why?

I think that modern healthcare is inefficient in general cost/benefit terms: what outputs you get at the cost of which inputs. Compared to what seems achievable in the future, of course.

What energy source do you have in mind?

Fusion reactors, for example.

Comment author: NancyLebovitz 11 January 2014 09:45:25PM 8 points [-]

Supposing that you get convinced that a cryonics subscription isn't worth having for you.

What's the likelihood that it's just one person offering a definitive argument rather than a collaborative effort? If the latter, will you divide the $500?

Comment author: ChrisHallquist 13 January 2014 02:54:09AM *  0 points [-]

Good question, I should have answered it in the OP. The answer is: possibly, but I anticipate a disproportionate share of the contribution coming from one person, someone like kalla724, and in that case it goes to that one person. But it definitely would not be divided between the contributors to an entire LW thread.

Comment author: JRMayne 11 January 2014 10:34:33PM *  19 points [-]

I'll bite. (I don't want the money. If I get it, I'll use it for Give Directly or some similar charity - what some on this site consider ego-gratifying wastage.)

If you look around, you'll find "scientist"-signed letters supporting creationism. Philip Johnson, a Berkeley law professor, is on that list, but you find a very low percentage of biologists. If you're using lawyers to sell science, you're doing badly. (I am a lawyer.)

The global warming issue has better lists of people signing off, including one genuinely credible human: Richard Lindzen of MIT. Lindzen, though, has oscillated from "manmade global warming is a myth," to a more measured view that the degree of manmade global warming is much, much lower than the general view. The list of signatories to a global warming skeptic letter contains some people with some qualifications on the matter, but many who do not seem to have expertise.

Cryonics? Well, there's this. Assuming they would put any neuroscience qualifications that the signatories had... this looks like the intelligent design letters. Electrical engineers, physicists... let's count the people with neuroscience expertise, other than people whose careers are in hawking cryonics:

  1. Kenneth Hayworth, a post-doc now at Harvard.

  2. Ravin Jain, Los Angeles neurologist. He was listed as an assistant professor of neurology at UCLA in 2004, but he's no longer employed by UCLA.

That's them. There are a number of other doctors on there; looking up the people who worked for cryonics orgs is fun. Many of them have interesting histories, and many have moved on. The letter is pretty lightweight; it just says there's a credible chance that they can put you back together again after the big freeze. I think computer scientists dominate the list. That is a completely terrible sign.

There are other conversations here and elsewhere about the state of the brain involving interplay between the neurons that's not replicable with just the physical brain. There's also the failure to resuscitate anyone from brain death. This provides additional evidence that this won't work.

Finally, the people running the cryonics outfits have not had the best record of honesty and stability. If Google ran a cryonics outfit, that would be more interesting, for sure. But I don't think that's going to happen; this is not the route to very long life.

[Edit 1/14 - fixed a miscapitalization and a terrible sentence construction. No substantive changes.]

Comment author: jkaufman 13 January 2014 06:56:26PM *  6 points [-]

let's count the people with neuroscience expertise, other than people whose careers are in hawking cryonics

This is a little unfair: if you have neuroscience experience and think cryonics is very important, then going to work for Alcor or CI may be where you can have the most impact. At which point others note that you're financially dependent on people signing up for cryonics and write you off as biased.

Comment author: fezziwig 13 January 2014 07:45:26PM 7 points [-]

In a world where cryonics were obviously worthwhile to anyone with neuroscience expertise, one would expect to see many more cryonics-boosting neuroscientists than could be employed by Alcor and CI. Indeed, you might expect there to be more major cryonics orgs than just those two.

In other words, it's only unfair if we think size of the "neuroscientist" pool is roughly comparable to the size of the market for cryonics researchers. It's not, so IMO JRMayne raises an interesting point, and not one I'd considered before.

Comment author: James_Miller 13 January 2014 04:22:15PM 0 points [-]

Economists are the scientists most qualified to speculate on the likely success of cryonics because this kind of prediction involves speculating on long-term technological trends and although all of mankind is bad at this, economists at least try to do so with rigor.

Comment author: jkaufman 13 January 2014 07:51:11PM 5 points [-]

"How likely is it that the current cryonics process prevents information-theoretic death" is a question for neuroscientists, not economists.

Comment author: James_Miller 13 January 2014 10:11:43PM *  0 points [-]

Identical twins raised apart act fairly similarly, and economists are better qualified to judge this claim than neuroscientists. Given my DNA and all the information saved from my brain by cryonics, it almost certainly would be possible for a super-intelligence with full nanotech to create something which would act similarly to how I do in similar circumstances. For me at least, that's enough to preserve my identity and have cryonics work. So for me the answer to your question is almost certainly yes. To know if cryonics will work, we need to estimate long-term tech trends to guess whether Alcor could keep my body intact long enough until someone develops the needed revival technologies.

Comment author: TheOtherDave 13 January 2014 10:16:35PM 2 points [-]

I'm curious... if P1 is the probability that a superintelligence with full nanotech can create something which would act similar to how you do in similar circumstances given your DNA and all the information in your cryonically frozen brain, and P2 is that probability given just your DNA, what's your estimate of P1/P2?

Comment author: James_Miller 13 January 2014 11:12:27PM 1 point [-]

Good point; especially if you include everything I have published in both P1 and P2, then P1 and P2 might be fairly close. This, along with the possibility of time travel to bring back the dead, is a valid argument against cryonics. Even in these two instances, cryonics would be valuable as a strong signal to the future that yes I really, really want to be brought back. Also, the more information the super-intelligence has, the better job it will do. Cryonics working isn't a completely binary thing.

Comment author: TheOtherDave 13 January 2014 11:20:49PM 2 points [-]

So... it sounds like you're saying that your confidence that cryonic preservation differentially prevents information-theoretic death is relatively low (given that you estimate the results with and without it to be fairly close)... yes?

as a strong signal to the future that yes I really, really want to be brought back.

(nods)
What's your estimate of the signal-strength ratio, to such a superintelligence of your preferences in the matter, between (everything it knows about you + you signed up for cryonics) and (everything it knows about you + you didn't sign up for cryonics)?

Also, the more information the super-intelligence has the better job it will do. Cryonics working isn't a completely binary thing.

True.

Comment author: James_Miller 13 January 2014 11:51:27PM 0 points [-]

So... it sounds like you're saying that your confidence that cryonic preservation differentially prevents information-theoretic death is relatively low (given that you estimate the results with and without it to be fairly close)... yes?

Yes given an AI super-intelligence trying to bring me back.

What's your estimate of the signal-strength ratio, to such a superintelligence of your preferences in the matter, between (everything it knows about you + you signed up for cryonics) and (everything it knows about you + you didn't sign up for cryonics)?

I'm not sure. So few people have signed up for cryonics and given cryonics' significant monetary and social cost it does make for a powerful signal.

Comment author: TheOtherDave 14 January 2014 04:51:42AM 0 points [-]

Yes given an AI super-intelligence trying to bring me back.

If we assume there is no AI superintelligence trying to bring you back, what's your estimate of the ratio of the probabilities of information-theoretic death given cryonic preservation and absent cryonic preservation?

So few people have signed up for cryonics and given cryonics' significant monetary and social cost it does make for a powerful signal.

To a modern-day observer, I agree completely. Do you think it's an equally powerful signal to the superintelligence you posit?

Comment author: James_Miller 14 January 2014 05:24:29AM 0 points [-]

If we assume there is no AI superintelligence trying to bring you back, what's your estimate of the ratio of the probabilities of information-theoretic death given cryonic preservation and absent cryonic preservation?

I don't know enough about nanotech to give a good estimate of this path. The brain uploading path via brain scans is reasonable given cryonics and, of course, hopeless without it.

Do you think it's an equally powerful signal to the superintelligence you posit?

Perhaps; by signing up for cryonics I have probably, in part, changed my brain state to more strongly want to outlive my natural death, and this would be reflected in my writings.

Comment author: jkaufman 13 January 2014 10:39:05PM 0 points [-]

Have you considered getting your DNA sequenced and storing that in a very robust medium?

Comment author: James_Miller 13 January 2014 11:14:38PM 0 points [-]

Yes. I'm a member of 23andMe, although they don't do a full sequencing.

Comment author: jkaufman 14 January 2014 02:09:41AM 1 point [-]

Sorry, I should be more clear. If you think your DNA is going to be really helpful to a superintelligence bringing you back, then it would make sense to try to increase the chances it stays around. 23andMe is a step in this direction, but as full genome sequencing gets cheaper, at some point you should probably do that too. It's already much cheaper than cryonics and in a few years should be cheaper by an even larger margin.

Comment author: satt 11 January 2014 11:57:13AM 10 points [-]

I'm glad you attached your bounty to a concrete action (cancelling your cryonics subscription) rather than something fuzzy like "convincing me to change my mind". When someone offers a bounty for the latter I cynically expect them to use motivated cognition to explain away any evidence presented, and then refuse to pay out even if the evidence is very strong. (While you might still end up doing that here, the bounty is at least tied to an unambiguously defined action.)

Comment author: Kawoomba 11 January 2014 12:30:24PM 1 point [-]

Not really, because the sequence of events is "Change my mind", then "Cancel subscription", i.e. the latter hinges on the former. Hence, since "Change my mind" is a necessary prerequisite, the ambiguity remains.

Comment author: satt 11 January 2014 12:39:13PM 0 points [-]

When all is said & done, we may never know whether Chris Hallquist really did or really should have changed his mind. But, assuming Alcor/CI is willing to publicly disclose CH's subscription status, we will be able to decide unambiguously whether he's obliged to cough up $500.

Comment author: Kawoomba 11 January 2014 01:04:26PM 0 points [-]

Obviously a private enterprise won't publicly disclose the subscription status of its members.

He can publicly state whatever he wants regarding whether he changed his mind or not, no matter what he actually did. He can publicly state whatever he wants regarding whether he actually cancelled his subscription, no matter what he actually did.

If you assume OP wouldn't actually publicly lie (but still be subject to motivated cognition, as you said in the grandparent), then my previous comment is exactly right. You don't avoid any motivated cognition by adding an action which is still contingent on the problematic "change your mind" part.

In the end, you'll have to ask him "Well, did you change your mind?", and whether he answers you "yes or no" versus "I cancelled my subscription" or "I did not cancel my subscription" comes out to the same thing.

Comment author: James_Miller 11 January 2014 04:40:04PM 4 points [-]

When Alcor was fact checking my article titled Cryonics and the Singularity (page 21) for their magazine they said they needed some public source for everyone I listed as a member of Alcor. They made me delete reference to one member because my only source was that he had told me of his membership (and had given me permission to disclose it).

Comment author: Kawoomba 12 January 2014 08:24:34AM 0 points [-]

Good article, you should repost it as a discussion topic or in the open thread.

Comment author: satt 11 January 2014 03:26:38PM 0 points [-]

Obviously a private enterprise won't publicly disclose the subscription status of its members.

Not so obvious to me. CH could write to Alcor/CI explaining what he's done, and tell them he's happy for them to disclose his subscription status for the purpose of verification. (Even if they weren't willing to follow through on that, CH could write a letter asking them to confirm in writing that he's no longer a member, and then post a copy of the response. CH might conceivably fake such a written confirmation, but I find it very unlikely that CH would put words in someone else's mouth over their faked signature to save $500.)

Comment author: Alsadius 16 January 2014 06:56:09AM 3 points [-]

My objection to cryonics is financial - I'm all for it if you're a millionaire, but most people aren't. For most people, cryonics will eat a giant percentage of your life's total production of wealth, in a fairly faint-hope chance at resurrection. The exact chances are a judgement call, but I'd ballpark it at about 10%, because there's so very many realistic ways that things can go wrong.

If your cryonics insurance is $50/month, unless cryonics is vastly cheaper than I think it is, it's term insurance, and the price will jump drastically over time (2-3x per decade, generally). In other words, you're buying temporary cryonics coverage, not lifetime. That is not generally the sort of thing cryonics fans seem to want. Life insurance is a nice way to spread out the costs, but insurance companies are not in the business of giving you something for nothing.

Comment author: ChrisHallquist 16 January 2014 07:47:24AM 1 point [-]

$50/month is for universal life insurance. It helps that I'm young and a non-smoker.

Comment author: Alsadius 16 January 2014 08:05:35AM *  2 points [-]

What payout? And "universal life" is an incredibly broad umbrella - what's the insurance cost structure within the UL policy? Flat, limited-pay, term, YRT? (Pardon the technical questions, but selling life insurance is a reasonably large portion of my day job). Even for someone young and healthy, $50/mo will only buy you $25-50k or so. I thought cryonics was closer to $200k.

Comment author: ChrisHallquist 18 January 2014 03:53:26AM 0 points [-]

$100k. Cryonics costs vary with method and provider. I don't have exact up-to-date numbers, but I believe the Cryonics Institute charges ~$30k, while Alcor charges ~$80k for "neuro" (i.e. just your head) or ~$200k for full-body.

Comment author: Alsadius 19 January 2014 10:02:42PM 1 point [-]

Running the numbers, it seems you can get a bare-bones policy for that. I don't tend to sell many bare-bones permanent policies, though, because most people buying permanent insurance want some sort of growth in the payout to compensate for inflation. But I guess with cheaper cryo than I expected, the numbers do add up. Cryo may be less crazy than I thought.

Comment author: Daniel_Burfoot 11 January 2014 03:07:49PM *  3 points [-]

How low would your estimate have to get before you canceled your subscription? I might try to convince you by writing down something like:

P(CW) = P(CW | CTA) * P(CTA)

Where CW = "cryonics working for you" and CTA = "continued technological advancement in the historical short term", and arguing that your estimate of P(CTA) is probably much too high. Of course, this would only reduce your overall estimate by 10x at most, so if you still value cryonics at P=0.03 instead of P=0.3, it wouldn't matter.
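
A minimal sketch of that decomposition in code, with placeholder numbers; it treats P(CW | no continued advancement) as negligible, which the formula above implicitly does:

```python
# Placeholder numbers only; the formula above implicitly treats
# P(CW | no continued advancement) as negligible.
p_cw_given_cta = 0.3  # e.g. "cryonics works, conditional on tech continuing to advance"
p_cta = 0.1           # hypothetical pessimistic estimate of continued advancement

p_cw = p_cw_given_cta * p_cta
print(p_cw)  # 0.03 -- the 10x reduction mentioned in the comment
```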

Comment author: topynate 11 January 2014 12:28:35PM 10 points [-]

It is likely that you would not wish for your brain-state to be available to all-and-sundry, subjecting you to the possibility of being simulated according to their whims. However, you know nothing about the ethics of the society that will exist when the technology to extract and run your brain-state is developed. Thus you are taking a risk of a negative outcome that may be less attractive to you than mere non-existence.

Comment author: jowen 13 January 2014 11:21:52PM 1 point [-]

This argument has made me start seriously reconsidering my generally positive view of cryonics. Does anyone have a convincing refutation?

The best I can come up with is that if resuscitation is likely to happen soon, we can predict the values of the society we'll wake up in, especially if recovery becomes possible before more potentially "value disrupting" technologies like uploading and AI are developed. But I don't find this too convincing.

Comment author: topynate 15 January 2014 08:22:37PM 1 point [-]

My attempt at a reply turned into an essay, which I've posted here.

Comment author: Ishaan 11 January 2014 08:26:31PM *  0 points [-]

This answer raises the question of how narrow the scope of the contest is:

Do you want to specifically hear arguments from scientific evidence about how cryonics is not going to preserve your consciousness?

Or, do you want arguments not to do cryonics in general? Because that can also be accomplished via arguments as to the possible cons of having your consciousness preserved, arguments towards opportunity costs of attempting it (effective altruism), etc. It's a much broader question.

(Edit - nevermind, answered in the OP upon more careful reading)

Comment author: JTHM 11 January 2014 08:26:44PM *  4 points [-]

Let me attempt to convince you that your resurrection from cryonic stasis has negative expected value, and that therefore it would be better for you not to have the information necessary to reconstruct your mind persist after the event colloquially known as "death," even if such preservation were absolutely free.

Most likely, your resurrection would require technology developed by AI. Since we're estimating the expected value of your resurrection, let's work on the assumption that the AGI will be developed.

Friendly AI is strictly more difficult to develop than AI with values orthogonal to ours or malevolent AI. Because the FAI developers are at such an inherent disadvantage, AGI tech will be most used by those least concerned with its ethical ramifications. Most likely, this will result in the extinction of humanity. But it might not. In the cases where humanity survives but technology developed by AGI continues to be used by those who are little concerned with its ramifications, it would be best for you not to exist at all. Since those with moral scruples would be the most averse to wantonly duplicating, creating, or modifying life, we can assume that those doing such things most often will be vicious psychopaths (or fools who might as well be), and that therefore the amount of suffering in the world inflicted on those synthetic minds would greatly outweigh any increased happiness of biological humans. A world where a teenager can take your brain scan remotely with his iPhone in the year 2080 and download an app that allows him to torture an em of you for one trillion subjective years every real second is a world in which you'd be best off not existing in any form. Or you could find yourself transformed into a slave em forced to perform menial mental labor until the heat death of the universe.

Likely? No. More likely than FAI taking off first, despite the massive advantage the unscrupulous enjoy in AGI development? I think so. Better to die long before that day comes. For that matter, have yourself cremated rather than decaying naturally, just in case.

Comment author: BaconServ 12 January 2014 09:46:28PM 2 points [-]

I'm assuming you meant for the comment section to be used to convince you - not necessarily because you meant it, but because making that assumption means not willfully acting against your wishes on what would normally be a trivial issue that holds no real preference for you. Maybe it would be better to do it with private messages, maybe not. There's a general ambient utility to just making the argument here, so there shouldn't be any fault in doing so.

Since this is a real-world issue rather than a simple matter of crunching numbers, what you're really asking for here isn't merely to be convinced, but to be happy with whatever decision you make. Ten months' worth of payment for the relief of not having to pay an entirely useless cost every month, and whatever more immediate utility will accompany that "extra" $50/month. If $50 doesn't buy much immediate utility for you, then a compelling argument needs to encompass in-depth discussion of trivial things. It would mean having to know precise information about what you actually value. Or at the very least, an accurate heuristic about how you feel about trivial decisions. As it stands, you feel the $50/month investment is worth it for a very narrow type of investment: cryonics.

This is simply restating the knowns in a particular format, but it emphasizes what the core argument needs to be here: either that the investment harbors even less utility than $50/month can buy, or that there are clearly superior investments you can make at the same price.

Awareness of just how severely confirmation bias exists in the brain (despite any tactics you might suspect would uproot it) should readily show that convincing you that there are better investments to make (and therefore to stop making this particular investment) is the route most likely to produce payment. Of course, this undermines the nature of the challenge: A reason to not invest at all.

Comment author: byrnema 12 January 2014 03:04:00AM *  2 points [-]

If it could be done, would you pay $500 for a copy of you to be created tomorrow in a similar but separate alternate reality? (Like an Everett branch that is somewhat close to ours, but far enough away that you are not already in it?)

Given what we know about identity, etc., this is what you are buying.

Personally, I wouldn't pay five cents.

Unless people that you know and love are also signed up for cryonics? (In which case you ought to sign up, for lots of reasons including keeping them company and supporting their cause.)

Comment author: Mark_Friedenbach 12 January 2014 07:08:12AM 0 points [-]

Cryonics does not necessarily imply uploading. It is possible that using atomically precise medical technology we could revive and rebuild the brain and body in-situ, thereby retaining continuity.

Comment author: byrnema 12 January 2014 08:12:05AM *  0 points [-]

I meant a physical copy.

Would it make a difference, to you, if they rebuilt you in-situ, rather than adjacent?

But I just noticed this set of sentences, so I was incorrect to assume common ideas about identity:

In particular, I find questions about personal identity and consciousness of uploads made from preserved brains confusing,

Comment author: Mark_Friedenbach 12 January 2014 08:19:05AM 0 points [-]

I know. I was pointing out that your thought experiment might not actually apply to the topic of cryonics.

Comment author: Mark_Friedenbach 11 January 2014 06:21:25PM *  3 points [-]

Be aware that you are going to get a very one-sided debate. I am very much pro-cryonics, but you're not going to hear much from me or others like me because (1) I'm not motivated to rehash the supporting arguments, and (2) attaching monetary value actually disincentivizes me from participating (particularly when I am unlikely to receive it).

ETA: Ok, I said that and then I countered myself by being compelled to respond to this point:

In particular, I find questions about personal identity and consciousness of uploads made from preserved brains confusing, but think there are very few people in the world, if any, who are likely to have much chance of getting me un-confused about those issues.

Issues of mind-uploading should not affect your decision. I personally am convinced that the reigning opinion on mind uploading and personal identity is outright wrong - if they destructively upload my mind then they might as well thaw out and cremate me. There would be no continuity of consciousness and I would not benefit.

My own application for cryonics membership is held up in part because I'm still negotiating a contract that forces them to preserve me for revival only, not uploading, but that should be sorted out soon. All you need to do is make your wishes clear and legally binding.

Comment author: kalium 12 January 2014 04:39:52AM 2 points [-]

Why shouldn't uploading affect his decision? If he's resurrected into a physical body and finds the future is not a place he wants to live, he can opt out by destroying his body. If he's uploaded, there is very plausibly no way out.

Comment author: Ishaan 11 January 2014 09:43:23PM *  1 point [-]

Curious - would you retain this belief if uploading actually happened, the uploaded consciousnesses felt continuity, and external observers could tell no difference between the uploaded consciousnesses and the original consciousnesses?

(Because if so, you can just have an "only if it works for others may you upload me" clause)

Comment author: Mark_Friedenbach 12 January 2014 07:04:24AM *  1 point [-]

To whom are you asking the question? I'd be dead. That computer program running a simulation of me would be a real person, yes, with all associated moral implications. It'd even think and behave like me. But it wouldn't be me - a direct continuation of my personal identity - any more than my twin brother or any of the multiverse copies of "me" are actually me. If my brain were still functioning at all I'd be cursing the technicians as they ferry my useless body from the uploader to the crematorium. Then I'd be dead while some digital doppelgänger takes over my life.

Do you see? This isn't about whether uploading works or not. Uploading when it works creates a copy of me. It will not continue my personal existence. We can be sure of this, right now.

Comment author: TheOtherDave 12 January 2014 08:00:16PM 1 point [-]

On what grounds do you believe that the person who wrote that comment is the same person who is reading this response?

I mean, I assume that the person reading this response thinks and behaves like the same person (more or less), and that it remembers having been the person who wrote the comment, but that's just thought and behavior and memory, and on your account those things don't determine identity.

So, on your account, what does determine identity? What observations actually constitute evidence that you're the same person who wrote that comment? How confident are you that those things are more reliable indicators of shared identity than thought and behavior and memory?

Comment author: Mark_Friedenbach 12 January 2014 08:54:46PM 1 point [-]

On what grounds do you believe that the person who wrote that comment is the same person who is reading this response?

By examining the history of interactions which occurred between the two states.

How confident are you that those things are more reliable indicators of shared identity than thought and behavior and memory?

Because it is very easy to construct thought experiments which show that thought, behavior, and memory are not sufficient for making a determination. For example, imagine a non-destructive sci-fi teleporter. The version of you I'm talking to right now walks into the machine, sees some flashing lights, and then walks out. Some time later another Dave walks out of a similar machine on Mars. Now step back a moment in time. Before walking into the machine, what experience do you expect to have after: (1) walking back out or (2) waking up on Mars?

Comment author: TheOtherDave 12 January 2014 08:57:13PM 1 point [-]

By examining the history of interactions which occured between the two states.

Well, yes, but what are you looking for when you do the examination?

That is, OK, you examine the history, and you think "Well, I observe X, and I don't observe Y, and therefore I conclude identity was preserved." What I'm trying to figure out is what X and Y are.

Before walking into the machine, what experience do you expect to have after: (1) walking back out or (2) waking up on Mars?

Both.

Comment author: Dentin 13 January 2014 09:49:27PM 0 points [-]

With 50% probability, I expect to walk back out, and with 50% probability I expect to wake up on Mars. Both copies will feel like the original, and both copies will believe they are the 'original'.

Comment author: Mark_Friedenbach 14 January 2014 09:58:31AM 1 point [-]

But you expect one or the other, right? In other words, you don't expect to experience both futures, correct?

Now what if the replicator on Mars gets stuck, and starts continuously outputting Dentins. What is your probability of staying on Earth now?

Further, doesn't it seem odd that you are assigning any probability that after a non-invasive scan, and while your brain and body continues to operate just fine on Earth, you suddenly find yourself on Mars, and someone else takes over your life on Earth?

What is the mechanism by which you expect your subjective experience to be transferred from Earth to Mars?

Comment author: TheOtherDave 14 January 2014 08:12:46PM *  2 points [-]

Not Dentin, but since I gave the same answer above I figured I'd weigh in here.

you expect one or the other, right? In other words, you don't expect to experience both futures, correct?

I expect to experience both futures, but not simultaneously.

Somewhat similarly, if you show me a Necker cube, do I expect to see a cube whose front face points down and to the left? Or a cube whose front face points up and to the right? Well, I expect to see both. But I don't expect to see both at once... I'm not capable of that.

(Of course, the two situations are not the same. I can switch between views of a Necker cube, whereas after the duplication there are two mes each tied to their own body.)

what if the replicator on Mars gets stuck [..] What is your probability of staying on Earth now?

I will stay on Earth, with a probability that doesn't change.
I will also appear repeatedly on Mars.

doesn't it seem odd that you are assigning any probability that after a non-invasive scan, and while your brain and body continues to operate just fine on Earth, you suddenly find yourself on Mars,

Well, sure, in the real world it seems very odd to take this possibility seriously. And, indeed, it never seems to happen, so I don't take it seriously... I don't in fact expect to wake up on Mars.

But in the hypothetical you've constructed, it doesn't seem odd at all... that's what a nondestructive teleporter does.

and someone else takes over your life on Earth?

(shrug) In ten minutes, someone will take over my life on Earth. They will resemble me extremely closely, though there will be some small differences. I, as I am now, will no longer exist. This is the normal, ordinary course of events; it has always been like this.

I'm comfortable describing that person as me, and I'm comfortable describing the person I was ten minutes ago as me, so I'm comfortable saying that I continue to exist throughout that 20-minute period. I expect me in 10 minutes to be comfortable describing me as him.

If in the course of those ten minutes, I am nondestructively teleported to Mars, someone will still take over my life on Earth. Someone else, also very similar but not identical, will take over my life on Mars. I'm comfortable describing all of us as me. I expect both of me in 10 minutes to be comfortable describing me as them.

That certainly seems odd, but again, what's odd about it is the nondestructively teleported to Mars part, which the thought experiment presupposes.

What is the mechanism by which you expect your subjective experience to be transferred from Earth to Mars?

It will travel along with my body, via whatever mechanism allows that to be transferred. (Much as my subjective experience travels along with my body when I drive a car or fly cross-country.)

It would be odd if it did anything else.

Comment author: Dentin 14 January 2014 06:42:48PM 1 point [-]

No, I would never expect to simultaneously experience being on both Mars and Earth. If you find anyone who believes that, they are severely confused, or are trolling you.

If I know the replicator will get stuck and output 99 dentins on Mars, I would only expect a 1% chance of waking up on Earth. If I'm told that it will only output one copy, I would expect a 50% chance of waking up on Earth, only to find out later that the actual probability was 1%. The map is not the territory.

Further, doesn't it seem odd that you are assigning any probability that after a non-invasive scan, and while your brain and body continues to operate just fine on Earth, you suddenly find yourself on Mars, and someone else takes over your life on Earth?

Not at all. In fact, it seems odd to me that anyone would be surprised to end up on Mars.

What is the mechanism by which you expect your subjective experience to be transferred from Earth to Mars?

Because consciousness is how information processing feels from the inside, and 'information processing' has no intrinsic requirement that the substrate or cycle times be continuous.

If I pause a playing wave file, copy the remainder to another machine, and start playing it out, it still plays music. It doesn't matter that the machine is different, that the decoder software is different, that the audio transducers are different - the music is still there.

Another, closer analogy is that of the common VM: it is possible to stop a VPS (virtual private server), including operating system, virtual disk, and all running programs, take a snapshot, copy it entirely to another machine halfway around the planet, and restart it on that other machine as though there were no interruption in processing. The VPS may not even know that anything has happened, other than suddenly its clock is wrong compared to external sources. The fact that it spent half an hour 'suspended' doesn't affect its ability to process information one whit.
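
For what it's worth, the snapshot-and-resume idea can be illustrated with a toy sketch (a hypothetical example, not a claim about how real VM migration is implemented):

```python
import pickle

class Counter:
    """Toy stand-in for a running process: its entire 'state' is one integer."""
    def __init__(self):
        self.ticks = 0

    def step(self):
        self.ticks += 1

original = Counter()
for _ in range(1000):
    original.step()

# "Suspend": capture the complete state as bytes (could be written to disk
# and shipped to another machine that has the same class definition).
snapshot = pickle.dumps(original)

# "Resume": rebuild the state later and keep processing as if nothing happened.
resumed = pickle.loads(snapshot)
resumed.step()
assert resumed.ticks == 1001
```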

Comment author: ArisKatsaris 12 January 2014 12:30:42PM 1 point [-]

Uploading when it works creates a copy of me. It will not continue my personal existence.

I honestly don't know how "copy" is distinct from "continuation" on a physical level and/or in regards to 'consciousness'/'personal existence'.

If the MWI is correct, every moment I am copied into a billion versions of myself. Even if it's wrong, every moment I can be said to be copied to a single future version of myself. Both of these can be seen as 'continuations' rather than 'copies'. Why would uploading be different?

Mind you, I'm not saying it necessary isn't -- but I understand too little about consciousness to argue about it definitively and with the certainty you claim one way or another.

Comment author: Mark_Friedenbach 12 January 2014 06:32:33PM *  1 point [-]

If the MWI is correct, every moment I am copied into a billion versions of myself. Even if it's wrong, every moment I can be said to be copied to a single future version of myself. Both of these can be seen as 'continuations' rather than 'copies'. Why would uploading be different?

It's not any different, and that's precisely the point. Do you get to experience what your MWI copies are doing? Does their existence in any way benefit you, the copy which is reading this sentence? No? Why should you care if they even exist at all? So it goes with uploading. That person created by uploading will not be you any more than some alternate dimension copy is you. From the outside I wouldn't be able to tell the difference, but for you it would be very real: you, the person I am talking to right now, will die, and some other sentient being with your implanted memories will take over your life. Personally I don't see the benefit of that, especially when it is plausible that other choices (e.g. revival) might lead to continuation of my existence in the way that uploading does not.

Comment author: ArisKatsaris 12 January 2014 07:16:56PM *  1 point [-]

Do you get to experience what your MWI copies are doing?

Uh, the present me is experiencing none of the future. I will "get to experience" the future, only via all the future copies of me that have a remembered history that leads back to the present me.

Does their existence in any way benefit you, the copy which is reading this sentence? No? Why should you care if they even exist at all?

If none of the future mes exist, then that means I'm dead. So of course I care because I don't want to die?

I think we're suffering from a misunderstanding here. The MWI future copy versions of me are not something that exist in addition to the ordinary future me, they are the ordinary future me. All of them are, though each of them has only one remembered timeline.

That person created by uploading will not be you any more than some alternate dimension copy is you.

Or "that person created by uploading will be as much me as any future version of me is me".

Comment author: Mark_Friedenbach 12 January 2014 07:20:50PM 0 points [-]

I'm a physicist; I understand MWI perfectly well. Each time we decohere we end up on one branch and not the others. Do you care at all what happens on the others? If you do, fine, that's very altruistic of you.

Comment author: ArisKatsaris 12 January 2014 07:33:39PM *  0 points [-]

Let me try again.

First example: Let's say that tomorrow I'll decohere into 2 versions of me, version A and version B, with equal measure. Can you tell me whether now I should only care to what happens to version A or only to version B?

No, you can't. Because you don't know which branch I'll "end up on" (in fact I don't consider that statement meaningful, but even if it was meaningful, we wouldn't know which branch I'd end up on). So now I have to care about those two future branches equally. Until I know which one of these I'll "end up on", I have no way to judge between them.

Second example. Let's say that tomorrow instead of decohering via MWI physics, I'll split into 2 versions of me, version U via uploading, and version P via ordinary physics. Can you tell me in advance why now I should only be caring about version (P) and not about version (U)?

Seems to me that like in the first example I can't know which of the two branches "I'll end up on". So now I must care about the two future versions equally.

Comment author: Mark_Friedenbach 12 January 2014 07:38:54PM -1 points [-]

Let's say that tomorrow instead of decohering via MWI physics, I'll split into 2 versions of me, version U via uploading, and version P via ordinary physics. Can you tell me in advance why now I should only be caring about version (P) and not about version (U)?

Yes, you'd care about P and not U, because there's a chance you'd end up on P. There's zero chance you'd end up as U.

Seems to me that like in the first example I can't know which of the two branches "I'll end up on". So now I must care about the two future versions equally.

Now tomorrow has come, and you ended up as one of the branches. How much do you care about the others you did not end up on?

Comment author: Dentin 14 January 2014 12:25:02AM 0 points [-]

Now tomorrow has come, and you ended up as one of the branches. How much do you care about the others you did not end up on?

In the case of MWI physics, I don't care about the other copies at all, because they cannot interact with me or my universe in any way whatsoever. That is not true for other copies of myself I may make by uploading or other mechanisms. An upload will do the same things that I would do, will have the same goals I have, and will in all probability do things that I would approve of, things which affect the universe in a way that I would probably approve of. None of that is true for an MWI copy.

Comment author: Dentin 13 January 2014 09:54:06PM 0 points [-]

Yes, you'd care about P and not U, because there's a chance you'd end up on P. There's zero chance you'd end up as U.

This statement requires evidence or at least a coherent argument.

Comment author: ArisKatsaris 12 January 2014 07:41:14PM *  0 points [-]

Yes, you'd care about P and not U, because there's a chance you'd end up on P. There's zero chance you'd end up as U.

Why are you saying that? If you don't answer this question, of why you believe there's no chance of ending up as the upload, what's the point of writing a single other word in response?

I see no meaningful difference between first and second example. Tell me what the difference is that makes you believe that there's no chance I'll end up as version U.

Comment author: ephion 12 January 2014 04:06:24PM 0 points [-]

The copy will remember writing this, and will feel pretty strongly that it's a continuation of you.

Comment author: Mark_Friedenbach 12 January 2014 07:41:20PM 0 points [-]

So? So do all the other Everett branches distinct from me. So would some random person implanted with my memories. I don't care what it thinks or feels; what I care about is whether it actually is a direct continuation of me.

Comment author: Dentin 12 January 2014 02:38:34PM 0 points [-]

I'm sorry to hear that. It's unfortunate for you, and really limits your options.

In my case, uploading does continue my personal existence, and uploading in my case is a critical aspect of getting enough redundancy in my self to survive black swan random events.

Regarding your last sentence, "We can be sure of this, right now", what are you talking about exactly?

Comment author: Mark_Friedenbach 12 January 2014 07:42:37PM 1 point [-]

Regarding your last sentence, "We can be sure of this, right now", what are you talking about exactly?

I mean we can do thought experiments which show pretty convincingly that I should not expect to experience the other end of uploading.

Comment author: Dentin 13 January 2014 09:50:49PM 0 points [-]

What might those thought experiments be? I have yet to hear any convincing ones.

Comment author: Mark_Friedenbach 14 January 2014 10:00:26AM 1 point [-]

The teleporter arguments we've already been discussing, and variants.

Comment author: Ishaan 12 January 2014 12:04:37PM *  0 points [-]

OK, I was just checking.

There were two ways to interpret your statement - that uploads won't be identical human beings (an empirical statement) vs. uploads will disrupt your continuity (a philosophical statement).

I was just wondering which one it was. I'm interested in hearing arguments against uploading:

-How do you know right now that you are a continuity of the being that existed one-hour-in-the-past, and that the being that exists one-hour-in-the-future will be in continuity with you?

-Would you ever step into a sci-fi style teleporter?

-cryonics constitutes "pausing" and "resuming" yourself. How is this sort of temporal discontinuity different from the spatial discontinuity involved in teleporting?

Comment author: Mark_Friedenbach 12 January 2014 07:14:22PM 1 point [-]

There were two ways to interpret your statement - that uploads won't be identical human beings (an empirical statement) vs. uploads will disrupt your continuity (a philosophical statement).

The latter, but they are both empirical questions. The former deals with comparing informational configurations at two points in time, whereas the latter is concerned with the history of how we went from state A to state B (both having real-world implications).

How do you know right now that you are a continuity of the being that existed one-hour-in-the-past, and that the being that exists one-hour-in-the-future will be in continuity with you?

We need more research on the physical basis for consciousness to understand this better such that we can properly answer the question. Right now all we have is the fleeting experience of continued identity moment to moment, and the induction principle which is invalid to apply over singular events like destructive uploading.

My guess as to the underlying nature of the problem is that consciousness exists in any complex interaction of particles - not the pattern itself, but the instantiation of the computation. And so long as this interaction is continuous and ongoing we have a physical basis for the continuation of subjective experience.

Would you ever step into a sci-fi style teleporter?

Never, for the same reasons.

Cryonics constitutes "pausing" and "resuming" yourself. How is this sort of temporal discontinuity different from the spatial discontinuity involved in teleporting?

Pausing is a metaphor. You can't freeze time, and chemistry never stops entirely. The particles in a cryonic patient's brain keep interacting in complex, albeit much slowed down ways. Recall that the point of pumping the brain full of anti-freeze is that it remains intact and structurally unmolested even at liquid nitrogen temperatures. It is likely that some portion of biological activity is ongoing in cryostasis, albeit at a glacial pace. This may or may not be sufficient for continuity of experience, but unlike uploading the probability is at least not zero.

BTW the problem with teleporting is not spatial or temporal. The problem is that the computational process which is the subjective experience of the person being teleported is interrupted. The machine violently disassembles them and they die, then somewhere else a clone/copy is created. If you have trouble seeing that, imagine that the process is not destructive. You step into the teleporter, it scans you, and then you step out. I then shoot you in the head with a gun. The teleporter then reconstructs a copy of you. Do you really think that you, the person I just shot in the head and now is splattered all over the floor, gets to experience walking out of the teleporter as a copy? If you're still having trouble, imagine that the teleporter got stuck in a loop and kept outputting copies. Which one is you? Which one do you expect to "wake up" as at the other end of the process?

Comment author: Dentin 13 January 2014 09:58:36PM 0 points [-]

The problem is that the computational process which is the subjective experience of the person being teleported is interrupted.

It sounds to me like you're ascribing some critical, necessary aspect of consciousness to the 'computation' that occurs between states, as opposed to the presence of the states themselves.

It strikes me as similar to the 'sampling fallacy' of analog audio enthusiasts, who constantly claim that digitization of a recording is by definition lossy because a discrete stream can not contain all the data needed to reconstruct a continuous waveform.

Comment author: Mark_Friedenbach 14 January 2014 10:07:08AM 0 points [-]

It sounds to me like you're ascribing some critical, necessary aspect of consciousness to the 'computation' that occurs between states, as opposed to the presence of the states themselves.

Absolutely (although I don't see the connection to analog audio). Is a frozen brain conscious? No. It is the dynamic response of the brain from which the subjective experience of consciousness arises.

See a more physical explanation here.

Comment author: Dentin 14 January 2014 06:23:23PM 0 points [-]

The connection to analog audio seems obvious to me: a digitized audio file contains no music, it contains only discrete samples taken at various times, samples which when played out properly generate music. An upload file containing the recording of a digital brain contains no consciousness, but is conscious when run, one cycle at a time.

A sample is a snapshot of an instant of music; an upload is a snapshot of consciousness. Playing out a large number of samples creates music; running an upload forward in time creates consciousness. In the same way that a frozen brain isn't conscious but an unfrozen, running brain is - an uploaded copy isn't conscious, but a running, uploaded copy is.

That's the point I was trying to get across. The discussion of samples and states is important because you seem to have this need for transitions to be 'continuous' for consciousness to be preserved - but the sampling theorem explicitly says that's not necessary. There's no 'continuous' transition between two samples in a wave file, yet the original can still be reconstructed perfectly. There may not be a continuous transition between a brain and its destructively uploaded copy - but the original and 'continuous transition' can still be reconstructed perfectly. It's simple math.

As a direct result of this, it seems pretty obvious to me that consciousness doesn't go away because there's a time gap between states or because the states happen to be recorded on different media, any more than breaking a wave file into five thousand non-contiguous sectors on a hard disk platter destroys the music in the recording. Pretty much the only escape from this is to use a mangled definition of consciousness which requires 'continuous transition' for no obvious good reason.
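
For the curious, here is a minimal sketch of that reconstruction claim, using Whittaker-Shannon (sinc) interpolation on a toy band-limited signal; the particular frequencies and durations are illustrative only:

```python
import numpy as np

# Band-limited "music": a 3 Hz tone, sampled at 10 Hz (above its 6 Hz Nyquist rate).
fs = 10.0
sample_times = np.arange(0, 2, 1 / fs)
samples = np.sin(2 * np.pi * 3 * sample_times)

# Whittaker-Shannon (sinc) interpolation: rebuild the continuous waveform
# from nothing but the discrete, non-continuous samples.
dense_times = np.linspace(0, 2, 2001)
reconstructed = np.array(
    [np.sum(samples * np.sinc(fs * (t - sample_times))) for t in dense_times]
)

original = np.sin(2 * np.pi * 3 * dense_times)
# Error is small in the interior of the window; it grows near the edges only
# because this toy example truncates the (ideally infinite) sample train.
print(np.max(np.abs(reconstructed - original)[500:1500]))
```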

Comment author: Mark_Friedenbach 14 January 2014 07:59:51PM 0 points [-]

I'm not saying it goes away, I'm saying the uploaded brain is a different person, a different being, a separate identity from the one that was scanned. It is conscious yes, but it is not me in the sense that if I walk into an uploader I expect to walk out again in my fleshy body. Maybe that scan is then used to start a simulation from which arises a fully conscious copy of me, but I don't expect to directly experience what that copy experiences.

Comment author: Dentin 15 January 2014 12:46:38AM 0 points [-]

The uploaded brain is a different person, a different being, a separate identity from the one that was scanned. It is conscious yes, and it is me in the sense that I expect with high probability to wake up as an upload and watch my fleshy body walk out of the scanner under its own power.

Of course I wouldn't expect the simulation to experience the exact same things as the meat version, or expect to experience both copies at the same time. Frankly, that's an idiotic belief; I would prefer you not bring it into the conversation in the future, as it makes me feel like you're intentionally trolling me. I may not believe what you believe, but even I'm not that stupid.

Comment author: Ishaan 12 January 2014 10:40:58PM *  0 points [-]

You step into the teleporter, it scans you, and then you step out. I then shoot you in the head with a gun. The teleporter then reconstructs a copy of you. Do you really think that you, the person I just shot in the head and now is splattered all over the floor, gets to experience walking out of the teleporter as a copy? If you're still having trouble, imagine that the teleporter got stuck in a loop and kept outputting copies. Which one is you? Which one do you expect to "wake up" as at the other end of the process?

My current thought on the matter is that Ishaan0 stepped into the teleporter, Ishaan1a stepped out of the teleporter, and Ishaan1b was replicated by the teleporter.

At time 2, Ishaan2a was shot, and Ishaan2b survived.

Ishaan0 -> ishaan1a --> ishaan2a just died.

Ishaan0 -> ishaan1b--->ishaan2b--->ishaan3b --->... gets to live on.

So Ishaan0 can be said to have survived, whereas ishaan1a has died.

Right now all we have is the fleeting experience of continued identity moment to moment

The way I see it, my past self is "dead" in every respect other than that my current self exists and contains memories of that past self.

I don't think there is anything fundamental saying we ought to be able to have "expectations" about our future subjective experiences, only "predictions" about the future.

Meaning, if ishaan0 had a blindfold on, then at time1 when I step out of the teleporter I would have memories which indicate that my current qualia qualify me to be in the position of either Ishaan1a or Ishaan1b. When I take my blindfold off, I find out which one I am.

Comment author: DanielLC 11 January 2014 07:11:51PM 0 points [-]

I am very much pro-cryonics, but you're not going to hear much from me or others like me because ...

He has already heard from others like you. The point is for him to find the arguments he hasn't heard, which tend to be the ones against cryonics.

My own application for cyronics membership is held up in part because I'm still negotiating a contract that forces them to preserve me for revival only, not uploading, but that should be sorted out soon.

That sounds much more difficult and correspondingly less likely to be accomplished.

Comment author: notsonewuser 11 January 2014 09:14:02PM 3 points [-]

You have read the full kalla724 thread, right?

I think V_V's comment is sufficient for you to retract your cryonics subscription. If we get uFAI you lose anyways, so I would be putting my money into that and other existential risks. You'll benefit a lot more people that way.

Comment author: Furcas 11 January 2014 10:34:12PM *  4 points [-]

Kalla724 is strongly convinced that the information that makes us us won't be preserved by current cryonics techniques, and he says he's a neuroscientist. Still, it would be nice if he'd write something a bit more complete so it could be looked at by other neuroscientists who could then tell us if he knows what he's talking about, at least.

Comment author: ChrisHallquist 13 January 2014 03:00:50AM *  5 points [-]

I had read some of that thread, and just went and made a point of reading any comments by kalla724 that I had missed. Actually, I had them in mind when I made this thread - hoping that $500 could induce a neuroscientist to write the post kalla724 mentioned (but as far as I can tell never wrote), or else be willing to spend a few hours fielding questions from me about cryonics. I considered PMing kalla724 directly, but they don't seem to have participated in LW in some time.

Edit: PM'd kalla724. Don't expect a response, but seemed worth the 10 seconds on that off-chance.

Comment author: lmm 11 January 2014 11:19:55AM *  1 point [-]

I work in software. I once saw a changelog that said something like " * session saving (loading to be implemented in a future version)", and I laughed out loud. The argument in favour of cryonics seems to boil down to "we can't see why revival won't work", which is basically meaningless for a system this complex and poorly-understood. How can we be at all confident that we're preserving memories when we don't even know how they're encoded? I can't predict exactly what crucial thing we will have missed preserving. But I can predict we will have missed something.

I think it requires an incredible degree of fine-tuning of our future-tech assumptions to say that our post-singularity overlords will be able to revive people who were frozen, but not people who weren't.

Comment author: Luke_A_Somers 11 January 2014 01:43:52PM *  11 points [-]

I found myself in that situation once.

When I wrote the loader, the saved-game files worked.

Of course, that was because I just took the whole game data object and serialized it into a file stream. Similarly, here, we're storing the actual thing.

Last paragraph: ha. Restoring someone who wasn't frozen requires time travel. If cryo works and time travel doesn't, there you go.

Comment author: VAuroch 13 January 2014 07:56:09AM -1 points [-]

It doesn't necessarily involve time travel. It could just require extremely precise backwards extrapolation.

And if it does involve time travel, it only requires the travel of pure information from the past to its future. Since information can already be transmitted to its future light cone, the idea is that it might be possible to specify a particular location in spacetime precisely enough to induce a process that transfers information about that location to a specific point in its future light cone (i.e. your apparatus).

Which still sounds extremely difficult, but also much more likely to be possible than describing it as time travel.

For the record, I assign the possibility of time travel that could travel to our current point in time as epsilon, the possibility of time travel that can travel to no point earlier than the creation of the specific time machine as very small (<0.1%) but greater than epsilon, and the possibility of the outlined information-only "time travel" as in the range of 0.1%-1%.

Comment author: Luke_A_Somers 13 January 2014 01:53:59PM 1 point [-]

The ability to radiate light into space means that nope, you need to catch up to all those photons. Second law murders extrapolation like that.

Comment author: VAuroch 13 January 2014 07:25:49PM -1 points [-]

That's true, slipped my mind.

Comment author: Humbug 11 January 2014 08:31:04PM *  1 point [-]

Given that you believe that unfriendly AI is likely, I think one of the best arguments against cryonics is that you do not want to increase the probability of being "resurrected" by "something". But this concerns the forbidden topic, so I can't get into more details here. For hints see Iain M. Banks' novel Surface detail on why you might want to be extremely risk averse when it comes to the possibility of waking up in a world controlled by posthuman uploads.

Comment author: Gunnar_Zarncke 16 February 2014 10:14:56AM 1 point [-]

One rational utilitarian argument I haven't seen here but which was brought up in an old thread is that cryonics competes with organ donation.

With organ donation you can save on average more than one life (the thread mentions 3.75, this site says "up to 8") whereas cryonics saves only <0.1 (but your own life).

And you probably can't have both.

Comment author: handoflixue 19 January 2014 08:59:09AM 0 points [-]

It's easy to get lost in incidental costs and not realize how they add up over time. If you weren't signed up for cryonics, and you inherited $30K, would you be inclined to dump it into a cryonics fund, or use it someplace else? If the answer is the latter, you probably don't REALLY value cryonics as much as you think - you've bought into it because the price is spread out and our brains are bad at budgeting small, recurring expenses like that.

My argument is pretty much entirely on the "expense" side of things, but I would also point out that you probably want to unpack your expectations from cryonics: Are you assuming you'll live infinite years? Live until the heat death of the universe? Gain an extra 200 years until you die in a situation cryonics can't fix? Gain an extra 50 years until you die of a further age limit?

When I see p(cryonics) = 0.3, I tend to suspect that's leaning more towards the 50-200 year side of things. Straight-up immortal-until-the-universe-ends seems a LOT less likely than a few hundred extra years.


Where'd that $30K figure come from?

You've said you're young and have a good rate on life insurance, so let's assume male (from the name) and 25. Wikipedia suggests you should live until you're 76.

$50/month * 12 months/year * (76-25 = 51 years) = $30,600.

So, it's less that you're paying $50/month and more that you're committing to pay $30,000 over the course of your life.
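
As a quick check, the same arithmetic in code (the ages are the estimates assumed above, not facts about the OP):

```python
# The estimates used above: signup at ~25, life expectancy ~76, $50/month.
monthly_premium = 50
years_paying = 76 - 25
lifetime_cost = monthly_premium * 12 * years_paying
print(lifetime_cost)  # 30600
```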


What else could you do with that same money?

Portland State University quotes ~$2500/semester for tuition. 3 semesters/year and 4 years/degree ~= $30K. Pretty sure you can get loans and go into debt for this, so it's still something you could pay off over time. And if you're smart, do community college for the first two years, get a scholarship, etc., you can probably easily knock enough off to make up for interest charges.

Comment author: ChrisHallquist 19 January 2014 09:47:50AM -1 points [-]

I'm not that young--I graduated college four years ago. If I inherited ~30k, it would go into a generic early start on retirement / early start on hypothetical kids' college fund / maybe downpayment on a condo fund. Given that I'd just be holding on to it in the short term anyway, putting it in a cryonics fund doesn't actually strike me as completely crazy. Even in that case, though, I think I'd get the insurance anyway, so I'd know the inheritance money could be used for anything I needed when said need arose. Also, I understand that funding through insurance can avoid legal battles over the money.

Comment author: handoflixue 20 January 2014 05:01:31AM 0 points [-]

The average college graduate is 26, and I was estimating 25, so I'd assume that by this community's standards, you're probably on the younger side. No offense was intended :)

I would point out that by the nature of it being LIFE insurance, it will generally not be used for stuff YOU need, nor timed to "when the need arises". That's investments, not insurance :)

(And if you have 100K of insurance for $50/month that lets you early-withdrawal AND isn't term insurance... then I'd be really curious how, because that sounds like a scam or someone misrepresenting what your policy really offers :))

Comment author: polymathwannabe 11 January 2014 07:37:16PM *  0 points [-]

Let's suppose your mind is perfectly preserved (in whatever method they choose to use). Let's suppose you retain the continuity of your memories and you still feel you are "you." Let's suppose the future society is kinder, nicer, less wasteful, more tolerant, and every kid owns a puppy. Let's suppose the end of fossil fuels didn't destroy civilization because we were wise enough to have an alternative ready in time. Let's suppose we managed to save the ozone layer and reverse global warming and the world is still a more-or-less pleasant place to live in. Let's suppose the future society has actually competent people in political positions.

Good! But still...

What body do you end up having? Even if the future doctors can clone a whole new, young, strong body from your DNA (and remove all your potential genetic diseases), that doesn't mean you're immortal. Physical destruction of the body (from accidents, natural disasters, etc.) is still a concern. Your new body would still need to have cryonics insurance in case anything happens to it.

And there's always the risk of spontaneous mutations that will ruin everything: http://www.nytimes.com/2014/01/05/sunday-review/why-everyone-seems-to-have-cancer.html?_r=0 Even if sharks don't naturally die from aging, the mere fact of them living more years increases the probability that they'll eventually find something that kills them.

Digital uploading is no guarantee of immortality either. Hard drives can be damaged and destroyed too. Even after getting used to a billion years of subjective existence, you will never really, really be able to shake off the fear of annihilation from unforeseen causes. Even if you (or any of your future copies, which is no guarantee of continued identity) are one of the lucky few who make it to the end of the universe, you will still die. If a heart attack didn't get you, entropy will. So it really doesn't matter how much of an effort you make. In forty years or forty eons, you will still die. What that means to you will depend on how much you plan to do with that time, but unless we find a way to reboot the universe AND survive the reboot AND find ourselves in an environment where life can survive, the last enemy will still be undefeatable.

Comment author: polymathwannabe 11 January 2014 07:54:41PM 0 points [-]

On the other hand, you're actually paying people to get you to forfeit your chance at eternity. To paraphrase religious language, you're dangerously selling your soul too short.

Comment author: gjm 11 January 2014 08:06:31PM 0 points [-]

I don't follow how this is an argument against cryonics, unless you're talking to someone who really truly believed that cryonics meant a serious chance of actual literal immortality.

(Also, I have seen it alleged that at least one plausible model of the future of the universe has it dying after finite time, but in such a way that an infinite amount of computation can be done before the end. So it's not even entirely obvious you couldn't be subjectively immortal given sufficiently advanced technology. Though I think there have been cosmological discoveries since this model was alleged to be plausible that may undermine its plausibility.)

Comment author: Dentin 11 January 2014 06:22:24PM 0 points [-]

After I ran my estimates, I concluded that cryonics raised my odds of living to ~90 years old by approximately 5% absolute, from 50% to 55%. It's not very much, but that 5% was enough for me to justify signing up.

I think the most important part is to be honest about the fact that cryonics is a fairly expensive safety net largely consisting of holes. There are many unknowns, it relies on nonexistent technology, and in many scenarios you may become permanently dead before you can be frozen. That said, it does increase your odds of long term survivability.

Comment author: [deleted] 11 January 2014 05:11:03PM 0 points [-]

Doesn't this thread go against the principles of The Bottom Line?

Comment author: DanielLC 11 January 2014 07:08:55PM 5 points [-]

Not entirely. It's well known that, if you can't find an unbiased opinion, it's good to at least get biases from different directions. He has already seen the arguments in favor of cryonics. Repeating them would be wasting his time. Now he wants to find the arguments against. If they are more convincing than he expected, his expectations of cryonics working will go down. Otherwise, they will go up.

Comment author: itaibn0 11 January 2014 01:26:54PM *  0 points [-]

It's worth mentioning that anyone with a strong argument against cryonics is unlikely to believe that you will be persuaded by it (due to low base rates for these kinds of conversions). Thus the financial incentive is not as influential as you would like it to be.

Added: Relevant prediction

Comment author: wuncidunci 11 January 2014 01:32:00PM 2 points [-]

If someone believes they have a really good argument against cryonics, even if it only has a 10% chance of working, that is $50 in expected gain for maybe an hour of work writing it up really well. That sounds to me like it's quite worth their time.

Comment author: Trevor_Blake 12 January 2014 03:49:45PM -1 points [-]

The definition of science that I prefer is: a theory that can be tested and shown to fail. If a theory gives itself room to always add one more variable and thus never be shown to fail, it might be useful or beautiful or powerful or comforting but it won't be science. Revival 'some day' can always be one more day away, one more variable added.

Comment author: Prismattic 12 January 2014 01:03:45AM -1 points [-]

I will pay $500 to anyone who can convince me to NOT X

Is incentivizing yourself to X. Not ideal for being open to genuinely changing your mind.

Comment author: jkaufman 12 January 2014 06:18:21AM 3 points [-]

He stands to save a lot of money over the years by canceling his subscription, much more than this $500. The net short and medium term (which of course ignores the potential, long term, payoff of cryonics working) incentive is towards changing his mind and believing "not X", he's just offering to split some of that incentive with us.

Comment author: Ishaan 11 January 2014 08:56:56PM *  -1 points [-]

This post inspired me to quickly do this calculation. I did not know what the answer would be when I started. It could convince you in either direction really, depending on your level of self/altruism balance and probability estimate.

Cost of neuro-suspension cryonics > $20,000

Cost of saving a single life via effective altruism, with high certainty < $5,000

Let's say you value a good outcome with a mostly-immortal life at X stranger's regular-span lives.

Let "C" represent the threshold of certainty that signing up for cryonics causes that good outcome.

C*X / $20,000 > 1 / $5,000

C > 4/X

Conclusion: with estimates biased towards the cryonics side of the equation... in order to sign up your minimum certainty that it will work as expected must be four divided by the number of strangers you would sacrifice your immortality for.

If you value immortality at the cost of 4 strangers, you should sign up for cryonics instead of E.A. only if you are 100% certain it will work.

If you value immortality at the cost of 400 strangers, you should sign up for cryonics instead of E.A. only if you are more than 1% certain it will work.

(^ Really what is happening here is that at the cost of 4 strangers you are taking a gamble on a 1% chance... but it amounts to the same thing if you shut up and multiply)

The numbers for whole-body suspension will be rather different.
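
A minimal sketch of this threshold calculation in code, using the same rough dollar figures quoted above (they are the commenter's estimates, not authoritative data):

```python
# Rough figures quoted in the comment above, not authoritative numbers.
cryonics_cost = 20_000        # neuro-suspension lower bound
cost_per_life_saved = 5_000   # effective-altruism upper bound

def certainty_threshold(x):
    """Minimum certainty that cryonics works, above which signing up beats
    donating the same money, if you value your extended life at x strangers'
    regular-span lives."""
    return cryonics_cost / (cost_per_life_saved * x)

for x in (4, 40, 400):
    print(x, certainty_threshold(x))  # 4 -> 1.0, 40 -> 0.1, 400 -> 0.01
```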

Comment author: solipsist 11 January 2014 09:38:58PM *  4 points [-]

This sort of utilitarian calculation should be done with something like QALYs, not lives. If the best charities extend life at $150 per QALY, and a $20,000 neuro-suspension extends life by a risk-adjusted 200 QALYs, then purchasing cryonics for yourself would be altruistically utilitarian.
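
A minimal sketch of that comparison, using the commenter's illustrative numbers rather than real data:

```python
# Illustrative numbers from the comment above, not data.
charity_cost_per_qaly = 150
cryonics_cost = 20_000
risk_adjusted_qalys = 200

cryonics_cost_per_qaly = cryonics_cost / risk_adjusted_qalys  # $100 per QALY
print(cryonics_cost_per_qaly < charity_cost_per_qaly)  # True on these numbers
```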

Comment author: jkaufman 12 January 2014 06:22:12AM 2 points [-]

These calculations get really messy because the future civilization reviving you as an upload is unlikely to have their population limited by frozen people to scan. Instead they probably run as many people as they have resources or work for, and if they decide to run you it's instead of someone else. There are probably no altruistic QALYs in preserving someone for this future.

Comment author: solipsist 15 January 2014 02:42:36AM 0 points [-]

This reply made me really think, and prompted me to ask this question.

Comment author: Ishaan 11 January 2014 09:53:39PM *  1 point [-]

True, but that's much harder to estimate (because it requires real-world QALY data) and involves more uncertainty (how many QALYs to expect after revival?), and I didn't want to do that much work - just a quick estimate.

However, I'm guessing someone else has done this properly at some point?

Comment author: solipsist 11 January 2014 11:15:13PM 1 point [-]

However, I'm guessing someone else has done this properly at some point?

Note: I have not, so do not use my 200 QALYs as an anchor.

Comment author: somervta 12 January 2014 02:22:00AM -1 points [-]

<sarcasm>

Yes. Because instructing people to avoid anchoring effects works.

</sarcasm>