
Comment author: JoshuaZ 03 October 2011 01:55:54PM 4 points [-]

I think only a tiny minority of lesswrong readers believe in cryopreservation. If people genuinely believed in it then they would not wait until they were dying to preserve themselves, since the cumulative risk of death or serious mental debilitation before cryopreservation would be significant; the consequence is loss of (almost) eternal life...

Humans are not totally rational creatures. There are a lot of people who like the idea of cryonics but never sign up until it is very late. This isn't a sign of a lack of "belief" (although Aris correctly notes below that that term isn't well-defined) but rather a question of whether people simply get around to the necessary effort. Many humans have ugh fields around paperwork, don't want to send strong weirdness signals, or are worried about extreme negative reactions from their family members. Moreover, there's no such thing as "almost" eternal life: 10^30 is about as far from infinity as 1 is. What does matter, however, is that there are serious problems with the claim that one would get infinite utility from cryonics.

If people were actually trying to preserve themselves early then there would be a legal debate. There is none (unless I'm mistaken).

There have been some extremely tragic cases involving people with serious terminal illnesses such as cancer having to wait until they died (sometimes with additional brain damage as a result). This is because the cryonics organizations are extremely weak and small; they don't want to risk their situation by being caught up in the American euthanasia debate.

What is the real probability? I think the lack of success of humans in making long-term predictions suggests that we should admit we simply don't know. Cryopreservation might work. I wouldn't stake my life, or my money, on it, and I think there are more important jobs to do first.

This is one of the weakest arguments against cryonics. First of all, some human predictions have been quite accurate. The main weakness comes from the fact that almost every single two-bit futurist feels a need to make predictions, almost every single one of which goes for narrative plausibility and thus has massive issues with burdensome details and the conjunction fallacy.

In looking at any specific technology, we can examine it in detail and try to make predictions about when it will function. If you actually think that humans are really bad at making predictions, then you shouldn't just say "we simply don't know"; instead you should adjust your prediction to be less confident, i.e. closer to 50%. This means that if you assign a low probability to cryonics working, you should update towards giving it an increased chance of being successful.
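For concreteness, here is a minimal sketch of that "adjust toward 50%" step. The linear-pooling rule and the reliability weight are illustrative assumptions, not anything JoshuaZ specifies:

```python
def adjust_toward_maxent(p_estimate, reliability):
    """Shrink a probability estimate toward the maximum-entropy value of 0.5.

    p_estimate  -- raw probability that the technology will work
    reliability -- how much weight to put on human long-term forecasting (0 to 1)
    """
    return reliability * p_estimate + (1 - reliability) * 0.5

# Someone who gives cryonics a 5% chance but trusts long-term forecasts
# only 30% ends up much closer to even odds:
print(adjust_toward_maxent(0.05, 0.3))  # 0.365
```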

Comment author: dilaudid 03 October 2011 03:32:16PM 0 points [-]

"The main weakness comes from the fact that almost every single two-bit futurist feels a need to make predictions, almost every single one of which goes for narrative plausibility and thus has massive issues with burdensome details and the conjunction fallacy." - no. The most intelligent and able forecasters are incapable of making predictions (many of them worked in the field of AI). Your argument about updating my probability upwards because I don't understand the future is fascinating. Can you explain why I can't use the precise same argument to say there is a 50% chance that Arizona will be destroyed by a super-bomb on January 1st 2018?

Comment author: ArisKatsaris 03 October 2011 12:51:47PM 3 points [-]

I think only a tiny minority of lesswrong readers believe in cryopreservation. If people genuinely believed in it then they would not wait until they were dying to preserve themselves

I think you need to define your usage of the term "believe in" slightly better. What probability of cryo success qualifies as "belief in cryopreservation"?

If you're talking about percentages over 90% -- indeed I doubt that a significant number of lesswrong readers would have nearly that much certainty in cryo success.

But for any percentages below that, your arguments become weak to the point of meaninglessness -- for at that point it becomes reasonable to use cryopreservation as a last resort, and hope for advancements in technology that'll make cryopreservation surer -- while still insuring yourself in case you end up in a position where you don't have the luxury of waiting any more.

Comment author: dilaudid 03 October 2011 02:53:48PM 0 points [-]

Belief is pretty unambiguous - being sure of something (100% probability, like cogito ergo sum), or a strong trust (a probability that is not even near 90% is not belief). So it seems we are in agreement: you don't believe in it, and neither do most Less Wrong readers. I agree that, based on that argument, whether the probability is 10^-1000 or 75% is still up for debate.

Comment author: dilaudid 03 October 2011 12:07:10PM -2 points [-]

I think only a tiny minority of lesswrong readers believe in cryopreservation. If people genuinely believed in it then they would not wait until they were dying to preserve themselves, since the cumulative risk of death or serious mental debilitation before cryopreservation would be significant; the consequence is loss of (almost) eternal life, while by early cryopreservation all they have to lose is their current, finite life in the "unlikely" event that they are not successfully reanimated. If people were actually trying to preserve themselves early then there would be a legal debate. There is none (unless I'm mistaken).

Further evidence against this argument is the tiny sums that people are willing to pay. How much would you pay for eternal life? More or less than $8,219 (which is the present value of an annual payment of $300 in perpetuity)? Sounds too cheap to be genuine, too expensive to waste my money on. If I genuinely believed in cryopreservation I would be spending my net worth, which for most Americans over 75 years old is > $150k. For Less Wrong readers, I would guess the median net worth at age 75 would be > $1m.
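For reference, the $8,219 figure follows from the standard perpetuity formula; the ~3.65% discount rate below is backed out from the numbers in the comment rather than stated in it:

```python
# Present value of a perpetuity paying C per year at discount rate r: PV = C / r.
C = 300.0    # annual payment (dollars)
r = 0.0365   # discount rate implied by the $8,219 figure; an assumption, not stated above
pv = C / r
print(round(pv, 2))  # 8219.18
```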

What is the real probability? I think the lack of success of humans in making long-term predictions suggests that we should admit we simply don't know. Cryopreservation might work. I wouldn't stake my life, or my money, on it, and I think there are more important jobs to do first.

Comment author: Relsqui 13 October 2010 07:26:29PM 2 points [-]

I'm sure that's true. The difference is that all that extra intelligence is tied up in a fallible meatsack; an AI, by definition, would not be. That was the flaw in my analogy--comparing apples to apples was not appropriate. It would have been more apt to compare a trowel to a backhoe. We can't easily parallelize among the excess intelligence in all those human brains. An AI (of the type I presume singulatarians predict) could know more information and process it more quickly than any human or group of humans, regardless of how intelligent those humans were. So, yes, I don't doubt that there's tons of wasted human intelligence, but I find that unrelated to the question of AI.

I'm working from the assumption that folks who want FAI expect it to calculate, discover, and reason things which humans alone wouldn't be able to accomplish for hundreds or thousands of years, and which benefit humanity. If that's not the case I'll have to rethink this. :)

Comment author: dilaudid 14 October 2010 12:00:09PM 1 point [-]

I agree FAI should certainly be able to outclass human scientists in the creation of scientific theories and new technologies. This in itself has great value (at the very least we could spend happy years trying to follow the proofs).

My issue is that I think it will be insanely difficult to produce an AI, and I do not believe it would produce a utopian "singularity" where people would actually be happy. The same could be said of the industrial revolution. Regardless, my original post is borked. I concede the point.

Comment author: Relsqui 13 October 2010 08:35:43AM 0 points [-]

That depends on what you're trying to accomplish. If you're not using your 200MHz machine because the things you want to work on require at least a gig of processing power, buying the new one might be very productive indeed. This doesn't mean you can't find a good purpose for your existing one, but if your needs are beyond its abilities, it's reasonable to pursue additional resources.

Comment author: dilaudid 13 October 2010 11:14:02AM 0 points [-]

Yeah I can see that applies much better to intelligence than to processing speed - one might think that a super-genius intelligence could achieve things that a human intelligence could not. Gladwell's Outliers (embarrassing source) seems to refute this - his analysis seemed to show that IQ in excess of 130 did not contribute to success. Geoffrey Miller hypothesised that intelligence is actually an evolutionary signal of biological fitness - in this case, intellect is simply a sexual display. So my view is that a basic level of intelligence is useful, but excess intelligence is usually wasted.

Comment author: Relsqui 13 October 2010 07:54:40AM *  10 points [-]

I have eight computers here with 200 MHz processors and 256MB of RAM each. Thus, it would not benefit me to acquire a computer with a 1.6GHz processor and 2GB of RAM.

(I agree with your premise, but not your conclusion.)

Comment author: dilaudid 13 October 2010 08:11:34AM *  1 point [-]

To directly address your point - what I mean is that if you have 1 computer that you never use, with a 200MHz processor, I'd think twice about buying a 1.6GHz computer, especially if the 200MHz machine is suffering from depression due to its feeling of low status and worthlessness.

I probably stole that from The Economist too.

Comment author: RichardKennaway 13 October 2010 07:43:36AM 3 points [-]

Did you have this in mind? Cognitive Surplus.

Comment author: dilaudid 13 October 2010 07:52:53AM 0 points [-]

Yes - thank you for the cite.

Comment author: dilaudid 13 October 2010 07:40:08AM *  19 points [-]

There is already a vast surplus of unused intelligence in the human race, so working on generalized AI is a waste of time (90%)

Edit: "waste of time" is careless, wrong and a bit rude. I just mean a working generalized AI would not make a major positive impact on humankind's well-being. The research would be fun, so it's not wasted time. Level of disagreement should be higher too - say ~95%.

Comment author: pricetheoryeconomist 07 May 2010 04:43:12PM 4 points [-]

I don't see this as a valid criticism, if it is intended to be a dismissal. The addendum "beware this temptation" is worth highlighting. While this is a point worth making, the response "but someone would have noticed" is shorthand for "if your point was correct, others would likely believe it as well, and I do not see a subset of individuals who also are pointing this out."

Let's say there are ideas that are internally consistent, rational, or good, and ideas that are internally inconsistent, irrational, or bad (and thus tend not to be propounded). Each idea comes as a draw from a bin of ideas, with some proportion that are good and some that are bad.

Further, each person has an imperfect signal on whether or not an idea is good. Finally, we only see ideas that people believe are good, setting the stage for sample selection.

Therefore, when someone is propounding an idea, the fact that you have not heard of it before makes it more likely to have been censored--that is, more likely to have been judged a bad idea internally and thus never suggested. I suggest as a Bayesian update that, given you have never heard the idea before, it is more likely to be internally inconsistent/irrational/bad than if you hear it constantly, the idea having passed many people's internal consistency checks.
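To make the selection effect concrete, here is a toy Bayes calculation; the prior, the signal accuracy, and the group size are all made-up illustrative numbers, not anything from the comment:

```python
p_good = 0.3    # prior fraction of good ideas in the bin
accuracy = 0.8  # chance one person's internal check classifies an idea correctly

def p_good_given_endorsements(k, n):
    """P(idea is good | k of n independent evaluators judged it good)."""
    like_good = accuracy ** k * (1 - accuracy) ** (n - k)  # likelihood if the idea is good
    like_bad = (1 - accuracy) ** k * accuracy ** (n - k)   # likelihood if the idea is bad
    return p_good * like_good / (p_good * like_good + (1 - p_good) * like_bad)

print(p_good_given_endorsements(0, 5))  # idea you have never heard endorsed: ~0.0004
print(p_good_given_endorsements(5, 5))  # idea you hear constantly: ~0.998
```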

Comment author: dilaudid 15 May 2010 06:13:34PM 1 point [-]

Yes - this is exactly the point I was about to make. Another way of putting it is that an argument from authority is not going to cut the mustard in a dialogue (i.e. in a scientific paper you will be laughed at if your evidence for a theory is another scientist's say-so), but as a personal heuristic it can work extremely well. While people sometimes "don't notice" the 900-pound gorilla in the room (the Catholic sex abuse scandal being a nice example), 99% of the things that I hear this argument used for turn out to be total tosh (e.g. Santilli's Roswell Alien Autopsy film, Rhine's ESP experiments). As Feynman probably didn't say, "Keep an open mind, but not so open that your brains fall out".

Comment author: jhuffman 26 January 2010 01:08:07PM *  5 points [-]

For a single individual the cost is much more than $300. Alcor's website says membership is $478 annually, plus another $120 a year if you elect the stand-by option. Also you need $150K worth of life insurance, which will add a bit more.

Peanuts! You say...

I really don't see the point of signing up now, because I really don't see how you can avoid losing all the information in your mind to autolysis unless you get a standby or at least a very quick (within an hour or two) vitrification. That means I have to be in the right place at the right time when I die, and I simply don't think that's likely now - when any death I experience would almost certainly be sudden and it would be hours and hours before I'm vitrified.

I mean, if I get a disease and have some warning, then sure, I'll consider a move to Phoenix, pay them their $20k surcharge (about a lifetime's worth of dues anyway), and pay for the procedure in cash up-front. There is no reason for me to put money into dues now when the net present value of those payments exceeds the surcharge they charge if you are a "last minute" patient (see the rough numbers sketched after this comment).

I understand this isn't an option if you don't have at least that much liquidity, but since I do, it makes sense to me to keep it all (and future payments) under my control.

Hopefully that decision is a long time from now and I'll be more optimistic about the whole business at that time. I'll also have a better picture of my overall financial outlook and whether I'd rather spend that money on my children's future than on my own doubtful one.
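A rough sketch of the dues-vs-surcharge NPV comparison above; the discount rates and the 40-year horizon are illustrative assumptions, not figures from the comment:

```python
annual_dues = 478 + 120   # membership plus stand-by option, per the figures above
surcharge = 20_000        # quoted "last minute" surcharge
years = 40                # assumed number of years of dues otherwise paid

for rate in (0.00, 0.01, 0.03):
    npv = sum(annual_dues / (1 + rate) ** t for t in range(1, years + 1))
    print(f"rate={rate:.0%}: NPV of dues ~ ${npv:,.0f} vs surcharge ${surcharge:,}")
```

Whether the dues or the surcharge comes out ahead depends mainly on the discount rate and on how long the dues would otherwise be paid.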

In response to comment by jhuffman on Normal Cryonics
Comment author: dilaudid 01 February 2010 12:52:39PM 2 points [-]

jhuffman's point made me think of the following devil's advocacy: If someone is very confident of cryonics, say more than 99% confident, then they should have themselves preserved before death. They should really have themselves preserved immediately - otherwise there is a higher risk that they will die in a way that causes the destruction of their mind than there is that cryonics will fail. The amount that they would be willing to pay is also irrelevant - they won't need the money once they are preserved. I appreciate that there are probably laws against preserving healthy adults, so this is strictly a thought experiment.

As people get older, their risk of death or brain damage increases. This means that as someone gets older, the confidence level at which they should seek early preservation decreases. Also, as someone gets older their expected "natural" survival time decreases, by definition, so the payoff for not seeking early preservation shrinks all the time. This seems to give the argument some force: if there is a 10% probability that cryonics will succeed, then I really can't see why anyone would let themselves get within 6 years of likely death - they are putting a second lifetime at risk for 6 years of less and less healthy life.

Finally, the confidence level relates to cost. If people can be shown to have a low level of confidence in cryonics, then their willingness to pay should be lower. The figures I've seen quoted require a sum of $150,000. (Whether this is paid via life insurance or not is irrelevant - you must pay for it in the premium since, if you're going to keep the insurance until you die, the probability of the insurer paying out is 100%.) If the probability of cryonics working is 10%, then the expected cost per successful reanimation is $1.5 million. This is a pretty conservative cost I think - doubtless for some who read this blog it is small change. Not for me sadly though :)
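The cost-per-success arithmetic above, with a couple of extra confidence levels added for contrast (the 50% and 1% rows are illustrative, not figures from the comment):

```python
price = 150_000  # quoted all-in cost of cryopreservation
for p_success in (0.50, 0.10, 0.01):
    expected_cost = price / p_success  # expected cost per successful reanimation
    print(f"P(success) = {p_success:.0%}: ${expected_cost:,.0f} per revival")
```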
