
This Didn't Have To Happen

Post author: Eliezer_Yudkowsky 23 April 2009 07:07PM

My girlfriend/SO's grandfather died last night, running on a treadmill when his heart gave out.

He wasn't signed up for cryonics, of course.  She tried to convince him, and I tried myself a little the one time I met her grandparents.

"This didn't have to happen.  Fucking religion."

That's what my girlfriend said.

I asked her if I could share that with you, and she said yes.

Just so that we're clear that all the wonderful emotional benefits of self-delusion come with a price, and the price isn't just to you.

Comments (183)

Comment author: Vladimir_Nesov 23 April 2009 07:30:05PM 6 points [-]

I doubt religion is a significant cause of people not becoming persuaded. The walls of taboo around the subject and the strength of the absurdity heuristic seem to me to be about as high in atheists' minds. At least, that's my experience, and it matches my intuition about how I'd expect things to be. Does anyone have any kind of anecdotal data points on that?

Comment author: Eliezer_Yudkowsky 24 April 2009 06:54:05PM 5 points [-]

Well - it is my girlfriend who said it. I think the primary damage done by religion to atheists is the propagation of such things as "No one can possibly know" (which even some atheists unthinkingly repeat), a general tradition of avoiding the subject, an idea that you can say anything you want, and the contamination-by-association of any possible trick for living on after you stop breathing.

The question you want is: in a world where religion had never existed, but people's reasoning abilities were otherwise at mostly the same level, how many people would now be signed up for cryonics? This is the damage done by religion alone.

Comment author: steven0461 24 April 2009 07:03:26PM 3 points [-]

Arguably religion does the most damage by de-legitimizing concerns like immortality and discontinuous world-changing events by surrounding them with a cloud of wishful and otherwise mistaken thinking.

Comment author: thomblake 23 April 2009 07:38:55PM 3 points [-]

Anecdotal? Sure. I'm pretty much an atheist and I'm not signed up for cryonics (and likely never will be).

Less-anecdotally, you could compare the number of atheists and/or non-religious people to the number actually signed up for cryonics. Without having the numbers handy, I'd guess that at least shows religion doesn't tell the whole story.

Comment author: JulianMorrison 23 April 2009 07:45:27PM 2 points [-]

Why? Are you the sort of person who refuses to use the save-points in computer games?

Comment author: thomblake 23 April 2009 07:50:59PM 0 points [-]

Right now, there's virtually no evidence that cryonics works. If I wanted to spend money on something not proven to work, I could do it much more cheaply - I bet someone on the street outside would happily sell me an immortality potion for like 5 bucks.

It makes a lot more sense to me to spend my money on things that will make my life better, for reals.

Comment author: Vladimir_Nesov 23 April 2009 07:56:31PM *  5 points [-]

Right now, there's virtually no evidence that cryonics works.

What evidence would you expect if it did work (that is, if it were true that N years in the future the cryonically preserved people will return to life)? What kind of evidence would you accept as sufficient to be persuaded that it works?

Comment author: thomblake 23 April 2009 08:04:07PM 0 points [-]

What kind of evidence would you accept as sufficient to be persuaded that it works?

Probably something like this scenario (which I just made up):

Bob signs up for cryonics. Then Bob dies of something. So Bob gets frozen some time later. Then at some point in the future, Bob is brought back to life right as rain.

Basically, the process working ever would be evidence that the process might ever work. Until then, consider me in the 'control group'.

Comment author: Eliezer_Yudkowsky 23 April 2009 08:13:20PM 13 points [-]

I think it was Mike Li who analogized this to refusing to get on an airplane until after it has arrived in France. The whole point of cryonics is as an ambulance ride to the future; once you're in the future, you don't need cryonics any more. I severely, severely doubt that anyone will ever again be frozen after the time a cryonics revival is possible.

Isn't there some gut, intuitive level on which you can see that your objection obviously makes no sense, because conditioning on the proposition that cryonics with present-day vitrification technology does in fact work as an ambulance ride to the future, we still would not expect to see a revival in the present time?

Comment author: thomblake 23 April 2009 08:18:08PM 2 points [-]

I think it was Mike Li who analogized this to refusing to get on an airplane until after it has arrived in France.

I take it more to be like refusing to get on an airplane until any one has arrived anywhere, ever.

For all I know, cryonics makes it harder to revive people. Not that I think it's likely that's the case, but it certainly doesn't seem worth my time and money.

Comment author: JulianMorrison 23 April 2009 08:28:00PM *  10 points [-]

It's like being the guy who checks the Wright brothers' calculations, finds them correct, and still refuses to leap onboard their untried prototype to escape a tiger, but instead prefers to stand and be eaten.

Look, conventional death makes it maximally hard to revive a person. Their information has dissipated. You would essentially need a time machine. Cryonics is a guaranteed improvement over that - at least you have something to work with.

Comment author: thomblake 23 April 2009 08:34:37PM 0 points [-]

It's like being the guy who checks the Wright brothers' calculations, finds them correct,

Perhaps more like the Wright brothers were planning to figure out how to land the plane after they throw it off a cliff. And your example throws out the benefits of not signing up for cryonics, which are a major factor for me.

Comment author: Vladimir_Nesov 23 April 2009 08:28:14PM 1 point [-]
Comment author: thomblake 23 April 2009 08:35:56PM 6 points [-]

Sure - if Einstein signed up for cryonics, I might even follow suit. But a lot of really smart people are signing up for 'heaven', and I'm not listening to them, either.

Comment author: Vladimir_Nesov 23 April 2009 08:24:31PM 1 point [-]

I severely, severely doubt that anyone will ever again be frozen after the time a cryonics revival is possible.

This is too unintuitive an assumption to use in a basic refutation. I doubt it's even true, if revival is performed by non-AGI means, simply because of improved preservation technology, which may well become possible at some point.

Comment author: thomblake 23 April 2009 08:31:01PM 0 points [-]

Agreed. Suppose we simply learn how to revive someone who's frozen first (unlikely, I know). Then we would selectively freeze/unfreeze people based on the further limitations of medicine at the time (can treat gunshot wounds / can't treat leukemia).

Comment author: Vladimir_Nesov 23 April 2009 08:39:16PM *  0 points [-]

Yes, that's one use case. I'm really not competent to estimate with any certainty how biologically feasible that is, and I assume it's not very feasible. If I remember correctly, the brains of currently preserved patients, even after vitrification, crack during freezing, so they won't work even if unfrozen, detoxified, etc. I don't know whether it's possible to find a solution to this problem with anything from the repertoire of current technology.

But the decision concerns the current situation. What is your answer to these questions?

Comment author: Vladimir_Nesov 23 April 2009 08:09:53PM *  4 points [-]

What kind of evidence would it take to convince you that cryonics has a small, but considerable chance of working in the future, prior to there being any successful revivals?

Comment author: thomblake 23 April 2009 10:01:34PM -2 points [-]

I don't like to deal in probabilities, but I'd reckon a successful revival of a dolphin would count. Short of that? Probably nothing, if by 'considerable' you mean 'worth spending my money on'. Things other than evidence might convince me though - like my wife wanting to sign up for cryonics for whatever fool reason.

Comment author: Mulciber 23 April 2009 10:34:00PM 2 points [-]

Does it have to be a dolphin, or would successful revival of a mouse count?

Try not to look up if that's been done before you answer. If you do know, try to imagine whether you'd count it as evidence, if you didn't already know.

Comment author: Vladimir_Nesov 23 April 2009 10:16:09PM *  1 point [-]

I don't like to deal in probabilities, but I'd reckon a successful revival of a dolphin would count.

No, that's out.

Short of that? Probably nothing, if by 'considerable' you mean 'worth spending my money on'.

Yes, I do mean that.

This means that, no matter what you observe, you always estimate the probability of cryonics working as very low, right up to the point where it does succeed (if that ever happens). Which is equivalent to a priori estimating the probability of it eventually working as very low, too.

Do you believe that progress will never be made, that it will never be possible to revive a very slowly changing frozen body? In 100 years? In 10000 years? Never ever?

Comment author: Jack 24 April 2009 08:23:47PM 0 points [-]

I tend to vacillate on the cryonics debate, and for me it's beside the point, since I really can't afford it as a broke college student (who isn't particularly at risk of dying). But one can certainly imagine better evidence that it would work other than an actual revivification. All sorts of discoveries in cryobiology could provide additional evidence that cryonics will work. Better results freezing and reviving other animals, for example.

Comment author: Vladimir_Nesov 24 April 2009 08:37:16PM *  1 point [-]

Inverting the event, you may say that you are looking for evidence that it will never, ever be possible to revive someone. What sort of evidence would work for that? You are not looking for what is impossible now; you are not looking at what will be impossible for the next 50 years. You are looking for what will never be possible.

I don't see how any details of the progress in technology are in the slightest relevant to that question.

Comment author: Jack 24 April 2009 08:48:42PM *  0 points [-]

That is a good point. But progress matters because there is a non-zero chance that some disaster strikes, or the cryonics firm dissolves, and you never get revived. I also think the farther into the future you get, the less interested future people will be in reviving (by comparison) the mentally inferior. Plus, I'd much rather wake up sooner than later, since I'd rather not be so far behind my new contemporaries. So confidence that revival will be possible sooner rather than later increases the incentive to pay for the procedure.

Edit - also, the longer revivification technology takes, the greater the chances are for one of Alicorn's dystopian scenarios. Plus, the far future might be thoroughly repugnant to the values of the present day, even if it isn't a dystopia.

Comment author: Mulciber 24 April 2009 08:54:59PM 0 points [-]

I also think the farther into the future you get, the less interested future people will be in reviving (by comparison) the mentally inferior.

This sounds possible but not at all obvious. It seems to me that so far, interest in historical people and compassion for the mentally inferior have if anything increased over time. This certainly doesn't mean they'll continue to do so out into the far future, but it does mean I'd need some really good reasons to support expecting them to.

Comment author: Jack 24 April 2009 08:59:28PM 0 points [-]

So I can envision future persons wanting to meet some people from the past for historical reasons, as you say. But I'm not sure we'd bring back thousands of Homo habilis if we had the chance. One or two might be interesting - but what would we do with thousands?

Comment author: JulianMorrison 23 April 2009 08:10:26PM 2 points [-]

Vitrification works in organs. Neurons are being simulated in software. Stem cell tech is improving. We already pretty much have the electron-microscope and chemical assay tech to dice, slice, scan and digitize a frozen brain. We don't yet know exactly what to digitize, but neuroscience is a heavily studied field.

The fact of revival isn't here yet, but the peripheral evidence is strong.

Comment author: Vladimir_Nesov 23 April 2009 08:13:46PM *  1 point [-]

Vitrification works in organs. Neurons are being simulated in software. Stem cell tech is improving. We already pretty much have the electron-microscope and chemical assay tech to dice, slice, scan and digitize a frozen brain. We don't yet know exactly what to digitize, but neuroscience is a heavily studied field.

You may be surprised, but none of these arguments significantly move me. I think the damage is too great and complex for such techniques to work for a long time, and by the time something is finally up to the task, the particular list of hacks you mention won't be relevant at all.

Comment author: JulianMorrison 23 April 2009 08:17:06PM 1 point [-]

I've seen slides; the earliest ones were really wrecked by ice, but a modern vitrification process is much less destructive. Cryonics is going to be very much LIFO, but the last few in might well be fixable with barely more than hacks.

Comment author: Vladimir_Nesov 23 April 2009 07:43:16PM 0 points [-]

Less-anecdotally, you could compare the number of atheists and/or non-religious people to the number actually signed up for cryonics.

I assume you mean to compare the ratio of atheists among the general population to the ratio of atheists among those signed up. That won't work very well, as exposure to the argument is too tilted towards atheists, and it's too hard to correct for that.

Comment author: thomblake 23 April 2009 07:46:25PM *  1 point [-]

Nope, I meant compare the amount signed up to the number of atheists (raw numbers). That doesn't tell you whether religion is a factor in avoiding cryonics, but it does tell you whether religion is the only thing keeping everybody from signing up for cryonics. Since by far the majority of atheists are not signed up for cryonics, it's pretty clear that religion isn't what's stopping people.

ETA: Okay, Vladimir_Nesov (below) has convinced me I wasn't considering the same question.

Comment author: Vladimir_Nesov 23 April 2009 07:52:57PM 3 points [-]

Nope, I meant compare the amount signed up to the number of atheists (raw numbers).

That's silly. Too few people know of the idea, and it's too hard to persuade any given person. The question wasn't about absolute difficulty of getting the argument through, but on the relative effect of being religious on the ability of a person to accept the procedure.

Comment author: mattnewport 23 April 2009 09:01:26PM 1 point [-]

I'm an atheist and I'm not currently persuaded by the case for cryonics. I'm unpersuaded purely on a (non-rigorous, informal) cost-benefit analysis. It just seems to me that there are better things to spend my money on. It seems to me that you can make a similar case for being a survivalist - stocking up on guns, ammo and emergency supplies in case of major disaster - and while the argument is sound I just don't judge the expected utility to be worth the outlay. The social stigma is certainly a factor in both cases.

Comment author: Vladimir_Nesov 23 April 2009 09:12:43PM 0 points [-]

It seems to me that you can make a similar case for being a survivalist - stocking up on guns, ammo and emergency supplies in case of major disaster - and while the argument is sound I just don't judge the expected utility to be worth the outlay.

Hmmm... Interesting point; I'm not at all sure how feasible the advantage of having a survivalist hideout is. On the other hand, my position on cryonics pushes its feasibility through the roof, so it's easier to decide.

Comment author: mattnewport 23 April 2009 09:22:07PM 5 points [-]

A lot of the factors you have to consider when deciding the likelihood of being revived with cryonics are the same risk factors you'd consider for maintaining a survivalist hideout but operating in the opposite direction. The more likely you consider economic or social collapse, natural disasters or other societal disruptions which would make a cryonic revival less likely the more value you'd place on survivalist preparations. It's plausible to me that my chances for living long enough to see radical life extension become feasible would be improved by survivalist preparations to a greater extent than expending the same resources on cryonics would improve my chances of being revived at some future date. The relative benefits here would depend on age and other personal factors, though again I'm not claiming to have done a rigorous cost-benefit analysis.

Comment author: Vladimir_Nesov 23 April 2009 09:53:12PM 0 points [-]

Factors may be the same, but the probabilities of success are on different sides of these factors. Where cryonics succeeds, a survivalist hideout is likely unnecessary, but where cryonics fails, a survivalist hideout is only useful in the borderline cases where society breaks down but it's still possible to survive. And there, how much does the advance preparation help? Groups of people will still be more powerful and resilient, so I'm not convinced it's of significant benefit.

Comment author: mattnewport 23 April 2009 10:12:07PM 7 points [-]

I think the history of the 20th Century has quite a few examples of situations where society broke down to a large extent within certain regions and yet it was possible to survive (in a world which overall was progressing technologically) for long enough to relocate somewhere safer. Survival in those situations probably depends on luck to quite an extent but survivalist type preparations would likely have increased the chance of survival. The US (where cryonics seems to be most popular) did not really suffer any such situations in the 20th century, with the possible exception of a few natural disasters, but much of Europe and Asia did.

I think the main area where I differ from most cryonics advocates on the probability of it working is in the likelihood of the cryonics institution surviving intact until revival is possible. I think in a future scenario somewhat like WWII in Europe or the cultural revolution in China a cryonics institution would be unlikely to survive but human civilization would as would lucky and/or prepared individuals.

Comment author: JulianMorrison 23 April 2009 09:03:47PM -1 points [-]

How much do you expect it to cost?

Comment author: mattnewport 23 April 2009 09:09:44PM 0 points [-]

At a guess somewhere around a $250,000 value life insurance policy? I don't know how much that costs but somewhere around $2000 a year maybe? I could go and look it up but those are my off the top of my head guesses.

Comment author: pwno 24 April 2009 03:13:01AM 1 point [-]

$120/year*

Comment author: CronoDAS 24 April 2009 02:25:07AM 1 point [-]

The Cryonics Institute does whole-body preservation for $28,000. (I looked it up.)

Comment author: mattnewport 24 April 2009 09:36:23AM 1 point [-]

That is cheaper than I expected. Surprisingly cheap - storage costs must be pretty low if that covers initial preservation and enough funds for the investment return to cover storage in perpetuity.

Comment author: Eliezer_Yudkowsky 24 April 2009 06:55:30PM 2 points [-]

Liquid nitrogen is not very expensive.

Comment author: mattnewport 24 April 2009 07:38:15PM 0 points [-]

Still, that money presumably has to fund storage costs in perpetuity. Assuming some of the money goes to up-front freezing costs, say you have $25,000 in 20-year TIPS yielding a fairly risk-free, inflation-indexed 2.5%; that gives you $625 a year to cover storage. That barely pays for a small self-storage unit around here. It's almost suspiciously cheap.
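A quick sketch of that back-of-the-envelope calculation (the $25,000 principal and 2.5% real yield are assumptions from the comment above, not actual Cryonics Institute finances):

```python
# Perpetuity sketch: invest the remainder of the fee and spend only
# the real (inflation-indexed) yield on storage, forever.
principal = 25_000   # assumed: fee remaining after up-front freezing costs
real_yield = 0.025   # assumed: roughly risk-free TIPS yield
annual_storage_budget = principal * real_yield
print(annual_storage_budget)  # 625.0
```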

Comment author: Eliezer_Yudkowsky 24 April 2009 08:15:36PM 3 points [-]

Liquid nitrogen is on the order of $80 - which is either the cost per month per cryostat or the cost per customer per year, I don't recall which. The Cryonics Institute owns its own building, and you can keep more than one body in a single cryostat (big cylinder of liquid nitrogen).

The annual fixed costs of cryonics are practically nothing. The costs would decline even further with economies of scale and the scale to invest in better technology. Immortality for everyone in the United States would be a rounding error in the stimulus bill.

Comment author: Psy-Kosh 24 April 2009 08:34:35PM 0 points [-]

For everyone? Well, there'd also be the cost of building the facilities... Anyways, maybe we really should try to push something like that? (Yeah yeah, I know, unlikely.)

Anyways, did you get the PM I sent? (About talking me through some of the specifics of actually signing up?)

Comment author: JulianMorrison 24 April 2009 08:55:27AM *  0 points [-]

Also, don't bother with whole-body preservation. It's useless, because regrowing a body is the least of revival problems, and it's harmful, because your brain spends longer warm while the whole useless hunk of meat attached to it is cooling down. Plus it costs more.

Comment author: pjeby 24 April 2009 07:03:04PM 4 points [-]

Also, don't bother with whole-body preservation. It's useless, because regrowing a body is the least of revival problems,

I'd feel more comfortable with that if we knew more about the extent to which the glial cells around the heart -- not to mention the remainder of the nervous system -- play a role in learning, decision-making, emotion, etc. I'd hate to lose any non-recoverable data from those systems and have to recreate it, e.g. learning to walk again, or missing emotional reactions, or who knows what else. I think I'd want to keep the "useless hunk of meat" around, just in case, even if it had to be separated from the head for better cooling.

Comment author: orthonormal 24 April 2009 07:09:18PM 8 points [-]

If they did play such an important role in human thought, wouldn't you expect there to be case studies of people who become psychologically impaired after heart surgery (in particular, the installation of an artificial heart)?

Comment author: Lawliet 24 April 2009 09:17:46AM 1 point [-]

CI only offers full-body, but it's cheaper than Alcor's neuro option.

Comment author: stcredzero 23 April 2009 07:58:42PM 1 point [-]

I find that my absurdity heuristic gives a strong signal against. Also, we can't be certain that it will work and we can't be certain how well it will work. This makes it very hard for me to evaluate as an investment. If I can't quantify the payoff or the odds, how can I justify the expense?

Comment author: Vladimir_Nesov 23 April 2009 08:06:36PM 6 points [-]

I find that my absurdity heuristic gives a strong signal against. Also, we can't be certain that it will work and we can't be certain how well it will work.

That's how the absurdity heuristic is supposed to work. But sometimes, it goes hilariously wrong, turning into an absurdity bias. You can't be certain, but you can make estimates.

This makes it very hard for me to evaluate as an investment. If I can't quantify the payoff or the odds, how can I justify the expense?

Every time you decide one way or the other, you make an implicit estimate. If you decide not to invest, you basically state that, given your current knowledge, you judge the investment as not worthwhile. This is not at all the same as "not being able to evaluate". You have to, every time you need to make a decision. What remains is to make sense of your decision, trying not to get it wrong.

Comment author: CronoDAS 24 April 2009 02:03:28AM 8 points [-]

Is it okay to prefer to be an organ donor instead of signing up for cryonics?

Comment deleted 24 April 2009 10:14:28PM [-]
Comment author: robzahra 24 April 2009 11:30:02PM *  2 points [-]

Seems worth mentioning: I think a thorough treatment of what "you" want needs to address extrapolated volition and all the associated issues that raises.
To my knowledge, some of those issues remain unsolved, such as whether different simulations of oneself in different environments necessarily converge (seems to me very unlikely, and this looks provable in a simplified model of the situation), and if not, how to "best" harmonize their differing opinions... similarly, whether a single simulated instance of oneself might itself not converge, or not provably converge, on one utility function as simulated time goes to infinity (seems quite likely; moreover, provable, in a simplified model), etc., etc.
If conclusive work has been done of which I'm unaware, it would be great if someone wants to link to it.
It seems unlikely to me that we can satisfactorily answer these questions without at least a detailed model of our own brains linked to reductionist explanations of what it means to "want" something, etc.

Comment author: simpleton 24 April 2009 04:07:58AM 2 points [-]

This is the only reason I haven't signed up.

What I want to do is sign up for neuropreservation and donate any organs and tissues from the neck down, but as far as I can tell that's not even remotely feasible. Alcor's procedure involves cooling the whole body to 0C and injecting the cryoprotectant before removing the head (and I can understand why perfusion would be a lot easier while the head is still attached). Also, I think it's doubtful that the cryonics team and the transplant team would coordinate with each other effectively, even if there were no technical obstacles.

Comment author: Eliezer_Yudkowsky 24 April 2009 02:05:33AM 2 points [-]

You'd need reliable statistics on the average number of lives saved per organ donor. If it works out to 0.1 then I wouldn't accept that reply, no.

Comment author: CronoDAS 24 April 2009 03:23:00AM 5 points [-]

A Google search gives some hospitals and organizations claiming an average of 3.75 lives saved per organ donor.

Comment author: Jack 24 April 2009 07:52:46PM *  2 points [-]

I imagine this figure is per case of successful recovery. But a lot of people die such that their organs aren't recovered. That obviously needs to be factored in.

**On edit- It occurs to me that a lot of the cases where organs aren't recovered are also cases where cryogenic preservation wouldn't be possible. So I might be wrong about this. Maybe 3.75 is the right number to use.

Can someone think of cases where preservation is possible but organ recovery isn't?**

Comment author: SoullessAutomaton 24 April 2009 09:12:34PM 3 points [-]

Can someone think of cases where preservation is possible but organ recovery isn't?

Elderly patient suffering organ failure due to aging. Death by cancer (not of the brain). Potential donor had HIV or other very dangerous infectious diseases. Severe abdominal trauma.

Probably other stuff, too.

Comment author: Eliezer_Yudkowsky 24 April 2009 04:23:30AM 1 point [-]

Sounds slightly suspicious. QALYs?

Comment author: orthonormal 24 April 2009 04:17:25AM *  4 points [-]

I'm actually pretty surprised that you haven't looked this up yourself yet. Is there a point of effectiveness at which you would switch to organ donation over cryopreservation?

ETA: Yes, I'm holding you to a higher standard of rationality, diligence and altruism than I use for others, including myself.

Comment author: Eliezer_Yudkowsky 24 April 2009 04:21:57AM 6 points [-]

Probably not, for two reasons. One, Kantian-type reasoning: Someone has to lead the way through the transition, since the ideal would be enough people cryosuspending that they could just integrate the organ donation protocols into it. Two, and more important, there's a nonzero possibility that someone ends up wanting my brain for something interesting Before It's Over - that I wouldn't literally be out of the game.

Comment author: infotropism 24 April 2009 11:11:55AM 4 points [-]

Do you also, simply, desire to live?

Or do you mean to say that if your life didn't possess those useful qualities, then it would be better, for you, to forfeit cryonics and have your organs donated, for instance?

And I'm actually asking that question of other people here as well, who have altruistic arguments against cryonics. Is there a utility, a value your life has to have - like being able to contribute to something useful - in order to be cryopreserved? For then that would be the greatest good for the greatest number of people?

A value below which your life would be best not cryopreserved, and your body used for organ donations, or something equally destructive to you but equally beneficial to other people (and certainly more beneficial than whatever value you could create yourself if you were alive)?

Comment author: [deleted] 08 February 2012 09:29:12PM 1 point [-]

This seems to assume that the probability that someone will be eventually successfully revived given that they have signed up for cryonics is >10%.

Comment author: Annoyance 24 April 2009 04:12:23PM 0 points [-]

Do you have any particular reason to care what lifestyle choices people here consider 'okay'?

Comment author: Eliezer_Yudkowsky 24 April 2009 06:50:11PM 2 points [-]

Do you have any particular reason to suggest that every attempt to ask anyone else for advice makes the requester a conformist?

Comment author: Annoyance 25 April 2009 04:44:35PM 1 point [-]

Not at all, which is precisely why I haven't done that.

CronoDAS wasn't asking for advice. Depending on how his question is interpreted, he was looking for permission / approval.

Comment author: SoullessAutomaton 25 April 2009 05:22:52PM 4 points [-]

Depending on how his question is interpreted, he was looking for permission / approval.

Specifically, I expect that he's looking for community validation of the extremely low value he places on his own life.

Which is actually an interesting question, as I (unfortunately) don't think it's defensible to tell someone "No, your life is worth more than you personally value it at".

Comment author: wuwei 07 June 2009 11:56:14PM 1 point [-]

Some people have times when they are suicidally depressed. I think it's quite defensible to tell those people that their life is worth more than they personally value it at.

More generally, I don't see any strong reasons to expect people to be less mistaken about their own life worth than about any other sort of value judgment.

Also, I don't see any case yet for interpreting CronoDAS as doing anything more than simply asking a community that may have some insight into a given field (rationality), whether his reasoning or conclusions check out.

Comment author: SoullessAutomaton 08 June 2009 12:27:25AM *  1 point [-]

I think it's quite defensible to tell those people that their life is worth more than they personally value it at.

Yes, but valuable to whom? To themselves? That seems contradictory. To others? Sure, but what are you going to do about it - tell them they can't do as they please with their life because other people value it more than they do? In some general sense of intrinsic value? That's going to be difficult to define.

Also, I don't see any case yet for interpreting CronoDAS as doing anything more than simply asking a community that may have some insight into a given field (rationality), whether his reasoning or conclusions check out.

This is an old comment so I no longer remember clearly, but he made remarks previously that were strongly indicative of my interpretation. I can possibly dig them up if you really want.

Comment author: Normal_Anomaly 20 October 2011 04:37:39PM 0 points [-]

Personally, I'd rather sign up for cryonics. However, if your goal is to maximize the amount and quality of life lived, a plausible case can be made for either cryonics or organ donation. Organ donation will save some number of lives between 0 and maybe a dozen at best, depending on how you die. These lives will likely be elderly people who aren't signed up for cryonics. The money that would have gone to pay for your suspension can also be optimally donated to save some more lives; the most commonly tossed around number is 28 third-world lives vs. a high-quality suspension from Alcor. The benefit of cryonics depends on its chance of working, and on how long and happy your post-revival life would be. A detailed analysis is here. It came out that both options are pretty close, i.e. within the massive error bars of each other.

In conclusion, I'd say either preference is "okay." Go with your conscience.

Comment author: hirvinen 24 April 2009 04:23:16AM 0 points [-]

If memory serves, you've said that your plan is to wait until your parents die and then kill yourself. Even if you do that and donate your organs, you should cryopreserve your head for a chance at waking up in a world you'd want to live in, or one that could better help you with that. It's a much worse strategy than just trying to live to see it, but still better than final death.

Comment author: Nick_Tarleton 24 April 2009 05:14:33AM *  0 points [-]

Are you sure you can undergo neuropreservation while donating your organs (in light of simpleton's comment)? Has it been done?

Comment author: hirvinen 24 April 2009 09:31:16PM 2 points [-]

I don't know of such cases. From http://www.alcor.org/Library/html/neuropreservationfaq.html

"Neuroseparation" is performed by surgical removal of the body below the neck at the level of the sixth cervical vertebra at a temperature near 0°C. - - The cephalon (head) is then perfused with cryoprotectants via the carotid and vertebral arteries prior to deep cooling. For neuropatients cryopreserved before the year 2000, neuroseparation was performed at the end of cryoprotective perfusion via the aorta.

If I understand correctly, at least Alcor's current procedure for neuropreservation would be compatible with removing organs to be donated.

Comment author: simpleton 24 April 2009 11:01:32PM 1 point [-]

Thanks, it looks like I misremembered -- if they're now doing perfusion after neuroseparation then it's much more likely to be compatible with organ donation.

I've sent Alcor a question about this.

Comment author: AndySimpson 24 April 2009 05:00:15AM 3 points [-]

This may be a naïve question, but could someone make or link me to a good case for cryonics?

I know there's a fair probability that we could each be revived in the distant future if we sign up for cryonics, and that is worth the price of admission, but that always struck me as a mis-allocation of resources. Wouldn't it be better, for the time being, if we dispersed all the resources used on cryonics to worthwhile causes like iodized salt, clean drinking water, or childhood immunization and instead gave up our organs for donation after death? Isn't the cryonics thing one big fuzzy, or at least a luxury?

Comment author: mattnewport 24 April 2009 05:11:14AM 1 point [-]

I'd agree that signing up for cryonics and being a traditional utilitarian (valuing all human life equally) aren't really compatible. I'm not a utilitarian so that's not my problem with cryonics but it does seem to be hard to reconcile the two positions. It's hard to reconcile any western lifestyle with traditional utilitarianism though so if that's your main concern with cryonics perhaps you need to reconsider your ethics rather than worry about cryonics.

Comment author: AndySimpson 24 April 2009 05:39:54AM 0 points [-]

It's hard to reconcile any western lifestyle with traditional utilitarianism though so if that's your main concern with cryonics perhaps you need to reconsider your ethics rather than worry about cryonics.

One of the beauties of utilitarianism is that its ethics can adapt to different circumstances without losing objectivity. I don't think every "western lifestyle" is necessarily reprobate under utilitarianism. First off, if westerners abandoned their western lifestyles, humanity would be sunk: next to the collapse of aggregate demand that would ensue, our present economic problems would look very mild. We can't all afford to be Gandhi. The rub is trying to avoid being a part of really harmful, unsustainable things like commercial ocean fishing or low fuel-efficiency cars without causing an ethically greater amount of inconvenience or economic harm.

All that said, I'd be really interested in reading a post by you on rationalist but non-utilitarian ethics. It seems to me that support for utilitarianism on this site is almost as strong as support for cryonics.

Comment author: Nick_Tarleton 24 April 2009 02:05:56PM *  3 points [-]

First off, if westerners abandoned their western lifestyles, humanity would be sunk: next to the collapse of aggregate demand that would ensue, our present economic problems would look very mild.

Universalizability arguments like this are non-utilitarian; it's the marginal utility of your decision (modulo Newcomblike situations) that matters.

The rub is trying to avoid being a part of really harmful, unsustainable things like commercial ocean fishing or low fuel-efficiency cars

It definitely seems to me that refraining from these things is so much less valuable than making substantial effective charitable contributions (preferably to existential risk reduction, of course, but still true of e.g. the best aid organizations), probably avoiding factory-farmed meat, and probably other things as well.

Comment author: knb 24 April 2009 06:16:23AM *  1 point [-]

First off, if westerners abandoned their western lifestyles, humanity would be sunk: next to the collapse of aggregate demand that would ensue, our present economic problems would look very mild.

Interesting. I'm not certain, but I think this isn't quite right. In theory, the westerners would just be sending their money to desperately poor people, so aggregate demand wouldn't necessarily decline; it would move around. Consumption really doesn't create wealth. Of course rational utilitarian westerners would recognize the transfer costs and also wouldn't completely neglect their own happiness.

All that said, I'd be really interested in reading a post by you on rationalist but non-utilitarian ethics. It seems to me that support for utilitarianism on this site is almost as strong as support for cryonics.

Unless you believe in objective morality, then a policy of utilitarianism, pure selfishness, or pure altruism all may be instrumentally rational, depending on your terminal values.

If you have no regard for yourself then pursue pure altruism. Leave yourself just enough that you can keep producing more wealth for others. Study Mother Teresa.

If you have no regard for others, then a policy of selfishness is for you. Carefully plan to maximize your total future well-being. Leave just enough for others that you aren't outed as a sociopath. Study Anton LaVey.

If you have equal regard for the happiness of yourself and others, pursue utilitarianism. Study Rawls or John Stuart Mill.

Most people aren't really any of the above. I, like most people, am somewhere between LaVey and Mill. Of course defending utilitarianism sounds better than justifying egoism, so we get more of that.

Comment author: ciphergoth 24 April 2009 07:30:44AM 4 points [-]
Comment author: knb 24 April 2009 03:23:50PM 0 points [-]

Yeah, I heard about this on Bullshit with Penn & Teller. I considered choosing someone else, but Mother Teresa is still the easiest symbol of pure altruism. (That same episode included a smackdown on the Dalai Lama and Gandhi, so my options look pretty weak.)

Comment author: Annoyance 24 April 2009 04:06:47PM 0 points [-]

Perhaps you should reconsider the value of 'pure altruism'.

Comment author: thomblake 24 April 2009 05:36:15PM 1 point [-]

Yes, 'pure altruism' is a pretty weak position, and you won't find many proponents of it. Altruism as an ethical position doesn't make any sense; you keep pushing all of your utils on other people, but if you consider a 2-person system doing this, nobody actually gets to keep any of the utils.

Comment author: steven0461 24 April 2009 07:00:39PM 2 points [-]

Agreed, but under certain conditions relating to how much causal influence one has on others vs. oneself, utilitarianism and pure altruism lead to the same prescriptions. (I would argue these conditions are usually satisfied in practice.)

Comment author: Eliezer_Yudkowsky 24 April 2009 06:59:10PM 0 points [-]

Gandhi? Really? My impression is that the "smackdown" on Gandhi is vastly, vastly less forceful than the smackdown on Teresa. Though I haven't watched that particular episode, I've read other critiques that seemed to be reaching as far as possible, and they didn't reach very far.

Comment author: Swimmy 25 April 2009 08:26:00AM 0 points [-]

It mostly had to do with Gandhi being racist.

Comment author: AndySimpson 24 April 2009 08:11:29AM *  3 points [-]

In theory, the westerners would just be sending their money to desperately poor people.

I'm not an economist, but I think you could model that as a kind of demand. And I don't think I stipulated to there being a transfer of wealth.

Unless you believe in objective morality, then a policy of utilitarianism, pure selfishness, or pure altruism all may be instrumentally rational, depending on your terminal values.

For me, the interesting question is how one goes about choosing "terminal values." I refuse to believe that it is arbitrary or that all paths are of equal validity. I will contend without hesitation that John Stuart Mill was a better mind, a better rationalist, and a better man than Anton LaVey. My own thinking on these lines leads me to the conclusion of an "objective" morality, that is to say one with expressible boundaries and one that can be applied consistently to different agents. How do you choose your terminal values?

Comment author: knb 24 April 2009 03:18:47PM *  2 points [-]

How do you choose your terminal values?

Short answer? We don't. Not really. Human beings have an evolved moral instinct. These evolved moral inclinations lead us to assign a high value to human life and well-being. The closest thing to an internally coherent ethical structure seems to be utilitarianism. (It sounds bad for a rationalist to admit "I value all human life equally, except I value myself and my children somewhat more.")

But we are not really utilitarians. Our mental architecture doesn't allow most of us to really treat every stranger on earth as though they are as valuable as ourselves or our own children.

Comment author: mattnewport 24 April 2009 06:27:40PM *  4 points [-]

It sounds bad for a rationalist to admit "I value all human life equally, except I value myself and my children somewhat more."

Only because that's logically contradictory. If you drop the equally part it sounds fine to me: "I value all human life, but I value some human lives more than others."

Utilitarianism is clearly not a good descriptive ethical theory (it does a poor job of describing or predicting how people actually behave) and I see no good reason to believe it is a good normative theory (a prescription for how people should behave).

Comment author: ciphergoth 24 April 2009 06:30:39PM 4 points [-]

I see no good reason to believe it is a good normative theory

How are you going to evaluate a normative theory, except by comparison to another normative theory, or by gut feeling?

Comment author: mattnewport 24 April 2009 06:41:05PM *  1 point [-]

'Gut feeling' is pretty much how I am evaluating it (and is a normative theory in a sense - what is good is what your intuition tells you is good). Utilitarianism says I should value all humans equally. That conflicts with my intuitive moral values. Given the conflict and my understanding of where my values come from I don't see why I should accept what utilitarianism says is good over what I believe is good.

I think an ethical theory that seems to require all agents to reach the same conclusion on what the optimal outcome would be is doomed to failure. Ethics has to address the problem of what to do when two agents have conflicting desires rather than trying to wish away the conflict.

Comment author: ciphergoth 24 April 2009 09:11:17PM 1 point [-]

I think an ethical theory that seems to require all agents to reach the same conclusion on what the optimal outcome would be is doomed to failure.

What do you mean by an "ethical theory" here? Do you mean something purely descriptive, that tries to account for that side of human behavour that is to do with ethics? Or something normative, that sets out what a person should do?

Since it's clear that people express different ideas about ethics from each other, a descriptive theory that said otherwise would be false as a matter of fact. However, normative theories are generally applicable to everyone through no other reason than that they don't name specific individuals that they are about.

Utilitarianism is a normative proposal, not a descriptive theory.

Comment author: thomblake 24 April 2009 05:31:53PM 2 points [-]

But we are not really utilitarians. Our mental architecture doesn't allow most of us to really treat every stranger on earth as though they are as valuable as ourselves or our own children.

Shouldn't this be evidence that utilitarianism isn't close to the facts about ethics?

Comment author: Alicorn 24 April 2009 05:37:20PM 3 points [-]

Only if you think we're wired to be ethical.

Comment author: thomblake 24 April 2009 05:41:56PM 0 points [-]

I believe that was part of what knb was saying.

Comment author: SoullessAutomaton 24 April 2009 09:22:49PM 2 points [-]

Shouldn't this be evidence that utilitarianism isn't close to the facts about ethics?

The rest of our brains are wired to give close-enough approximations quickly, not to reliably produce correct answers (cf. cognitive biases). It's not a given that any coherent definition of ethics, even a correct one, should agree with our intuitive responses in all cases.

Comment author: knb 24 April 2009 03:29:45PM 1 point [-]

I'm not an economist, but I think you could model that as a kind of demand.

Yes that was my point. I go on to say that aggregate demand would not decrease.

I recommend Eliezer's essay regarding the objective morality of sorting pebbles into correct heaps.

http://www.overcomingbias.com/2008/08/pebblesorting-p.html

Comment author: mattnewport 24 April 2009 08:30:46AM 0 points [-]

I'm interested in a system that allows a John Stuart Mill and an Anton LaVey to peacefully coexist without attempting to judge who is more 'objectively' moral. I wish to be able to choose my own terminal values without having to perfectly align them with every other agent. Morality and ethics are then the minimal framework of agreed rules that allows us all to pursue our own ends without all 'defecting' (the prisoner's dilemma is too simple to be a really representative model but is a useful analogy).

The extent and nature of that minimal framework is an open question and is what I'm interested in establishing.

Comment author: Jack 24 April 2009 08:10:56PM 1 point [-]

You might be interested in the literature in normative ethics on what is called the overdemandingness problem. In particular, check out Liam Murphy on what he calls the cooperative principle. It takes utilitarianism but establishes a limit set on the amount individuals are required to sacrifice... Murphy's theory sets the limit as that which the individual would be required to sacrifice under full cooperation. So rather than sacrificing all your material wellbeing until giving more would reduce your wellbeing to beneath that of the people you're trying to help you instead need only sacrifice that which would be required of you if the entire western world and non-western elites were doing their part as well.

Comment author: thomblake 24 April 2009 05:39:18PM 1 point [-]

I'm interested in a system that allows a John Stuart Mill and an Anton LaVey to peacefully coexist without attempting to judge who is more 'objectively' moral. I wish to be able to choose my own terminal values without having to perfectly align them with every other agent. Morality and ethics are then the minimal framework of agreed rules that allows us all to pursue our own ends without all 'defecting' (the prisoner's dilemma is too simple to be a really representative model but is a useful analogy).

You're talking about 'politics', not 'ethics'. Politics is about working together, ethics is about what one has most reason to do or want. What the political rules should say and what I should do are not necessarily going to give me the same answers.

Comment author: mattnewport 24 April 2009 06:07:56PM 1 point [-]

I disagree with your definitions. You seem to be talking about normative ethics - what you 'should' do. I'm more interested in topics that might fall under meta-ethics, descriptive ethics and applied ethics. There is certainly cross-over with politics but there is a lot of other baggage that comes with the word politics that means it's not a word I find useful to talk about the kind of questions I'm interested in here.

Comment author: Vladimir_Nesov 24 April 2009 10:35:43AM *  1 point [-]

Think coordination. Two agents may coordinate their actions if doing so will benefit both; in this sense, it's cooperation. It doesn't include fighting over preferences; fighting over preferences would just consist in them acting on the environment without coordination. But that should never happen, since the set of coordinated plans strictly contains the set of uncoordinated plans, and as a result it should always contain a solution that is a Pareto improvement on the best uncoordinated one, that is, at least as good for both players as the best uncoordinated solution. Thus, it's always useful to coordinate your actions with all other agents (and at this point, you also need to divide the benefit of coordination between the sides fairly; think Ultimatum game).
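[Editor's note: the Pareto claim above can be made concrete in a toy game. The game, its payoffs, and the 50/50 uncoordinated baseline below are all illustrative assumptions, not anything from the comment.]

```python
import itertools

# Toy anti-coordination game: each agent picks a task, and both do well
# only if they pick different tasks (illustrative payoffs).
payoffs = {
    ("A", "A"): (0, 0),
    ("A", "B"): (2, 2),
    ("B", "A"): (2, 2),
    ("B", "B"): (0, 0),
}

# Uncoordinated baseline: each agent independently randomizes 50/50,
# so each joint outcome occurs with probability 0.25.
uncoordinated = tuple(
    sum(0.25 * payoffs[acts][i] for acts in itertools.product("AB", repeat=2))
    for i in range(2)
)

# Coordinated: jointly commit to a single plan; pick the best joint outcome.
coordinated = max(payoffs.values(), key=sum)

print(uncoordinated)  # (1.0, 1.0)
print(coordinated)    # (2, 2)
```

Coordinating lets both agents lock in (2, 2) instead of the expected 1.0 each from independent play, a Pareto improvement; how to split such surplus fairly is the Ultimatum-game part.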

Comment author: AndySimpson 24 April 2009 08:57:27AM 0 points [-]

Peaceful coexistence is not something I object to. Neither does anything oblige agents to perfectly align their values, each is free to choose. I strongly endorse people with wildly different values cooperating in areas of common interest: I'm firmly in Anton LaVey's corner on civil liberties, for instance. It should be recognized, though, that some are clearly more wrong than others because some people get poor information and others reason poorly through akrasia or inability. Anton LaVey was not trying hard enough. I think the question is worth asking, because it is the basis of building the minimal framework of rules from each person's judgement: How are we supposed to choose values?

Comment author: mattnewport 24 April 2009 09:09:17AM *  1 point [-]

It seems to me that most problems in politics and other attempts to establish cooperative frameworks stem not from confusion over terminal values but from differing priorities placed on conflicting values and most of all on flawed reasoning about the best way to structure a system to best deliver results that satisfy our common preferences.

This fact is often obscured by the tendency for political disputes to impute 'bad' values to opponents rather than to recognize the actual disagreement, a tactic that ironically only works because of the wide agreement over the set of core values, if not the priority ordering.

Comment author: AndySimpson 24 April 2009 09:39:36AM 0 points [-]

On the whole, we're agreed, but I still don't know how I'm supposed to choose values.

This fact is often obscured by the tendency for political disputes to impute 'bad' values to opponents rather than to recognize the actual disagreement, a tactic that ironically only works because of the wide agreement over the set of core values, if not the priority ordering.

I think this tactic works best when you're dealing with a particular constituency that agrees on some creed that they hold to be objective. Usually, when you call your opponent a bad person, you're playing to your base, not trying to grab the center.

Comment author: mattnewport 24 April 2009 05:53:03AM *  0 points [-]

I don't think objectivity is an important feature of ethics. I'm not sure there's such a thing as a rationalist ethics. Being rational is about optimally achieving your goals. Choosing those goals is not something that rationality can help much with - the best it can do is try to identify where goals are not internally consistent.

I gave a rough exposition of what I see as a possible rationalist ethics in this comment but it's incomplete. If I ever develop a better explanation I might make a top level post.

Comment author: conchis 24 April 2009 08:05:19AM *  0 points [-]

Choosing those goals is not something that rationality can help much with - the best it can do is try to identify where goals are not internally consistent.

It often turns out that generating consistent decision rules can be harder than one might expect. Hence the plethora of "impossibility theorems" in social choice theory. (Many of these, like Arrow's, arise when people try to rule out interpersonal utility comparisons, but there are a number that bite even when such comparisons are allowed, e.g. in population ethics.)

Comment author: mattnewport 24 April 2009 08:15:26AM 0 points [-]

Yeah, expecting to achieve consistency is probably too much to ask, but recognizing conflicts at least allows you to make a conscious choice about priorities.

Comment author: AndySimpson 24 April 2009 06:46:43AM *  0 points [-]

Ok, here is what I don't agree with:

Choosing those goals is not something that rationality can help much with - the best it can do is try to identify where goals are not internally consistent.

I think rationality absolutely must confront the question of purpose, and head-on. How else are we to confront it? Shouldn't we try to pin down and either discard or accept some version of "purpose," as a sort of first instrumental rationality?

I mention objectivity because I don't think you can have any useful ethics without some static measure of comparability, some goal, however loose, that each person can pursue. There's little to discuss if you don't, because "everything is permitted." That said, I think ethics has to understand each person's competence to self-govern. Your utility function is important to everyone, but nobody knows how to maximize your utility function better than you. Usually. Ethics also has to bend to reality, so the more "important" thing isn't agreement on theoretical questions, but cooperation towards mutually-agreed goals. So I'm in substantial agreement with:

Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal.

And I would enjoy thoroughly a post on this topic.

Comment author: mattnewport 24 April 2009 07:35:09AM 0 points [-]

I think rationality absolutely must confront the question of purpose, and head-on. How else are we to confront it? Shouldn't we try to pin down and either discard or accept some version of "purpose," as a sort of first instrumental rationality?

Why do you think it needs to be confronted? I know there are many things that I want (though some of them may be mutually exclusive when closely examined) and that there are many similarities between the things that I want and the things that other humans want. Sometimes we can cooperate and both benefit, in other cases our wants conflict. Most problems in the world seem to arise from conflicting goals, either internally or between different people. I'm primarily interested in rationality as a route to better meeting my own goals and to finding better resolutions to conflicts. I have no desire to change my goals except to the extent that they are mutually exclusive and there is a clear path to a more self consistent set of goals.

There's little to discuss if you don't, because "everything is permitted."

To the extent that we share a common evolutionary history our goals as humans overlap to a sufficient extent that cooperation is beneficial more often than not. Even where goals conflict, there is mutual benefit to agreeing rules for conflict resolution such that not everything is permitted. It is in our collective interest not to permit murder, not because murder is 'wrong' in some abstract sense but simply because most of us can usually agree that we prefer to live in a society where murder is forbidden, even at the cost of giving up the 'freedom' to murder at will. That equilibrium can break down and I'm interested in ways to robustly maintain the 'good' equilibrium rather than the 'bad' equilibrium that has existed at certain times and in certain places in history. I don't however feel the need to 'prove' that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle - I simply take it as a given.

Comment author: AndySimpson 24 April 2009 08:38:15AM 2 points [-]

Why do you think it needs to be confronted? ... I don't however feel the need to 'prove' that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle - I simply take it as a given.

I think it needs to be confronted because simply taking things as given leads to sloppy moral reasoning. Your preference for self-preservation seems to be an impulse like any other, no more profound than a preference for chocolate over vanilla. What needs to be confronted is what makes that preference significant, if anything. Why should a rationalist in all other things let himself be ruled by raw desire in the arena of deciding what is meaningful? Why not inquire, to be more sure of ourselves?

Most problems in the world seem to arise from conflicting goals, either internally or between different people. I'm primarily interested in rationality as a route to better meeting my own goals and to finding better resolutions to conflicts.

Again, this is the ultimately important part. Wherever the goals come from, we can cooperate and use politics to turn them into results that we all want. Further, we discipline ourselves so that our goals are clear and consistent. All I'm saying is that you may want to look into the basis of your own goals and systematize them to enhance clarity.

Comment author: mattnewport 24 April 2009 08:57:10AM *  0 points [-]

What needs to be confronted is what makes that preference significant, if anything. Why should a rationalist in all other things let himself be ruled by raw desire in the arena of deciding what is meaningful? Why not inquire, to be more sure of ourselves?

I'm very interested in those questions and have read a lot on evolutionary psychology and the evolutionary basis for our sense of morality. I feel I have a reasonably satisfactory explanation for the broad outlines of why we have many of the goals we do. My curiosity can itself be explained by the very forces that shaped the other goals I have. Based on my current understanding I don't however see any reason to expect to find or to want to find a more fundamental basis for those preferences.

Our goals are what they are because they were the kind of goals that made our ancestors successful. They're the kind of goals that lead to people like us with just those kinds of goals... There doesn't need to be anything more fundamental to morality. To try to explain our moral principles by appealing to more fundamental moral principles is to make the same kind of mistake as to try to explain complex entities with a more fundamental complex creator of those entities.

Wherever the goals come from, we can cooperate and use politics to turn them into results that we all want.

Hopefully we can all agree on that.

Comment author: AndySimpson 24 April 2009 09:32:34AM 0 points [-]

I think we are close. Do you think enjoyment and pain can be reduced to or defined in terms of preference? We have an explanation of preference in evolutionary psychology, but to my mind, a justification of its significance is necessary also. Clearly, we have evolved certain intuitive goals, but our consciousness requires us to take responsibility for them and modulate them through moral reasoning to accept realities beyond what our evolutionary sense of purpose is equipped for.

To me, preference is significant because it usually underlies the start of desirable cognitions or the end of undesirable ones, in me and other conscious things. The desirable cognitions should be maximized in the aggregate and the undesirable ones minimized. That is the whole hand-off from evolution to "objective" morality; from there, the faculties of rational discipline and the minimal framework of society take over. Is it too much?

Comment author: mattnewport 24 April 2009 09:54:40AM 0 points [-]

I think we are close.

Certainly close enough to hope to agree on a set of rules, if not completely on personal values/preferences.

We have an explanation of preference in evolutionary psychology, but to my mind, a justification of its significance is necessary also.

I don't really recognize a distinction here. The explanation explains why preferences are their own justification in my view.

Clearly, we have evolved certain intuitive goals, but our consciousness requires us to take responsibility for them and modulate them through moral reasoning to accept realities beyond what our evolutionary sense of purpose is equipped for.

I think I at least partially agree - sometimes we should override our immediate moral intuitions in light of a deeper understanding of how following them would lead to worse long term consequences. This is what I mean when I talk about recognizing contradictions within our value system and consciously choosing priorities.

The desirable cognitions should be maximized in the aggregate and the undesirable ones minimized.

This looks like the utilitarian position and is where I would disagree to some extent. I don't believe it's necessary or desirable for individuals to prefer 'aggregated' utility. If forced to choose I will prefer outcomes that maximize utility for myself and my family and friends over those that maximize 'aggregate' utility. I believe that is perfectly moral and is a natural part of our value system. I am however happy to accept constraints that allow me to coexist peacefully with others who prefer different outcomes. Morality should be about how to set up a system that allows us to cooperate when we have an incentive to defect.

Comment author: thomblake 23 April 2009 07:17:53PM *  3 points [-]

I'm sad that I can't downvote this article. It's ridiculously off-topic.

ETA: still, it's terrible. That's how Douglas Adams died!

Comment author: stcredzero 23 April 2009 08:09:19PM 0 points [-]

It seems almost designed to degenerate into a flame-war concerning cryonics!

Comment author: Mulciber 23 April 2009 09:01:08PM 1 point [-]

Don't worry. I'd guess that posting this comment resulted in other people downvoting the article to compensate.

Which makes me think the karma limit on downvotes doesn't prevent downvotes (among high-karma members) so much as make them something that's done indirectly by posting a comment, rather than clicking "vote down."

Comment author: Alicorn 23 April 2009 10:49:02PM *  2 points [-]

I'm not signed up for cryonics. Partly, this is because I'm poor. Partly, it's because I'm extremely risk-averse and I can imagine really really horrible outcomes of being frozen just as easily as I can imagine really really great outcomes - in the absence of people walking around who were frozen and awakened later, my imaginings are all the data I have.

I'm sorry for your loss and that of your girlfriend, and I wish her grandfather had not died. While I'm at it, I'll wish he'd been immortal. But there are two mistaken responses to the fact that human beings die: one is to tout death as a natural and possibly even positive part of the human condition, and one is to find excuses not to deal with it when it happens. Theism with an afterlife is the first thing; freezing the dead person is the second.

In all likelihood, if and when I stop being poor, my bet and the money behind it is going to be on medicine, and maybe uploads of living people if there are very promising projects going on by then.

Comment author: Lawliet 23 April 2009 11:08:58PM *  4 points [-]

By "extremely risk-averse" do you mean "working hard to maximise persistence odds" or "very scared of scary scenarios"?

You're right that death while signed up for cryonics is still a very bad thing, though. I don't think Eliezer would be fine with the deaths of people who were signed up, but sometimes he makes it seem that way.

Comment author: Alicorn 24 April 2009 12:24:46AM 1 point [-]

I mean something like the second thing. Basically, I invariably would rather bet one dollar than bet two when the expected value is identical with both bets - even odds, say. And if you make it a $1000 bet versus $2000, I'll probably prefer the first bet over the second even if its expected value is strictly worse, simply because I can't tolerate any risk of being out two thousand dollars. (I can't tolerate much risk of being out a thousand either, given my poor-grad-student finances, but this is assuming I have no "don't gamble at all" option.)
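This preference for the smaller bet at equal expected value is the textbook picture of risk aversion: under a concave utility function, a wide spread of outcomes is worth less than a narrow one with the same average. A minimal sketch, with all the numbers (including the wealth level) purely hypothetical:

```python
import math

def expected_utility(bets, utility):
    """Expected utility of a gamble given as (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in bets)

# Linear utility: only the expected dollar value matters.
linear = lambda x: x

# Concave (risk-averse) utility: near the edge of a tight budget,
# losses hurt more than equal-sized gains help.
wealth = 3000.0  # hypothetical starting wealth
concave = lambda x: math.log(wealth + x)

# Two even-odds bets with the same expected dollar value (zero):
small_bet = [(0.5, +1000), (0.5, -1000)]
large_bet = [(0.5, +2000), (0.5, -2000)]

# Identical under linear utility...
print(expected_utility(small_bet, linear), expected_utility(large_bet, linear))
# ...but the risk-averse agent strictly prefers the smaller bet.
print(expected_utility(small_bet, concave) > expected_utility(large_bet, concave))
```

The point of the sketch is that neither preference is a math error; the two agents simply have different curvature in their utility functions.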

Comment author: Eliezer_Yudkowsky 24 April 2009 12:35:51AM 2 points [-]

I show no particular tendency to flinch from the deaths of those near me who were not preserved. Do you think my fear of my own death is so much greater as to drive me to irrationality only there, and only on cryonics? I could as easily accuse you of sour grapes for presently not having the money to sign up. Not that I am so accusing - but be wary of who you accuse of rationalization; there are many tragedies in this universe, but you should be careful not to go around accepting the ones that aren't inevitable.

Comment author: Alicorn 24 April 2009 12:40:18AM *  0 points [-]

When I spoke of "not dealing with it", I didn't mean to say that you do this with people who die and aren't signed up for cryonics. (I had already read and was very moved by your piece on Yehuda.) When someone does get frozen, though, it's easy to categorize them as "maybe not dead" - since if a frozen person weren't maybe-not-dead, no one would be frozen.

Comment author: Eliezer_Yudkowsky 24 April 2009 12:45:29AM 2 points [-]

Alicorn, not everything that is less than absolutely awful to believe, is therefore false. In the end, either the information is there in the brain or not, and that's a question of neuroscience and the limits of possible revival tech; that's not something which can be possibly settled by observing which answers are comforting or discomforting.

Comment author: Alicorn 24 April 2009 12:51:19AM *  0 points [-]

I'm obviously not being very clear. I'm not making a case that it's irrational to sign up for cryonics - I'm just saying it's not appropriate for someone with a very high risk-aversion, such as myself. I'm informed by the same person who taught me about levels of risk aversion in the first place that no given level of risk aversion is necessarily rational or irrational; it's just a personal characteristic. It's quite possible that by making these choices you'll be around, enjoying a great quality of life, in four thousand years, and I won't. That would be awesome for you and less awesome for me. I'm just not willing to take the bet.

Comment author: Mulciber 24 April 2009 02:36:43AM 2 points [-]

Describing this as being averse to risks doesn't make much sense to me. Couldn't a pro-cryonics person equally well justify her decision as being motivated by risk aversion? By choosing not to be preserved in the event of death, you risk missing out on futures that are worth living in. If you want to take this into bizarre and unlikely science fiction ideas, as with your dystopian cannon fodder speculation, you could easily construct nightmare scenarios where cryonics is the better choice. Simply declaring yourself to have "high risk aversion" doesn't really support one side over the other here.

This reminds me of a similar trope concerning wills: someone could avoid even thinking about setting up a will, because that would be "tempting fate," or take the opposite position: that not having a will is tempting fate, and makes it dramatically more likely that you'll get hit by a bus the next day. Of course, neither side there is very reasonable.

Comment author: Alicorn 24 April 2009 03:27:13AM *  0 points [-]

I call it risk aversion because if cryonics works at all, it ups the stakes. The money dropped on signing up for it is a sure thing, so it doesn't factor into risk, and if I get frozen and just stay dead indefinitely (for whatever reason) then all I've lost compared to not signing up is that money and possibly some psychological closure for my loved ones. But the scenarios in which cryonics results in me being around for longer - possibly indefinitely - are ones which could be very extreme, in either direction. I'm not comfortable with such extreme stakes: I prefer everything I have to deal with to be within my finite lifespan, in the absence of having a near-certainty about a longer lifespan being awesome.

I don't doubt that there are some "nightmare" situations in which I'd prefer cryonics - I'd rather be frozen than spend the next seventy years being tortured, for example - but I don't live in one of those situations.

Comment author: Mulciber 24 April 2009 05:30:17AM 5 points [-]

That's starting to sound like a general argument for shorter lifetimes over longer ones. Is there a reason this wouldn't apply just as well to living for five more years versus fifty? There's more room for extreme positive or negative experiences in the extra 45 years.

Comment author: JulianMorrison 24 April 2009 02:00:06AM 1 point [-]

OK, you're risk averse. Specifically, you're scared. If you put a bit of imaginative effort into it you can play out scenarios of awakening into a dystopia, or botched revival, or abusive uploading, or various nastiness. Fair enough.

I propose that you haven't stretched your imagination far enough.

Staying in doom-n-disaster mode, what are the other ways you could suffer? Illness, madness, brain damage, disability, mistreatment, war, famine, plague, loneliness... it just goes on and on.

Switching to happy mode, what are the good scenarios? Love, long life, wealth and good ideas to use it on... again it goes on and on.

Then take all those scenarios, add a whole lot more of mediocre and tolerable and mildly downbeat ones, and scatter them out ahead of you into an imaginary branching map of infinite reachable futures. Not all equally easy to reach. There are probability assignments on each, shifting and flowing as your actions and experiences move the chances.

This sort of visualization helps me put my own worrying into perspective. Worrying is a kind of grasping for control, but the future is too big and surprising to be pinned down that way. You can't control what you get. You can steer into a region with more good chances than bad. To do that you have to learn to discount the low chance of bad as just the price of admission.

Comment author: mattnewport 24 April 2009 01:06:11AM 1 point [-]

Partly, it's because I'm extremely risk-averse and I can imagine really really horrible outcomes of being frozen

I'm curious what the really horrible outcomes you can imagine are? That's not something that had ever occurred to me; I can't imagine a worse outcome than not being revived, which seems to be equivalent to just being normally dead.

Comment author: Alicorn 24 April 2009 01:08:59AM *  2 points [-]

This is probably symptomatic of reading too much science fiction, but I could be revived by evil aliens, or awakened into a dystopian society that didn't have enough raw materials to make robots and wanted frozen people for cannon fodder, or I could be uploaded instead of outright defrosted and then suffer a glitch that would cause eternal torment/boredom/arithmetic problems, or some form of soul theory could turn out to be right and there could be grandiose metaphysical consequences... I have a very fertile imagination.

Comment author: mattnewport 24 April 2009 04:42:17AM 1 point [-]

Perhaps you have read too much science fiction and not enough history - I worry far more about what is likely to happen between now and when I can expect to die in 30-50 years based on recent history than I do about the essentially unknowable far future.

Comment author: Annoyance 23 April 2009 07:15:38PM 1 point [-]

"Just so that we're clear that all the wonderful emotional benefits of self-delusion come with a price, and the price isn't just to you."

Is this a warning for or against buying into the idea of cryonics?

Comment author: jhuffman 24 April 2009 07:36:37PM -1 points [-]

I don't really see any commentary on the underlying assumptions made here about the badness of being dead. In summary: for a physicalist, being dead has no value; it is a null state. Null states cannot be compared with non-null states, so being dead is not worse than being alive.

To put that another way, I cannot be worse off by being dead because there won't be an I at that point. An argument can be made that I have no personal interest in my being dead - only other living people have a stake in that. That doesn't change the fact that I want to live. There is an I here that wants this, and wants it indefinitely. But once I'm gone, it's not a problem for me.

So I tend to favor arguments related to organ-donation since future living people are unlikely to get more benefit from me than current living people in need of organ transplants.

Also, there is a real but small chance that cryo-preservation could lead to a sort of hell - what if I'm only thawed to be a permanent exhibit in a zoo, or to be experimented upon, or subjected to conversations with classical-language enthusiasts?

So there is a non-zero chance of being consigned to hell if I'm cryo-preserved; whereas once I'm dead it's a null state and can be considered an even break, if you really must try to attach a value to it.

Comment author: Mulciber 24 April 2009 08:35:19PM 3 points [-]

It's counterintuitive to say that being dead is basically null value. If I'm choosing between two courses of action, and one difference is that one of them will involve me dying, that's a strong factor in making me prefer the other option.

I can think of possible explanations for this that preserve the claim that being dead has value zero, but I'm not seeing a way that would do so only in non-cryonics cases.

Comment author: jhuffman 25 April 2009 12:47:48AM *  -1 points [-]

Notice the subtle difference in language, though. You are talking about dying. Dying is pretty obviously a bad thing. It's only once you are dead that you are in a null state.

Cryo-preservation does not prevent you from dying. You still go through the dying process, and I doubt you are much comforted by the small chance that you could be revived at some point.

Comment author: Jordan 23 April 2009 10:38:53PM 0 points [-]

Sorry to hear about the loss.

I'm not sure that religion is the main devil here, though. Most of my family isn't religious, nonetheless none of them would ever sign up for cryonics. I focus my efforts on encouraging them to exercise and eat well. I can at least effect some change in that direction.

Comment author: loqi 24 April 2009 03:47:58PM 0 points [-]

Most of my family isn't religious, nonetheless none of them would ever sign up for cryonics.

Not particularly relevant, because the point about religion isn't that all atheists sign up for cryonics. It's that more atheists sign up for it, because delusional afterlife believers perceive no incentive to. I'd bet that a rise in atheism correlates with a rise in cryonics subscription.

Comment author: Jordan 24 April 2009 04:58:57PM 0 points [-]

I imagine so. What I deny is that religion is the main factor preventing the adoption of cryonics. My family isn't proof of this but it's certainly evidence.

If the ratio of atheists who sign up for cryonics (as opposed to not) is higher than the corresponding ratio for theists, and if that ratio remained constant as the entire world gave up religion... there still wouldn't be that many people signed up for cryonics.

Comment author: loqi 25 April 2009 03:25:37AM 0 points [-]

That seems at least plausible, but it doesn't refute the harm done by religion (and of course discounts any indirect damage done to atheists' thinking by widespread theism). To counter one anecdote with another: the fact is that most atheists don't know how accessible cryonics is. By mentioning that fact alone (very truly alone, along the lines of "cryonics is actually pretty accessible, google it"), I've piqued the interest of at least two atheists I know.

So in terms of cryonics awareness, I suppose you could make the argument that it's not so much religion itself hindering it, as it is lack of atheist (or rationalist) connectivity. But atheist connectivity is obviously inhibited by the dominance of theism.

Also, since a >1 atheist/theist sign-up ratio would at least point to an "easy" set of people that would sign up in the absence of religion, any increase in that ratio directly opposes the notion that religion isn't preventing adoption. I fully expect this ratio to climb in the near future as full ignorance of cryonics burns itself out.

I'm not confident that religion is the primary factor preventing adoption when plain ignorance seems to be playing such a large role, but it certainly seems non-negligible, especially moving forward.

Comment author: ciphergoth 24 April 2009 05:13:44PM 0 points [-]

Has anyone ever heard of a theist signing up for cryonics? That would seem very odd.

Comment author: Annoyance 24 April 2009 05:28:29PM 3 points [-]

Theists don't necessarily believe in an afterlife. People who believe in an afterlife (whether theists or not) don't necessarily think it will be preferable to this life, either.

Comment author: komponisto 24 April 2009 07:30:38PM 0 points [-]

I don't see why they shouldn't, given that most of them don't refuse (other) medical care.

Comment author: Eliezer_Yudkowsky 24 April 2009 06:48:34PM 0 points [-]

It's been known to happen.

Comment author: mattnewport 24 April 2009 06:52:15PM 3 points [-]

I guess you could make a sort of reverse Pascal's Wager argument for it - if it turns out that there is no immortal soul after all then you've got a backup plan.