
How I'm now on the fence about whether to sign up for cryonics

I'm not currently signed up for cryonics. In my social circle, that makes me a bit of an oddity. I disagree with Eliezer Yudkowsky; heaven forbid. 

My true rejection is that I don't feel a visceral urge to sign up. When I query my brain on why, what I get is that I don't feel that upset about me personally dying. It would suck, sure. It would suck a lot. But it wouldn't suck infinitely. I've seen a lot of people die. It's sad and wasteful and upsetting, but not like a civilization collapsing. In terms of pleasure versus suffering, it's neutral for the dead person, and negative for the family, but they cope with it and find a bit of meaning and move on.

(I'm desensitized. I have to be, to stay sane in a job where I watch people die on a day to day basis. This is a bias; I'm just not convinced that it's a bias in a negative direction.)

I think the deeper cause behind my rejection may be that I don't have enough to protect. Individuals may be unique, but as an individual, I'm fairly replaceable. All the things I'm currently doing can be, and are being, done by other people. I'm not the sole support person in anyone's life, and if I were, I would be trying really, really hard to fix the situation. Part of me is convinced that wanting to personally survive and thinking that I deserve to is selfish and un-virtuous or something. (EDIT: or that it's non-altruistic to value my life above the amount GiveWell thinks is reasonable to save a life, about $5,000. My revealed preference is that I obviously value my life more than this.)

However, I don't think cryonics is wrong, or bad. It has obvious upsides, like being the only chance an average citizen has right now to do something that might lead to them not permanently dying. I say "average citizen" because people working on biological life extension and immortality research are arguably doing something about not dying. 

When queried, my brain tells me that it's doing an expected-value calculation, and that the expected value of cryonics to me is too low to justify the costs: it's unlikely to succeed, and the only reason some people get a positive expected value for it is that they're multiplying that tiny probability by the huge, huge number they place on the value of their own lives. And my number doesn't feel big enough to outweigh those odds at that price.

Putting some numbers in that

If my brain thinks this is a matter of expected-value calculations, I ought to do one. With actual numbers, even if they're made-up, and actual multiplication.

So: my death feels bad, but not infinitely bad. Obvious thing to do: assign a monetary value. Through a variety of helpful thought experiments (how much would I pay to cure a fatal illness if I were the only person in the world with it, the research wouldn't help anyone but me, and I could otherwise donate the money to EA charities; does the awesomeness of 3 million dewormings outweigh the suckiness of my death; is my death more or less sucky than the destruction of a high-end MRI machine), I've converged on a subjective value for my life of about $1 million. Like, give or take a lot.

Cryonics feels unlikely to work for me. I think the basic principle is sound, but if someone were to tell me that cryonics had been shown to work for a human, I would be surprised. That's not a number, though, so I took the final result of Steve Harris' calculations here (inspired by the Sagan-Drake equation). His optimistic number is a 0.15 chance of success, or 1 in 7; his pessimistic number is 0.0023, or less than 1/400. My brain thinks 15% is too high and 0.23% sounds reasonable, but I'll use his numbers for upper and lower bounds. 

I started out trying to calculate the expected cost by some convoluted method where I was going to estimate my chance of dying each year, repeatedly subtract it from one, and multiply by the amount I'd pay each year, to work out how much I could expect to pay in total. Benquo pointed out to me that calculations like this are usually done as perpetuities, i.e. present-value (PV) calculations, so I made one in Excel and plugged in some numbers, approximating the Alcor annual membership fee as $600. Assuming my own discount rate is somewhere between 2% and 5%, I ran two calculations with those numbers. For a 2% discount rate, the total expected, time-discounted cost would be $30,000; for a 5% discount rate, $12,000.
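
For anyone who wants to check that arithmetic without Excel, here's a minimal sketch in Python (the $600 fee and the 2%/5% discount rates are the same numbers as above; the rest is just the standard perpetuity formula):

```python
def perpetuity_pv(payment: float, rate: float) -> float:
    """Present value of paying `payment` every year forever, discounted at `rate`."""
    return payment / rate

annual_fee = 600  # approximate Alcor annual membership fee

print(round(perpetuity_pv(annual_fee, 0.02)))  # 30000 -> the $30,000 figure at a 2% discount rate
print(round(perpetuity_pv(annual_fee, 0.05)))  # 12000 -> the $12,000 figure at a 5% discount rate
```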

Excel also lets you do calculations on perpetuities that aren't perpetual, so I plugged in 62 years, the time by which I'll have a 50% chance of dying according to this actuarial table. It didn't change the final results much; $11,417 for a 5% discount rate and $21,000 for the 2% discount rate. 
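
The finite version is just an ordinary annuity rather than a perpetuity; the same sketch, with the 62-year horizon from the actuarial table:

```python
def annuity_pv(payment: float, rate: float, years: int) -> float:
    """Present value of paying `payment` every year for `years` years, discounted at `rate`."""
    return payment * (1 - (1 + rate) ** -years) / rate

annual_fee = 600  # approximate Alcor annual membership fee

print(round(annuity_pv(annual_fee, 0.05, 62)))  # about 11,417 -> matches the figure above
print(round(annuity_pv(annual_fee, 0.02, 62)))  # about 21,200 -> roughly the $21,000 above
```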

That's not including the life insurance payout you need to pay for the actual freezing. So, life insurance premiums. Benquo's plan is five years at $2200 a year and nothing after that, which apparently isn't uncommon among plans for young, healthy people. I could probably get something as good or better; I'm younger. So, $11,000 in total life insurance premiums. If I went with permanent annual payments instead, I could do a perpetuity calculation for those too.

In short: around $40,000 total, rounding up.

What's my final number?

There are two numbers I can output. When I started this article, one of them seemed like the obvious end product, so I calculated that. When I went back to finish this article days later, I walked through all the calculations again while writing the actual paragraphs, did what seemed obvious, ended up with a different number, and realized I'd calculated a different thing. So I'm not sure which one is right, although I suspect they're symmetrical. 

If I multiply the value of my life by the success chance of cryonics, I get a number that represents (I think) the monetary value of cryonics to me, given my factual beliefs and values. It would go up if the value of my life to me went up, or if the chances of cryonics succeeding went up. I can compare it directly to the actual cost of cryonics.

I take $1 million and plug in either 0.15 or 0.0023, and I get $150,000 as an upper bound and $2,300 as a lower bound, to compare to a total cost somewhere in the ballpark of $40,000.

If I take the price of cryonics and divide it by the chance of success (because if I sign up, I'm optimistically paying for 100 worlds in 15 of which I survive, or pessimistically paying for 10,000 worlds in 23 of which I survive), I get the expected total cost per life saved (mine), which I can compare to the figure I place on the value of my life. It goes down if the cost of cryonics goes down or the chances of success go up.

I plug in my numbers and get a lower bound of $267,000 and an upper bound of about $17 million.
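
Both framings, as a minimal sketch using the made-up numbers above (the $1 million value of my life, Harris's 0.15/0.0023 bounds, and the roughly $40,000 total cost):

```python
value_of_life = 1_000_000   # subjective value of my life, from the thought experiments above
total_cost = 40_000         # membership perpetuity plus life insurance, rounded up
success_bounds = {"optimistic": 0.15, "pessimistic": 0.0023}  # Steve Harris's upper/lower chances

for label, p in success_bounds.items():
    worth_to_me = value_of_life * p    # framing 1: compare to the ~$40,000 cost
    cost_per_life = total_cost / p     # framing 2: compare to the ~$1 million value of my life
    print(f"{label}: worth ${worth_to_me:,.0f} to me; ${cost_per_life:,.0f} per life saved")

# optimistic:  worth $150,000 (above the cost); about $266,667 per life saved (below $1 million)
# pessimistic: worth $2,300 (below the cost); about $17.4 million per life saved (above $1 million)
```

The two framings are the same inequality rearranged (value times probability exceeds cost exactly when cost divided by probability is below value), which is presumably why the optimistic and pessimistic cases land on the same side of the line either way.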

In both framings, the optimistic success estimate makes cryonics look worthwhile and the pessimistic estimate doesn't, and my personal estimate of cryonics succeeding falls closer to the pessimistic end. But it's close. It's a lot closer than I thought it would be.

Updating somewhat in favour of the hypothesis that I'll end up signed up for cryonics.

Fine-tuning and next steps

I could get better numbers for the value of my life to me. It's kind of squicky to think about, but that's a bad reason. I could ask other people about their numbers and compare what they're accomplishing in their lives to my own life. I could do more thought experiments to better acquaint my brain with how much value $1 million actually is, because scope insensitivity. I could do upper and lower bounds.

I could include the cost of organizations cheaper than Alcor as a lower bound; the info is all here and the calculation wouldn't be too nasty but I have work in 7 hours and need to get to bed. 

I could do my own version of the cryonics success equation, plugging in my own estimates. (Although I suspect my estimates would be less informed and less valuable than what's already there.)

I could ask what other people think. Thus, write this post. 

 

Comments (252)

Cryonics has a more serious problem which I seldom see addressed. I've noticed a weird cognitive dissonance among cryonicists where they talk a good game about how much they believe in scientific progress, technological acceleration and so forth - yet they seem totally unconcerned about the fact that we just don't see this alleged trend happening in cryonics technology, despite its numerous inadequacies. In fact, Mike Darwin argues that the quality of cryopreservations has probably regressed since the 1980's.

In other words, attempting the cryogenic preservation of the human brain in a way which makes sense to neuroscientists, which should become the real focus of the cryonics movement, has a set of solvable, or at least describable, problems which current techniques could go a long way towards solving without having to invoke speculative future technologies or friendly AI's. Yet these problems have gone unsolved for decades, and not for the lack of financial resources. Just look at some wealthy cryonicists' plans to waste $100 million or more building that ridiculous Timeship (a.k.a. the Saulsoleum) in Comfort Texas.

What brought about this situation? I've made myself unpopular by ...

James D. Miller does this in his Singularity Rising book. I leave articulating the logical problem with this claim as an exercise to the reader)

I would be grateful if you would tell me what the logical problem is.

Icehawk78, 10y:
Presumably, the implication is that these predictions are not based on facts, but had their bottom line written first, and then everything else added later. [I make no endorsement in support or rejection of this being a valid conclusion, having given it very little personal thought, but this being the issue that advancedatheist was implying seems fairly obvious to me.]
James_Miller, 10y:
Thanks, if this is true I request advancedatheist explain why he thinks I did this.
Icehawk78, 10y:
I can't say on behalf of advancedatheist, but others who I've heard make similar statements generally seem to base them on a manner of factor analysis; namely, assuming that you're evaluating a statement by a self-proclaimed transhumanist predicting the future development of some technology that currently does not exist, the factor which best predicts what date that technology will be predicted as is the current age of the predictor. As I've not read much transhumanist writing, I have no real way to evaluate whether this is an accurate analysis, or simply cherry picking the most egregious/popularly published examples (I frequently see Kurzweil and... mostly just Kurzweil, really, popping up when I've heard this argument before). [As an aside, I just now, after finishing this comment, made the connection that you're the author that he cited as the example, rather than just a random commenter, so I'd assume you're much more familiar with the topic at hand than me.]
Swimmer963 (Miranda Dixon-Luinenburg), 10y:
The problem of people compartmentalizing between what they think is valuable and what they ought to be working on is pretty universal. That being said, it does make cryonics less likely to succeed, and thus worth less; it's just a failure mode that might be hard to solve.
Vaniver, 10y:
I believe I've seen Mike Darwin and others specifically point to Eliezer as an example of a cryonics proponent who is increasing the number and ratio of spectator cryonauts, rather than active cryonauts.

As a counterpoint, let me offer my own experience rediscovering cryonics through Eliezer.

Originally, I hadn't seen the point. Like most people, I assumed cryonauts dreamed that one day someone would simply thaw them out, cure whatever killed them, and restart their heart with shock paddles or something. Even the most rudimentary understanding of or experience with biology and freezing temperatures made this idea patently absurd.

It wasn't until I discovered Eliezer's writings circa 2001 or so that I was able to see connections between high shock-level concepts like uploading, nanotech, and superintelligence. I reasoned that a successful outcome of cryonics is not likely to come through direct biological revival, but rather through atomically precise scanning, super-powerful computational reconstruction, and reinstantiation as an upload or in a replacement body.

The upshot of this reasoning is that for cryonics to have any chance of success, a future must be assured in which these technologies would be safely brought to bear on such problems. I continue to have trouble imagining such a future existing if the friendly AI problem is not solved before it is too late. As friendly A...

JoshuaZ, 9y:
Which should be fine; an increase in spectator cryonauts is fine as long as it isn't stealing from the pool of active cryonauts. Since in this case it is making people who wouldn't have anything to do with cryonics be involved, it is still a good thing.
Jonathan Paulson, 10y:
No one is working on cryonics because there's no money/interest because no one is signed up for cryonics. Probably the "easiest" way to solve this problem is to convince the general public that cryonics is a good idea. Then someone will care about making it better. Some rich patron funding it all sounds good, but I can't think of a recent example where one person funded a significant R&D advance in any field.
taryneast, 10y:
"but I can't think of a recent example where one person funded a significant R&D advance in any field."

Christopher Reeve funds research into curing spinal cord injury. Terry Pratchett funds research into Alzheimer's. I'm sure there are others.
Jonathan Paulson, 10y:
Pratchett's donation appears to account for 1.5 months of the British funding towards Alzheimer's (numbers from http://web.archive.org/web/20080415210729/http://www.alzheimers-research.org.uk/news/article.php?type=News&archive=0&id=205, math from me). Which is great and all, but public funding is way better. So I stand by my claim.
taryneast, 10y:
Ok, I stand corrected re: Pratchett. How did you come by the numbers? And can you research Reeve's impact too? Until then, you've still "heard of one recent example" :)
[anonymous], 10y:
applause. If there actually existed a cryopreservation technique that had been proven to really work in animal models - or better yet in human volunteers! - I would go ahead and sign up. But it doesn't exist, and instead of telling me who's working on making it exist, people tell me about the chances of successful revival using existing techniques. I could say the same thing to the FAI effort. Actually, no, I am saying the same thing. Everyone seems to believe that too few people are committed to FAI research, but very few step up to actually volunteer their own efforts, even on a part-time basis, despite much of it still being in the realm of pure mathematics or ethics where you need little more than a good brain, some paper, pens, and lots of spare time to make a possible contribution. Nu? If everyone has a problem and no-one is doing anything about it... why?

It feels to me like the general pro-cryo advocacy here would be a bit of a double standard, at least when compared to general memes of effective altruism, shutting up and multiplying, and saving the world. If I value my life equally to the lives of others, it seems pretty obvious that there's no way by which the money spent on cryonics would be a better investment than spending it on general do-gooding.

Of course, this is not a new argument, and there are a few standard responses to it. The first one is that I don't actually value my life equally to that of everyone else's life, and that it's inconsistent to appeal to that when I don't appeal to it in my life in general. And it's certainly true that I do actually value my own life more than I value the life of a random stranger, but I do that because I'm human and can't avoid it, not because my values would endorse that as a maximally broad rule. If I get a chance to actually act in accordance to my preferred values and behave more altruistically than normal, I'll take it.

The other standard argument is that cryonics doesn't need to come out of my world-saving budget, it can come out of my leisure budget. Which is also true, but it r...

I've had thoughts along similar lines. But it seems like there's a "be consistent about your selfishness" principle at work here. In particular, if...

  • ...you are generally willing to spend $X / month for something that has a significant chance of bringing you a very large benefit, like saving your life...
  • ...where $X /month is the cost of being signed up for cryonics (organization membership + life insurance)...
  • ... and you think cryonics has a significant chance of working...

It seems kind of inconsistent to not be signed up for cryonics.

(Caveat: not sure I can make consistent sense of my preferences involving far-future versions of "me".)

Consistency is a good thing, but it can be outweighed by other considerations. If my choices are between consistently giving the answer '2 + 2 = 5' on a test or sometimes giving '2 + 2 = 5' and other times '2 + 2 = 4', the latter is probably preferable. Kaj's argument is that if your core goal is EA, then spending hundreds of thousands of dollars on cryonics or heart surgery is the normatively wrong answer. Getting the wrong answer more often is worse than getting it less often, even when the price is a bit of inconsistency or doing-the-right-thing-for-the-wrong-reasons. When large numbers of lives are at stake, feeling satisfied with how cohesive your personal narrative or code of conduct is is mostly only important to the extent it serves the EA goal.

If you think saving non-human animals is the most important thing you could be doing, then it may be that you should become a vegan. But it's certainly not the case that if you find it too difficult to become a vegan, you should therefore stop trying to promote animal rights. Your original goal should still matter (if it ever mattered in the first place) regardless of how awkward it is for you to explain and justify your behavioral inconsistency to your peers.

Kaj_Sotala, 10y:
I endorse this summary.
Kaj_Sotala, 10y:
While I don't think that there's anything wrong with preferring to be consistent about one's selfishness, I think it's just that: a preference. The common argument seems to be that you should be consistent about your preferences because that way you'll maximize your expected utility. But that's tautological: expected utility maximization only makes sense if you have preferences that obey the von Neumann-Morgenstern axioms, and you furthermore have a meta-preference for maximizing the satisfaction of your preferences in the sense defined by the math of the axioms. (I've written a partial post about this, which I can try to finish if people are interested.) For some cases, I do have such meta-preferences: I am interested in the maximization of my altruistic preferences. But I'm not that interested in the maximization of my other preferences. Another way of saying this would be that it is the altruistic faction in my brain which controls the verbal/explicit long-term planning and tends to have goals that would be ordinarily termed as "preferences", while the egoist faction is more motivated by just doing whatever feels good at the moment and isn't that interested in the long-term consequences.
Alejandro1, 10y:
Another way of putting this: If you divide the things you do between "selfish" and "altruistic" things, then it seems to make sense to sign up for cryonics as an efficient part of the "selfish" component. But this division does not carve at the joints, and it is more realistic to the way the brain works to slice the things you do between "Near mode decisions" and "Far mode decisions". Then effective altruism wins over cryonics under Far considerations, and neither is on the radar under Near ones.
James_Miller, 10y:
A huge number of people save money for a retirement that won't start for over a decade. For them, both retirement planning and cryonics fall under the selfish, far mode.
Alejandro1, 10y:
That is true. On the other hand, saving for retirement is a common or even default thing to do in our society. If it wasn't, then I suspect many of those who currently do it wouldn't do it for similar reasons to those why they don't sign up for cryonics.
Jiro, 10y:
I suspect most people's reasons for not signing up for cryonics amount to "I don't think it has a big enough chance of working and paying money for a small chance of working amounts to Pascal's Mugging." I don't see how that would apply to retirement--would people in such a society seriously think they have only a very small chance of surviving until retirement age?
[anonymous], 10y:
0.23% is not a significant chance.
[anonymous], 10y:

(Disclaimer: I absolutely promise that I am not evil.)

The first one is that I don't actually value my life equally to that of everyone else's life, and that it's inconsistent to appeal to that when I don't appeal to it in my life in general.

Question: why the hell not? My brain processed this kind of question for the first time around fourth grade, when wanting special privileges to go on a field trip with the other kids despite having gotten in trouble. The answer I came up with then is the one I still use now: "why me? Because of Kant's Categorical Imperative" (that is, I didn't want to live in a world where nobody went on the field trip, therefore I should get to go on it -- though this wasn't exactly clear thinking regarding the problem I really had at the time!). I would not want to live in a world where everyone kept their own and everyone else's lifestyle to an absolute minimum in order to act with maximal altruism. Quite to the contrary: I want everyone to have as awesome a life as it is physically feasible for them to have!

I also do give to charity, do pay my taxes, and do support state-run social-welfare programs. So I'm not advocating total selfishness...

Kaj_Sotala, 10y:
I think that the argument you're going for here (though I'm not entirely sure, so do correct me if I'm misinterpreting you) is "if everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable, and thus a dedication to full altruism that makes you miserable is counterproductive to being altruistic". And I agree! I think every altruist should take care of themselves first - for various reasons, including the one you mentioned, and also the fact that miserable people aren't usually very effective at helping others, and because you can inspire more people to become altruistic if they see that it's possible to have an awesome time while being an altruist. But of course, "I should invest in myself because having an awesome life lets me help others more effectively" is still completely compatible with the claim of "I shouldn't place more intrinsic value on others than in myself". It just means you're not being short-sighted about it.
[anonymous], 10y:
More like, "If everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable, therefore total altruism is an incoherent value insofar as you expect anyone (including yourself) to ever actually follow it to its logical conclusion, therefore you shouldn't follow it in the first place." Or, put simply, "Your supposed all-altruism is self-contradictory in the limit." Hence my having to put a disclaimer saying I'm not evil, since that's one of the most evil-villain-y statements I've ever made. Of course, there are complications. For one thing, most people don't have the self-destructive messiah complex necessary for total altruism, so you can't apply first-level superrationality (ie: the Categorical Imperative) as including everyone. What I do endorse doing is acting with a high-enough level of altruism to make up for the people who don't act with any altruism while also engaging in some delta of actual non-superrational altruism. How to figure out what level of altruistic action that implies, I have no idea. But I think it's better to be honest about the logically necessary level of selfishness than to pretend you're being totally altruistic but rationalize reasons to take care of yourself anyway.
Kaj_Sotala, 10y:
Sorry, I don't follow. If the logical result of accepting full misery to oneself would be everyone being miserable, why wouldn't the altruists just reason this out and not accept full misery to themselves? "Valuing everyone the same as yourself" doesn't mean you'd have to let others treat you any way they like, it just means you'd in principle be ready for it, if it was necessary. (I think we're just debating semantics rather than disagreeing now, do you agree?)
[anonymous], 10y:
I think we have slightly different values, but are coming to identical practical conclusions, so we're agreeing violently. EDIT: Besides, I totally get warm fuzzies from being nice to people, so it's not like I don't have a "selfish" motivation towards a higher level of altruism, anyway. SWEAR I'M NOT EVIL.
Kaj_Sotala, 10y:
You said you'd prefer everyone to live awesome lives, I'm not sure how that could be construed as evil. :)
[anonymous], 10y:
Serious answer: Even if I don't endorse it, I do feel a pang of guilt/envy/low-status at being less than 100% a self-impoverishing Effective Altruist, which has been coming out as an urge to declare myself not-evil, even by comparison. Joke answer: eyes flash white, sips tea. SOON.
Kaj_Sotala, 10y:
Okay, in that case you should stop feeling those negative emotions right now. :) Nobody here is a 100% self-impoverishing EA, and we ended up agreeing that it wouldn't even be a useful goal to have, so go indulge yourself in something not-at-all-useful-nor-altruistic and do feel good about it. :)
TheOtherDave, 10y:
How confident of this are we? I mean, there are many tasks which can lead to my happiness. If I perform a large subset of those tasks for my own benefit, they lead to a certain happiness-level for me... call that H1. If I perform a small subset of those tasks for everyone's benefit, they lead to a different happiness-level, H2, for everyone including me. H2 is, of course, much lower than H1... in fact, H2 is indistinguishable from zero, really, unless I'm some kind of superstar. (I'm not aggregating across people, here, I'm just measuring how happy I am personally.) So far, so good. But if everyone else is also performing a small subset of those tasks for everyone's benefit, then my happiness is N*H2. H2 is negligible, but N is large. Is (N*H2) > H1? I really have no idea. On the face of it, it seems implausible. On the other hand, comparative advantage is a powerful force. We've discovered that when it comes to producing goods and services, for example, having one person performing a single task for everyone does much better than having everyone do everything for themselves. Perhaps the same is true for producing happiness? Which is not necessarily an argument for altruism in the real world, but in this hypothetical world where everyone acts with maximal altruism, maybe the end result is everyone is having a much more awesome life... they're simply having it thanks to the efforts of a huge community, rather than entirely due to their own efforts. Then again, that sounds like a pretty good description of the real world I live in, also.
Swimmer963 (Miranda Dixon-Luinenburg), 10y:
I think this is why it feels squicky trying to assign a monetary value to my life; part of me thinks it's selfish to assign any more value to my life than Givewell's stated cost to save a stranger's life ($1700-ish??) But I know I value it more than that. I wouldn't risk my life for a paycheck.
[anonymous], 10y:

I wouldn't risk my life for a paycheck.

Do you drive to work?

Swimmer963 (Miranda Dixon-Luinenburg), 10y:
I bike, which might be worse but also might be better; depends how much the added lifespan from physical fitness trades off against the risk of an accident. And the risk is very likely less than 1/1000 given the years that I've been biking accident-free, so there's a multiplication there.
Lumifer, 10y:
I rather suspect it depends primarily on where you bike. Biking through streets of Manhattan has different risk than biking on rural Wyoming roads.
[anonymous], 10y:
Driving under the same conditions has similar risk disparity.
Lumifer, 10y:
I rather doubt that -- do you have data?
Nornagest, 10y:
I seem to remember the answer being that cycling is more dangerous per mile than driving, but that the increase in physical fitness more than compensates in all-cause mortality terms. The first paper I found seems to point to the same conclusion. I don't know how that would be adjusted in someone that already has fitness habits. It probably also depends on how well developed the cycling infrastructure in your town is, but I've never seen any actual data on that either.
Lethalmud, 10y:
In my experience bicycling is much safer. I have been cycling more or less every day since I was at least 8, and have never been in a life-threatening accident. However, while traveling by car, I have been in 2 or 3 potentially life-threatening crashes. But this will be very dependent on location, culture, and personal variables.
[anonymous], 10y:
Do you know of a safer way to commute that lets you keep the same range of possible jobs?
hyporational, 10y:
If you got a lethal disease with a very expensive treatment, and you could afford it, would you refuse the treatment? What would the threshold price be? Does this idea feel as squicky as spending on cryonics?
[anonymous], 10y:
Depends: has the treatment been proven to work before? (Yes, I've heard the probability calculations. I don't make medical decisions based on plausibility figures when it has simply never been seen to work before, even in animal models.)
Vulture, 10y:
Part of shutting up and multiplying is multiplying through the probability of a payoff with the value of the payoff, and then treating it as a guaranteed gain of that much utility. This is a basic property of rational utility functions. (I think. People who know what they're talking about, feel free to correct me)
[anonymous], 10y:
You are correct regarding expected-utility calculations, but I make an epistemic separation between plausibilities and probabilities. Plausible means something could happen without contradicting the other things I know about reality. Probable means there is actually evidence something will happen. Expected value deals in probabilities, not plausibilities. Now, given that cryonics has not been seen to work on, say, rats, I don't see why I should expect it to already be working on humans. I am willing to reevaluate based on any evidence someone can present to me. Of course, then there's the question of what happens on the other side, so to speak, of who is restoring your preserved self and what they're doing with you. Generally, every answer I've heard to that question made my skin crawl.
James_Miller, 10y:
I bet you would. Lots of jobs have components (such as extra stress, less physical activity, or living in a dangerous or dirty city) that reduce life expectancy. Unless you pick the job which maximizes your life span, you would effectively be risking your life for a paycheck. Tradeoffs are impossible to escape, even if you don't explicitly think about them.

In context, it seems uncharitable to read "risk my life" to include any risk small enough that taking it would still be consistent with valuing one's own life far above $1700.

MugaSofer, 10y:
Remember, your life has instrumental value others don't; if you risk your life for a paycheck, you're risking all future paychecks as well as your own life-value. The same applies to stressing yourself out obsessively working multiple jobs, robbing banks, selling your redundant organs ... even simply attempting to spend all your money on charity and the cheapest of foods tends to be a fairly bad suggestion for the average human (although if you think you can pull it off, great!)
poiuyt, 10y:
I imagine that a lot of people on Less Wrong get off on having someone tell them "with a strong tone of moral obligation" that death can be defeated and that they simply must invest their money in securing their own immortality. Even if it isn't a valid moral argument, per se, phrasing it as one makes cryonics buyers feel better about their choice and improves the number of warm fuzzies they get from the thought that some day they'll wake up in the future, alive and healthy with everyone congratulating them on being so very brave and clever and daring to escape death like that.
[anonymous], 10y:
Just asking, were you trying to make that sound awful and smug? Because that honestly sounds like a future I don't want to wake up in. I want to wake up in the future where people have genuine compassion for the past, and are happy to welcome the "formerly dead" to a grand new life, hopefully even including their friends and loved ones who also made it successfully to "the Future". If the post-cryonic psychological counsellors of the future woke me up with, "Congratulations, you made the right business decision!", then I would infer that things had gone horribly wrong.
Paul Crowley, 10y:
Lost in the wilderness, I think we should go North; you, South. If I find help, but learn that you died, my first thought will not be "neener neener told you so".
Brillyant, 10y:
Interesting... Is it possible cryonic wakers might be treated very poorly? Perhaps stigmatized? I'm very ignorant of what all is involved in either "end" of cryonics, but what if, say, the cost of resurrecting the frozen person is prohibitively high and future people lobby to stop their waking up? And even the ones who do wake up are treated like pariahs? It might play out like the immigration situation in the US: A nation, founded by immigrants, that is now composed of a big chunk of citizens who hate immigrants. I can already hear the arguments now... "They won't know if we don't wake them up. Besides every one we wake costs us X resources which damages Y lives by Z%."
Jiro, 10y:
How is that any different from saying "a nation, founded by slaveowners, that is now composed of a big chunk of citizens who hate slaveowners"? Certainly the fact that your ancestors benefited from being slaveowners is no reason why you should support slaveowners now.
poiuyt, 10y:
Yep. While genuine compassion is probably the ideal emotion for a post-cryonic counselor to actually show, it's the anticipation of their currently ridiculed beliefs being validated, with a side order of justified smugness that gets people going in the here and now. There's nothing wrong with that: "Everyone who said I was stupid is wrong and gets forced to admit it." is probably one of the top ten most common fantasies and there's nothing wrong with spending your leisure budget on indulging a fantasy. Especially if it has real world benefits too.
[anonymous], 10y:
That's... actually kinda sad, and I think I'm going to go feed my brain some warm fuzzies to counter it. Trying to live forever out of spite instead of living well in the here and now that's available? Silly humans.
Eliezer Yudkowsky, 10y:
Don't worry, poiuyt is making all of this up. I don't personally know of anyone to whom this imaginary scenario applies. The most common sentiment about cryonics is "God dammit I have to stop procrastinating", hence the enjoinders are welcome; as for their origin point, well, have you read HPMOR up to Ch. 96?
poiuyt, 10y:
I feel that I am being misunderstood: I do not suggest that people sign up for cryonics out of spite. I imagine that almost everyone signed up for cryonics does so because they actually believe it will work. That is as it should be. I am only pointing out that being told that I am stupid for signing up for cryonics is disheartening. Even if it is not a rational argument against cryonics, the disapproval of others still affects me. I know this because my friends and family make it a point to regularly inform me of the fact that cryonics is "a cult", that I am being "scammed out of my money" by Alcor and that even if it did work, I am "evil and wrong" for wanting it. Being told those things fills me with doubts and saps my willpower. Hearing someone on the pro-cryonics side of things reminding me of my reasons for signing up is reassuring. It restores the willpower I lose hearing those around me insulting my belief. Hearing that cryonics is good and I am good for signing up isn't evidence that cryonics will work. Hearing that non-cryonicists will "regret" their choice certainly isn't evidence that cryonics is the most effective way to save lives. But it is what I need to hear in order to not cave in to peer pressure and cancel my policy. I get my beliefs from the evidence, but I'll take my motivation from wherever I can find it.
[anonymous], 10y:
Eliezer, I have been a frequent and enthusiastic participant on /r/hpmor for years before I decided to buck up and make a LessWrong account. I don't recall someone answering my question in the other place I posted it, so I might as well ask you (since you would know): provided I am unwilling to believe current cryonic techniques actually work (even given a Friendly superintelligence that wants to bring people back), where can I be putting money towards other means of preserving people or life-extension in general? Gwern had a posting once on something called "brain plastination", which supposedly works "better" in some sense than freezing in liquid nitrogen, even though that still relies on em'ing you to bring you back, which frankly I find frightening as all hell. Is there active research into that? Into improved cryonics techniques? Or should I just donate to anti-aging research on grounds that keeping people alive and healthy for longer before they die is a safer bet than, you know, finding ways to preserve the dead such that they can be brought back to life later?
somervta, 10y:
The Brain Preservation Foundation may be what you're looking for.
blacktrance, 10y:
There's good and bad spite. Good spite is something like, "They call me mad! But I was right all along. Muahahaha!" and feeling proud and happy that you made the right choice despite opposition from others. Bad spite is something like, "I was right and they were wrong, and now they're suffering for their mistakes. Serves them right". One is accomplishment, the other is schadenfreude.
Kawoomba, 10y:
Yes, it is a great psychological coping mechanism. Death is such a deeply personal topic that it would be folly to assume fuzzies, or the avoidance of frighties, didn't factor in. However, such is the case with any measure or intervention explicitly relating to lifespan extension. So while extra guarding against motivated cognition is in order when dealing with one's personal future non-existence and the postponing thereof, saying "you're doing it because of the warm fuzzies!" isn't sufficient rejection of death escapism. The cryonics buyer may well answer "well, yes, that, and also, you know, the whole 'potential future reanimation' part". You still have to engage with the object level.
Richard_Kennaway, 10y:
Should a monk who has taken vows have a sin budget, because the flesh is weak? You seem conflicted, believing you should not value your own life over others', but continuing to do so; then justifying yielding to temptation on the grounds that you are tempted. Of course it is. Has it ever been presented as anything else, as "Escape death so you can do more for other people"? Support for cryonics is for the sake of everyone, but signing up to it is for oneself alone.

Should a monk who has taken vows have a sin budget, because the flesh is weak?

If that helps them achieve their vows overall.

I did try valuing the lives of others equally before. It only succeeded in making me feel miserable and preventing me from getting any good done. Tried that approach, doesn't work. Better to compromise with the egoist faction and achieve some good, rather than try killing it with fire and achieve nothing.

Of course it is. Has it ever been presented as anything else

Once people start saying things like "It really is hard to find a clearer example of an avoidable Holocaust that you can personally do something substantial about now" or "If you don't sign up your kids for cryonics then you are a lousy parent", it's hard to avoid reading a moral tone into them.

Richard_Kennaway, 10y:
The opportunity for self-serving application of this principle casts a shadow over all applications. I believe this hypothetical monk's spiritual guide would have little truck with such excuses, rest and food, both in strict moderation, being all the body requires. (I have recently been reading the Sayings of the Desert Fathers and St John Climacus' "Ladder of Divine Ascent", works from the first few centuries of Christianity, and the rigours of the lives described there are quite extraordinary.) "It's not me that wants this, it's this other thing I share this body with." Personally, that sounds to me like thinking gone wrong, whether you yield to or suppress this imaginary person. You appear to be identifying with the altruist faction when you write all this, but is that really the altruist faction speaking, or just the egoist faction pretending not to be? Recognising a conflict should be a first step towards resolving it. These are moral arguments for supporting cryonics, rather than for signing up oneself. BTW, if it's sinfully self-indulgent to sign up oneself, how can you persuade anyone else to? Does a monk preach "eat, drink, and be merry"? Finally, when I look at the world, I see almost no-one who values others above themselves. What, then, will the CEV of humanity have to say on the subject?
Kaj_Sotala, 10y:
[…] I'm confused over what exactly your position is. The first bit I quoted seems to imply that you think that one should sacrifice everything in favor of altruism, whereas the second excerpt seems like a criticism of that position.
Richard_Kennaway, 10y:
My position is that (1) the universal practice of valuing oneself over others is right and proper (and I expect others to rightly and properly value themselves over me, it being up to me to earn any above-baseline favour I may receive), (2) there is room for discussion about what base level of compassion one should have towards distant strangers (I certainly don't put it at zero), and (3) I take the injunction to love one's neighbour as oneself as a corrective to a too low level of (2) rather than as a literal requirement, a practical rule of thumb for debiasing rather than a moral axiom. Perfect altruism is not even what I would want to want. I'm drawing out what I see as the implications of holding (which I don't) that we ought to be perfectly altruistic, while finding (as I do) that in practice it is impossible. It leads, as you have found, to uneasy compromises guiltily taken.
Kaj_Sotala, 10y:
I did say right in my original comment (emphasis added):
TheAncientGeek, 10y:
I will attempt a resolution: other people are as important as me, in principle, since I am not objectively anything special -- but I should concentrate my efforts on myself and those close to me, because I understand my and their needs better, and can therefore be more effective.
[anonymous], 10y:

I don't think that's a sufficient or effective compromise. If I'm given a choice between saving the life of my child, or the lives of a 1000 other children, I will always save my child. And I will only feel guilt to the extent that I was unable to come up with a 3rd option that saves everybody.

I don't do it for some indirect reason such as that I understand my children's needs better or such. I do it because I value my own child's life more, plain and simple.

[anonymous], 10y:
Important to whom?
TheAncientGeek, 10y:
You might as well have asked: special to whom? Even if there is no objective importance or specialness anywhere, it still follows that I have no objective importance or specialness.
MugaSofer, 10y:
For the record, you do have a limited supply of willpower. I'm guessing those monks either had extraordinary willpower reserves or nonstandard worldviews that made abstinence actually easier than sin.
hyporational, 10y:
It seems they practice that willpower muscle very explicitly for hours every day. Abstinence should actually be pretty easy considering you have very little else to drain your willpower with.
Richard_Kennaway, 10y:
If you think so.
MugaSofer, 10y:
Looking into your link now, but it was my understanding that the effect was weaker if the participant didn't believe in it, not nonexistent (i.e. disbelieving in ego depletion has a placebo effect.) Wikipedia, Font Of All Knowledge, concurs: ETA: It seems the Wikipedia citation is to a replication attempt of your link. They found the effect was real, but it only lessened ego depletion - subjects who were told they had unlimited willpower still suffered ego depletion, just less strongly. So yup, placebo.
EHeller, 10y:
I'm not sure the word "placebo" makes sense when you are discussing purely psychological phenomena. Obviously any effects will be related to psychology; it's not like they gave them a pill.
MugaSofer, 10y:
I ... think it's supposed to be regulated at least partially by glucose levels? So in some of the experiments, they were giving them sugar pills, or sugar water or something? I'm afraid this isn't actually my field :( But of course, no phenomenon is purely psychological (unless the patient is a ghost.) For example, I expect antidepressant medication is susceptible to the placebo effect.
Kaj_Sotala, 10y:
See here.
Richard_Kennaway, 10y:
If it isn't, you're doing something wrong. ETA: By which I don't mean that it is easy to do it right. Practicing anything involves a lot of doing it wrong while learning to do it right.
Wes_W, 10y:
It seems to me that, even valuing your own life and the lives of others equally, it's not necessarily inconsistent to pay much more for cryonics than it would cost to save a life by normal altruist means. Cryonics could save your life, and malaria nets could save somebody else's life, but these two life-savings are not equal. If you're willing to pay more to save a 5-year-old than an 85-year-old, then for some possible values of cryonics effectiveness, expectation of life quality post-resuscitation, and actual cost ratios, shutting up and multiplying could still favor cryonics. If this argument carries, it would also mean that you should be spending money on buying cryonics for other people, in preference to any other form of altruism. But in practice, you might have a hard time finding people who would be willing to sign up for cryonics and aren't already willing/able to pay for it themselves, so you'd probably have to default back to regular altruism. If you do have opportunities to buy cryonics for other people, and you value all lives equally, then you've still got the problem of whether you should sign yourself up rather than somebody else. But multiplying doesn't say you can't save yourself first there, just that you have no obligation to do so.
Kawoomba, 10y:
Edit: Since you don't in terms of your revealed preferences, are you aspiring to actually reach such a state? Would an equal valuation of your life versus a random other life (say, in terms of QALYs) be a desirable Schelling point, or is "more altruistic" always preferable even at that point (race to the bottom)?
Kaj_Sotala, 10y:
Depends on which part of my brain you ask. The altruistic faction does aspire to it, but the purely egoist faction doesn't want to be eradicated, and is (at least currently) powerful enough to block attempts to eradicate it entirely. The altruist faction is also not completely united, as different parts of my brain have differing opinions on which ethical system is best, so e.g. my positive utilitarian and deontological groups might join the egoist faction in blocking moves that led to the installation of values that were purely negative utilitarian.
A1987dM, 10y:
Have you read the second paragraph of the comment you're replying to?
Kawoomba, 10y:
Clarified in grandparent.
hyporational, 10y:
I don't understand this values vs preferred values thing. It sounds like "if I get a chance to go against my actual values in favor of some fictional values, I'll take it" which seems like a painful strategy. If you get to change your values in some direction permanently, it might work and I would understand why you'd want to change your cognition so that altruism felt better, to make your values more consistent.
Rob Bensinger, 10y:
This isn't really different from any other situation where people wish they had a different characteristic than they do. Sometimes such preferences are healthy and benign in the case of other mental states, e.g., preferring to acquire more accurate beliefs. I don't see any reason to think they can't be healthy and benign in the case of preferring to change one's preferences (e.g., to make them form a more consistent system, or to subordinate them to reflective and long-term preferences). As I noted to Chris above, consistency isn't necessarily the highest goal here. The best reason to change your values so that altruism feels better is because it enhances altruism, not because it enhances consistency.
hyporational, 10y:
I disagree. In most cases like this people wish they were more empathetic to their future selves, which isn't relevant in the case of tricking yourself to do radical altruism, if your future self won't value it more than your current self. This argument depends entirely on how much you value altruism in the first place, which makes it not very appealing to me.
Rob Bensinger, 10y:
I don't see the relevance. In prudential cases (e.g., getting yourself to go on a diet), the goal isn't to feel more empathy toward your future self. The goal is to get healthy; feeling more empathy toward your future self may be a useful means to that end, but it's not the only possible one. Similarly, in moral cases (e.g., getting yourself to donate to GiveWell), the goal isn't to feel more empathy toward strangers. The goal is to help strangers suffer and die less. Suppose you see a child drowning in your neighbor's pool, and you can save the child without incurring risk. But, a twist: You have a fear of water. Kaj and I aren't saying: If you're completely indifferent to the suffering of others, then there exists an argument so powerful that it can physically compel you to save the child. If that's your precondition for an interesting or compelling moral argument, then you're bound to be disappointed. Kaj and I are saying: If you care to some extent about the suffering of others, then it makes sense for you to wish that you weren't averse to water, because your preference not to be in the water is getting in the way of other preferences that you much more strongly prefer to hold. This is true even if you don't care at all about your aversion to bodies of water in other contexts (e.g., you aren't pining to join any swim teams). For the same reason, it can make sense to wish that you weren't selfish enough to squander money on bone marrow transplants for yourself, even though you are that selfish.
hyporational, 10y:
Sorry, I used empathy a bit loosely. Anyways, the goal is to generate utility for my future self. Empathy is one mechanism for that, and there are others. The only reason to lose weight and get healthy at least for me is that I know for sure my future self will appreciate that. Otherwise I would just binge to satisfy my current self. What I'm saying is that if the child was random and I had a high risk of dying when trying to save them then there's no argument that would make me take that risk although I'm probably much more altruistic than average already. If I had an irrational aversion to water that actually reflected none of my values then of course I'd like to get rid of that. It seems to me more like you're saying that if I have even an inkling of altruism in me then I should make it a core value that overrides everything else. I really don't understand. Either you are that selfish, or you aren't. I'm that selfish, but also happily donate money. There's no argument that could change that. I think the human ability to change core values is very limited, much more limited than the human ability to lose weight.
Rob Bensinger, 10y:
No. There are also important things that my present self desires be true of my future self, to some extent independently of what my future self wants. For instance, I don't want to take a pill that will turn me into a murderer who loves that he's a murderer, even though if I took such a pill I'd be happy I did. If your risk of dying is high enough, then you shouldn't try to save the child, since if you're sure to die the expected value may well be negative. Still, I don't see how this is relevant to any claim that anyone else on this thread (or in the OP) is making. 'My altruism is limited, and I'm perfectly OK with how limited it is and wouldn't take a pill to become more altruistic if one were freely available' is a coherent position, though it's not one I happen to find myself in. Then you understand the thing you were confused about initially: "I don't understand this values vs preferred values thing." Whether you call hydrophobia a 'value' or not, it's clearly a preference; what Kaj and I are talking about is privileging some preferences over others, having meta-preferences, etc. This is pretty ordinary, I think. Well, of course you should; when I say the word 'should', I'm building in my (conception of) morality, which is vaguely utilitarian and therefore is about maximizing, not satisficing, human well-being. For me to say that you should become more moral is like my saying that you shouldn't murder people. If you're inclined to murder people, then it's unlikely that my saying 'please don't do that, it would be a breach of your moral obligations' is going to have a large effect in dissuading you. Yet, all the same, it is bad to kill people, by the facts on the ground and the meaning of 'bad' (and of 'kill', and of 'to'...). And it's bad to strongly desire to kill people; and it's bad to be satisfied with a strong desire to kill people; etc. Acts and their consequences can be judged morally even when the actors don't themselves adhere to the moral system be
hyporational, 10y:
You should be more careful when thinking of examples and judging people explicitly. A true utilitarian would probably not want to make EA look as bad as you just did there, and would also understand that allies are useful to have even if their values aren't in perfect alignment with yours. Because of that paragraph, it's pretty difficult for me to look at anything else you said rationally. Here's some discussion by another person on why the social pressure applied by some EA people might be damaging to the movement.
Rob Bensinger, 10y:
I'm not trying to browbeat you into changing your values. (Your own self-descriptions make it sound like that would be a waste of time, and I'm really more into the Socratic approach than the Crusader approach.) I'm making two points about the structure of utilitarian reasoning: 1. 'It's better for people to have preferences that cause them to do better things.' is nearly a tautology for consequentialists, because the goodness of things that aren't intrinsically good is always a function of their effects. It's not a bold or interesting claim; I could equally well have said 'it's good for polar bears to have preferences that cause them to do good things'. Ditto for Clippy. If any voluntary behavior can be good or bad, then the volitions causing such behavior can also be good or bad. 2. 'Should' can't be relativized to the preferences of the person being morally judged, else you will be unable to express the idea that people are capable of voluntarily doing bad things. Do you take something about 1 or 2 to be unduly aggressive or dismissive? Maybe it would help if you said more about what your own views on these questions are. I'll also say (equally non-facetiously): I don't endorse making yourself miserable with guilt, forbidding yourself to go to weddings, or obsessing over the fact that you aren't exactly 100% the person you wish you were. Those aren't good for personal or altruistic goals. (And I think both of those matter, even if I think altruistic goals matter more.) I don't want to lie to you about my ideals in order to be compassionate and tolerant of the fact that no one, least of all myself, lives up to them. It would rather defeat the purpose of even having ideals if expressing or thinking about them made people less likely to achieve them, so I do hope we can find ways to live with the fact that our everyday moral heuristics don't have to be (indeed, as a matter of psychological realism, cannot be) the same as our rock-bottom moral algorithm.
0hyporational10y
Consequentialism makes no sense without a system that judges which consequences are good. By the way, I don't understand why consequentialism and egoism would be mutually exclusive, which you seem to imply by conflating consequentialism and utilitarianism.

I don't think I voluntarily do bad things according to my values, ever. I also don't understand why other people would voluntarily do bad things according to their own values. My values change though, and I might think I did something bad in the past. Other people do bad things according to my values, but if their actions are truly voluntary and I can't point out a relevant contradiction in their thinking, saying they should do something else is useless, and working to restrict their behavior by other means would be more effective. Connotatively comparing them to murderers and completely ignoring that values have a spectrum would be one of the least effective strategies that come to mind.

No. To me that seems like you're ignoring what's normally persuasive to people out of plain stubbornness. The reason I'm bringing this up is because I have altruistic goals too, and I find such talk damaging to them. Having ideals is fine if you make it absolutely clear that's all that they are. If thinking about them in a certain way motivates you, then great, but if it just makes some people pissed off then it would make sense to be more careful about what you say. Consider also that some people might have laxer ideals than you do, and still do more good according to your values. Ideals don't make or break a good person.
2Rob Bensinger10y
I'm not conflating the two. There are non-utilitarian moral consequentialisms. I'm not sure egoism qualifies, since egoism (like paperclip maximization) might not bear a sufficient family resemblance to the things we call 'morality'. But that's just a terminological issue. If an egoist did choose to adopt moral terminology like 'ought' and 'good', and to cash those terms out using egoism, then the egoist would agree with my claim 'It's better for people to have preferences that cause them to do better things.' But the egoist would mean by that 'It better fits the goals of my form of egoism for people to have preferences that cause them to do things that make me personally happy', whereas what I mean by the sentence is something more like 'It better fits the goals of my form of altruism for people to have preferences that cause them to do things that improve the psychological welfare and preference-satisfaction of all agents'.

Interesting! Then your usage of 'bad' is very unusual. (Or your preferences and general psychological makeup is very unusual.) Most people think themselves capable of making voluntary mistakes, acting against their own better judgment, regretting their decisions, making normative progress, etc.

Sorry, I don't think I was clear about why I drew this comparison. 'Murder' just means 'bad killing'. It's trivial to say that murder is bad. I was saying that it's nearly as trivial to say that preferences that lead to bad outcomes are bad. But it would be bizarre for anyone to suggest that every suboptimal decision is as bad as murder! I clearly should have been more careful in picking my comparison, but I just didn't think anyone would think I was honestly saying something almost unsurpassably silly.

What do you think is the best strategy for endorsing maximization as a good thing without seeming to endorse 'you should feel horribly guilty and hate yourself if you haven't 100% maximized your impact'? Or should we drop the idea that maximization is good altogether?
2blacktrance10y
Egoism is usually not the claim that everyone should act in the egoist's self-interest, but that everyone should act in their own self-interest, i.e. "It better fits the goal of my egoism for people to have preferences that cause them to do things that make them happy".
0Rob Bensinger10y
That's true in the philosophical literature. But consequentialist egoism is a complicated, confusing, very hard to justify, and very hard to motivate view, since when I say 'I endorse egoism' in that sense I'm really endorsing two contradictory goals, not a single goal: (1) An overarching goal to have my personal desires met; (2) An overarching goal that every person act in whatever way ey expects to meet eir desires. The former 'goal' is the truer one, in that it's the one that actually guides my actions to the extent I'm a 'good' egoist; the latter goal is a weird hanger-on that doesn't seem to be action-guiding. If the two goals come in conflict, then the really important and valuable bit (from my perspective, as a hypothetical egoist) is that people satisfy my values, not that they satisfy their own; possibly the two goals don't come into conflict that often, but it's clear which one is more important when they do. This is also useful because it sets up a starker contrast with utilitarianism; moral egoism as the SEP talks about it is a lot closer to descriptive egoism, and could well arise from utilitarianism plus a confused view of human psychology.
0blacktrance10y
The two goals don't conflict, or, more precisely, (2) isn't a goal, it's a decision rule. There is no conflict in having the goal of having your personal desires met and believing that the correct decision rule is to do whatever maximizes the fulfillment of one's own desires. It's similar to how in the prisoner's dilemma, each prisoner wants the other to cooperate, but doesn't believe that the other prisoner should cooperate.
0Rob Bensinger10y
I think it depends on what's meant by 'correct decision rule'. Suppose I came up to you and said that intuitionistic mathematics is 'correct', and conventional mathematics is 'incorrect'; but not in virtue of correspondence to any non-physical mathematical facts; and conventional mathematics is what I want people to use; and using conventional mathematics, and treating it as correct, furthers everyone else's goals more too; and there is no deeper underlying rule that rationally commits anyone to saying that intuitionistic mathematics is correct. What then is the content of saying that intuitionistic mathematics is right and conventional is wrong?

I don't think the other player will cooperate, if I think the other player is best modeled as a rational agent. I don't know what it means to add to that that the other player 'shouldn't' cooperate. If I get into a PD with a non-sentient Paperclip Maximizer, I might predict that it will defect, but there's no normative demand that it do so. I don't think that it should maximize paperclips, and if a bolt of lightning suddenly melted part of its brain and made it better at helping humans than at making paperclips, I wouldn't conclude that this was a bad or wrong or 'incorrect' thing, though it might be a thing that makes my mental model of the erstwhile paperclipper more complicated.
0blacktrance10y
Sorry, I don't know much about the philosophy of mathematics, so your analogy goes over my head. It means that it is optimal for the other player to defect, from the other player's point of view, if they're following the same decision rule that you're following. Given that you've endorsed this decision rule to yourself, you have no grounds on which to say that others shouldn't use it as well. If the other player chooses to cooperate, I would be happy because my preferences would have been fulfilled more than they would have been had he defected, but I would also judge that he had acted suboptimally, i.e. in a way he shouldn't have.
0hyporational10y
It seems various things are meant by egoism. Begins with "Egoism can be a descriptive or a normative position."
1Lumifer10y
It's also a common attack term :-/
0hyporational10y
I better stop using it. In fact, I better stop using any label for my value system.
0hyporational10y
I'd have no problem calling Clippy a consequentialist, but a polar bear would probably lack the sufficient introspection. You have to have some inkling about what your values are to have morality. You're right it's a terminology issue, and a difficult one at that.

Disclaimer: I use "pleasure" as an umbrella term for various forms of experiential goodness. Say there's some utility cap in my brain that limits the amount of pleasure I can get from a single activity. One of these activities is helping other people, and the amount of pleasure I get from this activity is capped in a way that I can only get under 50% of the maximum possible pleasure from altruism. Necessarily this will make me look for sources of pleasure elsewhere. What exactly does this make me? If I can't call myself an egoist, then I'm at a loss here. Perhaps "egoism" is a reputation hit anyway and I should ditch the word, huh?

Actually, the reason why EA ideas appeal to me is that the pleasure I can get by using the money on myself seems to be already capped, I'm making much more money than I use, and I'm looking for other sources. Since I learned about fuzzies, being actually effective seems to be the only way to get any pleasure from this altruism thing.

Most people don't do much introspection, so I would expect that. However you saying this surprises me, since I didn't expect to be unusual in this crowd. These are all bad only in retrospect and explicable by having insufficient information or different values compared to now, except "normative progress" I don't understand. Acting bad voluntarily would mean I make a choice which I expect to have bad consequences.

It might help your understanding to know what part of my decision process I usually identify with. This brings up another terminological problem. See, I totally understand I better use the word "bad" in a way that other people understand me, but if I used it while I'm describing my own decision process, that would lead me to scold myself…
2Kaj_Sotala10y
In Yvain's liking/wanting/endorsing categorization, "preferred values" corresponds to any values that I approve of. Another way of saying it would be that there are modules in my brain which execute one set of behaviors, whereas another set of modules would prefer to be engaging in some other set of behaviors. Not really different from any situation where you end up doing something that you think that you shouldn't.
1blacktrance10y
If you approve of these values, why don't you practice them? It seems to me that approving of a value means you want others to practice it, regardless of whether you want it for yourself.
1Kaj_Sotala10y
Did I say I don't? I'm not signed up for cryonics, for instance.
0blacktrance10y
I mean valuing people equally.
0Kaj_Sotala10y
Yes, that's what my above comment was a reference to. I do my best to practice it as well as I can.
0hyporational10y
It seems to me you're looking for temporal consistency. My problem understanding you stems from the fact that I don't expect my future self to wish I had been any more altruistic than I am right now. I don't think being conflicted makes much sense without considering temporal differences in preference, and I think Yvain's descriptions fit this picture.
1Kaj_Sotala10y
I guess you could frame it as a temporal inconsistency as well, since it does often lead to regret afterwards, but it's more of an "I'm doing this thing even though I know it's wrong" thing: not a conflict between one's current and future self, but rather a conflict between the good of myself and the good of others.
0hyporational10y
Interesting. I wonder if we have some fundamental difference in perceived identity at play here. It makes no sense to me to have a narrative where I do things I don't actually want to do.

Say I attach my identity to my whole body. There will be no conflict here, since whatever I do is a result of a resolved conflict hidden in the body and therefore I must want to do whatever I'm doing.

Say I attach my identity to my brain. My brain can want things that my body cannot do, but whatever the brain tells the body to do will be a result of a resolved conflict hidden inside the brain, and I will tell my body to do whatever I want my body to do. Whatever conflict of preferences arises will be a confusion of identity between the brain and the body.

Say I attach my identity to a part of my brain, to this consciousness thing that seems to be in charge of some executive functions, probably residing in the frontal cortex. Whatever this part of the brain tells the rest of the brain will be a result of a resolved conflict hidden inside this part of the brain, and again whatever I tell the rest of my brain to do will necessarily have to be what I want to tell it to do, but I can't expect the rest of my brain to do something it cannot do. Whatever conflict arises will be a confusion of identity between this part and the rest of the brain.

I can think of several reasons why I'd want to assume a conflicted identity, and almost all of them involve signalling and social convenience.
2Kaj_Sotala10y
I think the difference here is that, from the inside, it often doesn't feel like my actions were the result of a resolved conflict. Well, in a sense they were, since otherwise I'd have been paralyzed with inaction. But when I'm considering some decision that I'm conflicted over, it very literally feels like there's an actual struggle between different parts of my brain, and when I do reach a decision, the struggle usually isn't resolved in the sense of one part making a decisive argument and the other part acknowledging that they were wrong. (Though that does happen sometimes.) Rather it feels like one part managed to get the upper hand and could temporarily force the other part into accepting the decision that was made, but the conflict isn't really resolved in any sense - if the circumstances were to change and I'd have to make the same decision again, the loser of this "round" might still end up winning the next one. Or the winner might get me started on the action but the loser might then make a comeback and block the action after all. That's also why it doesn't seem right to talk about this as a conflict between current and future selves. That would seem to imply that I wanted thing X at time T, and some other thing Y at T+1. If you equated "wanting" with "the desire of the brain-faction that happens to be the strongest at the time when one's brain is sampled", then you could kind of frame it like a temporal conflict... but it feels like that description is losing information, since actually what happens is that I want both X and Y at both times: it's just the relative strength of those wants that varies.
0hyporational10y
Ok. To me it most often feels like I'm observing that some parts of my brain struggle and that I'm there to tip the scales, so to speak. This doesn't necessarily lead to a desirable outcome if my influence isn't strong enough. I can't say I feel conflicted about which direction to tip the scales in, but I assume this is just because I'm identifying with a part of my brain that can't monitor its inner conflicts. I might have identified with several conflicting parts of my brain at once in the past, but don't remember what it felt like, nor would I be able to tell you how this transformation might have happened. This sounds like tipping the scales. Are you identifying with several conflicting processes or are you just expressing yourself in a socially convenient manner? If you're X that's trying to make process A win over process B in your brain and process B wins in a way that leads to undesirable action, does it make any sense to say that you did something you didn't want to do?
2Kaj_Sotala10y
Your description of tipping the scale sounds about right, but I think that it only covers two of the three kinds of scenarios that I experience:

1. I can easily or semi-easily tip the scale in some direction, possibly with an expenditure of willpower. I would mostly not classify this as a struggle: instead I just make a decision.
2. I would like to tip the scale in some direction, but fail (and instead end up procrastinating or whatever), or succeed but only by a thin margin. I would classify this as a struggle.
3. I could tip the scale if I just decided what direction I wanted to tip them in, but I'm genuinely unsure of what direction I should tip them in.

If scenario #1 feels like an expenditure of willpower in order to override a short-term impulse in favor of a long-term goal, and #2 like a failed or barely successful attempt to do so, then #3 feels like trying to decide what the long-term goal should be. Putting it differently, #3 feels like a situation where the set of processes that do the tipping do not necessarily have any preferences of their own, but rather act as the "carriers" of a set of preferences that multiple competing lower-level systems are trying to install in them. (Actually, that description doesn't feel quite right, but it's the best I can manage right now.)

I now realize that I hadn't previously clearly made the distinction between those different scenarios, and may have been conflating them to some extent. I'll have to rethink what I've said here in light of that.

I think that I identify with each brain-faction that has managed to "install" "its" preferences in the scale-tipping system at some point. So if there is any short-term impulse that all the factions think should be overridden given the chance, then I don't identify with that short-term impulse, but since e.g. both the negative utilitarian and deontological factions manage to take control at times, I identify with both to some extent.
0[anonymous]10y
It means different "modules" of your mind have different values, and on reflection you favor one module over the other. Part of why this still sounds problematic is that we have a hard time unravelling the "superego" (the metaphorical mental module responsible for enforcing nonselfish/pro-social values) from full and complete moral cognition. Thus, many people believe they believe they should be selfless to the point of self-sacrificing, even though, if you cloned them and actually made the clone that selfless, they would not endorse the clone as being a superior version of themselves.
0passive_fist10y
I don't remember any non-crazy cryonics advocate ever saying otherwise.
0lsparrish10y
I think the scale on which it is done is the main thing here. Currently, cryonics is performed so infrequently that there isn't much infrastructure for it. So it is still fairly expensive compared to the amount of expected utility -- probably close to the value implied by regulatory tradeoffs ($5 million per life). On a large, industrial scale I expect it to be far better value than anything Givewell is going to find.
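(A minimal sketch of the kind of cost-per-expected-life arithmetic this claim rests on; every number below is a placeholder assumption for illustration, not a figure from the comment or from any actual provider.)

```python
# Cost per expected life saved = cost of one case / probability that it works.
def cost_per_expected_life(cost_per_case: float, p_success: float) -> float:
    return cost_per_case / p_success

current_scale    = cost_per_expected_life(80_000, 0.10)  # hypothetical present-day figures
industrial_scale = cost_per_expected_life(5_000, 0.30)   # hypothetical large-scale figures

print(f"current scale:    ${current_scale:,.0f} per expected life")     # $800,000
print(f"industrial scale: ${industrial_scale:,.0f} per expected life")  # ~$16,667
# Comparing either figure with a GiveWell-style benchmark also depends on how many
# extra life-years a successful revival would buy, which a per-life figure ignores.
```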
0Calvin10y
This is a good argument, capable of convincing me into a pro-cryonics position, if and only if someone can follow this claim with evidence pointing to a high probability estimate that preservation and restoration will become possible within a reasonable time period. If it so happens that cryopreservation fails to prevent information-theoretic death, then the value of your cryo-storage facilities filled with corpses will amount to exactly $0 (unless you also preserve the organs for transplants).
2lsparrish10y
At some point, you will have to specialize in cryobiology and neuroscience (with some information science in there too) in order to process the data. I can understand wanting to see the data for yourself, but expecting everyone to process it rationally and in depth before they get on board isn't necessarily realistic for a large movement. Brian Wowk has written a lot of good papers on the challenges and mechanisms of cryopreservation, including cryoprotectant toxicity. Definitely worth reading up on. Even if you don't decide to be pro-cryonics, you could use a lot of the information to support something related, like cryopreservation of organs.

Until you have enough information to know, with very high confidence, that information-theoretic death has happened in the best cases, you can't really assign it all a $0 value in advance. You could perhaps assign a lower value than the cost of the project, but you would have to have enough information to do so justifiably. Ignorance cuts both ways here, and cryonics has traditionally been presented as an exercise in decision-making under conditions of uncertainty. I don't see a reason that logic would change if there are millions of patients under consideration. (Although it does imply more people with an interest in resolving the question one way or another, if possible.)

I don't quite agree that the value would be zero if it failed. It would probably displace various end-of-life medical and funeral options that are net-harmful, reduce religious fundamentalism, and increase investment in reanimation-relevant science (regenerative medicine, programmable nanodevices, etc). It would be interesting to see a comprehensive analysis of the positive and negative effects of cryonics becoming more popular. More organs for transplantation could be one effect worth accounting for, since it does not seem likely that we will need our original organs for reanimation. There would certainly be more pressure towards assisted suicide, so th…
0[anonymous]10y
This just shifts the question to whether promoting cryonics is an effective form of general consequentialist do-gooding. There are a lot of factors to consider, in regards to large-scale cryonics:

1. Effects on funding/enthusiasm for new technologies due to alignment of incentives.
2. Effects on mitigation of existential risks, long-term economic policies, and investments.
3. How much cheaper it gets when practiced on a large industrial scale.
4. How much more reliable it becomes when practiced on a large industrial scale.
5. Displacement of wasteful funeral practices.
6. Displacement of wasteful end-of-life medical practices.
7. Reduced religious fundamentalism, due to less belief in innate immortality.
8. Reduced luxury purchases due to altered time preferences.
9. Relative number of people who could be saved by cryonics but not by any other available technology.

There are some plausible negative effects to consider as well:

* A larger industry has more opportunities for corruption and mistakes, so it would probably be more regulated on a larger scale, resulting in higher administrative costs and restrictions on experimentation.
* People might be less concerned with preventing some health problems (while being more concerned with others, including traffic fatalities and heart disease) as the result of risk compensation.
* The pressure to cure diseases in the short term could be reduced. Some patients with terminal cases might decide to die earlier than they otherwise would (which would turn out to be permanent if cryonics fails to work for them).

However, the costs aren't likely to outweigh (or even significantly approach) the savings and benefits in my estimation. In many cases the apparent negatives (e.g. people checking out early, or reducing the overt pressure on scientists to cure cancer ASAP) could be a blessing in disguise (less suffering, less bad data). The regulation aspect probably actually benefits from cryonics being a larger and more…

As I mentioned in a private message to Hallquist, I favor a wait and see approach to cryonics.

This is based on a couple observations:

  1. There is an excellent chance that when (if?) I die, it will be either (1) in a way which gives me enough advance warning so that I have time to sign up for cryonics; or (2) sufficiently sudden that even if I had been signed up for cryonics it wouldn't have made a difference.
  2. It's not too hard to get cash out of a life insurance policy if you are terminally ill.

So it seems there isn't a huge downside to simply... (read more)

Initially I wanted to mention that there is one more factor: the odds of being effectively cryopreserved upon dying. I.e. being in a hospital amenable to cryonics and with a cryo team standing by, with enough of your brain intact to keep your identity. This excludes most accidental deaths, massive stroke, etc. However, the CDC data for the US http://www.cdc.gov/nchs/fastats/deaths.htm show that currently over 85% of all deaths appear to be cryo-compatible:

  • Number of deaths: 2,468,435
  • Death rate: 799.5 deaths per 100,000 population
  • Life expectancy: 7
... (read more)
6roystgnr10y
What percent of young people's deaths are cryo-compatible? Hypothetically, if most 70-80 year old people who die are in a hospital bed weeks after a terminal diagnosis, but most 30-40 year old people who die are in a wrecked car with paramedics far away, it might make sense for a 34 year old on the fence to forgo the cryonics membership and extra life insurance now but save the money he would have spent on premiums to sign up later in life.

I have a view on this that I didn't find by quickly skimming the replies here. Apologies if it's been hashed to death elsewhere.

I simply can't get the numbers to add up when it comes to cryonics.

Let's assume a probability of 1 of cryonics working and the resulting expected lifespan to be until the sun goes out. That would equal a net gain of around 4 billion years or so. Now, investing the same amount of money in life extension research and getting, say, a 25% chance of gaining a modest increase in lifespan of 10 years for everyone would equal 70bn/4 = 17... (read more)
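(A minimal reconstruction of the truncated arithmetic above, assuming the implied world population of roughly 7 billion; the dollar amount cancels out because both options are assumed to cost the same.)

```python
# Reconstructing the expected-value comparison in the comment above.
# population = 7e9 is an inferred assumption, not stated in the visible text.
p_cryonics, years_gained_self = 1.0, 4e9   # "probability of 1", life until the sun goes out
p_research, years_per_person = 0.25, 10    # 25% chance of +10 years for everyone
population = 7e9

ev_cryonics = p_cryonics * years_gained_self              # 4.00e9 expected person-years
ev_research = p_research * years_per_person * population  # 1.75e10 expected person-years

print(f"cryonics: {ev_cryonics:.2e} person-years, research: {ev_research:.2e} person-years")
```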

How do you invest $50,000 to get a 25% chance of increasing everyone's lifespan by 10 years? John Schloendorn himself couldn't do that on $50K.

Reviewing the numbers you made up for sanity is an important part of making decisions after making up numbers.

6Fossegrimen10y
You're right. Those numbers weren't just slightly coloured by hindsight bias but thoroughly coated in several layers of metallic paint and polished. They need to be adjusted drastically down. The reasons I originally considered them to be reasonable are:

* The field of cancer research seems to be a lot like software in the 80s in that our technical ability to produce new treatments is increasing faster than the actual number of treatments produced. This means that any money thrown at small groups of people with a garage and a good idea is almost certain to yield good results. (I still think this and am investing accordingly)
* I have made one such investment which turned out to be a significant contribution in developing a treatment for prostate cancer that gives most patients about 10 extra years.
* There are far too many similarities between cancer cells and ageing cells for me to readily accept that it is a coincidence. This means that investing in cancer research startups has the added bonus of a tiny but non-zero chance of someone solving the entire problem as a side effect.
* In retrospect, I also went: "prostate cancer patients == many => everyone == many" (I know, scope insensitivity :( )

On the other hand, my numbers for cryonics were also absurdly optimistic, so I'm not yet convinced that the qualitative point I was trying (ineptly) to make is invalid. The point was: Even a large chance of extending one life by a lot should be outweighed by a smaller chance of extending a lot of lives by a little, especially if the difference in total expected number of years is significant.

Also: Thanks for the pushback. I am far too used to spending time with people who accept whatever I say at face value, and the feeling I get on here of being the dumbest person in the room is very welcome.
0[anonymous]10y
The numbers are indeed optimistic, but they are based on empirical evidence: More conservatively, but still vastly optimistic, suppose $50k has a 1% chance of creating a remedy for a long-term remission (say, 10 extra QALY) in a lethal disease which strikes 1% of the population and that almost every sufferer can get the cure. This reduces the total expected years gained down to some 2 million, which is still nothing to sneeze at.
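(The arithmetic here, parameterized; the comment doesn't state the population it assumes, and the quoted "some 2 million" corresponds to an effective treated population of about 2 billion, which is an inference rather than something stated above.)

```python
# Expected life-years = P(remedy works) * prevalence * population * uptake * QALY gain.
# The 2e9 effective population is an assumption chosen to reproduce "some 2 million".
def expected_years(p_remedy, prevalence, population, qaly_gain, uptake=1.0):
    return p_remedy * prevalence * population * uptake * qaly_gain

print(f"{expected_years(0.01, 0.01, 2e9, 10):,.0f} expected life-years")  # 2,000,000
```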
6TheOtherDave10y
This math only works if I value a year of someone else's life approximately the same as a year of my life. If instead I value a year of someone else's life (on average), say, a tenth as much as I value a year of my own life, then if I use your numbers to compare the EV of cryonics at 4 GDY (giga-Dave-years) to the EV of life-extension research at 1.75 GDY, I conclude that cryonics is a better deal. Approached the other way... if I don't value any given life significantly more than any other, there's no particular reason for me to sign up for cryonics or research life extension. Sure, currently living people will die, but other people will be alive, so the total number of life-years is more or less the same either way... which is what, in this hypothetical, I actually care about. The important thing in that hypothetical is increasing the carrying capacity of the environment, so the population can be maximized. It turns out to matter what we value.
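(Continuing with the same made-up numbers from the parent thread, a sketch of the value-weighting being described here; the 10:1 own-vs-others weighting is the hypothetical from the comment, not a recommendation.)

```python
# Same expected person-years as above, but a year of one's own life is weighted 1.0
# and a year of someone else's life 0.1 (both weights hypothetical).
own_weight, other_weight = 1.0, 0.1

cryonics_gdy = (1.0 * 4e9) * own_weight / 1e9          # 4.00 giga-Dave-years
research_gdy = (0.25 * 10 * 7e9) * other_weight / 1e9  # 1.75 giga-Dave-years

print(f"cryonics: {cryonics_gdy:.2f} GDY, research: {research_gdy:.2f} GDY")
```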
0Fossegrimen10y
Your first point is of course valid. My algorithm for determining value of a life is probably a bit different from yours because I end up with a very different result. I determine the value of a life in the following manner: Value = Current contribution to making this ball of rock a better place + (Quality of life + Unrealised potential) * Number of remaining years.

If we consider extended life spans, the first element of that equation is dwarfed by the rest so we can consider that to be zero for the purpose of this discussion. Quality of life involves a lot of parameters, and many are worth improving for a lot of people. Low hanging fruit includes: Water supply and sanitation in low-income countries, local pollution in the same countries, easily treatable diseases, Women's lib. All of these are in my opinion worthy alternatives to cryonics, but maybe not relevant for this particular discussion.

The remaining parameter is Unrealised Potential which I think of as (Intelligence * conscientiousness). I am brighter than most, but more lazy than many, so the result, if interpreted generously, is that I may be worth somewhat more than the median but certainly not by a factor of 10, so if we still go with the numbers above (even if Eliezer pointed out that they were crazy), my stance is still that cryonics is a poor investment. (It may be fun but not necessarily productive to come up with some better numbers.)

Also: I have absolutely no problem accepting that other people have different algorithms and priors for determining value of life, I am just explaining mine.

Your other point was more of a surprise and I have spent a significant amount of time considering it and doing rudimentary research on the subject, because it seems like a very valid point. The main problem is that it does not seem that the total number of high quality life-years is limited by the carrying capacity of the planet, especially if we accept women's lib as a core requirement for attaining high…
2TheOtherDave10y
Sure. And it not seeming that way is a reason to lower our confidence that the hypothetical I described actually characterizes our values in the real world. Which is no surprise, really.
0TheOtherDave10y
You're welcome. You might also find that actually thinking through your arguments in the absence of pushback is a good habit to train. For example, how did you arrive at your formula for the value of life? If someone were to push back on it for you, how would you support it? If you were going to challenge it, what would be its weakest spots?
0Fossegrimen10y
Actually, that last bit was an entirely new thought to me, thanks

The obvious assumption to question is this:

Given that cryonics succeeds, is what you purchase really equal to what you purchase by saving yourself from a life-threatening disease? You say that you don't place an extremely high value on your own life, but note that the extra life you purchase with cryonics (which takes place in the far future*, and is likely significantly longer) is different from the extra life you are purchasing in your visualization (likely near-future, maybe shorter [presumably 62 years?]). Relevant considerations:

The length difference depends on h... (read more)

1A1987dM10y
Not necessarily -- it's possible that an uFAI would revive cryo patients.
5[anonymous]10y
Why? Dead humans turn into paperclips much more easily than live ones, and the point in the design space where an AI wants to deliberately torture people or deliberately wirehead people is still much, much harder to hit than the point where it doesn't care at all.
2A1987dM10y
I'm thinking of cases where the programmers tried to write a FAI but they did something slightly wrong. I agree an AI created with no friendliness considerations in mind at all would be very unlikely to revive people.
1[anonymous]10y
I'm having trouble coming up with a realistic model of what that would look like. I'm also wondering why aspiring FAI designers didn't bother to test-run their utility function before actually "running" it in a real optimization process.
0Kaj_Sotala10y
Have you read Failed Utopia #4-2?
0[anonymous]10y
I have, but it's running with the dramatic-but-unrealistic "genie model" of AI, in which you could simply command the machine, "Be a Friendly AI!" or "Be the CEV of humanity!", and it would do it. In real life, verbal descriptions are mere shorthand for actual mental structures, and porting the necessary mental structures for even the slightest act of direct normativity over from one mind-architecture to another is (I believe) actually harder than just using some form of indirect normativity. (That doesn't mean any form of indirect normativity will work rightly, but it does mean that Evil Genie AI is a generalization from fictional evidence.) Hence my saying I have trouble coming up with a realistic model.
0Lumifer10y
Because if you don't construct a FAI but only construct a seed out of which a FAI will build itself, it's not obvious that you'll have the ability to do test runs.
0[anonymous]10y
Well, that sounds like a new area of AI safety engineering to explore, no? How to check your work before doing something potentially dangerous?
-1Eugine_Nier10y
I believe that is MIRI's stated purpose.
3[anonymous]10y
Quite so, which is why I support MIRI despite their marketing techniques being much too fearmongering-laden, in my opinion. Even though I do understand why they are: Eliezer believes he was dangerously close to actually building an AI before he realized it would destroy the human race, back in the SIAI days. Fair enough on him, being afraid of what all the other People Like Eliezer might do, but without being able to see his AI designs from that period, there's really no way for the rest of us to judge whether it would have destroyed the human race or just gone kaput like so many other supposed AGI designs. Private experience, however, does not serve as persuasive marketing material.
0MugaSofer10y
Perhaps it had implications that only became clear to a superintelligence?
3[anonymous]10y
Hmmm... Upon thinking it over in my spare brain-cycles for a few hours, I'd say the most likely failure mode of an attempted FAI is to extrapolate from the wrong valuation machinery in humans. For instance, you could end up with a world full of things people want and like, but don't approve. You would thus end up having a lot of fun while simultaneously knowing that everything about it is all wrong and it's never, ever going to stop. Of course, that's just one cell in a 2^3-cell grid, and that's assuming Yvain's model of human motivations is accurate enough that FAI designers actually tried to use it, and then hit a very wrong square out of 8 possible squares. Within that model, I'd say "approving" is what we're calling the motivational system that imposes moral limits on our behavior, so I would say if you manage to combine wanting and/or liking with a definite +approving, you've got a solid shot at something people would consider moral. Ideally, I'd say Friendliness should shoot for +liking/+approving while letting wanting vary. That is, an AI should do things people both like and approve of without regard to whether those people would actually feel motivated enough to do them.
1Eliezer Yudkowsky10y
Are we totally sure this is not what utopia initially feels like from the inside? Because I have to say, that sentence sounded kinda attractive for a second.
2MugaSofer10y
What kinds of weirdtopias are you imagining that would fulfill those criteria? Because the ones that first sprung to mind for me (this might make an interesting exercise for people, actually) were all emphatically, well, wrong. Bad. Unethical. Evil... could you give some examples?
0TheOtherDave10y
I of course don't speak for EY, but what I would mean if I made a similar comment would hinge on expecting my experience of "I know that everything about this is all wrong" to correlate with anything that's radically different from what I was expecting and am accustomed to, whether or not they are bad, unethical, or evil, and even if I would endorse it (on sufficient reflection) more than any alternatives. Given that I expect my ideal utopia to be radically different from what I was expecting and am accustomed to (because, really, how likely is the opposite?), I should therefore expect to react that way to it initially.
0MugaSofer10y
Although I don't usually include a description of the various models of the other speaker I'm juggling during conversation, that's my current best guess. However, principle of charity and so forth. (Plus Eliezer is very good at coming up with weirdtopias - probably better than I am.)
2[anonymous]10y
It's what an ill-designed "utopia" might feel like. Note the link to Yvain's posting: I'm referring to a "utopia" that basically consists of enforced heroin usage, or its equivalent. Surely you can come up with better things to do than that in five minutes' thinking.

I'd probably sign up if I were a US citizen. This makes me wonder if it's rational to stay in Finland. Has there been any fruitful discussion on this factor here before? Promoting cryonics in my home country doesn't seem like a great career move.

5Viliam_Bur10y
Try promoting rationality instead. If you succeed, then maybe someone else will take care of cryonics. And even if they don't, you still did something good.
2A1987dM10y
Well, Finland already is the country with the most LWers per capita (as of the 2012 survey). :-)
3Viliam_Bur10y
Now the question is whether having more LWers makes it easier or harder to recruit new ones. If the model is "only certain % of population is the LW type", then it should be harder, because the low-hanging fruit is already picked. If the model is "rationality is a learned skill", then it should be easier, because the existing group can provide better support. I already think Finland is a very smart country (school system, healthy lifestyle), so if it's the latter model, your local rationalist group should have a great chance to expand. It's probably important how many of the 15 Finnish LWers live near each other.
0hyporational10y
If CFAR becomes a success and Finland starts to develop its own branch, I'll probably donate some money, but working there myself would be like buying fuzzies in a soup kitchen with my inferior cooking skills. Some other kinds of relevant local movements might also get my vocal and monetary support in the future. At this point marketing our brand of rationality to anyone I don't know seems like a risky bet. They might get exposed to the wrong kinds of material at the wrong time, and that wouldn't mean anything good for my reputation.

For me, there's another factor: I have children.

I do value my own life. But I also value the lives of my children (and, by extension, their descendants).

So the calculation I look at is that I have $X, which I can spend either to obtain a particular chance of extending/improving my own life OR I can spend it to obtain improvements in the lives of my children (by spending it on their education, passing it to them in my will, etc).

0Swimmer963 (Miranda Dixon-Luinenburg) 10y
Excellent point. This isn't a consideration for me right now, but I expect it will be in the future.
[anonymous]10y30

.

[This comment is no longer endorsed by its author]
2Gunnar_Zarncke10y
But the value of your life in comparison to other persons' lives doesn't change by this. You'd have to inflation-adjust the value of other persons' lives accordingly. Only if you are not valuing other persons' lives can you get away with this, but the OP made it sufficiently clear that this wasn't the case.
1Decius10y
It's reasonable to believe that the area under the curve with "QALYs of life" on the X axis and "Probability of having a life this good or better, given cryonics" on the Y axis is finite, even if there is no upper bound on lifespan. Given a chance of dying each year that has a lower bound, due to accident, murder, or existential hazard, I think that it is provable that the total expected lifetime is finite. You make a good point that the expected lifetime of a successfully revived cryonicist might be more valuable than the life of someone who didn't sign up.
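(A sketch of the bound being claimed here: if the probability of dying in any given year never falls below some p > 0, the survival probabilities are dominated by a geometric series, so expected remaining lifespan is finite.)

```latex
\mathbb{E}[\text{remaining years}]
  = \sum_{t=1}^{\infty} \Pr(\text{alive after } t \text{ years})
  \le \sum_{t=1}^{\infty} (1-p)^{t}
  = \frac{1-p}{p}
  < \frac{1}{p}
```

With an irreducible accident-plus-existential-risk floor of, say, one in a thousand per year (a purely illustrative figure), expected future lifespan is capped below 1,000 years, which also keeps the area under the curve described above finite.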

I suppose I belong to that group that would like to see more people signing up for cryonics but have not done so myself. For myself, I am young and expect to live quite a while longer. I expect the chance of dying without warning in a way that could be cryopreserved to be rather low, whereas if I had much warning I could decide then to be cryopreserved (so the loss is my chance of losing consciousness and then dying in a hospital without regaining consciousness). I currently am not signed up for life insurance, which would also mean the costs of cryopreser... (read more)

[anonymous]10y20

(The following assumes that you don't actually want to die. My honest assessment is that I think you might want to die. I don't believe there's anything actually wrong with just living out your life as it comes and then dying, even if living longer might be nicer, and particularly when living longer might totally suck. So don't assume I'm passing judgement on a particular decision to sign up or not; in fact, the thought that life might suck forever drives me damn near favoring suicide myself.)

Let's tackle the question from another angle.

I do not believe... (read more)

I'm desensitized. I have to be, to stay sane in a job where I watch people die on a day to day basis. This is a bias; I'm just not convinced that it's a bias in a negative direction.

While I'm definitely desensitized to suffering of others, seeing dead and dying people has made my own mortality all the more palpable. Constantly seeing sick people has also made scenarios of personal disability more available, which generally makes me avoid bad health choices out of fear. End of life care where I'm from is in an abysmal condition and I don't ever want to experience it myself. I fear it far more than death itself.

I'm also on the fence and wondering if cryonics is worth it (especially since I'm in France where there is no real option for it, so in addition to costs it would likely mean changing country), but I think there are two flaws in your (otherwise interesting) reasoning:

It's neutral from a point of pleasure vs suffering for the dead person

It forgets opportunity costs. Dying deprives the person of all the future experiences (s)he could have, and so of a huge amount of pleasure (and potentially suffering too).

So: my death feels bad, but not infinitely bad. Ob

... (read more)
3byrnema10y
I feel like being revived in the future would be a new project I am not yet emotionally committed to. I think I would be / will be very motivated to extend my life, but when it comes to expending effort to "come back", I realize I feel some relief with just letting my identity go. The main reason behind this is that what gives my life value is my social connections; without them I am just another 'I', no different from any other. It seems just as well that there be another, independent birth rather than my own revival. One reason I feel this way is from reading books -- being the 'I' in the story always feels the same. This would all of course change if my family was signing up.
0Richard_Kennaway10y
Suppose that due to political upheavals you suddenly had to emigrate on your own. If you stay you will die, and if you leave you will lose your connections. Would you not leave, with regret certainly, but make new connections in your new home? In the present day world, many people have to do this. Cryonics is like emigration. You leave this time and place because otherwise you die, get into a flimsy boat that may well sink on the trip, and possibly emerge into a new land of which you know nothing. To some it is even a desirable adventure.
2byrnema10y
Hmm...I wonder to what extent emigrating a relative 'lot' has formed my ideas about identity. Especially when I was younger, I did not feel like my identity was very robust to abrupt and discordant changes, usually geographic, and just accepted that different parts of my life felt different. I did enjoy change, exactly as an adventure, and I have no wish to end experience. However, with a change as discontinuous as cryonics (over time and social networks), I find that I'm not attached to particular components of my identity (such as gender and profession and enjoying blogging on Less Wrong, etc) and in the end, there's not much left save the universal feeling of experience -- the sense of identity captured by any really good book, the feeling of a voice and a sympathetic perception. To illustrate, I would be exceptionally interested in a really realistic book about someone being resuscitated from cryonics (I find books more immersive than movies), but I wouldn't feel that 'I' needed to be the main character of that book, and I would be very excited to discover that my recent experience as a human in the 21st century has been a simulation, preparing me in some way for revival tomorrow morning in a brave new world...as a former Czech businessman.
0kilobug10y
I think you're going too far when saying it's "no different than any other", but I agree with the core idea - being revived without any of my social connections in an alien world would indeed significantly change "who I am". And it's one of the main reasons why, while I do see some attraction in cryonics, I didn't make any serious move in that direction. It would be all different if a significant part of my family or close friends signed up too.
0byrnema10y
Hmm... actually, you have a different point of view. I feel like I would have the same identity even without my social connections; I would have the specific identity that I currently have if I was revived. My point was more along the lines that it doesn't matter which identity I happened to have -- mine or someone else's, it wouldn't matter. Consider that you have a choice whether to be revived as a particular Czech businessman or as a particular medical doctor from Ohio (assuming, for the hypothetical, that there was some coherent way to map these identities to 'you'). How would you pick? Maybe you would pick based on the values of your current identity, kilobug. However, that seems rather arbitrary, as these aren't the values exactly of either the Czech businessman or the doctor from Ohio. I imagine either one of them would be happy with being themselves. Now throw your actual identity in the mix, so that you get to pick from the three. I feel that many people examine their intuition and feel they would prefer that they themselves are picked. However, I examine my intuition and I find I don't care. Is this really so strange?
0byrnema10y
But I wanted to add ... if the daughter of the person from Ohio is also cryonicized and revived (somewhat randomly, I based my identities on the 118th and 88th patients at Alcor, though I don't know what their professions were, and the 88th patient did have a daughter), I very much hope that the mother-daughter pair may be revived together. That, I think, would be a lot of fun: to wake up together and find out what the new world is like.

Just like future cryonics research might be able to revive someone who was frozen now, perhaps future time travellers could revive people simply by rescuing them from before their death. Of course, time travellers couldn't rescue people who died under just any circumstances. Someone who dies in a hospital and has had an autopsy couldn't be rescued without changing the past.

Therefore, we should start a movement where dying people should make sure that they die inside hermetically sealed capsules that are placed in a vault which is rarely opened. If time travel i... (read more)

7Calvin10y
Would you like to live forever? For just a $50 monthly fee, agents of the Time Patrol Institute promise to travel back in time and extract your body a few milliseconds before death. In order to avoid causing temporal "paradoxes", we pledge to replace your body with an (almost) identical artificially constructed clone. After your body is extracted and moved to the closest non-paradoxical future date, we will reverse the damage caused by aging, increase your lifespan to infinity, and treat you to a cup of coffee. While we are fully aware that time travel is not yet possible, we believe that recent advances in nanotechnology and quantum physics, matched by your generous donations, will hopefully allow us to construct a working time machine at any point in the future.

Why not Cryonics?

For all effective altruists in the audience, please consider that the utility of immortalizing all of humankind is preferable to saving only those few of us who underwent cryonic procedures. If you don't sign your parents up for temporal rescue, you are a lousy son. People who tell you otherwise are simply voicing their internalized deathist-presentist prejudices.

For selfish, practically minded agents living in 2014, please consider that while in order for you to benefit from cryonics it is mandatory that correct brain preservation techniques are developed and popularized during your lifespan, time travel can be developed at any point in the future; there is no need to hurry.

Regards, Blaise Pascal, CEO of TPI
-2Lumifer10y
What?!!? Not tea? I am unwilling to be reborn into such a barbaric environment. Wouldn't it be simpler to convert to Mormonism? :-D
2MugaSofer10y
Why not? Just replace them with a remote-controlled clone, or upload them from afar (using magic, obviously), or rewrite everyone's memories of you ...
-1Jiro10y
Putting a body in a sealed capsule in a vault requires no magical technology (not counting the time travel itself). Someone who has time travel but otherwise our present level of technology could rescue someone who is allowed to die using the vault method. (Although hopefully they do have better medical technology or rescuing the dying person wouldn't do much good.) It's true, of course, that if the time travellers have other forms of advanced technology they might be able to rescue a wider range of people, but the safest way to go is to use my method. Note that time travel interacts with cryonics in the same way: perhaps you don't need to freeze someone because a future person could time-travel and upload them from afar. Besides, why would you take the risk of the time travellers not being able to do this, considering that being put in a capsule for a few days is pretty cheap? You're a lousy parent if you don't sign your kids up for it....
1MugaSofer10y
Well, obviously here you run into your priors as to what sort of tech level is implied by time travel - much as discussions of cryonics run into the different intuitions people have about what a future where we can bring back the frozen will look like. ("The only economic incentive for such an expensive project - except perhaps for a few lucky individuals who would be used as entertainment, like animals in a zoo - would be to use them as an enslaved labor force.") With that said ... ... once time travel has come into play, surely all that matters is whether the magical technology in question will eventually be developed?
1Jiro10y
Not only do you need to have time travellers, you need to have time travellers who are interested in reviving you. The farther in the future you get the less the chance that any time travellers would want to revive you (although they might always want someone for historical interest so the chance might never go down to zero.) The more advanced the technology required, the longer it'll be and the less the chance they'll want to bother. Perhaps you could go the cryonics-like route and have a foundation set up whose express purpose is to revive people some time in the future in exchange for payments now. While unlike cryonics there is no ongoing cost to keep you in a position where you can be saved, the cost to keep someone wanting to save you is still ongoing. This would still be subject to the same objections used for cryonics foundations. Of course, like cryonics, you can always hope that someone creates a friendly AI who wants to save as many people as it can. There's also the possibility that some technology simply will not be developed. Perhaps there are some fundamental quantum limits that prevent getting an accurate remote scan of you. Perhaps civilization has a 50% chance of dying out before they invent the magical technology.
0MugaSofer10y
Like I said, I guess this comes down to how you imagine such a future looking beyond "has time travel". I tend to assume some sort of post-scarcity omnibenevolent utopia, myself ... Before they invent any magical technology, you mean. There's more than one conceivable approach to such a last-second rescue.
0TheOtherDave10y
What is your estimate of the ratio of the probability of my being "rescued" given a sealed capsule, to that of my being rescued absent a sealed capsule?
2Jiro10y
I have no idea. I'm sure I could come up with an estimate in a similar manner to how people make estimates for cryonics, though.

Some context googled together from earlier LW posts about this topic:

From the recent ChrisHallquist's $500 thread, a comment that takes the outside view and comes to a devastating conclusion: http://lesswrong.com/r/discussion/lw/jgu/i_will_pay_500_to_anyone_who_can_convince_me_to/acd5

In the discussion of a relevant blog post we have this critical comment: http://lesswrong.com/user/V_V/overview/

In the Neil deGrasse Tyson on Cryonics post, a real neuroscientist gave his very negative input: http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryonics/... (read more)

5Swimmer963 (Miranda Dixon-Luinenburg) 10y
Yes. My calculations are lazy. I cobbled together the ideas of this post in a conversation that took place when I was supposed to be sleeping, and when I wrote it a few days later, it was by carving out 2 hours after my bedtime. Which won't be happening again tonight, because I can only work so many 12-hour shifts on five hours of sleep a night. The alternative wasn't doing better calculations; it was not doing any calculations at all and sticking with my bottom line that cryonics doesn't feel like something I want to do, just because. Also: the reason I posted this publicly almost as soon as I had the thought of writing it at all was to get feedback. So thank you. I will hopefully read through all the feedback and take it into account the next time I would rather do that than sleep.

The possibility of a friendly ultra-AI greatly raises the expected value of cryonics. Such an AI would likely create a utopia that you would very much want to live in. Also, this possibility reduces the time interval before you would be brought back, and so makes it less likely that your brain would be destroyed before cryonics revival becomes possible. If you believe in the likelihood of a singularity by, say, 2100 then you can't trust calculations of the success of cryonics that don't factor in the singularity.

4Ben Pace10y
Which causes me to think of another argument: if you attach a high probability to an ultra-AI which doesn't quite have a perfectly aligned utility function, do you want to be brought back into a world which has or could have an UFAI?
1James_Miller10y
Because there is a limited amount of free energy in the universe, unless the AI's goals incorporated your utility function it wouldn't bring you back and indeed would use the atoms in your body to further whatever goal it had. With very high probability, we only get an UFAI that would (1) bring you back and (2) make our lives have less value than they do today if evil humans deliberately put a huge amount of effort into making their AI unfriendly, programming in torturing humans as a terminal value.
1ialdabaoth10y
Alternate scenario 1: AI wants to find out something that only human beings from a particular era would know, brings them back as simulations as a side-effect of the process it uses to extract their memories, and then doesn't particularly care about giving them a pleasant environment to exist in. Alternate scenario 2: failed Friendly AI brings people back and tortures them because some human programmed it with a concept of "heaven" that has a hideously unfortunate implication.
0[anonymous]10y
Good news: this one's remarkably unlikely, since almost all existing Friendly AI approaches are indirect ("look at some samples of real humans and optimize for the output of some formally-specified epistemic procedure for determining their values") rather than direct ("choirs of angels sing to the Throne of God").
0TheOtherDave10y
Not sure how that helps. Would you prefer scenario 2b, with "[..] because its formally-specified epistemic procedure for determining the values of its samples of real humans results in a concept of value-maximization that has a hideously unfortunate implication."?
1[anonymous]10y
You're saying that enacting the endorsed values of real people taken at reflective equilibrium has an unfortunate implication? To whom? Surely not to the people whose values you're enacting. Which does leave population-ethics a biiiiig open question for FAI development, but it at least means the people whose values you feed to the Seed AI get what they want.
1TheOtherDave10y
No, I'm saying that (in scenario 2b) enacting the result of a formally-specified epistemic procedure has an unfortunate implication. Unfortunate to everyone, including the people who were used as the sample against which that procedure ran.
0[anonymous]10y
Why? The whole point of a formally-specified epistemic procedure is that, with respect to the people taken as samples, it is right by definition.
2TheOtherDave10y
Wonderful. Then the unfortunate implication will be right, by definition. So what?
4[anonymous]10y
I'm not sure what the communication failure here is. The whole point is to construct algorithms that extrapolate the value-set of the input people. By doing so, you thus extrapolate a moral code that the input people can definitely endorse, hence the phrase "right by definition". So where is the unfortunate implication coming from?
5VAuroch10y
A third-party guess: It's coming from a flaw in the formal specification of the epistemic procedure. That it is formally specified is not a guarantee that it is the specification we would want. It could rest on a faulty assumption, or take a step that appears justified but in actuality is slightly wrong. Basically, formal specification is a good idea, but not a get-out-of-trouble-free card.
1[anonymous]10y
Replying elsewhere. Suffice it to say, nobody would call it a "get out of trouble free" card. More like get out of trouble after decades of prerequisite hard work, which is precisely why various forms of that hard work are being done now, decades before any kind of AGI is invented, let alone foom-flavored ultra-AI.
1TheOtherDave10y
I have no idea if this is the communication failure, but I certainly would agree with this comment.
0[anonymous]10y
Thanks!
0TheOtherDave10y
I'm not sure either. Let me back up a little... from my perspective, the exchange looks something like this:

ialdabaoth: what if failed FAI is incorrectly implemented and fucks things up?
eli_sennesh: that won't happen, because the way we produce FAI will involve an algorithm that looks at human brains and reverse-engineers their values, which then get implemented.
theOtherDave: just because the target specification is being produced by an algorithm doesn't mean its results won't fuck things up.
e_s: yes it does, because the algorithm is a formally-specified epistemic procedure, which means its results are right by definition.
tOD: wtf?

So perhaps the problem is that I simply don't understand why it is that a formally-specified epistemic procedure running on my brain to extract the target specification for a powerful optimization process should be guaranteed not to fuck things up.
1[anonymous]10y
Ah, ok. I'm going to have to double-reply here, and my answer should be taken as a personal perspective. This is actually an issue I've been thinking about and conversing over with an FHI guy; I'd like to hear any thoughts someone might have.

Basically, we want to extract a coherent set of terminal goals from human beings. So far, the approach to this problem is from two angles:

1) Neuroscience/neuroethics/neuroeconomics: look at how the human brain actually makes choices, and attempt to describe where and how in the brain terminal values are rooted. See: Paul Christiano's "indirect normativity" write-up.

2) Pure ethics: there are lots of impulses in the brain that feed into choice, so instead of just picking one of those, let's sit down and do the moral philosophy on how to "think out" our terminal values. See: CEV, "reflective equilibrium", "what we want to want", concepts like that.

My personal opinion is that we also need to add:

3) Population ethics: given the ability to extract values from one human, we now need to sample lots of humans and come up with an ethically sound way of combining the resulting goal functions ("where our wishes cohere rather than interfere", blah blah blah) to make an optimization metric that works for everyone, even if it's not quite maximally perfect for every single individual (that is, Shlomo might prefer everyone be Jewish, Abed might prefer everyone be Muslim, John likes being secular just fine, and the combined and extrapolated goal function doesn't perform mandatory religious conversions on anyone).

Now! Here's where we get to the part where we avoid fucking things up! At least in my opinion, and as a proposal I've put forth myself: if we really have an accurate model of human morality, then we should be able to implement the value-extraction process on some experimental subjects, predictively generate a course of action through our model behind closed doors, run an experiment on serious moral decision-making, and then find a
5TheOtherDave10y
So, if I've understood your proposal, we could summarize it as:

Step 1: we run the value-extractor (seed AI, whatever) on group G and get V.
Step 2: we run a simulation of using V as the target for our optimizer.
Step 3: we show the detailed log of that simulation to G, and/or we ask G various questions about their preferences and see whether their answers match the simulation.
Step 4: based on the results of step 3, we decide whether to actually run our optimizer on V.

Have I basically understood you? If so, I have two points, one simple and boring, one more complicated and interesting.

The simple one is that this process depends critically on our simulation mechanism being reliable. If there's a design flaw in the simulator such that the simulation is wonderful but the actual result of running our optimizer is awful, the result of this process is that we endorse a wonderful world and create a completely different awful world and say "oops." So I still don't see how this avoids the possibility of unfortunate implications. More generally, I don't think anything we can do will avoid that possibility. We simply have to accept that we might get it wrong, and do it anyway, because the probability of disaster if we don't do it is even higher.

The more interesting one... well, let's assume that we do steps 1-3. Step 4 is where I get lost. I've been stuck on this point for years. I see step 4 going like this:

* Some members of G (G1) say "Hey, awesome, sign me up!"
* Other members of G (G2) say "I guess? I mean, I kind of thought there would be more $currently_held_sacred_value, but if your computer says this is what I actually want, well, who am I to argue with a computer?"
* G3 says "You know, that's not bad, but what would make it even better is if the bikeshed were painted yellow."
* G4 says "Wait, what? You're telling me that my values, extrapolated and integrated with everyone else's and implemented in the actual world, look like that?!? But... but... tha
0[anonymous]10y
The idea is not to run a simulation of a tiny little universe, merely a simulation of a few people's moral decision processes. Basically, run a program that prints out what our proposed FAI would have done given some situations, show that to our sample people, and check if they actually endorse the proposed course of action.

(There's another related proposal for getting Friendly AI called value learning, which I've been scrawling notes on today. Basically, the idea is that the AI will keep a pool of possible utility functions (which are consistent, VNM-rational utility functions by construction), and we'll use some evidence about humans to rate the probability that a given utility function is Friendly. Depending on the details of this whole process and the math actually working out, you would get a learning agent that steadily refines its utility function to be more and more one that humans can endorse.)

This is why I did actually say that population ethics is a wide-open problem in machine ethics. Meaning, yes, the population has broken into political factions. Humans have a noted tendency to do that. Now, the whole point of Coherent Extrapolated Volition on a population-ethics level was to employ a fairly simple population-ethical heuristic: "where our wishes cohere rather than interfere". Which, it seems to me, means: if people's wishes run against each other, do nothing at all; do something only if there exists unanimous/near-unanimous/supermajority agreement. It's very democratic, in its way, but it will probably also end up implementing only the lowest common denominator.

The result I expect to see from a naive all-humanity CEV with that population ethic is something along the lines of, "People's health is vastly improved, mortality becomes optional, food ripens more easily and is tastier, and everyone gets a house. You humans couldn't actually agree on much more." Which is all pretty well and good, but it's not much more than we could have gotten without
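(As a concrete illustration of the value-learning idea sketched in the parenthetical above, here is a minimal toy in Python: keep a pool of candidate utility functions and shift probability toward the candidates that best explain observed human choices. The candidate functions, the choice data, and the softmax likelihood are all invented assumptions for this sketch, not anything from an actual proposal.)

```python
# Minimal Bayesian value-learning sketch: update a distribution over candidate
# utility functions from observed pairwise human choices. All inputs are toys.

import math

# Hypothetical candidate utility functions over toy outcomes.
candidates = {
    "hedonic":   {"cake": 2.0, "exercise": 0.5, "lie": 1.0},
    "healthist": {"cake": 0.5, "exercise": 2.0, "lie": 0.2},
    "honest":    {"cake": 1.0, "exercise": 1.0, "lie": -2.0},
}
prior = {name: 1 / len(candidates) for name in candidates}

# Observed human choices: (chosen option, rejected option).
observations = [("exercise", "cake"), ("exercise", "lie"), ("cake", "lie")]

def likelihood(utility, chosen, rejected, beta=1.0):
    """Softmax (Luce-choice) probability of picking `chosen` over `rejected`."""
    a, b = math.exp(beta * utility[chosen]), math.exp(beta * utility[rejected])
    return a / (a + b)

posterior = dict(prior)
for chosen, rejected in observations:
    for name, utility in candidates.items():
        posterior[name] *= likelihood(utility, chosen, rejected)
    total = sum(posterior.values())
    posterior = {name: p / total for name, p in posterior.items()}

print(posterior)  # probability mass shifts toward "healthist" and "honest"
```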
1TheOtherDave10y
So, suppose we do this, and we conclude that our FAI is in fact capable of reliably proposing courses of action that, in general terms, people endorse. It seems clear to me that's not enough to show that it will not fuck things up when it comes time to actually implement changes in the real world. Do you disagree? Because back at the beginning of this conversation, it sounded like you were claiming you had in mind a process that was guaranteed not to fuck up, which is what I was skeptical about.

Well, I certainly expect that to work better than not using evidence. Beyond that, I'm really not sure what to say about it. Here again... suppose this procedure works wonderfully, and as a consequence of climbing that hill we end up with a consistent set of VNM-rational utility functions that humans reliably endorse when they read about them. It seems clear to me that's not enough to show that it will not fuck things up when it comes time to actually implement changes in the real world. Do you disagree?

Now you might reply "Well, it's the best we can do!" and I might agree. As I said earlier, we simply have to accept that we might get it wrong, and do it anyway, because the probability of disaster if we don't do it is even higher. But let's not pretend there's no chance of failure.

I'm not sure I would describe those subgroups as political factions, necessarily... they're just people expressing opinions at this stage. But sure, I could imagine analogous political factions.

Well, now, this is a different issue. I actually agree with you here, but I was assuming for the sake of argument that the CEV paradigm actually works, and gets a real, worthwhile converged result from G. That is, I'm assuming for the sake of comity that G actually would, if they were "more the people they wished to be" and so on and so forth in all the poetic language of the CEV paper, agree on V, and that our value-extractor somehow figures that out because it's really well-designed. My point was
-1[anonymous]10y
OOOOOOOOOOOOOOOOOOOOH. Ah. Ok. That is actually an issue, yes! Sorry I didn't get what you meant before!

My answer is: that is an open problem, in the sense that we kind of need to know much more about neuroethics to answer it. It's certainly easy to imagine scenarios in which, for instance, the FAI proposes to make all humans total moral exemplars, and as a result all the real humans who secretly like being sinful, even if they don't endorse it, reject the deal entirely. Yes, we have several different motivational systems, and the field of machine ethics tends to brush this under the rug by referring to everything as "human values", simply because the machine-ethics folks tend to contrast humans with paper-clippers to make a point about why machine-ethics experts are necessary. This kind of thing is an example of the consideration that needs to be done to get somewhere. You are correct in saying that if FAI designers want their proposals to be accepted by the public (or even the general body of the educated elite), they need to cater not only to meta-level moral wishes but to the actual desires and affections real people feel today. I would certainly argue this is an important component of Friendliness design.

This assumes that people are unlikely to endorse smart ideas. I personally disagree: many ideas are difficult to locate in idea-space, but easy to evaluate. Life extension, for example, or marriage for romance.

No, I have not solved AI Friendliness all on my lonesome. That would be a ridiculous claim, a crackpot sort of claim. I just have a bunch of research notes that, even with their best possible outcome, leave lots of open questions and remaining issues. Certainly there's a chance of failure. I just think there's a lot we can and should do to reduce that chance. The potential rewards are simply too great not to.
0James_Miller10y
For scenario 1, it would almost certainly require less free energy just to get the information directly from the brain without ever bringing the person to consciousness. For scenario 2, you would seriously consider suicide if you feared that a failed friendly AI might soon be developed. Indeed, since there is a chance you will become incapacitated (say, by falling into a coma), you might want to destroy your brain long before such an AI could arise.
0Decius10y
It's also possible that the AI finds instrumental utility in having humans around, and that reviving cryonics patients is cheaper than using their von Neumann factories.
3James_Miller10y
I disagree. Humans almost certainly do not efficiently use free energy compared to the types of production units an ultra-AI could make.
0Decius10y
How expensive is it to make a production unit with the versatility and efficiency of a human? How much of that energy would simply be wasted anyway? Likely, no, but possible. Rolling all of that into 'cryonics fails' has little effect on the expected value in any case.
0[anonymous]10y
There's really not that much margin of error in Super Tengen Toppa AI design. The more powerful it is, the less margin for error. It's not like you'd be brought back by a near-FAI that otherwise cares about human values but inexplicably thinks chocolate is horrible and eliminates every sign of it.
0V_V10y
I don't think it would make much difference. Consider my comment in Hallquist's thread: AI singularity won't affect points 1 and 2: if information about your personality has not been preserved, there is nothing an AI can do to revive you.

It might affect points 3 and 4, but to a limited extent: an AI might be better than vanilla humans at doing research, but it would not be able to develop technologies which are impossible or intrinsically impractical for physical reasons. A truly benevolent AI might be more motivated to revive cryopatients than regular people with selfish desires, but it would still have to allocate its resources economically, and cryopatient revival might not be the best use of them.

Points 5 and 6: clearly the sooner the super-duper AI appears and develops revival tech, the higher the probability that your cryoremains are still around, but super AI appearing early and developing revival tech soon is less probable than it appearing late and/or taking a long time to develop revival tech, hence I would think that the two effects roughly cancel out. Also, as other people have noted, super AI appearing and giving you radical life extension within your lifetime would make cryonics a waste of money.

More generally, I think that AI singularity is itself a conjunctive event, with the more extreme and earlier scenarios being less probable than the less extreme and later ones. Therefore I don't think that taking AIs into account should significantly affect any estimation of cryonics success.
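(The points-5-and-6 argument has a simple structure that a toy calculation can make concrete: overall revival probability is the arrival distribution of revival tech weighted by the chance your remains have survived that long. The numbers below are invented purely for illustration; only the structure matters.)

```python
# Toy sketch of the offsetting factors: earlier revival tech helps the
# preservation factor but is assigned lower prior probability. All numbers
# are hypothetical placeholders.

arrival_prob = {50: 0.05, 100: 0.15, 200: 0.20}   # P(revival tech arrives ~t years out)
preserved_prob = {50: 0.6, 100: 0.3, 200: 0.1}    # P(your cryoremains are intact at t)

p_revived = sum(arrival_prob[t] * preserved_prob[t] for t in arrival_prob)
print(f"P(revived) ≈ {p_revived:.3f}")  # 0.05*0.6 + 0.15*0.3 + 0.20*0.1 = 0.095
```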
0James_Miller10y
The core thesis of my book Singularity Rising is (basically) that this isn't true, for the singularity at least, because there are many paths to a singularity and making progress along any one of them will help advance the others. For example, it seems highly likely that (conditional on our high-tech civilization continuing) within 40 years genetic engineering will have created much smarter humans than have ever existed, and these people will excel at computer programming compared to non-augmented humans.
2V_V10y
Well, I haven't read your book, hence I can't exclude that you might have made some good arguments I'm not aware of, but given the publicly available arguments I know, I don't think this is true.

Is it? There are some neurological arguments that the human brain is near the maximum intelligence limit for a biological brain. We are probably not going to breed people with IQ >200; perhaps we might breed people with IQ 140-160, but will there be tradeoffs that make it problematic to do at scale? Will there be a demand for such humans? Will they devote their efforts to AI research, or will their comparative advantage drive them to something else? And how good will they be at developing super AI? As technology becomes more mature, making progress becomes more difficult because the low-hanging fruit has already been picked, and intelligence itself might have diminishing returns (at the very least, I would be surprised to observe an inverse linear correlation between average AI researcher IQ and time to AI). And, of course, if singularity-inducing AI is impossible/impractical, the point is moot: these genetically enhanced Einsteins will not develop it.

In general, with enough imagination you can envision many highly conjunctive ad hoc scenarios and put them into a disjunction, but I find this type of thinking highly suspicious, because you could use it to justify pretty much anything you wanted to believe. I think it's better to recognize that we don't have any crystal ball to predict the future, and betting on extreme scenarios is probably not going to be a good deal.
0[anonymous]10y
How do you factor in uFAI or other bad revival scenarios?

Even though LW is far more open to the idea of cryonics than other places, the general opinion on this site still seems to be that cryonics is unlikely to succeed (e.g. has a 10% chance of success).

How do LW'ers reconcile this with the belief that mind uploading is possible?

6TheOtherDave10y
I can't speak for anyone else, but I don't see a contradiction. Believing that a living brain's data can be successfully uploaded eventually doesn't imply that the same data can necessarily be uploaded from a brain preserved with current-day tech. The usual line I see quoted is that cryonics tech isn't guaranteed to preserve key data, but it has a higher chance than rot-in-a-box tech or burn-to-ash tech.
0passive_fist10y
So are you saying that this key data includes delicate fine molecular information, which is why it cannot be preserved with current tech?
0TheOtherDave10y
Nope, I'm not saying that. There are many systems that both don't depend on fine molecular information, and also are easier to restore from being vitrified than to restore from being burned to ash.
0passive_fist10y
Would you agree with shminux's reply then?
0TheOtherDave10y
Certainly shminux's reply isn't what I had in mind initially, if that's what you mean. As for whether I agree with it on its own terms... I'm not sure. Certainly I lack sufficient neurochemical domain knowledge to make meaningful estimates here, but I'm not as sure as they sound that everyone does.
0shminux10y
No one yet knows what the data substrate includes or how much of it has to be preserved for meaningful revival. For all we know, a piece of neocortex dropped into liquid nitrogen might do the trick in a pinch. Or maybe not even the best current cryo techniques would be enough. But it is not really possible to give a meaningful estimate, as cryonics does not appear to be in any reference class for which well-calibrated predictions exist.
0Calvin10y
Here is a parable illustrating the relative difficulty of both problems.

Imagine you are presented with a modern manuscript in Latin and asked to retype it on a computer and translate everything into English. This is more or less how uploading looks to me: the data is there, but it still needs to be understood and copied. Ah, you also need a computer.

Now consider that the same has to be done with an ancient manuscript that has been preserved in a wooden box, stored in an ice cave and guarded by a couple of hopeful monks:

* Imagine the manuscript has been preserved using correct means and all letters are still there. Uploading is easy. There is no data loss, so it is equivalent to uploading the modern manuscript. This means that the monks were smart enough to choose the optimal storage procedure (or got there by accident) - very unlikely.

* Imagine the manuscript has been preserved using decent means and some letters are still there. Now we have to do a bit of guesswork... is the manuscript we translate the same thing the original author had in mind? EY called it doing intelligent cryptography on a partially preserved brain, as far as I am aware. The monks knew just enough not to screw up the process, but their knowledge of manuscript-preservation techniques was not perfect.

* Imagine the manuscript has been preserved using poor means and all letters have vanished without a trace. Now we are royally screwed, or we can wait a couple of thousand million years so that an oracle computer can deduce the state of the manuscript by reversing entropy. This means the monks knew very little about manuscript preservation.

* Imagine there is no manuscript. There is a nice wooden box preserved in astonishing detail, but the manuscript crumbled when the monks put it inside. Well, the monks who wanted to preserve the manuscript didn't know that preserving the box does not help to preserve the manuscript, but they tried, right? This means the monks don't understand the connection between manuscript and box preservation techniques.
0passive_fist10y
Are you saying that accurate preservation depends on highly delicate molecular states of the brain, and this is the reason they cannot be preserved with current techniques?
0Calvin10y
I don't know what is required for accurate preservation of the mind, but I am sure that if someone came up with a definite answer, it would be a great leap forward for the whole community. Some people seem to put their faith in structure for an answer, but how do we test this claim in a meaningful way?
0passive_fist10y
It seems like you're saying you don't know whether cryonics can succeed or not. Whereas in your first reply you said "therefore cryonics in the current shape or form is unlikely to succeed."
-2Calvin10y
Yes. I don't know if it is going to succeed or not (my precognition skills are rusty today), but I am using my current beliefs and evidence (sometimes the lack thereof) to speculate that it seems unlikely to work, in the same way cryonics proponents speculate that it is likely (well, likely enough to justify the cost) that their minds are going to survive till they are revived in the future.

My true rejection is that I don't feel a visceral urge to sign up. When I query my brain on why, what I get is that I don't feel that upset about me personally dying. It would suck, sure. It would suck a lot. But it wouldn't suck infinitely. I've seen a lot of people die. It's sad and wasteful and upsetting, but not like a civilization collapsing. It's neutral from a point of pleasure vs suffering for the dead person, and negative for the family, but they cope with it and find a bit of meaning and move on.

I'm with you on this. And I hope to see more di... (read more)

2TheOtherDave10y
Is murder bad, on your view? If so, why?
0Brillyant10y
Yes, murder is bad. It's horribly traumatic to everyone surrounding it. It violates the victim's will. It isn't sustainable in a society. It leads to consequences that I see as a significant net negative. I'm not really talking about that, though. As the OP says, "[death] would suck, sure. It would suck a lot. But it wouldn't suck infinitely." This, to me, is the key. I used to be an Evangelical Christian. We had the hell doctrine, which we took very seriously and literally as eternal conscious torment. That was something I would have considered to "suck infinitely" and therefore would justify a cryonics-styled response (i.e. salvation and the proselytization of the means of salvation). Plain old death by old age certainly sucks, but it's not the end of the world...or the beginning of hell. It isn't something to be "saved" from. Perhaps a better goal would be X years of life for everyone with zero disease or gratuitous pain. Immortality (or inconceivably long lifespans) seems a bit misguided and tied to our more animal, survive-at-all-costs nature.
0TheOtherDave10y
Agreed that murder is significantly net-negative, but not infinitely so. (This is very different from what I thought you were saying originally, when you suggested death was neutral.) Is dying of natural causes bad, on your view? (I'm not asking infinitely bad. Just if it's net-negative.) If so, what level of response do you consider appropriate for the level of net-negative that dying of natural causes in fact is? For example, I gather you believe that cryonics advocates over-shoot that response... do you believe that a typically chosen person on the street is well-calibrated in this respect, or undershoots, or overshoots...?
1Brillyant10y
I'm not sure, but I don't think so. I don't think death is good -- it makes people sad, etc. But I don't think it is bad enough to lead to the sort of support cryonics gets on here. Also, "natural causes" would need some clarification in my view. I'm all for medical technology eliminating gratuitous suffering caused by naturally occurring diseases. I just think at some point -- 100 years or 1000 years or whatever -- perpetual life extension is moot. Death is an end to your particular consciousness -- to your own sense of your self. That is all it is. It's just turning the switch off. Falling asleep and not waking. The process of dying sucks. But being dead vs. being alive seems to me to be inconsequential in some sense. It isn't as if you will miss being alive... The average person on the street is very afraid (if only subconsciously) of death and "overshoots" their response. Lots of people have religion for this.
2TheOtherDave10y
I'm... confused. You seem to believe that, if we take into consideration the experiences of others, death is bad. ("It makes people sad, etc.") I agree, and consider it a short step from there to "therefore death is bad," since I do in fact take into consideration the experiences of others. But you take a detour I don't understand between those two points, and conclude instead that death is neutral. As near as I can figure it out, your detour involves deciding not to take into consideration the experiences of others, and to evaluate death entirely from the perspective of the dead person. I understand perfectly well how you conclude that death is no big deal (EDIT: inconsequential) from that perspective. What I don't understand is how you arrive at that perspective, having started out from a perspective that takes the experiences of others into account.
0Brillyant10y
I arrive at the conclusion that death is not good, yet not bad, through something like philosophical Buddhism. While I wouldn't say death is "no big deal" (in fact, it is just about the biggest deal we face as humans), I would argue we are wired via evolution, including our self-aware consciousness, to make it into a much, much bigger deal than it need be. I think we should consider the experience of others, but I don't think it should drive our views in regard to death. People will (and of course should) grieve. But it is important to embrace some sense of solidarity and perspective. No one escapes pain in one form or another. I actually think it would be helpful to our world to undergo a reformation in how we think about death. We are hardwired to survive at all costs. That is silly and outdated and selfish and irrational. It is a stamp of our lowly origins...
0TheOtherDave10y
I've edited to replace "no big deal" with "inconsequential," which is the word you used. They seem interchangeable to me, but I apologize for putting words in your mouth. Sure, that's certainly true. And that's true, too. Also true... which is not itself a reason to eschew reducing the pain of others, or our own pain. It's important and beneficial to embrace a sense of solidarity and perspective about all kinds of things... polio, child abuse, mortality, insanity, suffering, traffic jams, tooth decay, pebbles in my shoe, leprosy. It's also important and beneficial to improve our environment so that those things don't continue to trouble us. (shrugs) Sure. But there's a huge gulf between "don't survive at all costs" and "death is neutral." I understand how you get to the former. I'm trying to understand how you get to the latter. But, OK, I think I've gotten as much of an answer as I'm going to understand. Thanks for your time.
0Brillyant10y
To be clear, I said the difference to the person "experiencing" being dead vs. being alive is inconsequential. The process of dying, including the goodbyes, sucks. Of course. Though I think it is helpful to temper our expectations. That is all I meant. Death is only a thing because life is a thing. It just is. I'd say it's a peculiar (though natural) thing to apply value to it. Maybe this tack: What if we solve death? What if we find a way to preserve everyone's consciousness and memory (and however else you define death transcendence)? Is that "better"? Why? How? Because you get more utilons and fuzzies? Does the utilon/fuzzy economy collapse sans death? More than that, it seems very rational people should be able to recognize that someone "dying" is nothing more than the flame of consciousness being extinguished. A flame that existed because of purely natural mechanisms. There is no "self" to die. A localized meat-hardware program (that you were familiar with and that brought you psychological support) shut down. "Your" meat-hardware program is responding in turn with thoughts and emotions. I mentioned Buddhism... as it pertains here, I see it as this: Death will be "bad" to you to the extent you identify with your "self". I am not my meat-hardware. I notice my meat-hardware via the consciousness it manifests. I notice my meat-hardware is hardwired to hate and fear death. I notice my meat-hardware will break someday -- it may even malfunction significantly and alter the manifest consciousness through which I view it... In this sort of meditation, death, to me, is neutral.
0TheOtherDave10y
OK. Thanks for clarifying your position.
-2blacktrance10y
Assuming your life has non-infinitesimal positive value to you, even if losing one year of life would be a minor loss, losing some large number of years would be an enormous loss. Given that you'd be alive if you didn't die, you lose a lot from death.
0Brillyant10y
Is infinite life optimal then?
-2blacktrance10y
If, on net, it has positive value, yes. If not, it's best to have a life that's infinite unless you choose to terminate it.
0Brillyant10y
So, life is valuable until it is no longer valuable?
-2blacktrance10y
If your life is valuable and adding more of it doesn't make its value negative at any point, then more of your life is better than less of your life.
2Brillyant10y
The math seems much clearer to you than to me, so let me ask: Is it possible that immortality as an option would dilute life's value when compared to a more traditional human existence (75 years, dying of natural causes)? I can imagine a 150-year lifespan being preferable to 75; 300 to 150; 1000 to 300; etc. And even when the numbers get very large and I cannot imagine the longer lifespan being better, I chalk it up to my weak imagination. But what about infinite life? Does the math break down if you could live -- preserve "your" consciousness, memories, etc. -- forever?
2Swimmer963 (Miranda Dixon-Luinenburg) 10y
Very large but non-infinite numbers are more likely to be what's on the table, I think, given that something is likely to catch up with a future human society, even one capable of reviving frozen people–even if it's just the heat death of the universe.
0blacktrance10y
It may be important to explicitly distinguish between "could live forever" and "have to live forever", as the former excludes having to float in outer space for eternity, which would certainly be a life of negative value. I don't see why the math would break down. As long as you anticipate your life continuing to have a net positive value, you should want to continue it. And I don't see why that would change just from your lifespan increasing, as long as you stay healthy.
0Brillyant10y
The distinction you mention is very important, and it is one I tried to communicate I was aware of. Of course we can conceive of lots of circumstances where life "having" to continue would be bad... The question is whether unlimited life renders everything valueless. It seems to me that some big chunk of life's value lies in its novelty, another big chunk in relatively rare and unique experiences, and another big chunk in overcoming obstacles... eternal life ruins all of that, I think. Mathematically, wouldn't every conceivable possibility be bound to occur over and over if you lived forever?
0blacktrance10y
I doubt that novelty, rarity, or overcoming obstacles have any value by themselves; rather, they are associated with good things. But supposing that they had a value of their own - do they encompass all of life's value? If novelty/rarity/obstacles were eliminated, would life be a net negative? It seems implausible.

Not if new possibilities are being created at the same time. In fact, it's probable that an individual's proportion of (things done):(things possible) would decrease as time passes, kind of like now, when the number of books published per year exceeds how much a person would want to read.
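(A toy bit of arithmetic for the books analogy above, with all numbers invented: if the rate at which new possibilities appear grows even modestly while one person's pace stays fixed, the fraction of possibilities they've exhausted shrinks over time rather than converging on "everything, over and over".)

```python
# Toy sketch: reading pace is constant, publishing output grows slightly,
# so the cumulative fraction read keeps falling. All numbers hypothetical.

books_read_per_year = 50
initial_output = 500_000   # new books per year today (hypothetical)
output_growth = 1.02       # 2% more new books each year (hypothetical)

available, read, output = 0.0, 0.0, float(initial_output)
for year in range(1, 1001):
    available += output
    read += books_read_per_year
    output *= output_growth
    if year in (10, 100, 1000):
        print(f"year {year:>4}: read {read:,.0f} of {available:,.0f} "
              f"({read / available:.2e} of the total)")
```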
-2Lumifer10y
Given that curiosity seems to be a hardwired-in biological urge, I would expect that novelty and rare experiences do have value by themselves.
0blacktrance10y
Fulfilling a biological urge need not be something of value. For example, eating when you're hungry feels good, but it may be good to abolish eating food altogether.
-2Lumifer10y
Your frontal cortex might decide it's not something of value, but the lower levels of your mind will still be quite sure it is. Hardwired is hardwired.