Related To: You Only Live Twice, Normal Cryonics, Abnormal Cryonics, The Threat Of Cryonics, Doing your good deed for the day, Missed opportunities for doing well by doing good

Summary: Many Less Wrong posters are interested in advocating for cryonics. While signing up for cryonics is an understandable personal choice for some people, from a utilitarian point of view the money spent on cryonics would be much better spent on a cost-effective charity. People who sign up for cryonics out of a generalized concern for others would do better not to sign up and instead to donate any money that they would have spent on cryonics to a cost-effective charity. People who are motivated by a generalized concern for others to advocate signing up for cryonics would do better to advocate that others donate to cost-effective charities.

Added 08/12:  The comments to this post have prompted me to add the following disclaimers:

(1) Wedrifid understood me to be placing moral pressure on people to sacrifice themselves for the greater good. As I've said elsewhere, "I don't think that Americans should sacrifice their well-being for the sake of others. Even from a utilitarian point of view, I think that there are good reasons for thinking that it would be a bad idea to do this." My motivation for posting on this topic is the one described by rhollerith_dot_com in his comment.

(2) In line with the above comment, when I say "selfish" I don't mean it with the negative moral connotations that the word carries; I mean it as a descriptive term. There are some things that we do for ourselves and there are some things that we do for others - this is as things should be. I'd welcome any suggestions for a substitute for the word "selfish" that has the same denotation but which is free of negative connotations.

(3) Wei_Dai thought that my post assumed a utilitarian ethical framework. I can see how my post may have come across that way. However, while writing the post I was not assuming that the reader subscribes to utilitarianism. When I say "we should" in my post I mean "to the extent that we subscribe to utilitarianism we should." While writing the post I thought that this would be clear from context, but I turned out to be mistaken on this point.

As an aside, I do think that there are good arguments for a (sophisticated sort of) utilitarian ethical framework. I will make a post about this after reading Eliezer's posts on utilitarianism.

(4) Orthonormal thinks that I'm treating cryonics differently from other expenditures. This is not the case: from my (utilitarian) point of view, expenditures should be judged exclusively on their social impact. The reason I wrote a post about cryonics is that I had the impression that there are members of the Less Wrong community who view cryonics expenditures and advocacy as "good" in a broader sense than I believe is warranted. But (from a utilitarian point of view) cryonics is one of thousands of things that people ascribe undue moral significance to. I certainly don't think that advocacy of and expenditures on "cryonics" are worse from a utilitarian point of view than advocacy of and expenditures on something like "recycling plastic bottles".

I've also made the following modifications to my post:

(A) In response to a valid objection raised by Vladimir_Nesov, I've added a paragraph clarifying that Robin Hanson's suggestion that cryonics might be an effective charity is based on the idea that buying cryonics will drive costs down, together with an explanation of why I think that my points still hold.

(B) I've added a third example of advocacy of cryonics within the Less Wrong community to make it more clear that I'm not arguing against a straw man.

Without further ado, below is the main body of the revised post.


Advocacy of cryonics within the Less Wrong community

Most recently, in Christopher Hitchens and Cryonics, James_Miller wrote:

I propose that the Less Wrong community attempt to get Hitchens to at least seriously consider cryonics.


Eliezer has advocated cryonics extensively. In You Only Live Twice, Eliezer says:

If you've already decided this is a good idea, but you "haven't gotten around to it", sign up for cryonics NOW.  I mean RIGHT NOW.  Go to the website of Alcor or the Cryonics Institute and follow the instructions.

[...]

Not signing up for cryonics - what does that say?  That you've lost hope in the future.  That you've lost your will to live.  That you've stopped believing that human life, and your own life, is something of value.

[...]

On behalf of the Future, then - please ask for a little more for yourself.  More than death.  It really... isn't being selfish.  I want you to live.  I think that the Future will want you to live.  That if you let yourself die, people who aren't even born yet will be sad for the irreplaceable thing that was lost.

In Normal Cryonics Eliezer says:

You know what?  I'm going to come out and say it. I've been unsure about saying it, but after attending this event, and talking to the perfectly ordinary parents who signed their kids up for cryonics like the goddamn sane people do, I'm going to come out and say it:  If you don't sign up your kids for cryonics then you are a lousy parent.

In The Threat of Cryonics, lsparrish writes:

...we cannot ethically just shut up about it. No lives should be lost, even potentially, due solely to lack of a regular, widely available, low-cost, technologically optimized cryonics practice. It is in fact absolutely unacceptable, from a simple humanitarian perspective, that something as nebulous as the HDM -- however artistic, cultural, and deeply ingrained it may be -- should ever be substituted for an actual human life.

Is cryonics selfish?

There's a common attitude within the general public that cryonics is selfish. This is exemplified by a quote from the recent profile of Robin Hanson and Peggy Jackson in the New York Times article titled Until Cryonics Do Us Part:

“You have to understand,” says Peggy, who at 54 is given to exasperation about her husband’s more exotic ideas. “I am a hospice social worker. I work with people who are dying all the time. I see people dying All. The. Time. And what’s so good about me that I’m going to live forever?”

As suggested by Thursday in a comment to Robin Hanson's post Modern Male Sati, part of what seems to be going on here is that people subscribe to a "Just Deserts" theory of which outcomes ought to occur:

I think another of the reasons that people dislike cryonics is our intuition that immortality should have to be earned. It isn’t something that a person is automatically entitled to.

Relatedly, people sometimes believe in egalitarianism even when achieving it comes at the cost of imposing handicaps on the fortunate, as in the Kurt Vonnegut short story Harrison Bergeron.

I believe that the objections to cryonics which are rooted in the belief that people should get what they deserve, or in the idea that egalitarianism is so important that we should handicap the privileged to achieve it, are maladaptive. So I think that the common attitude that cryonics is selfish is not held for good reason.

At the same time, it seems very likely to me that paying for cryonics is selfish in the sense that many personal expenditures are: like them, it comes at the opportunity cost of providing something of greater value to someone else. My general reaction to cryonics is the same as Tyler Cowen's: rather than signing up for cryonics, "why not save someone else's life instead?"

Could funding cryonics be socially optimal?

In Cryonics As Charity, Robin Hanson explores the idea that paying for cryonics might be a cost-effective charitable expenditure.

...buying cryonics seems to me a pretty good charity in its own right.

[...]

OK, even if consuming cryonics helps others, could it really help as much as direct charity donations? Well it might be hard to compete with cash directly handed to those most in need, but remember that most real charities suffer great inefficiencies and waste from administration costs, agency failures, and the inattention of donors.

Hanson's argument in favor of cryonics as a charity is based on the ideas that buying cryonics drives the cost of cryonics down, making it easier for other people to purchase it, and that purchasing cryonics normalizes the practice, which raises the probability that people who are cryopreserved will be revived. There are several reasons why I don't find these points a compelling argument for cryonics as a charity:

(i) I believe that in the absence of human genetic engineering, it's very unlikely that the social stigma against cryonics can be overcome. So I assign a small expected value to the social benefits that Hanson envisages arising from purchasing cryonics.

(ii) Because of the social stigma against cryonics, signing up for cryonics or advocating cryonics has the negative unintended consequence of straining interpersonal relationships, as hinted at in Until Cryonics Do Us Part. This negative unintended consequence must be weighed against the potential social benefits attached to purchasing cryonics.

(iii) As in point 3 below, purchasing cryonics may be zero-sum on account of preventing future potential humans and transhumans from living.

Overall I believe that the positive indirect consequences of purchasing cryonics are approximately outweighed by the negative indirect consequences of purchasing cryonics.

How do the direct consequences of cryonics compare with the direct consequences of the best developing world aid charities? Let's look at the numbers. According to the Alcor website, Alcor charges $150,000 for whole body cryopreservation and $80,000 for neurocryopreservation. GiveWell estimates that VillageReach and StopTB save lives at a cost of $1,000 each. Now, the standard of living is lower in the developing world than in the developed world, so that saving lives in the developing world is (on average) less worthwhile than saving lives in the developed world. Last February Michael Vassar estimated (based on his experience living in the developing world among other things) that one would have to spend $50,000 on developing world aid to save a life of quality comparable to his own. Michael's estimate may be too high or too low, and quality of life within the developed world is variable, but for concreteness let's equate the value of 40 years of life of the typical prospective cryonics sign-up with $50,000 worth of cost-effective developing world aid. Is buying cryonics for oneself then more cost-effective than developing world aid?

Here are some further considerations which are relevant to this question:

  1. Cryopreservation is not a guarantee of revitalization. In Cryonics As Charity and elsewhere Robin Hanson has estimated the probability of revitalization at 5% or so.
  2. Revitalization is not a guarantee of a very long life - after one is revived the human race could go extinct.
  3. Insofar as the resources that humans have access to are limited, being revived may have the opportunity cost of another human/transhuman being born.
  4. If humans develop life extension technologies before the prospective cryonics sign-up dies then the prospective cryonics sign-up will probably have no need of cryonics.
  5. If humans develop Friendly AI soon then any people in the developing world whose lives are saved might have the chance to live very long and happy lives.

With all of these factors in mind, I presently believe that from the point of view of general social welfare, donating to VillageReach or StopTB is much more cost-effective than paying for cryopreservation.
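To make the comparison concrete, here is a rough back-of-the-envelope sketch in Python. The inputs are the illustrative figures quoted above (Alcor's $80,000 neuropreservation price, Hanson's ~5% revival estimate, Vassar's ~$50,000 per developed-world-quality life saved); the "equivalent lives" unit and the range of values assigned to a successful revival are my own simplifications for illustration, not anyone's considered estimate.

    # Back-of-the-envelope comparison of cryonics vs. developing world aid,
    # using the illustrative figures quoted above. One "equivalent life" is a
    # ~40-year life of developed-world quality, valued (per Vassar's estimate)
    # at about $50,000 of cost-effective developing world aid (roughly 50
    # lives saved at GiveWell's ~$1,000-per-life figure).

    AID_DOLLARS_PER_EQUIV_LIFE = 50_000  # Vassar's estimate, taken at face value
    CRYO_COST = 80_000                   # Alcor neurocryopreservation
    P_REVIVAL = 0.05                     # Hanson's rough revival probability

    def equiv_lives_per_dollar_aid() -> float:
        return 1.0 / AID_DOLLARS_PER_EQUIV_LIFE

    def equiv_lives_per_dollar_cryo(equiv_lives_if_revived: float) -> float:
        return P_REVIVAL * equiv_lives_if_revived / CRYO_COST

    if __name__ == "__main__":
        aid = equiv_lives_per_dollar_aid()
        # How valuable does a successful revival have to be for cryonics to win?
        for lives in (1.0, 3.0, 10.0, 100.0):
            cryo = equiv_lives_per_dollar_cryo(lives)
            print(f"revival worth {lives:>5.0f} equivalent lives: "
                  f"aid/cryonics cost-effectiveness ratio = {aid / cryo:,.1f}")

On these assumptions the ratio only drops below 1 (i.e. cryonics wins) when a successful revival is valued at more than roughly thirty equivalent lives; readers who assign revival a much higher value, or who reject Hanson's 5% figure, will of course get a different answer.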

It may be still more cost-effective to fund charities that reduce global catastrophic risk. The question is just whether it's possible to do things that meaningfully reduce global catastrophic risk. Some people in the GiveWell community have the attitude that there's so much stochastic dilution of efforts to reduce global catastrophic risk that developing world aid is a more promising cause than existential risk reduction. I share these feelings in regard to SIAI as presently constituted for reasons which I described in the linked thread. Nevertheless, I personally believe that within 5-10 years there will emerge strong opportunities to donate money to reduce existential risk, opportunities which may be orders of magnitude more cost-effective than developing world aid.

It may be possible to construct a good argument for the idea that funding cryonics is socially optimal. But those who supported cryonics before thinking about this question should beware of falling prey to confirmation bias in their subsequent thinking about it.

Is cryonics rational?

If you believe that funding cryonics is socially optimal and you have generalized philanthropic concern, then you should fund cryonics. As I say above, I think it very unlikely that funding cryonics is anywhere near socially optimal. For the sake of definiteness and brevity, in the remainder of this post I will assume that funding cryonics is far from being socially optimal.

Of course, people have many values and generally give greater weight to their own well-being and the well-being of family and friends than to the well-being of unknown others. I see this as an inevitable feature of current human nature and don't think that it makes sense to try to change it at present. People (including myself) constantly spend money on things (restaurant meals, movies, CDs, travel expenses, jewelry, yachts, private airplanes, etc.) which are apparently far from socially optimal. I view cryonics expenses in a similar light. Just as it may be rational for some people to buy expensive jewelry, it may be rational for some people to sign up for cryonics. I think that cryonics is unfairly maligned and largely agree with Robin Hanson's article Picking on Cryo-Nerds.

On the flip side, just as it would be irrational for some people to buy expensive jewelry, it would be irrational for some people to sign up for cryonics. We should view signing up for cryonics as an understandable indulgence rather than a moral imperative. Advocating that people sign up for cryonics is like advocating that people buy diamond necklaces. I believe that our advocacy efforts should be focused on doing the most good, not on getting people to sign up for cryonics.

I anticipate that some of you will object, saying "But wait! The social value of signing up for cryonics is much higher than the social value of buying a diamond necklace!" This may be true, but is irrelevant. Assuming that funding cryonics is orders of magnitude less efficient than the best philanthropic option, in absolute terms, the social opportunity cost of funding cryonics is very close to the social opportunity cost of buying a diamond necklace.
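To see why the difference is irrelevant in absolute terms, here is a toy calculation; the numbers are made up purely to exhibit the structure of the argument, not to estimate the actual social value of either purchase.

    # Toy illustration of the opportunity-cost point above (all numbers invented).
    BUDGET = 10_000                            # dollars spent either way
    LIVES_PER_DOLLAR_BEST_CHARITY = 1 / 1_000  # best option: one life per $1,000
    SOCIAL_VALUE_NECKLACE = 0.001              # lives-equivalent from buying a necklace
    SOCIAL_VALUE_CRYONICS = 0.1                # suppose cryonics is 100x more socially valuable

    best_option = BUDGET * LIVES_PER_DOLLAR_BEST_CHARITY       # 10.0 lives
    forgone_if_necklace = best_option - SOCIAL_VALUE_NECKLACE  # ~9.999 lives forgone
    forgone_if_cryonics = best_option - SOCIAL_VALUE_CRYONICS  # ~9.9 lives forgone
    print(forgone_if_necklace, forgone_if_cryonics)
    # Even with a 100x difference in social value, the opportunity cost of the
    # two purchases is nearly identical, because both forgo almost all of the
    # value the optimal option would have produced.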

Because charitable efforts vary in cost-effectiveness by many orders of magnitude in unexpected ways, there's no reason to think that supporting the causes that have the most immediate intuitive appeal to oneself is at all close to socially optimal. This is why it's important to Purchase Fuzzies and Utilons Separately. If one doesn't, one can end up expending a lot of energy ostensibly dedicated to philanthropy which accomplishes a very small fraction of what one could have accomplished. This is arguably what's going on with cryonics advocacy. As Holden Karnofsky has said, there's nothing wrong with selfish giving - just don’t call it philanthropy. Holden's post relates to the phenomenon discussed in Yvain's great post Doing your good deed for the day. Quoting from Holden's post:

I don’t think it’s wrong to make gifts that aren’t “optimized for pure social impact.” Personally, I’ve made “gifts” with many motivations: because friends asked, because I wanted to support a resource I personally benefit from, etc. I’ve stopped giving to my alma mater (which I suspect has all the funding it can productively use) and I’ve never made a gift just to “tell myself a nice story,” but in both cases I can understand why one would.

Giving money for selfish reasons, in and of itself, seems no more wrong than unnecessary personal consumption (entertainment, restaurants, etc.), which I and everyone else I know does plenty of. The point at which it becomes a problem, to me, is when you “count it” toward your charitable/philanthropic giving for the year.

[...]

I believe that the world’s wealthy should make gifts that are aimed at nothing but making the world a better place for others. We should challenge ourselves to make these gifts as big as possible. We should not tell ourselves that we are philanthropists while making no gifts that are really aimed at making the world better.

But this philosophy doesn’t forbid you from spending your money in ways that make you feel good. It just asks that you don’t let those expenditures lower the amount you give toward really helping others.

I find it very likely that promoting and funding cryonics for philanthropic reasons is irrational.

Implications

The members of the Less Wrong community have uncommonly high analytical skills. These analytical skills are potentially very valuable to society. Collectively, we have a major opportunity to make a positive difference in people's lives. This opportunity will amount to little if we use our skills for things like cryonics advocacy. Remember, rationalists should win. I believe that we should use our skills for what matters most: helping other people as much as possible. To this end, I make four concrete suggestions. I believe that:

(A) We should encourage people to give more when we suspect that in doing so, they would be behaving in accordance with their core values. As Mass_Driver said, there may be

huge opportunity for us to help people help both themselves and others by explaining to them why charity is awesome-r than they thought.

As I've mentioned elsewhere, according to Fortune magazine the 400 biggest American taxpayers donate an average of only 8% of their income a year. For most multibillionaires, it's literally the case that millions of people are dying because the multibillionaire is unwilling to lead a slightly less opulent lifestyle. I'm sure that this isn't what these multibillionaires would want if they were thinking clearly. These people are not moral monsters. Melinda Gates has said that it wasn't until she and Bill Gates visited Africa that they realized that they had a lot of money to spare.

The case of multibillionaires highlights the absurd, pathological effects of human biases on people's willingness to give. Multibillionaires are not unusually irrational. If anything, multibillionaires are unusually rational. Many of the people who you know would behave similarly if they were multibillionaires. This gives rise to a strong possibility that they're presently exhibiting analogous behavior on a smaller scale on account of irrational biases.

(B) We should work to raise the standards for analysis of charities for impact and cost-effectiveness and promote effective giving. To this end, I strongly recommend exploring the website and community at GiveWell. The organization is very transparent and is welcoming of and responsive to well-considered feedback.

(C) We should conceptualize and advocate high expected value charitable projects but we should be especially vigilant about the possibility of overestimating the returns of a particular project. Less Wrong community members have not always exhibited such vigilance, so there is room for improvement on this point.

(D) We should ourselves make some donations that are optimized for pure positive social impact. Not so much that doing so noticeably interferes with our ability to get what we want out of life, but noticeably more than is typical for people in our respective financial situations. We should do this not only to help the people who will benefit from our contributions, but to prove to ourselves that the analytical skills which are such an integral part of us can help us break the shackles of unconscious self-serving motivations, lift ourselves up and do what we believe in.

Against Cryonics & For Cost-Effective Charity
189 comments (some truncated due to high volume)

So be it first noted that everyone who complains about trying to trade off cryonics against charity, instead of movie tickets or heart transplants for old people, is absolutely correct about cryonics being unfairly discriminated against.

That said, reading through these comments, I'm a bit disturbed that no one followed the principle of using the Least Convenient Possible World / strongest argument you can reconstruct from the corpse. Why are you accepting the original poster's premise of competing with African aid? Why not just substitute donations to the Singularity Institute?

So I know that, obviously, and yet I go around advocating people sign up for cryonics. Why? Because I'm selfish? No. Because I'm under the impression that a dollar spent on cryonics is marginally as useful as a dollar spent on the Singularity Institute? No.

Because I don't think that money spent on cryonics actually comes out of the pocket of the Singularity Institute? Yes. Obviously. I mean, a bit of deduction would tell you that I had to believe that.

Money spent on life insurance and annual membership in a cryonics organization rapidly fades into the background of recurring expenses, just like car ...

8[anonymous]
The world is full of poor people who genuinely cannot afford to sign up for cryonics. Whether they spend whatever pittance may be left to them above bare subsistence on charity or on rum is irrelevant. The world also contains many people like me who can afford to eat and live in a decent apartment, but who can't afford health insurance. I'm not so convinced I should be thinking about cryonics at this point either.
8katydee
Short version: It's not that cryonics is one of the best ways that you can spend money, it's that cryonics is one of the best ways that you can spend money on yourself. Since almost everyone who is likely to read this spends a fair amount of money on themselves, almost everyone who is likely to read this would be well-served by signing up for cryonics instead of .
5Eliezer Yudkowsky
Short but not true. Cryonics is one of the ways that, in the self-directed part of your life, you can pretend to be part of a smarter civilization, be the sort of sane person who also fights existential risk in the other-directed part of their life. Anyone who spends money on movie tickets does not get to claim that they have no self-directed component to their life.
1katydee
I don't think I'm suggesting that people don't have a self-directed component to their lives, though I suppose there could be some true "charity monks" or something out there. I'd be surprised, though, since I wouldn't even count someone like Peter Singer as without self-directed elements to his life. I only left the potential exception there because I think there is a chance that someone reading the post will not have sufficient funds to purchase the life insurance necessary for cryonic preservation.
5utilitymonster
I guess I agree that only the specified people can be said to have made consistently rational decisions when it comes to allocating money between benefiting themselves and benefiting others (at least of those who know something about the issues). I don't think this implies that all but these people should sign up for cryonics. General point: [Your actions cannot be described as motivated by coherent utility function unless you do A] does not imply [you ought to do A]. Simple example: Tom cares about the welfare of others as much as his own, but biases lead him to consistently act as if he cared about his welfare 1,000 times as much as the welfare of others. Tom could overcome these biases, but he has not in the past. In a moment when he is unaffected by these biases, Tom sacrifices his life to save the lives of 900 other people. [All that said, I take your point that it may be rational for you to advocate signing up for cryonics, since cryonics money and charity money may not be substitutes.]
4Paul Crowley
Are you suggesting that cryonics advocacy is in any sense an efficient use of time to reduce x-risk? I'd like to believe that since I spend time on it myself, but it seems suspiciously convenient.
4wedrifid
If you are opening the scope to the entire world it would seem fair to extend the excuse to all those who don't even have the bare possible minimum for themselves and also don't live within 100 km of anyone who understands cryonics.
2Eliezer Yudkowsky
Agreed; your correction is accepted.
3XiXiDu
Because given my current educational background I am not able to judge the following claims (among others) and therefore perceive it as unreasonable to put all my eggs in one basket:

* Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable, not just at Chess or Go.)
* Advanced real-world molecular nanotechnology (the grey goo kind the above could use to mess things up.)
* The likelihood of exponential growth versus a slow development over many centuries.
* That it is worth it to spend most on a future whose likelihood I cannot judge.
* That Eliezer Yudkowsky (SIAI) is the right and only person who should be working to soften the above.

What do you expect me to do? Just believe you? Like I believed so much in the past which made sense but turned out to be wrong? And besides, my psychological condition wouldn't allow me to devote all my resources to the SIAI without ever going to movies or the like. The thought makes me reluctant to give anything at all.

ETA: Do you have an explanation for the circumstance that you are the only semi-popular person who has figured all this out? The only person who's aware of something that might shatter the utility of the universe, if not multiverse? Why is it that people like Vernor Vinge, Charles Stross or Ray Kurzweil are not running amok using all their influence to convince people of the risks ahead, or at least give all they have to the SIAI? I'm talking to quite a few educated people outside this community. They are not, as some assert, irrational nerds who doubt all this for no particular reason. Rather they tell me that there are too many open questions to worry about the possibilities depicted on this site rather than other near-term risks that might very well wipe us out. Why aren't Eric Drexler, Gary Drescher or other AI researchers like Marvin Minsky worried to the extent that they signal their support for your movement?

You may be forced to make a judgement under uncertainty.

5XiXiDu
My judgement of and attitude towards a situation is necessarily as diffuse as my knowledge of its underlying circumstances and the reasoning involved. Therefore I perceive it as unreasonable to put all my eggs in one basket. The state of affairs regarding the SIAI and its underlying rationale and rules of operation are not sufficiently clear to me to give it top priority. Many of the arguments on this site involve a few propositions and the use of probability to legitimate action in case of their accuracy. Here much is uncertain to an extent that I'm not able to judge any nested probability estimations. I'm already unable to judge what the likelihood of something like the existential risk of exponentially evolving superhuman AI is compared to us living in a simulated reality. Even if you tell me, am I to believe the data you base those estimations on? Maybe after a few years of study I'll know more. But right now, if I were forced to choose between the future and the present, between the SIAI and having some fun, I'd have some fun.
1wedrifid
You ask a lot of good questions in these two comments. Some of them are still open questions in my mind.

put all my eggs in one basket

Keep reading Less Wrong sequences. The fact that you used this phrase when it nakedly exposes reasoning that is a direct, obvious violation of expected utility maximization (with any external goal, that is, rather than psychological goals) tells me that rather than trying to write new material for you, I should rather advise you to keep reading what's already been written, until it no longer seems at all plausible to you that citing Charles Stross's disbelief is a good argument for remaining as a bystander, any more than it will seem remotely plausible to you that "all your eggs in one basket" is a consideration that should guide expected-utility-maximizing personal philanthropy (for amounts less than a million dollars, say).

And of course I was not arguing that you should give up movie tickets for SIAI. It is exactly this psychological backlash that was causing me to be sharp about the alleged "cryonics vs. SIAI" tradeoff in the first place.

4XiXiDu
What I meant to say by using that phrase is that I cannot expect, given my current knowledge, to get the promised utility payoff that would justify making the SIAI a prime priority. I'm donating to the SIAI but also spend considerable resources on maximizing utility in the present. Enjoying life, so to say, is therefore a safety net in case the question I am currently unable to judge - whether there will be a positive payoff - is answered negatively in the future. I believe hard-SF authors certainly know a lot more than I do, so far, about related topics. I could have picked Greg Egan. That's beside the point though, it's not just Stross or Egan but everyone versus you and some unknown followers. What about the other Bayesians out there? Are they simply not as literate as you in the maths or maybe somehow teach but not use their own methods of reasoning and decision making?
1thomblake
Having read the sequences, I'm still unsure where "a million dollars" comes from. Why not diversify when you have less money than that?

I'm still unsure where "a million dollars" comes from.

It is an estimate of the amount you would have to donate to the most marginally effective charity to decrease its marginal effectiveness below that of the second most marginally effective charity.

0thomblake
I can see following that for charities with high-probability results; I would certainly support that with respect to deciding whether to give to an African food charity versus an Asian food charity, for instance. But for something like existential risk, if there are two charities that I believe each have a 1% chance of working and an arbitrarily high, roughly equal payoff, then it seems I should want both invested in. I might pick one and then hope someone else picks the other, but it seems equivalent if not better to just give equal money to both, to hedge my bets.
4thomblake
Okay, I suppose I could actually pay attention to what everybody else is doing, and just give all my money to the underrepresented one until it stops being underrepresented.
-1XiXiDu
This is exactly what I'm having trouble accepting, let alone seeing through. There seems to be a highly complicated framework of estimations that support and reinforce each other. I'm not sure what you call this in English, but in German I'd call it a castle in the air. And before you start downvoting this comment and telling me to learn about Solomonoff induction etc., I know that what I'm saying may simply be due to a lack of education. But that's what I'm arguing about here. And I bet that many who support the SIAI cannot interpret the reasoning which led them to support the SIAI in the first place, or at least cannot substantiate the estimations with other kinds of evidence than a coherent internal logic of reciprocally supporting probability estimations.
FAWS

The figure "a million dollars" doesn't matter. The reasoning in this particular case is pretty simple. Assuming that you actually care about the future and not you personal self esteem (the knowledge of personally having contributed to a good outcome) there is no reason why putting all your personal eggs in one basket should matter at all. You wouldn't want humanity to put all its eggs in one basket, but the only way you would change that would be if you were the only person to put eggs into a particular basket. There may be a particular distribution of eggs that is optimal, but unless you think the distribution of everyone else's eggs is already optimal you shouldn't distribute all you personal eggs the same way, you should put them in the basket that is most underrepresented (measured by marginal utility, not by ratio actual allocation to theoretical optimal allocation or any such nonsense) so to move humanities overall allocation closer to optimal. Unless you have so many eggs that the most underrepresented basket stops being that, (="million dollars").

3XiXiDu
This might be sound reasoning. In this particular case you've made up a number and more or less based it on some idea of optimal egg allocation. That is all very well, but was not exactly what I meant to say by using that phrase or by the comment you replied to, and wasn't my original intention when replying to EY. I can follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credibility. That is, are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground? I'm concerned that although consistently so, the LW community is updating on fictional evidence. My questions in the original comment were meant to inquire into the basic principles, the foundation of the sound argumentation that is based upon those basic premises. That is, are you creating models to treat subsequent models or are the propositions based on fact? An example here is the treatment and use of MWI, the conclusions, arguments and further estimations based on it. No doubt MWI is the only consistent non-magic interpretation of quantum mechanics. But that's it, an interpretation. A logically consistent deduction. Or should I rather call it an induction, as the inference seems to be of greater generality than the premises, at least within the LW community? But that's beside the point. The problem here is that such conclusions are widely considered to be weak evidence on which to base further speculations and estimations. What I'm trying to argue here is that if the cornerstone of your argumentation, if one of your basic tenets, is the likelihood of exponentially evolving superhuman AI, although a valid speculation given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence. Not that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on such premises, idea
0wedrifid
I cannot fault this reasoning. From everything I have read in your comments this seems to be the right conclusion for you to make given what you know. Taking the word of a somewhat non-mainstream community would be intellectually reckless. For my part there are some claims on LW that I do not feel I am capable of reaching a strong conclusion on - even accounting for respect for expert opinions. Now I'm curious, here you have referred to "LW" thinking in general, while we can obviously consider also LW conclusions on specific topics. Of all those positions that LW has a consensus on (and are not nearly universally accepted by all educated people) are there any particular topics for which you are confident of either confirming or denying? For example "cryonics is worth a shot" seems to be far more easy to judge than conclusions about quantum mechanics and decision theory.
3multifoliaterose
And yes, it seems like my post may have done more harm than good. I was not anticipating such negative reactions. What I said seems to have been construed in ways that were totally unexpected to me and which are largely unrelated to the points that I was trying to make. I take responsibility for the outcome.
3multifoliaterose
Thanks for the response. I'm presently in Europe without steady internet access but look forward to writing back. My thoughts on these matters are rather detailed/intricate. For now I'll just say that I think that because people have such strong irrational biases against cryonics, advocacy of cryonics may (unfairly!) lower the credibility of the rationalist movement among people who it would be good to draw in to the rationalist movement. I think (but am not sure) that this factor makes cryonics advocacy substantially less fruitful than it may appear.
0Unknowns
Ordinary people don't want to sign up for cryonics, while they do want to go to movies and get heart transplants. So if multifoliaterose tells people, "Instead of signing up for cryonics, send money to Africa," he's much more likely to be successful than if he tells people, "Instead of going to the movies, send money to Africa." So yes, if you want to call this "unfair discrimination," you can, but his whole point is to get people to engage in certain charities, and it seems he is just using a more effective means rather than a less effective one.
9Eliezer Yudkowsky
I'm saying he'll get them to do neither. Easy way for multi to provide an iota of evidence that what he's doing is effective: Find at least one person who says they canceled a cryo subscription and started sending an exactly corresponding amount of money to the Singularity Institute. If you just start sending an equal amount of money to the Singularity Institute, without canceling the cryo, then it doesn't count as evidence in his favor, of course; and that is just what I would recommend anyone feeling guilty actually do. And if anyone actually sends the money to Africa instead, I am entirely unimpressed, and I suggest that they go outside and look up at the night sky for a while and remember what this is actually about.
2dclayh
Even less than signing up for cryonics do most people want to murder their children. Do you expect that telling them "Instead of murdering your children, send aid to Africa (or SIAI)" will increase the amount they send to Africa/SIAI?
3Unknowns
That isn't relevant because murdering your children doesn't cost money.
3dclayh
I think it does, since you'll probably want to buy weapons, hire an assassin, hire a lawyer, etc. But you can change the example to "Send money to al-Qaeda" if you prefer.
3Spurlock
I'm willing to bet that the number of LW readers seriously considering cryonics greatly outweighs the number seriously considering murdering their kids OR funding al-Qaeda. For the general population this might not be so, but for a LW post it seems more than reasonable to contrast with cryonics rather than terrorism.

Incidentally, heart transplants and cryonics both cost about the same amount of money... does the "it's selfish" argument also apply to getting a heart transplant?

Most of multifoliaterose's criticisms of cryonics apply to the majority of money spent on medical treatments in rich nations.

9multifoliaterose
Getting a heart transplant has instrumental value that cryonics does not. A heart transplant enables the recipient to continue being a productive member of society. If the recipient is doing a lot to help other people then the cost of the heart transplant is easily outweighed by the recipient's productivity. By way of contrast, if society gets to the point where cryopreserved people can be restored, it seems likely that society will have advanced to the point where such people are much less vital to society. Also, the odds of success for a heart transplant are probably significantly higher than the odds of success for cryorestoration. Edit: See a remark in a post by Jason Fehr at the GiveWell Mailing List: I don't think that having Bill Clinton cryopreserved would be nearly as valuable to society as the cardiovascular operations that he underwent were.

If the recipient is doing a lot to help other people then the cost of the heart transplant is easily outweighed by the recipients' productivity.

So, then, should prospective heart transplant recipients have to prove that they will do enough with their remaining life to benefit humanity, in order for the operation to be approved?

I think you're holding cryonics to a much higher standard than other expenditures.

1RichardChappell
Distinguish personal morality from public enforcement. In a liberal society our personal purchases should (typically) not require anyone else's permission or "approval". But it still might be the case that it would be a better decision to choose the more selfless option, even if you have a right to be selfish. That seems just as true of traditional medical expenditures as it does of cryonics.
8James_Miller
But if, while President, Bill Clinton had known he was going to be cryopreserved, he might have caused the government to devote more resources to artificial intelligence research and existential risks.
1HughRistik
Doesn't successful cryopreservation and revival have a good chance of doing the same, and for longer?
0lsparrish
A life kept active and productive in the here and now might be more valuable in some respects than one that is dormant for the near future, given that more other individuals exist in the far future who would have to compete with the reanimated individual.
-1Unknowns
One of the defects of the karma system is that replies to comments tend to get fewer votes, even when they're as good as the original comment. Here CronoDAS's comment is at 9, and the response at only 4, even though the response does a very good job of showing that the cases mentioned are not nearly equivalent.
8wedrifid
I consider Crono's comment more insightful than multi's and my votes reflect my position.
0Unknowns
Would you disagree that the differences mentioned by multifoliaterose are real? Anyway, in terms of the general point I made, I see the same thing in numerous cases, even when nearly everyone would say the quality of the comments is equal. For example you might see a parent comment at 8 and a response at 2, maybe because people are less interested, or something like that.

Would you disagree that the differences mentioned by multifoliaterose are real?

Yes, I would disagree. A large fraction of the people who are getting heart transplants are old and thus not very productive. More generally, medical expenses in the last three years of life can easily run as much as a hundred thousand US dollars, and often run into the tens of thousands of dollars. Most people in the US and Europe are not at all productive in their last year of life.

0multifoliaterose
If I personally were debilitated to the point of not being able to contribute value comparable to the value of a heart transplant then I would prefer to decline the heart transplant and have the money go to a cost-effective charity. I would rather die knowing that I had done something to help others than live knowing that I had been a burden on society. Others may feel differently and that's fine. We all have our limits. But getting a heart transplant when one is too debilitated to contribute something of comparable value should not be considered philanthropic. Neither should cryonics.
3Vladimir_Nesov
You are making an error by not placing your own well-being in greater regard than the well-being of others. It's a known aspect of human value.
4WrongBot
Err, are you saying that his values are wrong, or just that they're not in line with majoritarian values?

For one thing, multifoliaterose is probably extrapolating from the values xe signals, which aren't identical to the values xe acts on. I don't doubt the sincerity of multifoliaterose's hypothetical resolve (and indeed I share it), but I suspect that I would find reasons to conclude otherwise were I actually in that situation. (Being signed up for cryonics might make me significantly more willing to actually refuse treatment in such a case, though!)

0multifoliaterose
If you missed it, see my comment here. I guess my comment which you responded to was somewhat misleading; I did not intend to claim something about my actual future behavior, rather, I intended simply to make a statement about what I think my future behavior should be.
2orthonormal
To put on my Robin Hanson hat, I'd note that you're acknowledging this level of selflessness to be a Far value and probably not a Near one. I have strong sympathies toward privileging Far values over Near ones in many of the cases where they conflict in practice, but it doesn't seem quite accurate to declare that your Far values are your "true" ones and that the Near ones are to be discarded entirely.
0multifoliaterose
So, I think that the right way to conceptualize this is to say that a given person's values are not fixed but vary with time. I think that at the moment my true values are as I describe. In the course of being tortured, my true values would be very different from the way they are now. The reason why I generally privilege Far values over Near values so much is that I value coherence a great deal and I notice that my Near values are very incoherent. But of course if I were being tortured I would have more urgent concerns than coherence.
2orthonormal
The Near/Far distinction is about more than just decisions made under duress or temptation. Far values have a strong signaling component, and they're subject to their own biases.
0multifoliaterose
Can you give an example of a bias which arises from Far values? I should say that I haven't actually carefully read Hanson's posts on Near vs. Far modes. In general I think that Hanson's views of human nature are very misguided (though closer to the truth than is typical).
2NancyLebovitz
Willingness to wreck people's lives (usually but not always other people's) for the sake of values which may or may not be well thought out. This is partly a matter of the signaling aspect, and partly because, since Far values are Far, you're less likely to be accurate about them.
0multifoliaterose
Okay, thanks for clarifying. I still haven't read Robin Hanson on Near vs. Far (nor do I have much interest in doing so) but based on your characterization of Far, I would say that I believe that it's important to strike a balance between Near and Far. I don't really understand what part of my comment orthonormal is/was objecting to - maybe the issue is linguistic/semantic more than anything else.
1Vladimir_Nesov
I'm saying that he acts under a mistaken idea about his true values. He should be more selfish (recognize himself as being more selfish).
4multifoliaterose
I see what I say about my values in a neutral state as more representative of my "true values" than what I would say about my values in a state of distress. Yes, if I were actually in need of a heart transplant that would come at the opportunity cost of something of greater social value then I may very well opt for the transplant. But if I could precommit to declining a transplant under such circumstances by pushing a button right now then I would do so. Similarly, if I were being tortured for a year then if I were given the option to make it stop for a while in exchange for 50 more years of torture later on while being tortured then I might take the option, but I would precommit to not taking such an option if possible.
2Vladimir_Nesov
What you would do has little bearing on what you should do. The above argument doesn't argue its case. If you are mistaken about your values, of course you can theoretically use those mistaken beliefs to consciously precommit to follow them, no question there.
2steven0461
By what factor? Assume a random stranger.
1Vladimir_Nesov
Maybe tens or thousands, but I'm as ignorant as anybody about the answer, so it's a question of pulling a best guess, not of accurately estimating the hidden variable.
6steven0461
I don't understand how you can be uncertain between 10 and 1000 but not 1 and 10 or 1.1 and 10, especially in the face of things like empathy, symmetry arguments, reductionist personal identity, causal and acausal cooperation (not an intrinsic value, but may prescribe the same actions). I also don't understand the point of preaching egoism; how does it help either you personally or everyone else? Finally, 10 and 1000 are both small relative to astronomical waste.
6Vladimir_Nesov
Self-preservation and lots of other self-centered behaviors are real psychological adaptations, which make indifference between self and random other very unlikely, so I draw a tentative lower bound at the factor of 10. Empathy extends fairness to other people, offering them control proportional to what's available to me and not just what they can get hold of themselves, but it doesn't suggest equal parts for all, let alone equal to what's reserved for my own preference. Symmetry arguments live at the more simplistic levels of analysis and don't apply. What about personal identity? What do you mean by "prescribing the same action" based on cooperation, when the question was about choice of own vs. others' lives? I don't see a situation where cooperation would make the factor visibly closer to equal. I'm not "preaching egoism", I'm being honest about what I believe human preference to be, and any given person's preference in particular, and so I'm raising an issue with what I believe to be an error about this. Of course, it's hypothetically in my interest to fool other people into believing they should be as altruistic as possible, in order to benefit from them, but it's not my game here. Preference is not for grabs. I don't see this argument. Why is astronomical waste relevant? Preference stems from evolutionary godshatter, so I'd expect something on the order of tribe-sized (taking into account that you are talking about random strangers and not close friends/relatives).
3WrongBot
There is an enormous range of variation in human preference. That range may be a relatively small part of the space of all possible preferences of intelligent entities, but in absolute terms that range is broad enough to defy most (human) generalizations. There have been people who made the conscious decision to sacrifice their own lives in order to offer a stranger a chance of survival. I don't see how your theory accounts for their behavior.
-6Vladimir_Nesov
5wedrifid
The difference is real. Whether it is also the real reason is another question.
3Airedale
It rarely bothers me when insightful original comments are voted up more than their (more or less) equally insightful responses. In my view, the original comment often “deserves” more upvotes for raising an interesting issue in the first place and thereby expanding a fruitful discussion.
-3Jayson_Virissimo
A heart transplant has a much higher expected utility than cryonics. Could that be a major cause of the negative response?
5lsparrish
Disagree. A heart transplant that adds a few decades is less valuable than a cryopreservation that adds a few millennia. Also, heart transplants are a congestion resource whereas cryonics is a scale resource.
0Jayson_Virissimo
So what? The value of winning the lottery is much higher than the value of working for the next five years, but that doesn't mean it has a higher expected utility. The expected value of an act is the sum of the products (utilities x probabilities). Unless you think a heart transplant is just as probable to work as cryonics, you must consider more than simply the value of each act.
2lsparrish
To offset a 100-fold difference in how much longer one lives (even without accounting for other utilities like quality of life), it takes a 100-fold difference in probability. I don't think cryonics is 100 times less likely to work than a heart transplant.
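A quick expected-value sketch of the disagreement; every number below is a placeholder chosen only to show how the probabilities and payoffs trade off, since neither comment gives precise figures.

    # Toy expected-value comparison behind the exchange above (all numbers invented).
    def expected_life_years(p_success: float, years_if_success: float) -> float:
        return p_success * years_if_success

    transplant = expected_life_years(p_success=0.8, years_if_success=20)      # 16.0 years
    cryonics = expected_life_years(p_success=0.05, years_if_success=2_000)    # 100.0 years

    print(transplant, cryonics)
    # On these placeholder numbers cryonics wins. With a 100x payoff difference
    # (2,000 vs. 20 years), the transplant only wins if cryonics is more than
    # 100x less likely to succeed - i.e. below ~0.8% here - which is lsparrish's point.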

If you want to persuade me to spend less of my money on myself and more on trying to save the world, surely you should start with frippery like nice sandwiches or movies, rather than something that's a matter of life and death?

8Unknowns
It seems reasonable to me that multifoliaterose would start with something that people aren't much naturally inclined to anyway, rather than things like sandwiches, because he's much more likely to succeed in the case of something (like cryonics) that there isn't much natural human tendency for.
3multifoliaterose
Some people attach more value to nice sandwiches and movies and other people attach more value to being cryopreserved. If you value being cryopreserved more than nice sandwiches and movies, then if you decide spend more money on trying to save the world, obviously the first expenses that you should cut are nice sandwiches and movies. The point of my post is that it's inappropriate to characterize signing up for cryonics as something that one is doing to make the world a better place. I have no problem with people signing up for cryonics as long as they recognize that it's something that they're doing for themselves.

What's weird is that people are driven to compare cryonics to charity in a way they're not when it comes to other medical interventions, or theatre tickets. I think Katja Grace explains it plausibly.

I have no problem with people signing up for cryonics as long as they recognize that it's something that they're doing for themselves.

In your version of the story, what mistake am I making that causes me to go around urging other people to sign up for cryonics?

2Spurlock
I think you're unfairly equating "signing up for cryonics" with "urging others to sign up for cryonics". If I go see a movie, I do so because I personally want to enjoy it, not out of any concern for whether it promotes good in the wider world (maybe it does, but this isn't my concern). I can later go on to recommend that movie to friends or to the internet in general, but that's a separate act. Maybe your particular reasons for signing up are at least partially for the greater good (perhaps so you can wake up and continue the work on FAI if it remains undone), but it seems likely that most people sign up because it's something they want for themselves.
1HughRistik
"Signing up for cryonics" (and talking about it) isn't entirely separable from "urging others to sign up for cryonics," because we are a species of monkeys. Monkey see, monkey do.
6lsparrish
I disagree as more people signing up for cryonics makes cryonics more affordable (and thus evens out the unfairness of premature death) and also gives large numbers of people a vested interest in the future. Cryonics on a small scale has unfavorable features that it would lack on a larger scale, so you need to be careful not to conflate the two. Note that as far as PR for existential risk goes, you can't beat cryonics for giving people a legitimate self-interested reason to care.

after one is revived the human race could go extinct

Given the tech level required for revival, I'd assign a pretty low probability of getting revived before we're through the window of vulnerability.

If enough people sign up, cryonics can become a cost-effective way of saving lives. The only way to get there is to support cryonics.

In estimating cost-effectiveness of signing up, you have to take into account this positive externality. This was also an argument in Hanson's Cryonics As Charity, which you didn't properly discuss, instead citing current costs of cryonics.

0multifoliaterose
As I said in my post, it may be possible to construct a good case for signing up for cryonics or supporting cryonics being comparable to donating to or supporting cost-effective charities. At present I think this is unlikely. In regard to your points, see the second half of my response to James_Miller's comment.

[...] there's still the question of whether at the margin advocating for cryonics is a worthwhile endeavor. My intuition is that we're so far away from having a population interested in signing up for cryonics (because of the multitude of irrational biases that people have against cryonics) that advocating for cryonics is a very inefficient way to work against existential risk.

The margin has to take into account all future consequences of the action as well, not just local consequences. Again, a concrete problem I have with your post is its essential misrepresentation of Hanson's post: you quote current costs of cryonics without mentioning the argument that widespread adoption would lower those costs. This you haven't answered.

3multifoliaterose
Yes, this is a good point. I have somewhere to go and so don't have time to correct this point immediately, but for now I will add a link to your comments in my post. Thanks.

You are telling me that Cryonically suspending myself is less charitable than donating the same resources to an efficient charity? Um... yes?

I don't think this post contains a non-trivial insight. I found the normative presumptions interspersed with the text distasteful. Multi also presents a misleading image of what best represents the values of most people.

4RHollerith
Yes, but many of the participants on this web site share Multifoliate's interest in philanthropy. In fact, the site's subtitle and mission statement, "refining the art of human rationality," came about as a subgoal of the philanthropic goals of the site's founder. I found it a good answer to the belief which is common around here that cryonics advocacy is an efficient form of philanthropy.

the belief which is common around here that cryonics advocacy is an efficient form of philanthropy.

Is that belief really common around here? Though I'm inclined to make an effort to get Hitchens to sign up, I think of that effort as self-indulgence in much the same way as I'd think of such efforts for those close to me, or my own decision to sign up.

4RHollerith
OK, maybe "common belief" is too strong. Change it to, "make sure no one here is under the illusion that cryonics advocacy is an efficient form of philanthropy, rather than a way to protect one's own interests while meeting like-minded people and engaging in an inefficient form of philanthropy, though I personally doubt that it decreases x-risks."
5lsparrish
I think there are different approaches to cryonics. Advocating global or wide-scale conversion to cryonics is a philanthropic interest. It is very different from a focus on getting yourself preserved using existing organizations and on existing scales -- though they are certainly compatible and complementary interests. To some extent I support seeing your own preservation as self-interest, under the assumption that this means you do not deduct it from your mental bank account for charitable giving (i.e. you'll give the same amount to starving kids and life-saving vaccines as you did before signing up). However, it is a huge mistake to claim that it is purely self-interest or at odds with charitable interests. Rather, it helps lay the groundwork for a hugely important philanthropic interest.
3RHollerith
OK, you are appealing to the same argument that can be used to argue that the consumers of the 1910s who purchased and used the first automobiles were philanthropists for supporting a fledgling industry which went on to cause a substantial rise in the average standard of living. Do I have that right? If so, the magnitude of the ability of cryonics to extend life expectancy might cause me to admit that your words "huge" and "hugely" are justified -- but only under value systems that assign no utility to the people who will be born or created after the intelligence explosion. Relative to the number of people alive now or who will be born before the intelligence explosion, the expected number of lives after it is huge, and cryonics is of no benefit to those lives whereas any effort we make towards reducing x-risks benefits both the relatively tiny number of people alive now and the huge number that will live later. The 3 main reasons most philanthropists do not direct their efforts at x-risks reduction are (1) they do not know and will not learn about the intelligence explosion and (2) even if they know about it, it is difficult for them to stay motivated when the objects of their efforts are as abstract as people who will not start their lives for 100s of years -- they need to travel to Africa or what not and see the faces of the people they have helped -- or at least they need to know that if they were to travel to Africa or what not, they would -- and (3) they could figure out how to stay motivated to help those who will not start their lives for 100s of years if they wanted to, but they do not want to -- their circle of concern does not extend that far into the future (that is, they assign zero or very little intrinsic value to a life that starts in the far future). But the people whose philanthropic enterprise is to get people to sign up for cryonics do not have excuses (1) and (2). So, I have to conclude that their circle of moral concern stops (or becomes very
7lsparrish
I do think reducing x-risk is extremely important. I agree with Carl, Nancy, Roko, etc. that cryonics tends to reduce x-risk. To reduce x-risk you need people to think about it in the first place, and cryonicists are more likely to do so because it is a direct threat to their lives. Cryonics confronts a much more concrete and well-known phenomenon than x-risk. We all know about human death, it has happened billions of times already. Humanity has never yet been wiped out by anything (in our world at least). If you want people to start thinking rationally about the future, it seems backwards to start with something less well-understood and more nebulous. Start with a concrete problem like age-related death; most people can understand that. As to the moral worth of people not yet born, I do consider that lower than people already in existence by far because the probability of them existing as specific individuals is not set in stone yet. I don't think contraception is a crime, for example. The continuation of the human race does have extremely high moral utility but it is not for the same sort of reason that preventing b/millions of deaths does. If a few dozen breeding humans of both genders and high genetic variation are kept in existence (with a record of our technology and culture), and the rest of us die in an asteroid collision or some such, it's not a heck of a lot worse than what happens if we just let everyone die of old age. (Well, it is the difference between a young death and an old death, which is significant. But not orders of magnitude more significant.)
1RHollerith
I have bookmarked your comment and will reflect on it. BTW I share your way of valuing things as expressed in your final 2 grafs: my previous comment used the language of utilitarianism only because I expected that that would be the most common ethical orientation among my audience and did not wish to distract readers with my personal way of valuing things.
3Paul Crowley
I wouldn't necessarily say that it's the most effective way to do x-risks advocacy, but it's one introduction to the whole general field of thinking seriously about the future, and it can provide useful extra motivation. I'm looking forward to reading more on the case against from you.
0[anonymous]
I'm worried about cryonics tainting "the whole general field of thinking seriously about the future" by being bad PR (head-freezers, etc), and also about it taking up a lot of collective attention. I've never heard of someone coming to LW through an interest in cryonics, though I'm sure there are a few cases.
0multifoliaterose
You're one of the few commentators who understands the point of my post.
5thomblake
Lots of people here understand the point of your post. Some of us think it is evil to discourage folks from doing cryonics advocacy, since it is likely the only way to save any of the billions of people that are currently dying. Personally, I'm not a cryonics advocate. But know your audience, and if you've noticed that most of the people around here don't seem to understand something, it's probably a good time to check your assumptions and see what you've missed.
3Paul Crowley
This comes across as if you're miffed at the commentators rather than at yourself - is that what you mean?
0multifoliaterose
I'm both irritated by those commentators who responded without taking the time to read my post carefully and disappointed in myself for failing to communicate clearly. On the latter point, I'll be revising my post as soon as I get a chance. (I'm typing from my iPod at the moment).
2wedrifid
I have an interest in philanthropy (and altruism in general). I note Multi's post can have a positive influence on my own personal wellbeing: I know I'm not going to be sucked into self-destruction - the undesirable impact is suffered by others. Any effort spent countering the influence would be considered altruistic.
-3multifoliaterose
If you don't have any interest in philanthropy then my post was not intended for you, and I think that it's unfortunate that my post increased LessWrong's noise-to-signal ratio for you. If you have some interest in philanthropy, then I would be interested in knowing what you're talking about when you say:

If you don't have any interest in philanthropy then my post was not intended for you

Given that your argument only rules out cryonics for genuine utilitarians or altruists, it's quite possible to have some concern for philanthropy and yet enough concern for yourself to make cryonics the rational choice. You're playing up a false dilemma.

9wedrifid
I like philanthropy, and not your sermon. I don't consider this post noise. It is actively bad signal. There is a universal bias that makes it difficult to counter "people should be more altruistic" claims of any kind - 'should' claims demanding that people sacrifice their very lives by donating to charity the resources that allow their survival. In particular in those instances where they are backed up with insinuations that 'analytical skills' and rational ability in general require such sacrifice. The post fits my definition of 'evil'.
-1multifoliaterose
Nope, you've misunderstood me. Nowhere in my post did I say that people should sacrifice their lives to donate resources to charity. See my response to ciphergoth for my position. If there's some part of my post that you think that I should change to clarify my position, I'm open to suggestions. Downvoted for being unnecessarily polemical.
9thomblake
That's exactly what you're saying, as far as I can tell. Are you not advocating that people should give money to charity instead of being cryopreserved? While I think charity is a good thing, I draw the line somewhere shy of committing suicide for the benefit of others.
1multifoliaterose
My post is about how cryonics should be conceptualized rather than an attempt to advocate a uniform policy of how people should interact with cryonics. Again, see my response to ciphergoth. For ciphergoth, cryonics may be the right thing. I personally do not derive fuzzies from the idea of signing up for cryonics (I get my fuzzies in other ways) and I don't think that people should expend resources trying to change this.
6wedrifid
Perhaps, but I have not misunderstood the literal meaning of the words in the post. Yet surprisingly necessary. The nearly ubiquitous pattern when people object to demands regarding charity is along the lines of "it's just not interesting to you but for other people it is important" or "it's noise vs signal". People are slow to understand that it is possible to be entirely engaged with the topic and think it is bad. After all, the applause lights are all there, plain as day - how could someone miss them?
-4multifoliaterose
You may be right, on the other hand you may be generalizing from one example. Claims that an author's view of human values is misleading should be substantiated with evidence.
4wedrifid
"The CEV of most individuals is not Martyrdom" is not something that I consider overwhelmingly contentious.
7WrongBot
Nitpick: Individuals don't have CEV. They have values that can be extrapolated, but the "coherent" part is about large groups; Eliezer was talking about the CEV of all of humanity when he proposed the idea, I believe.

Individuals don't have CEV.

In this instance I would be comfortable using just "EV". In general, however, I see the whole conflict resolution between agents as a process that isn't quite so clearly delineated at the level of the individual.

Eliezer was talking about the CEV of all of humanity when he proposed the idea, I believe.

He was, and that is something that bothers me. The coherent extrapolated volition of all of humanity is quite likely to be highly undesirable. I sincerely hope Eliezer was lying when he said that. If he could right now press a button to execute an FAI> I would quite possibly do what I could to stop him.

1Vladimir_Nesov
Since we have no idea what that entails and what formalizations of the idea are possible, we can't extend moral judgment to that unclear unknown hypothetical.
2wedrifid
I fundamentally disagree with what you are saying, and object somewhat to how you are saying it.
1Vladimir_Nesov
You are drawing moral judgment about something ill-defined, a sketch that can be made concrete in many different ways. This just isn't done, it's like expressing a belief about the color of God's beard.
5wedrifid
You are mistaken. Read again. I am mentioning a possible response to a possible stimulus. Doubt in the interpretation of the words is part of the problem. If I knew exactly how Eliezer had implemented CEV and what the outcome would be given the makeup of the human population then that would make the decision far simpler. Without such knowledge choosing whether to aid or hinder must be based on the estimated value of the alternatives given the information available. Also note that the whole "extend moral judgment" concept is yours, I said nothing about moral judgements, only possible decisions. When the very fate of the universe is at stake I can most certainly make decisions based on inferences from whatever information I have available, including the use of the letters C, E and V. Presenting this as an analogy to deciding whether or not to hinder the implementation of an AI based on limited information is absurd to the point of rudeness.
2Vladimir_Nesov
What I meant is simply that decisions are made based on valuation of their consequences. I consistently use "morality" in this sense. I agree. What I took issue with about your comment was the perceived certainty of the decision. Under severe uncertainty, your current guess at the correct decision may well be "stop Eliezer", but I don't see how with the present state of knowledge one can have any certainty in the matter. And you did say that it's "quite likely" that CEV-derived AGI is undesirable. (Why are you angry? Do you need that old murder discussion resolved? Some other reason?)
2wedrifid
I note, by the way, that I am not at all suggesting that Eliezer is actually likely to create an AI based dystopia. The risk of that is low (relative to the risk of alternatives.)
0[anonymous]
I don't quite see how one is supposed to limit FAI> without the race for AI turning into a war of all against all for not just power but survival. If anything I would like to expand the group not just to currently living humans but all other possible cultures biologically modern humans did or could have developed. But again this is purely because I value a diverse future. Part of my paperclip is to make sure other people get a share of the mass of the universe to paperclip.
2wedrifid
By winning the war before it starts or solving cooperation problems. The competition you refer to isn't prevented by proposing an especially egalitarian CEV. Being included in part of the Coherent Extrapolated Volition equation is not sufficient reason to stand down in a fight for FAI creation. CEV would give that result. The 'coherence' thing isn't about sharing. A CEV over one group may well decide to give all the mass of the universe to C purely because the included agents can't stand each other, while if C were included in the same CEV evaluation it may well decide to do something entirely different. Sure, at least one of those agents is clearly insane but the point is being 'included' is not intrinsically important.
0Sniffnoy
The singleton sets of individuals do...
-3multifoliaterose
I don't think that anything in my post advocates martyrdom. What part of my post appears to you to advocate martyrdom?
0[anonymous]
To put it in the visceral language favored by cryonics advocates, you're advocating that people commit suicide for the benefit of others.

The tone of this post really grated on my ears, especially the last section where the words "we should" were used repeatedly. Syntactically "we" must refer to either "members of the Less Wrong community" or "rationalists", but those sentences only make semantic sense if "we" actually refers to "utilitarians". I think I feel offended at being implicitly excluded from this community for not being a utilitarian.

Do any of my posts have this kind of problem? Being on the receiving end of this effect makes me want to make sure that I don't unintentionally do it to anyone else.

1multifoliaterose
Sorry to hear that my post grated on you. This was totally unintended. The 'we' in the last section is intended to be "Less Wrong posters who have some generalized/abstract concern for the well-being of others." I believe that such people should expend some (not necessarily a lot of) resources on pure social impact because of the "purchase utilons & fuzzies separately" principle.
1steven0461
Or if 1) "should" refers to true/informed preferences rather than currently endorsed preferences and 2) your true/informed preferences would be utilitarian. That distinction seems to be going out of fashion, though.
2Wei Dai
It seems obvious from context that multifoliaterose was assuming agreement with utilitarian values and making his arguments about what "we should" do based on that assumption, and not claiming that the true/informed preferences of everyone in this community would be utilitarian. (The post does not explicitly claim that, nor does it contain any arguments that might support the claim.) Why do you say that?
2steven0461
Fair enough; I agree it was clearly not the reading multifoliaterose actually intended. I read multifoliaterose as saying that to the extent that our values are utilitarian, cryonics doesn't fulfill them well. I guess it's an impression I got from reading many conversations here.
2Wei Dai
I would expect that most conversations involve currently endorsed preferences, simply because it's much easier to discuss what we should do now given what we currently think our values are, than to make any nontrivial progress towards figuring out what our values would be if we were fully informed. I don't think that constitutes evidence that people are forgetting the distinction (if that's what you meant by "going out of fashion"). I'd be interested to know if you had something else in mind.
0multifoliaterose
Your original reading of my claim is the message that I intended to convey.
0jimrandomh
How is (2) not a definition of a utilitarian?
7Nick_Tarleton
A utilitarian, in common usage, is someone who currently endorses utilitarianism. (I share Steven's desire to see the informed/currently-endorsed distinction used more consistently.)
-2jimrandomh
A utilitarian would endorse a non-utilitarian value system if doing so maximized utility.
6mattnewport
A utilitarian would endorse a non-utilitarian value system if doing so maximized utilitarian utility, which is really the crux of the debate. The word utilitarian is thrown around a lot here without clearly defining what is meant by it but I would guess that most of the non-utilitarians (like myself) here take issue primarily with the agent neutrality / universality and utility aggregation (whether averaging, summing or weighted summing) aspects commonly implied by utilitarianism as an ethical system rather than with the general idea of maximizing utility (however defined).
0Nick_Tarleton
Another crucial terminological distinction. Thanks.

I'd like to see someone post a critical review of those GiveWell estimates. Surely GiveWell isn't the most independent source for such numbers, right?

6CronoDAS
GiveWell is an independent evaluator, or at least the closest thing that exists in the world of philanthropy.
4Wei Dai
I'm confused by Robin Hanson's comment and the fact that it's voted up to 7. Is there some reason to suspect that GiveWell's reputation as an independent evaluator of charities is undeserved, or was Robin making some other point?
4Vladimir_Nesov
Simple: the upvoters would also like to see a critical review of GiveWell estimates.
5Wei Dai
If they want to see a critical review of GiveWell's estimates, then they need to make a case that doing such a review is a good use of someone's time and resources, and also give some indication of what they consider to be an acceptable critical review. I mean, by all indications GiveWell is itself an independent, critical reviewer of charities, so if they're not satisfied with it, why would they be satisfied with any hypothetical meta-reviewer?

Revitalization is not a guarantee of a very long life - after one is revived the human race could go extinct.

Extinction is not something that just happens on a rainy day. It requires everyone to die before a new generation is there to take over, in a basic sense. Either by a big-scale event or by such massive changes in the environment that we all get replaced. The chance for that to happen soon after the technique for revival is available and used is slim. The whole 'humanity might go extinct' argument looks rather FAR to me. People have children and e...

-1multifoliaterose
I made the point that you quote because I was anticipating an argument of the type "but cryopreservation has really high expected value because if it works the person frozen can live for many billions of years!" I agree with what you say.
4MartinB
Switch the 'b' for an 'm', or let's just say it's a thousand years. There is no safe way to distinguish these. It would suck to get revived and then die from natural causes a few years later, but considering the effort needed to get awoken in the first place that does not seem likely. You probably saw Aubrey de Grey's lecture on the repeated application of enhancements.
1lsparrish
Heck, what if it only doubles the lifespan instead of multiplying it by insanely high numbers? If you could place everyone who is currently alive (including those suffering terminal illness) in a situation where they live exactly twice as long as a healthy person today, wouldn't you? Wouldn't that have moral utility equal to or greater than saving the lives of everyone on earth from a massive meteorite strike or some such? (Assuming a few dozen breeding humans survive so it's not an extinction event.) Cryonics could potentially accomplish this, with (according to Robin) a 5% chance. But only if it is adopted globally and soon (i.e. before such a time as they would be saved anyway, or are dead already). One possible approach you can take for maximized altruism is simply to support global cryonics without signing up for the small-scale kind. Personally I see signing up myself as a way to lead by example (though I haven't done it yet). Cryonics is in its "early adopter" stage. The sooner it rolls out for mass production, the sooner its real benefits can be realized.
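(A rough back-of-the-envelope sketch of the comparison at stake here. Only the 5% chance attributed to Robin above and the $1,000-per-life GiveWell figure quoted elsewhere in this thread come from the discussion; the cryonics cost and the life-years gained are hypothetical assumptions, not claims.)

    # Crude expected-value comparison. Only the 5% success probability and the
    # $1,000-per-life figure come from the thread; the rest are assumptions.
    def cost_per_expected_life_year(cost, success_prob, years_gained):
        """Expected cost of one additional life-year under the stated assumptions."""
        return cost / (success_prob * years_gained)

    cryonics = cost_per_expected_life_year(
        cost=50_000,        # hypothetical per-person cryonics cost
        success_prob=0.05,  # the 5% chance attributed to Robin above
        years_gained=80,    # hypothetical: roughly "doubling" a lifespan
    )
    charity = cost_per_expected_life_year(
        cost=1_000,         # GiveWell's quoted cost per life saved
        success_prob=1.0,   # treat the life as actually saved
        years_gained=30,    # hypothetical remaining life-years per life saved
    )
    print(f"Cryonics: ~${cryonics:,.0f} per expected life-year")
    print(f"Charity:  ~${charity:,.0f} per expected life-year")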
0DanielLC
There's a difference between increasing the lifespans of people and increasing the numbers. The difference between someone living forever and someone living 20 years can be made up for by having an extra kid every 20 years. The bottleneck is how many people the world can support, not how many are born. Also, saving the lives of everyone on Earth implies allowing them to have kids, and their kids to have kids. It's saving the total number of people the Earth will support, not just the ones alive at the moment.
0MartinB
Either number is arbitrary. There is no particular reason for a life to end at some specific point. And many problems can be solved. You can even specify: 'Only revive me if life expectancy goes over n years'

The whole point of arguing that cryonics is a charity, a social good, etc. is that it tends overwhelmingly to be processed as a selfish act. We don't get warm fuzzies for purchasing cryonics the way we do when recycling plastic bottles or whatever. It's not using up the fuzzy supply (or demand rather). It's like cryonics has a huge blinking neon light that says SELFISH on it. But it's not so overwhelmingly selfish in reality -- it is the one thing that the entire world could jump on and live forever with. At least, I don't see any compelling reason to t...

1multifoliaterose
Thanks for making this comment! I appreciate that you took the time to think about my points and explain where you disagree. I'd be interested in chatting with you sometime - feel free to PM me with your email address if you'd like to correspond.

Re: "from a utilitarian point of view the money spent on cryonics would be much better spent by donating to a cost-effective charity".

Sure - but utilitarianism just seems to be a totally bonkers human moral system to folk like me. Utilitarianism doesn't even seem to be a very good way of signalling unselfishness - because the signal is so unbelievable. Anyway, if you are assuming a utilitarian framework, maybe consider linking to some utilitarianism advocacy.

0multifoliaterose
What I mean by "from a utilitarian point of view" is "from the point of view of granting equal ethical consideration to qualitatively similar beings" or something like that. I'm open to suggestions for how I might rephrase more satisfactorily.

This post seems to be Eliezer's own counter/qualification to Purchase Fuzzies and Utilons Separately. It seems very relevant here, and I'm surprised nobody has brought it up yet. Here's a quote:

If we're operating under the assumption that everyone by default is an altruistic akrasic (someone who wishes they could choose to do more) - or at least, that most potential supporters of interest fit this description - then fighting it out over which cause is the best to support, may have the effect of decreasing the overall supply of altruism.

"But," y

...
0multifoliaterose
Thanks for pointing the linked post out, I had not seen it before. I'm aware of the points raised therein - I don't actually hold rigidly to the Purchase Fuzzies and Utilons policy. In my top level post I was making a subjective judgment call that cryonics advocacy is so far from being cost-effective that it shouldn't be on the table as a utilon-producing activity. But I may be wrong about this. In particular, I think that a recent comment by lsparrish explaining his position is well considered. I look forward to talking about this matter more with him sometime.

You (appear to) claim too much for your argument. The only pro-cryonics argument that this counters is Robin's claim of cryonics as efficient altruism, and it doesn't seem to me that any of the other cryonics posts you cited depend on this claim.

You ought to make it clear that Robin's post is the only one you object to on these grounds.

1multifoliaterose
Would this issue be resolved to your satisfaction if I changed the title of the article to "against altruistic cryonics..." ?
6orthonormal
That would help, but the introduction needs work too. It feels like you have two distinct posts awkwardly glued together: one pointing out that cryonics is no more selfish than ordinary selfish expenditures, and another pointing out that it is not the most efficient altruistic use of your money. I'm not sure how they might be better integrated.
1Paul Crowley
If that's your point, then it would certainly help to make it clear in the heading. Does wanting to save those close to you count as altruism?

I have edited the main post in response to many of the comments below.

Lots of money spent helping poor people in poor countries has done more harm than good. You wrote: "GiveWell estimates that VillageReach and StopTB save lives at a cost of $1,000 each." I bet at least $100 of each $1000 goes indirectly to dictators, and because the dictators can count on getting this money they don't have to do quite as good a job managing their nation's economy. Also, you need to factor in Malthusian concerns.

Poor people in poor countries might be better off today if rich countries had never given them any charity.

If lots of people sign up for cryonics the world would become more concerned about the future and devote more resources to existential risks.

I often find this sort of argument frustrating. Are you making a serious case that the net effects are that harmful? What are your betting odds? Why not donate to things that don't generate rents to steal, e.g. developing cheaper crops and treatments for tropical diseases? Or pay for transparency/civil society/economic liberalization work in poor countries?

Many people just like to throw up possible counter-considerations to blunt the moral condemnation, and then go on with what they were doing, without considering any other alternatives or actually trying to estimate expected values in an unbiased way. One should either engage on the details of the altruism, or focus on the continuum of selfish expenditures, and note the double-standards being applied to cryonics.

I agree that widespread cryonics would have beneficial effects in encouraging long-term thinking. Edit: and even small changes in numbers could significantly increase the portion of people paying attention to existential risk and the like, given how small that pool is to start with.

"Are you making a serious case that the net effects are that harmful?"

Yes. Although development isn't my specialty, I'm a professional economist who has read a lot about development. The full argument I would make is similar to the one that supports the "Resource Curse": "The resource curse (also known as the paradox of plenty) refers to the paradox that countries and regions with an abundance of natural resources, specifically point-source non-renewable resources like minerals and fuels, tend to have less economic growth and worse development outcomes than countries with fewer natural resources. This is hypothesized to happen for many different reasons, including a decline in the competitiveness of other economic sectors (caused by appreciation of the real exchange rate as resource revenues enter an economy), volatility of revenues from the natural resource sector due to exposure to global commodity market swings, government mismanagement of resources, or weak, ineffectual, unstable or corrupt institutions (possibly due to the easily diverted actual or anticipated revenue stream from extractive activities)." (From Wikipedia)

"What... (read more)

I agree that the resource curse elements of aid exist (and think it plausible that 'development aid' has had minimal or negative effects), but they have to be quite large to negate the direct lifesaving effects of the best medical aid, e.g. vaccines or malarial bed nets.
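(A minimal sketch of how large that leakage would have to be, under stated assumptions. The only figure taken from the thread is the $1,000-per-life GiveWell estimate; the leakage fractions are James's 10% guess plus arbitrary larger values, and the simple model assumes only non-leaked dollars do any good.)

    # How a leakage fraction changes the effective cost per life saved,
    # assuming only the non-leaked dollars do any good.
    BASE_COST_PER_LIFE = 1_000  # GiveWell figure quoted in the thread

    def effective_cost_per_life(leak_fraction):
        """Cost per life saved if a share of each donation is diverted."""
        return BASE_COST_PER_LIFE / (1 - leak_fraction)

    for leak in (0.1, 0.5, 0.9):
        print(f"{leak:.0%} leakage -> ~${effective_cost_per_life(leak):,.0f} per life saved")

    # At James's 10% guess the figure moves from $1,000 to about $1,111 per life.
    # For the leaked dollars to fully negate the benefit, each leaked $100 would
    # have to cause roughly one death, i.e. be about ten times as harmful per
    # dollar as the aid dollars are beneficial.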

Cheaper crops harm farmers.

The Green Revolution did not harm poor Indians, by a very wide margin. I'm talking about developing new strains, not providing food aid purchased from rich-country farmers.

Treatments for tropical diseases cause Malthusian problems, must be administered by medical staff dictators approve of in buildings dictators allow to be built.

There is some bribery and theft bound up with medical aid too, aye. But the Malthusian argument is basically saying better that they die now to expedite growth later? Really?

"But of the vast increase in the well-being of hundreds of millions of people that has occurred in the 200-year course of the industrial revolution to date, virtually none of it can be attributed to the direct redistribution of resources from rich to poor."

The Green Revolution, smallpox eradication, financial support for vaccination and malaria control all involved rich country denizens spending on benefits for the poor. Hundreds of millions of lives involved. The benefits of economic growth dwarf the benefits of aid, but the latter are not negligible.

Rich countries used aid dollars to pressure African countries to stop using DDT. Aid has probably increased the number of poor people who have died from malaria.

Most of the agriculture-improving technologies were developed for profit rather than charitable reasons, although dwarf wheat is an important exception that supports your viewpoint.

Eliminating smallpox wasn't really done for charitable reasons, meaning that rich countries had an incentive to be efficient about it. It also caused the USSR to develop smallpox bio-weapons.

Africa's main problem is low economic growth caused mostly by its many "vampire" governments. Aid feeds these vampires and so does create negative effects large enough "to negate the direct lifesaving effects of the best medical aid, e.g. vaccines or malarial bed nets."

I'm not claiming Malthusian factors should dominate moral considerations, just that they need to be taken into account.

Although I can't prove this, I believe that the vast sums of money spent on foreign aid to poor nations have done much to convince the elite of poor nations that their nations' poverty is caused by unjust distribution of the world's resources, not the elites' corruption and stupid economic policies.

8CarlShulman
James, the discussion was about things that one can donate to as a private individual looking to have a maximal positive impact, using resources like GiveWell and so on. So arguments that governments doing foreign aid are often not trying to help or serving crazy side-concerns (e.g. with DDT, although that's often greatly exaggerated for ideological reasons) aren't very relevant. I gave smallpox as an example of a benefit conferred to poor people by transferring resources (medical resources) to their countries. I agree about sloppiness on the part of governments and most donors, but that doesn't mean that those rare birds putting effort into efficacy can't attain some. I agree that Africa's main problem is low economic growth, and that vampire states play a key role there (along with disease, human capital, etc). You never answered my earlier question, "why not fund anti-corruption/transparency/watchdog groups?" Would you guess that the World Bank Doing Business Report saves one net life per $1000 of expenditure?
6James_Miller
"why not fund anti-corruption/transparency/watchdog groups?" I don't think it would do any good, although I don't know enough about these groups to be certain of this. I believe that on average charity given to poor people in poor countries does more harm than good, and I don't think most people (myself included) are smart enough (even with the help of GiveWell) to identify situations in which giving aid helps these people in large part because of the negative unintended indirect effects of foreign charity. In contrast, I think that technological spillovers hugely benefit humanity and so while spending money on cryonics isn't the first best way of helping humanity it is better than spending the money on most types of charities including those designed to help poor people living in corrupt dictatorships.
4mattnewport
I agree. It seems likely to me that for-profit investment in developing new technologies (and commercializing existing technologies on a large scale) has had a greater positive impact on human welfare than charitable spending over the last few hundred years. Given that it has also made a lot of early investors wealthy in the process (while no doubt also destroying the wealth of many more) and likely has a net positive expected return on investment I personally like it as a way to allocate some of my resources.
7Douglas_Knight
As far as I have been able to determine, this is false.
8Paul Crowley
See Deltoid's DDT category for more on this.
0James_Miller
See http://townhall.com/columnists/JohnStossel/2006/10/04/hooray_for_ddts_life-saving_comeback http://web.worldbank.org/WBSITE/EXTERNAL/COUNTRIES/AFRICAEXT/EXTAFRHEANUTPOP/0,,contentMDK:20905156~pagePK:34004173~piPK:34003707~theSitePK:717020,00.html http://www.fightingmalaria.org/article.aspx?id=936 http://www.fightingmalaria.org/article.aspx?id=137
9satt
I haven't yet looked at your last three links, but the first is a tendentious polemic. Taking a look... This claim is true only in the limited sense that the WHO has tried to stop indiscriminate DDT spraying. But as far as I know, the WHO has never handed down a blanket ban on DDT. There isn't a date on Stossel's editorial, but going by the URL it was published in October 2006. Official WHO documents predating that condone the use of DDT under limited circumstances. For example, this archived copy of a WHO FAQ on DDT from August 2004 says, "WHO recommends indoor residual spraying of DDT for malaria vector control", citing this 2000 report from the WHO Expert Committee on Malaria. On page 38 (p. 50 in the PDF), the 2000 report "endorsed" the conclusion of a still earlier 1995 study group that "DDT may be used for vector control, provided that it is only used for indoor spraying, it is effective, the WHO product specifications are met, and the necessary safety precautions are applied for its use and disposal". I don't see how anyone can honestly call DDT "benign" unless they're ignorant of the evidence for its negative ecological effects. At any rate, Stossel's decision to solely blame environmentalists & government busybodies for DDT's unpopularity is disingenuous. Increasing resistance to DDT is another (I would have thought obvious) reason. Which is basically meaningless without quantitative evidence. There are always a couple of scientists somewhere who fail to replicate findings that some chemical is dangerous. Also, the EPA ban does not appear to have been a complete ban; this pro-DDT article points out that "the public health provisions of the 1972 US delisting of DDT have been used several times after 1972 in the US to combat plague-carrying fleas, in Colorado, New Mexico and Nevada". Presumably Stossel's implying that the EPA should therefore have just regulated the amount of DDT used, instead of just banning it. But the EPA did allow some uses of DDT af
6James_Miller
You make some good points.
3Douglas_Knight
Of course I've seen lots of articles like that. The first article opens with "the World Health Organization (WHO) has ended its ban on DDT", which is simply a lie. The third article makes a claim that is less verifiable, but I have never seen evidence for it. In fact, I have seen it confabulated on the spot by people caught in the first lie.
5Douglas_Knight
If one believes that it is better, for the individual or the group, to die in war or acute famine than to live malnourished, then peace and a stable food supply may be bad (but then one should apply the reversal test and ask such people whether they support war and high variance food supply). But disease is not like war or acute famine. The survivors are often permanently affected, in many ways like the malnourished. So many arguments that consider malthusian conditions should support medical aid.
4NancyLebovitz
Do gambling and tourism count as resource curses? They're renewable resources, but they don't seem to do localities much good.
6James_Miller
No because an incompetent or evil government can lose them as a source of revenue. Zimbabwe, for example, has no doubt lost many tourist dollars because of state violence. This loss might be deterring some other African governments from engaging in too much state violence. In contrast, governments often get more economic aid if they engage in destructive economic policies.
3Oligopsony
Theoretically, a particularly beautiful landscape or cultural affinity for some profession might lead to Dutch Disease effects. The renewability of the resource isn't really the relevant factor; it just happens to be that most supply shocks of the required magnitude consist of natural resource endowments. Service industries like gambling and tourism don't generally have these effects, though. What they do have is typically lower wages, greater seasonality, and less technology spillover effects than manufacturing.
1multifoliaterose
See my responses to Vladimir_M's comments here. I think (but am not sure) that you're right about this, but even if you are, there's still the question of whether at the margin advocating for cryonics is a worthwhile endeavor. My intuition is that we're so far away from having a population interested in signing up for cryonics (because of the multitude of irrational biases that people have against cryonics) that advocating for cryonics is a very inefficient way to work against existential risk. I'd be interested in any evidence that you have that:
• Signing up for cryonics motivates people to devote resources to assuaging existential risk.
• It's feasible to convince a sufficiently large portion of the population to sign up for cryonics so that cryonics is no longer a fringe thing which makes people in the general population uncomfortable around cryonics sign-ups.
2James_Miller
"I'd be interested in any evidence that you have" A vastly disproportionate percentage of the people who have signup for cryonics are interested in the singularity and have helped the SIAI through paying for some of their conferences. This, I admit, might be due to correlation rather than causation.
0multifoliaterose
Your point is valid, but you seem to have dodged the thrust of my main post. Do you really think that cryonics advocacy is comparable in efficacy to the most efficient ways of working against existential risk? If not, you should not conceptualize cryonics advocacy as philanthropic.
1James_Miller
"Do you really think that cryonics advocacy is comparable in efficacy to the most efficient ways of working against existential risk?" No, but I do think spending money on cryonics probably increases expenditures on existential risk. Cryonics and existential risk spending are complements not substitutes. Also, your not first best argument against cryonics also applies to over 99.999% of human expenditures and labors.
0[anonymous]

If part of the point of cryonics advocacy is to get people thinking seriously about the future, I'd like to see more LW material aimed at present and future cryonicists explaining to them why as a cryonicist they should start thinking seriously about the future.

0[anonymous]

Great post, and expresses precisely what I think about the whole issue.