shokwave comments on On the unpopularity of cryonics: life sucks, but at least then you die - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (465)
That is an interesting and concerning view. Cryonics makes the usual argument:
And the average person does not agree with the conclusion. They might not be consciously aware of why they don't want to live forever, but they damn well know that idea doesn't appeal to them. The cryonics advocate presses them for a reason, and the average person unknowingly rationalises when they give their reason - they refuse the second premise on some grounds - scam, won't work, evil future empire, whatever. The cryonics advocate resolves that concern, demonstrates that cryonics does have a chance of working, and the person continues to refuse.
Cryonics advocate checks if they refuse premise 1 - person emphatically responds that they love life not because they actually do, but because it is a huge status hit / social faux pas / Bad Thing (tm) to admit they don't. Actually, their life sucks, and dragging it out forever will make it worse, but they can't say this out loud - they probably can't even think it to themselves.
Wow. It's kinda scary to think that people refusing cryonics is a case of revealed preferences, and that revealed preference is that they don't like life. Actually, it might not be scary, it might just be against social norms. But I'd like to think I genuinely like life and want life to be worth living for everyone. Of course, I'd say that if it was a social norm to say that. Damn.
Probably false.
People don't find flimsy excuses to refuse conventional life-saving treatments, and non-conventional treatments can become conventional (say, antibiotics). This holds, though less so, even if the treatments cost quality of life and money.
I didn't start out liking life, but I seem to be very atypical in that regard (often suffer from anhedonia, for example). But it's more likely that I've moved away from the norm, not toward it, especially since I'm bad at distinguishing norms for X from norms for "X"... shudder
Scary. Someone please disprove this.
Growing up religious, I assumed I'd have a second, different (not necessarily better) chance at life, one that wouldn't have an expiration date. As I grew up I saw the possibility grow more distant and less probable in my mind.
I still feel entitled to at least get a try at a second one. Also, for the past few years I have generally felt that many of the things I value will be lost and destroyed, and that they are probably objectively out of my reach to try and save. So perhaps a touch of megalomania also plays a role, or maybe I just want to be the guy to scream:
"YOU MANIACS! YOU BLEW IT UP! OH, DAMN YOU! GODDAMN YOU ALL TO HELL!"
That logic only holds if there's no cost, or no alternate investment. Currently the cost of cryonics is ~$28,000. If I donated that to GiveWell instead, I'd be saving ~28 lives. The question of whether I want to be immortal or save 28 mortal lives, is not one I've seen much addressed, and not one that I've yet found a satisfying answer to.
I've given it a lot of thought, and this does appear to be my True Rejection of Cryonics; if I can find a satisfying reasoning to value my immortality over those 28 mortal lives, I'd sign up.
I find the answer "be immortal" satisfying, personally. Your mileage may vary.
May I ask what reasoning/evidence led you to that conclusion? I'm sort of viewing it as a trolley problem: I can either kill my immortal self, or I can terminate 28 other lives that much sooner than they would have.
(I'm also realizing my conclusion is probably "I don't do THAT much charitable to begin with, so let's just go ahead and sign up, and we can re-route the insurance payoff if we suddenly become more philanthropic in the future")
Evidence is a wrong question, and reasoning not much better. Unless, of course, you mean "evidence and reasoning about my own arbitrary preferences". In which case my personal testimony is strong evidence and even stronger for me given that I know I am not lying.
I prefer immortality over saving 28 lives immediately. I also like the colour "blue".
What epistemic algorithms would you run to discover more about your arbitrary preferences and to make sure you were interpreting them correctly? (Assuming you don't have access to an FAI.) For example, what kinds of reflection/introspection or empiricism would you do, given your current level of wisdom/intelligence and a lot of time?
It's a good question, and ruling out the FAI takes away my favourite strategy!
One thing I consider is how my verbal expressions of preference will tend to be biased. For example if I went around saying "I'd willingly give up immortality to prevent 28 strangers from starving" then I would triple check my belief to see if it was an actual preference and not a pure PR soundbite. More generally I try to bring the question down to the crude level of "what do I want?", eliminating distracting thoughts about how things 'should' be. I visualize possible futures and simply pick the one I like more.
Another question I like to ask myself (and frequently find myself asked by other people while immersed in SIAI affiliated culture) is "what if an FAI or Omega told you that your actual extrapolated preference was X?". If I find myself seriously doubting the FAI then that is rather significant evidence. (And also not an unreasonable position. The doubt is correctly directed at the method of extrapolating preferences instilled by the programmers or the Omega postulator.)
Look at it in terms of years gained instead of lives lost.
Saving 28 lives gives them each 50 years at best until they die, assuming none of them gain immortality. That's 1400 man-years gained. Granting immortality to one person is infinity years (in theory); if you live longer than 1400 years then you've done the morally right thing by betting on yourself.
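The break-even arithmetic above can be sketched out, with the caveat that the figures are the comment's own rough assumptions (50 years gained per life saved, and a probability of revival that the comment leaves at "in theory"):

```python
# Break-even sketch for the argument above. Assumptions are the
# comment's, not established figures: each saved life gains ~50 years.
lives_saved = 28
years_per_life = 50
years_from_charity = lives_saved * years_per_life  # 1400 man-years

def breakeven_years(p_revival):
    """Extra years of life cryonics must deliver to match the charity,
    given some probability that suspension and revival actually work."""
    return years_from_charity / p_revival

print(breakeven_years(1.0))   # 1400.0 years if revival were certain
print(breakeven_years(0.05))  # 28000.0 years at a 5% chance
```

The point of the division is that any discount for cryonics failing inflates the break-even lifespan proportionally; "live longer than 1400 years" only suffices if revival is certain.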
Additionally, money spent on cryonics isn't thrown into a hole. A significant portion is spent on making cryonics more effective and cheaper for others to buy. Rich Americans need to buy it, as much as possible, while it's still expensive, so that those 28 unfortunates can ever have a chance at immortality.
The game theory makes it non-obvious. Consider the benefits of living in a society where people are discouraged from doing this kind of abstract consequentialist reasoning.
Have you spent $28,000 on nonessentials for yourself over the course of your life? Most people can easily hit that amount by having a nicer car and house/apartment than they "need". If so then by revealed preference, you value those nonessentials over 28 statistical lives; do you also value them over a shot at immortality?
Getting seriously sick of hearing "VillageReach beats cryonics" from people who don't also say "VillageReach beats movies, cars, and dentists. spits out rotten teeth". We do have a few heroes like that here (Rain and juliawise), but if you are not one quit it already.
That would be stupid. If I produce, say, $5,000/year for charity, and a dentist adds even a year of productive life to me, then it's worth $5,000 to go see that dentist. At worst I break even.
I don't have a car, but for most people a car probably allows them to get to their job to begin with, so that's $50K+/year in income, vs a $10K used car every few years. Again, you'd have to be really stupid not to think this is a smart investment. A rational person should optimize by getting a high paying job and donating that income to charity, not by skipping the car and working at whatever happens to be otherwise reachable.
Movies? Well, I'm an emotional being. This is the place where we do get in to personalities, but for me, personally, if I'm unhappy, my productivity drops. Going to a movie refreshes my productivity. I do better work, don't get fired, and might even make a raise. So for me, personally, it still works out. It's not like I'm spending $1,000/month on these things.
And, all that aside, just because I'm not a perfect philanthropist doesn't mean I should automatically default to cryonics. Maybe I should self-modify to sign up for cryonics, or maybe I should self-modify to be more like Rain and juliawise. It's important to ask questions and try and determine an actual answer to that. It's easy to push for cryonics when you genuinely ignore the opportunity costs, but for those of us actually stopping to consider them, a response of "shut up, you're no Rain" is really, amazingly unhelpful.
Given that there are 2000 people in the world signed up for cryonics, I think there's a lot more people who have open objections to it, too. If our community's response to "But what about VillageReach?" is really "Oh, like you're so selfless", we are going to lose. Rationalists ought to win.
Even if we ignore the practicalities, even if we ignore my personal situation, it's still a damned useful question if we actually care about the rest of the world. And if you want cryonics to be mainstream like Eliezer seems to hope for, you have to actually care about the mainstream.
So, if all you have is a witty ad hominem attack about how I'm not truly selfless, kindly quit already.
Anger seems to be in the air, so to get the emotional level out of the way: I'm not attacking you. I think you're cool and I like you. I'm not accusing you of not being a perfect philanthropist, or saying that if you're not one then you deserve blame.
I admit the argument is personality-dependent in an ad-hominem-ish way, but since I got upvoted I think I'm not exclusively being an asshole here. It goes like this: If you're the kind of person who usually takes altruistic opportunity costs into account, then it makes perfect sense that you'd care about the opportunity cost of cryonics. If you're not, then it's more likely that you're saying "VillageReach beats cryonics" not because you tried to evaluate it and thought of altruistic opportunity costs, but because you rejected it for other reasons, then looked for plausible rejections and hit on altruistic opportunity costs.
Would a perfect philanthropist see a dentist, drive a car, and watch movies? Yes, probably and maybe. But the algorithms that Rain and MixedNuts use to decide to watch a movie are completely different, even if they both return "yes". Rain asks "Will this help me make and donate enough money to offset the costs, and are there any better alternatives to make me relaxed and happy and generally productive?". MixedNuts asks "Is this nifty, and will movie geeks like me better if I watch it?". I can claim that watching movies makes me more productive, and it'll probably be true; but still as a matter of fact it's not what made me decide.
Is it possible that a perfect philanthropist would buy shiny stuff and expensive end-of-life treatments but not sign up for cryonics? Yes. For example, they could have tiny conformity demons in their brain that make them have to do what society likes (either by addiction-like mechanisms or by nuking their productivity if they don't). Since cryonics is weird, the conformity demons don't demand it, so the money it would have cost can go to charity. But that's still a different state of mind from obeying the conformity demons without knowing it.
Conversely, there are possible states where you don't usually care about altruistic opportunity costs, but start doing so for cryonics for strange reasons. But it's still an unusual state of mind, and if you don't say why you're in it, it's going to prompt doubt about whether it's your true rejection.
Also, the reason I was a snappy jerk is that I've heard the argument a lot before. Standard arguments happen over and over and over (I should know, I read atheist blogs), and you've got to be willing to have them many times if you want an idea to spread; but I'd prefer Less Wrong to address the question once and move on, with the standard debate rehappening elsewhere.
I'm not sure what your argument about the mainstream is. Is it "Lots of people have this objection a lot; they wouldn't if it sucked", or is it "Yeah, this objection sucks, but boy do you ever need a reply that doesn't make you sound like a complete asshole"?
Thank you for the calm, insightful response :)
If someone had linked me to a "one and done" article, I'd feel a lot more confident that this is a standard argument with a good/interesting answer. Instead I mostly got responses that seemed to work out to "I'm not a terribly nice person so it was simple for me" and "you're not a terribly nice person so it should be simple for you".
If there is a "one and done" you want to link me to, I wouldn't object at all. I've read most of LessWrong, but not much else out there. I don't think I've seen this specific objection addressed before.
My mind seems to be weird in a lot of ways. For cryonics, it seems to come down to: cryonics is a far-off future thing, therefore my Planning mode gets engaged. Planning mode goes "I have more money than I need to survive. Why am I being selfish and not donating this?"
I'm not real inclined to view this as problematic, because on a certain level charity does feel good, and I like making the world a better place. On the other hand, I also grew up with a lot of bad spending habits, so my short-term thinking is very much "ooh, shiny thing, mine now".
I will say that the idea of a $28,000 operation that gives me six more months in a hospice really bothers me - it's a horrifically irrational or selfish thing to think I'm worth that much. If push came to shove, I'm not sure I'd have the courage and energy to refuse social norms and pressure, but the idea bothers me.
Eliezer raises a good point, that one can do both, but it implies a certain degree of financial privilege. Thus, there's still the open question of priorities. While psychologically we have "different budgets" for different things, all of those do fundamentally come out of one big budget.
When people say "I'd only accept that argument from Rain", it makes me wonder if I should be pursuing cryonics or being more like Rain. It's only very recently that I've had much of any financial flexibility in my life, so I'm trying to figure out what to do with it. I'm trying to figure out whether I want to become the sort of person who is signed up for cryonics, or the sort of person who funnels that extra money in to charity.
If you are currently donating everything you practically can to charity, fair enough, don't sign up for cryonics.
If you think you should but haven't yet, then sign up for cryonics first. As a person with one foot in the future, you're more likely to do what the future will most benefit from. As someone who avoids thoughtful spending because you feel like you should spend it on charity, you'll end up at XKCD 871.
Cryonics only makes the difference between your seeing the future and your not seeing the future if 1) sufficiently high tech eventually gets developed by human-friendly actors, 2) it happens only after you die, 3) cryonics works, 4) nothing else goes wrong or makes cryonics irrelevant. For the median LessWronger, I would put maybe a 10% probability on the first two combined and maybe at most a 50% probability on the last two combined. So maybe at best I'd say something like cryonics gives you two and a half toes in a future where you used to have two toes.
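The conditional chain above multiplies out straightforwardly; the 10% and 50% figures are the commenter's own guesses, used here purely for illustration:

```python
# Multiplying the commenter's own rough estimates for the conditions
# under which cryonics changes whether you see the future.
p_tech_after_death = 0.10      # conditions 1 and 2 combined
p_works_nothing_wrong = 0.50   # conditions 3 and 4 combined (upper bound)

p_cryonics_matters = p_tech_after_death * p_works_nothing_wrong
print(p_cryonics_matters)  # 0.05: roughly a 1-in-20 chance it matters
```

Hence the "two and a half toes" image: the marginal probability that signing up changes the outcome is small even on these generous numbers.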
I mean "one foot in the future" to refer to your resulting psychological state, not to a fact related to your likely personal future. I think it's pretty unlikely I'll be suspended and reanimated - many other fates are more likely, including never being declared dead. But I think signing up is a move towards a different attitude to the future.
Is this just a plausible guess, or do we have other evidence that it's true, e.g. people spontaneously citing being signed up for cryonics as causing them to feel the future is real enough to help optimally philanthropize into existence?
If there were a one-and-done answer, I think this'd be it.
(I just love that I can de-escalate drama on LW. This site rocks.)
I'll concede that the previous discussions were insufficient. Let's make this place the "one and done" thread.
Do you accept that singling out cryonics is rather unfair, not as opposed to all spending, but as opposed to other Far expenses? To do this right we have to look at "How heroic should my sacrifices be?" in general; if we conclude cryonics is not worth the cost in circumstances X we should conclude the same thing about, say, end-of-life treatments.
I've tried to capture my intuitions about sacrificing a life to save several; here are the criteria that seem relevant:
Note knock-on effects: If someone hears of the Resistance, and is inspired to give their life to a cause, I'm happy. (If the cause is Al-Qaeda, they've made a mistake, but an unrelated one.) If someone hears of people practicing Really Extreme Altruism and are driven to suicide as a result, I'm sad. Refusing cryonics strikes me as closer to the latter.
That's why I brush and floss every night, and see the dentist every 6 months. Gum disease is linked with heart disease, and damaged teeth create pain. I like to be comfortable.
Though I perform routine maintenance on my life, I try to reduce the cost as much as possible, and when I spend money, I recognize and acknowledge the tradeoffs. It's a simple exercise to create a graph of benefit from lowest to highest, and start plotting things. This makes it easier to remember there are more alternatives.
I just really, really dislike the idea of dying. Signing up for cryonics refreshes my productivity.
Heh, I never thought of it that way. Neat :)
Rephrasing it as my favorite argument...
"Hey, what's that dorky necklace you're wearing?"
Oh, this? Well, you see, it turned out I was born with a fatal disease, and this is my best shot at overcoming it.
"That necklace will arrest the progress of a fatal disease?"
Yes, definitely, if a few plausible assumptions turn out right.
"How much did the necklace cost?"
Oh, about $28,000.
"And what disease is this that you can somehow fight with a $28,000 necklace?"
Mortality.
"But ... but ... that's not a disease!!!"
Looks like someone gets tripped up by definitions a little too easily...
Your line "Yes, definitely, if a few plausible assumptions turn out right." is where most people will be put off.
It smacks of dishonesty, presumably to yourself. You're saying "definitely" and then clarifying that it's not actually definite. Which indicates that you're not being honest; you're trying to give an incorrect impression. At which point, your idea of what is plausible becomes entirely untrustworthy.
Which for a person desperate to find a way to overcome a fatal disease is commonplace.
I agree with what you say, but the rest of the discussion could go essentially unchanged if the line
were replaced with
"Perhaps; my best estimate of the odds is 1% or so"
(which would be my response in an analogous discussion)
I think that what seems to me to be the main point of the dialog,
is fairly insensitive to a wide range of possible odds for cryonics working.
XKCD 871: The problem of scaling the sane use of money is a problem of not crushing people's wills, not a problem of money being a limited resource. It simply isn't true that money spent on cryonics comes out of GiveWell's or SIAI's pockets, unless you're Rain, which is why I'll accept that answer from Rain but not from you.
You have not considered this thoroughly.
What are 28 mortal lives for one that is immortal? If I was asked to choose between the life of some being that shall live for thousands of years or the lives of thirty something people who shall live perhaps 60 or 70 years, counting the happy productive hours of life seems to favour the long lived. Of course they technically also have a tiny chance of living that long, but honestly what are the odds that absent any additional investment (which will have the opportunity cost of other short lived people), they have of matching the mentioned being's longevity?
Now suppose I could be relatively sure that the long-lived entity would work towards making the universe, as much as possible, a place in which I, as I am today, could find some value, but of those thirty-something individuals I would know little except that they are likely to be, at the very best, about at the human average when it comes to this task.
What is the difference between the certainty of a two-thousand-year lifespan, a 10% chance of a 20,000-year one, or even a 0.5% chance of a 400,000-year lifespan? Perhaps the being cannot psychologically handle living that much longer, but having assurances that it would do its best to self-modify so it could doesn't seem unreasonable.
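A quick expected-value check, taking the comment's figures at face value, shows why the three gambles are being treated as interchangeable (expected value is of course only one way to compare them; it ignores risk aversion):

```python
# Expected extra years for each gamble named above; the figures are
# the comment's own, used only to illustrate the comparison.
gambles = {
    "certain 2,000 years":          (1.0,   2_000),
    "10% chance of 20,000 years":   (0.10,  20_000),
    "0.5% chance of 400,000 years": (0.005, 400_000),
}
for name, (p, years) in gambles.items():
    print(name, "->", p * years)  # each works out to ~2000 expected years
```

All three come out to the same expected lifespan, which is the rhetorical point: if you shut up and multiply, the certainty and the long shots are equivalent.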
Why should I then privilege the 28 because the potentially long lived being just happens to be me?
Only I can live forever. - is a powerful ethical argument if there is a slim but realistic chance of you actually achieving this.
Genuine question: would you push a big red button that killed 28 African children via malaria, if it meant you got free cryonic suspension? I'm fine with a brutal "shut up and multiply" answer, I'm just not sure if you really mean it when you say you'd trade 28 mortal lives for a single immortal one.
Ha ha ha. I find it amusing that you should ask me of all people about this. I'd push a big red button killing through neglect 28 cute Romanian orphans if it meant a 1% or 0.5% or even 0.3% chance of revival in an age that has defeated ageing. It would free up my funds to either fund more research, or offer to donate the money to cryopreserve a famous individual (offering it to lots of them, one is bound to accept, and him accepting would be a publicity boost) or perhaps just the raw materials for another horcrux.
Also, why employ children in the example? Speaking of adults the idea seemed fine; children should probably be less of a problem, since they aren't fully persons in exactly the same measure adults are, no? It seems so attractive to argue that killing a child costs the world more potential happy productive man-years, yet have you noted that in many societies the average expected lifespan is so very low mostly because of high child mortality? A 20-year-old man in such a society has already passed a "great filter", so to speak. This is probably true in many states in Africa. And since we are on the subject...
There are more malnourished people in India than in all of sub-Saharan Africa, yet people always invoke an African example when wishing to "fight hunger". The same is true of, say, efforts to eradicate malaria, making AIDS drugs affordable, "fighting poverty", education initiatives, etc. I wonder why? Are they more photogenic? Does helping Africans somehow signal more altruism than helping, say, Cambodians? I wonder.
At least in the IT and call centre industries in the United States, "India" is synonymous with "cheap outsourcing bastards who are stealing our jobs." Quite a few customers are actively hostile towards India because they "don't speak English", "don't understand anything", and are "cheap outsourcing bastards who are stealing proper American jobs".
I absolutely hate this idiocy, but it's a pretty compelling case not to try and use India as an emotional hook...
I'd also assume that people are primed to the idea of "Africa = poor helpless children", so Africa is a much easier emotional hook.
It seems Lucid fox has a point. LW isn't that heavily dominated by US-based users; also, doesn't it seem wise for LW users to try and avoid such usages when thinking about difficult problems of ethics or instrumental rationality?
No, but if my example is going to evoke the opposite response in 10-20% of my audience, it's probably a bad choice :)
Conceded. I was interested in gauging emotional response, though, not an intellectual "shut up and multiply". The question is less one of math and more one of priorities, for me.
Taken at face value, the comments above are those of a sociopath. This is so not because this individual is willing to sacrifice others in exchange for improved odds of his own survival (all of us do that every day, just by living as well as we do in the Developed World), but because he revels in it. It is even more ominous that he sees such choices as being inevitable, presumably enduring, and worst of all, desirable or just. Just as worrisome is the lack of response to this pathology on this forum, so far.
The death and destruction of other human beings is a great evil and a profound injustice. It is also extremely costly to those who survive, because in the deaths of others we lose irreplaceable experience, the opportunity to learn and grow ourselves, and not infrequently, invaluable wisdom. Even the deaths of our enemies diminishes us, if for no other reason than that they will not live long enough to see that they were wrong, and we were right.
Such a mind that wrote the words above is of a cruel and dangerous kind, because it either fails, or is incapable of grasping the value that interaction and cooperation with others offers. It is a mind that is willing to kill children or adults it doesn't know, and is unlikely to know in a short and finite lifetime, because it does not understand that much, if not almost all of the growth and pleasure we have in life is a product of interacting with people other than ourselves, most of whom, if we are still young, we have not yet met. Such a mind is a small and fearful thing, because it cannot envision that 10, 20, 30, or 500 years hence, it may be the wisdom, the comfort, the ideas, or the very touch of a Romanian orphan or of a starving sub-Saharan African “child” from whom we derive great value, and perhaps even our own survival. One of the easiest and most effective ways to drive a man mad, and to completely break his will, is to isolate him from all contact with others. Not from contact with high intellects, saintly minds, or oracles of wisdom, but from simple human contact. Even the sociopath finds that absolutely intolerable, albeit for very different reasons than the sane man.
Cryonics has a blighted history of not just attracting a disproportionate number of sociopaths (psychopaths), but of tolerating their presence and even of providing them with succor. This has arguably been as costly to cryonics in terms of its internal health, and thus its growth and acceptance, as any external forces which have been put forward as thwarting it. Robert Nelson was the first high-profile sociopath of this kind in cryonics, and his legacy was highly visible: Chatsworth and the loss of all of the Cryonics Society of California's patients. Regrettably, there have been many others since.
It is a beauty of the Internet that it allows us to see what even the most sophisticated psychological testing can often not reveal: the face of the florid sociopath. Or perhaps, in this case, I should say the name of same, because putting a face to that name is another matter altogether.
I imagine that's the point of writing under a Voldemort persona.
A Dark Lord, no less!
Details?
I've seen a couple of cases of people disliking cryonics because they see its proponents as lacking sufficient gusto for life, but no cases of disliking or opposing cryonics because there are too many sociopaths associated with it.
For what it's worth, LessWrong has done a pretty good job of firming up exactly that perspective for me.
In fairness, I don't mind psychopathic behavior, and I'm still signing up. I've definitely developed a much lower opinion of cryonics advocacy since being here, though.
I'm curious as to what brought you to these conclusions. Can you explain further?
Well, that line captures a lot of it.
Eliezer's response was to link me to an XKCD comic.
So, thus far, the quality of discourse here has been sociopathic fictional characters and webcomics...
Can you expand on that claim? I find this claim to be very shocking.
http://lesswrong.com/lw/6vq/on_the_unpopularity_of_cryonics_life_sucks_but_at/4ozz I'll go ahead and keep this to one thread for my own sanity :)
To be absolutely clear, the commenter you are responding to is a troll and a fictional character.
I'm curious as to how you know "Voldemort" is a troll?
LW has a few role-playing characters identifiable by usernames, while others don't appear to be playing such games and don't use speaking usernames. So "Voldemort" is likely a fictional persona tailored to the name, rather than a handle chosen to describe a real person's character.
Correct, though I prefer to think of it as using another man's head to run a viable enough version of me so that I may participate in the rationalist discourse here.
Who are the other role-playing characters on LessWrong?
True evil geniuses don't reveal their intentions openly. (They also don't post this blog comment.)
That's what you'd like us to think.
LOL! You don't have to be a genius to be evil and, speaking from long, hard and repeated experience, you don't have to be a genius to do a great deal of harm - just being evil is plenty sufficient. This is especially true when the person who has ill intentions also has disproportionately greater knowledge than you do, or than you can easily get access to in the required time frame. The classic example has been the used car salesman. But better examples are probably the kinds of situations we all encounter from time to time when we get taken advantage of.
I don't know much about computers, so I necessarily rely on others. In an ideal world, I could take all the time necessary to make sure that the guy who is selling me hardware or software that I urgently need is giving me good advice and giving me the product that he says he is. But we don't live in an ideal world. Many people have this kind of problem with medical treatment choices, and for the same reasons. Another, related kind of situation, is where the elapsed time between the time you contract for a service and the time you get it is very long. Insurance and pension funds are examples. Lots of mischief there, and thus lots of regulation. It doesn't take evil geniuses in such situations to cause a lot of loss and harm.
And finally, while this may seem incredible, in my experience those few people who are both geniuses and evil usually tell you exactly what they are about. They may not say, "I intend to torture and kill you," but they very often will tell you with relish how they've tortured others, or about how they are willing to torture and kill others. The problem for me for way too long was not taking such people seriously. Turns out, they usually are serious; deadly serious.
Voldemort is the taken name of the main antagonist of the popular fantasy book series Harry Potter.
Eliezer Yudkowsky, one of the founders and main writers for lesswrong.com, also writes a Harry Potter fanfiction, called Harry Potter and the Methods of Rationality. (HPATMOR)
Because of this, several accounts on this forum are references to Harry Potter characters.
[edit] Vol de mort is also French for Flight of Death.
I feel obligated to point out that one of the links at the end of the OP was a link to Darwin's review of the last Harry Potter movie; he knows who Voldemort the character is.
I hate to repeat myself but let me ease your mind.
Despite the risk of cluttering I even made a post whose only function was to clear up ambiguity:
I thought it was more than probable that the vast majority of readers here would be familiar with me. Perhaps I expect too much of them. I do that sometimes, expect too much of people; it is arguably one of my great flaws.
When you say: "I thought it was more than probable the vast majority of readers here would be familiar with me," you imply a static readership for this list serve, or at least a monotonic one. I don't think either of those things would be good for this, or most other list serves with an agenda to change minds. New people will frequently be coming into the community and their very diversity may be one of their greatest values.
Nelson has also managed to get director Errol Morris to make a movie based on his version of cryonics history, which suggests that he may have the last word on his reputation, depending on how the film portrays him.
The ugly truth is that sometimes sociopaths are useful, though you are probably correct in stating that visible and prominent sociopaths that support cryonics hurt it.
(nods) Absolutely.
Unfortunately, I came installed with a fairly broken evaluator of chances, which tends to consistently evaluate the probability of X happening to person P differently if P = me than if it isn't, all else being equal... and it's frequently true that my evaluations with respect to other people are more accurate than those with respect to me.
So I consider judgments that depend on my evaluations of the likelihood (or likely consequences) of something happening to me vs. other people suspect, because applying them depends on data that I know are suspect (even by comparison to my other judgments).
But, sure, that consideration ought not apply to someone sufficiently rational that they judge themselves no less accurately than they judge others.
Then work towards the immortality of another. Dedicate your life to it.
That points out that people who think cryonics might work, but forgo it because of uncertainty about being biased towards themselves, seldom consider committing to forgo it for themselves yet provide it for another, and then reconsidering the issue; it is, at the same time, a discreet call to join the Death Eaters.
I can't help myself but upvote it.
(nods) Yup, that makes more sense.
Ah, even muggles can be sensible occasionally.
And a good thing too, since we're all we've got.
If you donated that to VillageReach, you'd be saving about 28 lives. If you donated that to GiveWell, you'd help them to find other charities that are similarly effective.
Apologies if I was unclear: For "GiveWell", please read "The charity most recommended by GiveWell right now, because VillageReach will probably eventually reach saturation and become non-ideal".
That's an interesting point. I am signed up for cryonics, but I'm actually rather ambivalent about my life. One major wrinkle is that, if cryonics does succeed, it would almost certainly have to be in a scenario where aging was solved by necessary precursor technologies. For me, a large chunk of my ambivalence is simply the anticipated decline in health as I age. By the same token, existential risks that might prevent me from, for instance, living from age 75 to age 85 tend not to worry me much.
It could also be a revealed preference that they don't like life enough to give their fate completely into the hands of unknown future people, or simply that they don't think the probability of successful cryonics + a good future is high enough to justify the costs.
Actually, when you put the argument for cryonics like this, it kind of sounds like a version of Pascal's Mugging. Perhaps we could call this: Pascal's Benefactor.
It's just Pascal's regular Wager.
Edit: I mean, this presentation makes it look like Pascal's Wager. Cryonics is too high-probability to actually be Pascal's Wager.
As MixedNuts pointed out, it's Pascal's Wager - yet you have a point. Putting the argument like this might cause the Pascal's Wager Fallacy Fallacy (which is still one of my favourite posts on this site).
Hm! Someone I know wants to write a post called "Pascal's Wager Fallacy Fallacy Fallacy", because (the claim is) that post doesn't correctly analyze the relevant social psychology involved when someone is afraid of being seen to commit to a very-possibly-indefensible-in-retrospect position, where they predict they'll be seen as having to-the-other-person-unjustifiably chosen a predictably immoral or stupid course of action, or something like that.
See this comment. (Disclaimer 1: it's mine. Disclaimer 2: my objection isn't really about the social psychology involved -- but I think that gives it more right to use the word "fallacy".)
Then it would make sense to call it "Not-taking-social-costs-into-consideration Fallacy" but not "Pascal's Wager Fallacy Fallacy Fallacy". That post wasn't really about the feasibility of cryonics, it only made claims about the logical validity of comparing the reasoning behind cryonics to Pascal's Wager and that's not something that can be affected by social psychology.