ChrisHallquist comments on Why I haven't signed up for cryonics - Less Wrong

Post author: Swimmer963 12 January 2014 05:16AM


Comment author: ChrisHallquist 12 January 2014 08:36:55AM 9 points

I've had thoughts along similar lines. But it seems like there's a "be consistent about your selfishness" principle at work here. In particular, if...

  • ...you are generally willing to spend $X/month on something that has a significant chance of bringing you a very large benefit, like saving your life...
  • ...where $X/month is the cost of being signed up for cryonics (organization membership + life insurance)...
  • ...and you think cryonics has a significant chance of working...

It seems kind of inconsistent to not be signed up for cryonics.

(Caveat: not sure I can make consistent sense of my preferences involving far-future versions of "me".)

Comment author: RobbBB 12 January 2014 10:44:50AM 13 points

Consistency is a good thing, but it can be outweighed by other considerations. If my choices are between consistently giving the answer '2 + 2 = 5' on a test or sometimes giving '2 + 2 = 5' and other times '2 + 2 = 4', the latter is probably preferable. Kaj's argument is that if your core goal is EA, then spending hundreds of thousands of dollars on cryonics or heart surgery is the normatively wrong answer. Getting the wrong answer more often is worse than getting it less often, even when the price is a bit of inconsistency or doing-the-right-thing-for-the-wrong-reasons. When large numbers of lives are at stake, feeling satisfied with the cohesiveness of your personal narrative or code of conduct matters mostly only to the extent that it serves the EA goal.

If you think saving non-human animals is the most important thing you could be doing, then it may be that you should become a vegan. But it's certainly not the case that if you find it too difficult to become a vegan, you should therefore stop trying to promote animal rights. Your original goal should still matter (if it ever mattered in the first place) regardless of how awkward it is for you to explain and justify your behavioral inconsistency to your peers.

Comment author: Kaj_Sotala 12 January 2014 12:22:23PM 6 points

Kaj's argument is that if your core goal is EA, then spending hundreds of thousands of dollars on cryonics or heart surgery is the normatively wrong answer. Getting the wrong answer more often is worse than getting it less often, even when the price is a bit of inconsistency or doing-the-right-thing-for-the-wrong-reasons. When large numbers of lives are at stake, feeling satisfied with the cohesiveness of your personal narrative or code of conduct matters mostly only to the extent that it serves the EA goal.

I endorse this summary.

Comment author: [deleted] 14 January 2014 05:43:21PM -1 points

... and you think cryonics has a significant chance of working...

0.23% is not a significant chance.
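To make the disagreement concrete, here is a rough expected-value sketch in Python. Only the 0.23% figure comes from the comment above; the monthly cost, payment horizon, and dollar valuation of a saved life are illustrative assumptions, not estimates anyone in the thread endorsed.

```python
# Rough expected-value sketch for signing up for cryonics.
# Every input except p_works is an illustrative assumption.
p_works = 0.0023           # probability cryonics works (figure from the comment)
monthly_cost = 50.0        # assumed monthly cost (membership + life insurance)
years_paying = 40          # assumed number of years paying premiums
value_of_life = 5_000_000  # assumed dollar valuation of a saved life

total_cost = monthly_cost * 12 * years_paying      # $24,000
expected_benefit = p_works * value_of_life         # $11,500

print(f"total cost:       ${total_cost:,.0f}")
print(f"expected benefit: ${expected_benefit:,.0f}")
print("positive expected value?", expected_benefit > total_cost)
```

Under these particular assumptions the expected benefit falls short of the cost, which is the [deleted] commenter's point; a higher probability estimate or life valuation flips the conclusion, which is Hallquist's.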

Comment author: Kaj_Sotala 12 January 2014 11:56:14AM 1 point

While I don't think that there's anything wrong with preferring to be consistent about one's selfishness, I think it's just that: a preference.

The common argument seems to be that you should be consistent about your preferences because that way you'll maximize your expected utility. But that's tautological: expected utility maximization only makes sense if you have preferences that obey the von Neumann-Morgenstern axioms, and you furthermore have a meta-preference for maximizing the satisfaction of your preferences in the sense defined by the math of the axioms. (I've written a partial post about this, which I can try to finish if people are interested.)
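For readers unfamiliar with the formalism, a minimal sketch of what "maximizing expected utility" cashes out to: score each available action by the probability-weighted sum of its outcomes' utilities, then pick the highest-scoring action. The actions, probabilities, and utilities below are made-up illustrations, not anyone's actual estimates.

```python
# Expected-utility maximization over hypothetical actions.
# Each action maps to a list of (probability, utility) outcome pairs.
actions = {
    "sign_up": [(0.0023, 1000.0), (0.9977, -10.0)],  # works / money wasted
    "abstain": [(1.0, 0.0)],                          # status quo
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities for one action's outcomes."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
for name, outcomes in actions.items():
    print(f"{name}: {expected_utility(outcomes):.3f}")
print("best:", best)
```

The tautology Kaj points at lives in the `max` step: calling the highest-scoring action "best" already presupposes that you want your choices ranked by this particular probability-weighted sum.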

For some cases, I do have such meta-preferences: I am interested in the maximization of my altruistic preferences. But I'm not that interested in the maximization of my other preferences. Another way of saying this would be that it is the altruistic faction in my brain which controls the verbal/explicit long-term planning and tends to have goals that would ordinarily be termed "preferences", while the egoist faction is more motivated by just doing whatever feels good at the moment and isn't that interested in the long-term consequences.

Comment author: Alejandro1 12 January 2014 05:56:39PM 7 points

Another way of putting this: If you divide the things you do between "selfish" and "altruistic" things, then it seems to make sense to sign up for cryonics as an efficient part of the "selfish" component. But this division does not carve reality at the joints; it is more faithful to how the brain actually works to divide your decisions into "Near mode" and "Far mode" ones. Then effective altruism wins over cryonics under Far considerations, and neither is on the radar under Near ones.

Comment author: James_Miller 12 January 2014 08:55:00PM 2 points

A huge number of people save money for a retirement that won't start for over a decade. For them, both retirement planning and cryonics fall under the selfish, far mode.

Comment author: Alejandro1 12 January 2014 09:41:54PM 1 point

That is true. On the other hand, saving for retirement is a common, even default, thing to do in our society. If it weren't, then I suspect many of those who currently do it wouldn't, for reasons similar to those for which they don't sign up for cryonics.

Comment author: Jiro 13 January 2014 12:34:13AM 1 point

I suspect most people's reason for not signing up for cryonics amounts to "I don't think it has a big enough chance of working, and paying money for such a small chance amounts to Pascal's Mugging." I don't see how that would apply to retirement: would people in such a society seriously think they have only a very small chance of surviving to retirement age?