HungryHobo comments on Open thread, Nov. 23 - Nov. 29, 2015 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Do transhumanist types tend to value years of life lived beyond their current life expectancy linearly (i.e. if they'd pay a maximum of exactly n to live one extra year, would they also be willing to pay a maximum of exactly 100n to live 100 extra years)?
If so, the cost-effectiveness of cryonics (in terms of added life-years) could be compared with the cost-effectiveness of other implementable health interventions that would-be cryonicists are on the fence about. What's the marginal disutility a given transhumanist would incur from forcing themselves to eat a bit more healthily, and by how much would that extend their life expectancy? What about exercise? Or going to the doctor over that odd itch in their throat that they'd like to ignore for just one more day?
The point I'm coming to is that if I want my friends to live longer lives (or gain more QALYs, or whatever) in expectation, it's probably better for me to pester them about certain lifestyle choices and preventive interventions than to pester them to sign up for cryonics. (By the same token, I seem to recall Hanson or Yudkowsky once pointing out that, relative to its cost, cryonics would be expected to add more years to one's life than open-heart surgery (?), or something like that.)
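The comparison above can be sketched as a cost-per-expected-life-year calculation. To be clear, every number below is a made-up placeholder for the sake of illustration, not an estimate anyone in the thread endorses:

```python
# Hedged sketch: rank interventions by cost per expected added life-year.
# All costs and life-year figures are hypothetical placeholders.
interventions = {
    # name: (total_cost_usd, expected_added_life_years)
    "cryonics": (80_000, 2.0),       # small success probability x huge payoff
    "exercise_habit": (2_000, 3.0),
    "diet_change": (1_500, 1.5),
    "doctor_visit": (200, 0.1),
}

# Sort by cost per expected life-year, cheapest first.
for name, (cost, years) in sorted(
    interventions.items(), key=lambda kv: kv[1][0] / kv[1][1]
):
    print(f"{name}: ${cost / years:,.0f} per expected life-year")
```

With numbers like these, pestering about exercise beats pestering about cryonics; the real question is what the true inputs are.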
The levels of uncertainty make this really hard to work with.
On the one hand, perhaps it works and the person gets to live for billions of deeply fulfilling years, until the heat death of the universe, experiencing 10x subjective time, for trillions of QALYs.
Or perhaps they get awoken into a world where life extension is possible but legally limited to a couple hundred years.
Or perhaps they get awoken into a world where they're considered on the same moral level as lab rats and millions of copies of their mind get to suffer in countless interesting ways.
So you end up with a very, very wide range of values, from negative to trillions of QALYs, with no way to assign reasonable probabilities to anything in that range, which makes cost-effectiveness calculations a little less convincing.
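A quick sketch of why this breaks the calculation: with outcomes this far apart, the expected value swings wildly with the (essentially unknowable) probabilities you assign. The scenario values and probabilities below are purely illustrative:

```python
# Hedged sketch: expected QALYs from cryonics under the scenarios above.
# All QALY values and probabilities are illustrative, not estimates.
scenarios = [
    # (label, QALYs if this outcome occurs)
    ("revival never happens",        0),
    ("bounded life extension",       200),
    ("astronomical payoff",          1_000_000_000),
    ("copies suffer (lab-rat case)", -1_000_000),
]

def expected_qalys(probs):
    """Expected QALYs given one probability per scenario (must sum to 1)."""
    assert abs(sum(probs) - 1.0) < 1e-9
    return sum(p * q for (_, q), p in zip(scenarios, probs))

# Two probability assignments that might each seem "reasonable" to someone;
# the result is dominated by whatever tiny weight lands on the extreme tails.
print(expected_qalys([0.90, 0.07, 0.02, 0.01]))
print(expected_qalys([0.97, 0.02, 0.005, 0.005]))
```

The answer is almost entirely determined by the probability mass on the extreme outcomes, which is exactly the part nobody can estimate.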