
Eliezer_Yudkowsky comments on A Much Better Life? - Less Wrong

62 points. Post author: Psychohistorian 03 February 2010 08:01PM


Comments (173)

You are viewing a single comment's thread.

Comment author: Eliezer_Yudkowsky 05 February 2010 08:23:15PM 16 points

Frankly, you don't strike me as genuinely open to persuasion, but for the sake of any future readers I'll note the following:

1) I expect cryonics patients to actually be revived by artificial superintelligences subsequent to an intelligence explosion. My primary concern for making sure that cryonicists get revived is Friendly AI.

2) If this were not the case, I'd be concerned about the people running the cryonics companies. The cryonicists that I have met are not in it for the money. Cryonics is not an easy job or a wealthy profession! The cryonicists I have met are in it because they don't want people to die. They are concerned with choosing successors with the same attitude, first because they don't want people to die, and second because they expect their own revivals to be in their hands someday.

Comment author: shiftedShapes 05 February 2010 10:39:38PM * 1 point

So you are willing to rely on the friendliness and competence of the cryonicists that you have met (at least to serve as stewards in the interim between your death and the emergence of an FAI).

Well, that is a personal judgment call for you to make.

You have got me all wrong. Really, I was raising the question here so that you could give me a stronger argument and put my doubts to rest, precisely because I am interested in cryonics and do want to live forever. I posted in the hopes that I would be persuaded. Unfortunately, your personal faith in the individuals that you have met is not transferable.

Comment author: byrnema 06 February 2010 12:16:32AM * 5 points

If you read through Alcor's website, you'll see that they are careful not to make any promises and want their clients to be well-informed about the lack of any guarantees -- this points to good intentions.

How convinced do you need to be to pay $25 a month? (I'm using the $300/year quote.)

If you die soon, you won't have paid so much. If you don't die soon, you can consider that you're locking in a cheaper price for an option that might get more expensive once the science/culture is more established.

In 15 years, they might discover something that makes cryonics unlikely to work -- and you might regret your $4,500 investment. Or they might revive a cryonically frozen puppy, in which case you would have been pleased that you were 'cryonically covered' the whole time, and possibly pleased that you funded their research. Or a better cryonics company might come along, you might become better informed, and you can always switch.
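
A quick sanity check of those figures, assuming the $4,500 is simply the $300/year quote held flat over the 15-year horizon:

\[
\$25/\text{month} \times 12\ \text{months} = \$300/\text{year}, \qquad \$300/\text{year} \times 15\ \text{years} = \$4{,}500
\]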

If you like the idea of it -- and you seem to -- why wouldn't you participate at this early stage, even when things are uncertain?

Comment author: shiftedShapes 08 February 2010 04:47:09AM -2 points

I need to be convinced that cryonics is better than nothing, and quite frankly I'm not.

For now I will stick to maintaining my good health through proven methods, maximizing my chances of living to see future advances in medicine. That seems to be the highest-probability method of living practically forever, right? (And no, I'm not trying to create a false dilemma here; I know I could do both.)

Comment author: komponisto 08 February 2010 05:14:28AM 1 point

If cryonics were free and somebody else did all the work, I'm assuming you wouldn't object to being signed up. So how cheap (in terms of both effort and money) would cryonics have to be in order to make it worthwhile for you?

Comment author: shiftedShapes 08 February 2010 09:27:46PM -1 points

Yeah, for free would be fine.

At the level of confidence I have in it now, I would not contribute any money, except maybe a $10 annual donation, because I think it is a good cause.

If I were very rich, I might contribute a large amount of money to cryonics research, although I think I would rather spend it on AGI or nanotech basic science.

Comment author: wedrifid 07 February 2010 01:29:09AM * 18 points

Rest In Peace

1988 - 2016

He died signalling his cynical worldliness and sophistication to his peers.

Comment author: Eliezer_Yudkowsky 07 February 2010 02:52:26AM 6 points

It's at times like this that I wish Less Wrong gave out a limited number of Mega Upvotes so I could upvote this 10 points instead of just 1.

Comment author: Will_Newsome 31 July 2011 06:36:36PM 3 points

It'd be best if names were attached to these hypothetical Mega Upvotes. You don't normally want people to see your voting patterns, but if you're upsetting the comment karma balance that much, then it'd be best to have a name attached. Two kinds of currency would be clunky. There are other considerations that I'm too lazy to list out, but generally they somewhat favor having names attached.

Comment author: Will_Newsome 31 July 2011 06:02:38PM * -1 points

I have a rather straightforward argument---well, I have an idea that I completely stole from someone else who might be significantly less confident of it than I am---anyway, I have an argument that there is a strong possibility, let's call it 30% for kicks, that conditional on yer typical FAI-FOOM-outwards-at-lightspeed singularity, all humans who have died can be revived with very high accuracy. (In fact it can also work if FAI isn't developed and human technology completely stagnates, but that scenario makes it less obvious.) This argument does not depend on the possibility of magic powers (e.g. questionably precise simulations by Friendly "counterfactual" quantum sibling branches); it applies to humans who were cremated, and it also applies to humans who lived before there was recorded history. Basically, there doesn't have to be much of any local information around come FOOM.

Again, this argument is disjunctive with the unknown big angelic powers argument, and doesn't necessitate aid from quantum siblings.

You've done a lot of promotion of cryonics. There are good memetic-engineering reasons for that. But are you really very confident that cryonics is necessary for an FAI to revive arbitrary dead human beings with 'lots' of detail? If not, is your lack of confidence taken into account in your seemingly confident promotion of cryonics for its own sake, rather than just as a memetic strategy to get folk into the whole 'taking transhumanism/singularitarianism seriously' clique?

Comment author: Zack_M_Davis 31 July 2011 06:13:37PM 6 points

I have a rather straightforward argument [...] anyway, I have an argument that there is a strong possibility [...] This argument does not depend on [...] Again, this argument is disjunctive with [...]

And that argument is ... ?

Comment author: [deleted] 31 July 2011 06:20:05PM 2 points

How foolish of you to ask. You're supposed to revise your probability simply based on Will's claim that he has an argument. That is how rational agreement works.

Comment author: Will_Newsome 31 July 2011 06:26:39PM * 3 points

Actually, rational agreement for humans involves betting. I'd like to find a way to bet on this one. AI-box style.