jimrandomh comments on Costs to (potentially) eternal life - Less Wrong

8 Post author: bgrah449 21 January 2010 09:46PM


Comment author: jimrandomh 23 January 2010 07:54:43PM, 5 points

Imagine Omega came to you and said, "Cryonics will work; it will be possible for you to be resurrected and have the choice between a simulation and a new healthy body, and I can guarantee you will live for at least 100,000 years after that. However, for reasons I won't divulge, your surviving to experience this is wholly contingent on you killing the next three people you see."

This offer could have positive expected value, measured in lives, if, for example, you were a doctor who expected to save more than three lives over the next 100,000 years. However, no matter what any decision theory or expected-utility calculation says, Omega's offer falls into several reference classes that mean it cannot be accepted without formal safeguards.

First, it involves trading for a resource (years of life) in an amount several orders of magnitude beyond what we normally deal with. An entity that accepts offers in that class is likely to be a paperclipper. Second, it involves a known immoral act - killing people, as opposed to failing to save them. And third, it is so implausible that confusion, deception, brain damage, or misprogramming is more likely than the offer being valid. Omega can remove statements from this last reference class in thought experiments, but no entity can do so in real life.
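The expected-value comparison above, together with the implausibility discount from the third reference class, can be sketched as follows. This is my illustration, not the commenter's: the function name and the assumption that the three killings are a certain cost (while the benefit accrues only if the offer is genuine) are mine.

```python
# Minimal sketch: expected net change in lives if the offer is accepted.
# Assumption (mine, not the commenter's): the cost of three killings is
# certain, while the 100,000-year benefit accrues only with probability
# p_offer_valid -- the chance the offer is genuine rather than
# confusion, deception, brain damage, or misprogramming.

def expected_net_lives(p_offer_valid: float, lives_saved: float,
                       lives_taken: float = 3.0) -> float:
    """Expected net change in lives from accepting the offer."""
    return p_offer_valid * lives_saved - lives_taken

# A doctor who would save 4 lives over 100,000 years, if the offer is real:
print(expected_net_lives(1.0, 4))   # 1.0: positive only if Omega is trusted
print(expected_net_lives(1e-9, 4))  # ~-3.0: the implausibility dominates
```

The second call is the comment's third point in numbers: once a realistic prior on the offer's validity is applied, the certain cost swamps the contingent benefit, regardless of how large `lives_saved` is within ordinary bounds.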