Life Extension versus Replacement

13 Julia_Galef 30 November 2011 01:47AM

Has anyone here ever addressed the question of why we should prefer

(1) Life Extension: Extend the life of an existing person by 100 years
to
(2) Replacement: Create a new person who will live for 100 years?


I've seen some discussion of how the utility of potential people fits into a utilitarian calculus. Eliezer has raised the Repugnant Conclusion, in which a world of 1,000,000 people with 1 util each comes out preferable to a world of 1,000 people with 100 utils each. He rejected it, he said, because he's an average utilitarian.
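The arithmetic behind that disagreement, as a quick sketch using only the numbers above:

```python
# Quick check of the Repugnant Conclusion numbers: the ranking of the two
# worlds flips depending on whether you aggregate utility by total or average.
worlds = {
    "1,000,000 people at 1 util each": [1] * 1_000_000,
    "1,000 people at 100 utils each": [100] * 1_000,
}

for name, utils in worlds.items():
    print(f"{name}: total = {sum(utils):,}, average = {sum(utils) / len(utils):g}")

# A total utilitarian prefers the big world (1,000,000 > 100,000 utils);
# an average utilitarian prefers the small one (100 > 1 utils per person).
```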

Fine. But in my thought experiment, average utility remains unchanged. So an average utilitarian should be indifferent between Life Extension and Replacement, right? Or is the harm done by depriving an existing person of life greater in magnitude than the benefit of creating a new life of equivalent utility? If so, why?

Or is the transhumanist indifferent between Life Extension and Replacement, simply judging that efforts toward radical life extension have a much greater expected value than trying to increase the birth rate?

 

(EDITED to make the thought experiment cleaner. Originally the options were: (1) Life Extension: Extend the life of an existing person for 800 years, and (2) Replacement: Create 10 new people who will each live for 80 years. But that version didn't maintain equal average utility.)


*Optional addendum: Gustaf Arrhenius is a philosopher who has written a lot about this subject; I found him via this comment by utilitymonster. Here's his 2008 paper, "Life Extension versus Replacement," which explores an amendment to utilitarianism that would allow us to prefer Life Extension. Essentially, we begin by comparing potential outcomes according to overall utility, as usual, but we then penalize outcomes if they make any existing people worse off.

So even though the overall utility of Life Extension is the same as Replacement, the latter is worse, because the existing person is worse off than he would have been in Life Extension. By contrast, the potential new person is not worse off in Life Extension, because in that scenario he doesn't exist, and non-existent people can't be harmed. Arrhenius goes through a whole list of problems with this moral theory, however, and by the end of the paper we aren't left with anything workable that would prioritize Life Extension over Replacement.
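For concreteness, here is a rough sketch of that amendment in code. Arrhenius states the view informally; the linear penalty and all of the numbers below are my own illustrative assumptions:

```python
# Sketch of the amended rule: rank outcomes by total utility, then subtract a
# penalty for each existing person who fares worse than they would in some
# alternative outcome. Merely potential people can't be harmed by not existing.

def score(outcome, alternatives, existing_people):
    """outcome, alternatives: dicts mapping person -> lifetime utility.
    A person absent from a dict does not exist in that outcome."""
    total = sum(outcome.values())
    penalty = 0.0
    for person in existing_people:
        if person not in outcome:
            continue
        rivals = [alt[person] for alt in alternatives if person in alt]
        if rivals:
            # Shortfall relative to this person's best alternative outcome.
            penalty += max(0.0, max(rivals) - outcome[person])
    return total - penalty

life_extension = {"A": 180}        # existing person A lives to 180
replacement = {"A": 80, "B": 100}  # A dies at 80; new person B lives 100 years
existing = {"A"}                   # B is merely potential, so B can't be harmed

print(score(life_extension, [replacement], existing))  # 180.0 (no one harmed)
print(score(replacement, [life_extension], existing))  # 80.0  (A is 100 worse off)
```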

 

Comment author: Vladimir_Nesov 29 November 2011 06:51:46AM * 0 points

Rationality alerts you when you have a false belief that’s making you worse off.

The framing of this section (not just the title) makes it look like an invitation to motivated skepticism, which is generally a bad idea.

Comment author: Julia_Galef 29 November 2011 07:03:17AM 1 point

My framing was meant to encourage you to disproportionately question beliefs which, if false, make you worse off. But motivated skepticism is disproportionately questioning beliefs that you want to be false. That's an important difference, I think.

Are you claiming that my version is also a form of motivated skepticism (perhaps a weaker form)? Or do you think my version's fine, but that I need to make it clearer in the text how what I'm encouraging is different from motivated skepticism?

How rationality can make your life more awesome

23 Julia_Galef 29 November 2011 01:23AM

I'm currently working with Lukeprog on a crash course in rationality. It's essentially a streamlined version of the Sequences, but one area we want to beef up is the answer to the question, "Why learn about rationality?"

I've gone through all of the previous threads I can find on this topic -- Reflections on rationality a year out, Personal benefits from rationality, What has rationality done for you?, and The benefits of rationality -- but most of the examples people give of rationality helping them are a little too general. People cite things like "I hold off on proposing solutions," or "I ask myself if there's a better way to be doing this."

To someone who's not already sold on this whole rationality thing, general statements like that won't mean very much. What I think we really need is a list of concrete examples of how the tools of epistemic rationality, as they're taught in the Sequences, can improve your health, your career, your love life, the causes you care about, your psychological well-being, and so on.

Below, my first attempt at doing just that. (I explain what rationality is, and how to practice it, elsewhere in the guide -- this section is just about benefits.) I'd appreciate feedback: Is it clear? Can you think of any other good examples in this vein? Would it be convincing to someone who isn't intrinsically interested in epistemic rationality for its own sake?

 

...

For some people, rationality is an end in itself – they value having true beliefs. But rationality’s also a powerful tool for achieving pretty much anything else you care about. Below, a survey of some of the ways that rationality can make your life more awesome:

Rationality alerts you when you have a false belief that’s making you worse off.

You’ve undoubtedly got beliefs about yourself – about what kind of job would be fulfilling for you, for example, or about what kind of person would be a good match for you. You’ve also got beliefs about the world – say, about what it’s like to be rich, or about “what men want” or “what women want.” And you’ve probably internalized some fundamental maxims, such as: When it’s true love, you’ll know. You should always follow your dreams. Natural things are better. Promiscuity reduces your worth as a person.

Those beliefs shape your decisions about your career, what to do when you’re sick, what kind of people you pursue romantically and how you pursue them, and how much effort to put into making yourself richer, more attractive, more skilled (and skilled in what?), more accommodating, or more aggressive.

But where did these beliefs come from? The startling truth is that many of our beliefs became lodged in our psyches rather haphazardly: we read or heard them somewhere – in books, on TV, in movies – or perhaps we generalized from one or two real-life examples.

Rationality trains you to notice your beliefs, many of which you may not even be consciously aware of, and to ask yourself: Where did those beliefs come from, and do I have good reason to believe they’re accurate? How would I know if they’re false? Have I considered any alternative hypotheses?

Rationality helps you get the information you need.

Sometimes you need to figure out the answer to a question in order to make an important decision about, say, your health, or your career, or the causes that matter to you. Studying rationality reveals that some ways of investigating those questions are much more likely to yield the truth than others. Just a few examples:

“How should I run my business?” If you’re looking to launch or manage a company, you’ll have a huge leg up over your competition if you’re able to rationally determine how well your product works, or whether it meets a need, or what marketing strategies are effective.

“What career should I go into?” Before committing yourself to a career path, you’ll probably want to learn about the experiences of people working in that field. But a rationalist also knows to ask herself, “Is my sample biased?” If you’re focused on a few famous success stories from the field, that doesn’t tell you very much about what a typical job is like, or what your odds are of making it in that field.

It’s also an unfortunate truth that not every field uses reliable methods, and so not every field produces true or useful work. If that matters to you, you’ll need the tools of rationality to evaluate the fields you’re considering working in. Fields whose methods are controversial include psychotherapy, nutrition science, economics, sociology, management consulting, string theory, and alternative medicine.

“How can I help the world?” Many people invest huge amounts of money, time, and effort in causes they care about. But if you want to ensure that your investment makes a difference, you need to be able to evaluate the relevant evidence. How serious of a problem is, say, climate change, or animal welfare, or globalization? How effective is lobbying, or marching, or boycotting? How far do your contributions go at charity X versus charity Y?

Rationality teaches you how to evaluate advice.

Learning about rationality, and how widespread irrationality is, sparks an important realization: You can’t assume other people have good reasons for the things they believe. And that means you need to know how to evaluate other people’s opinions, not just based on how plausible their opinions seem, but based on the reliability of the methods they used to form those opinions.

So when you get business advice, you need to ask yourself: What evidence does she have for that advice, and are her circumstances similar enough to mine for it to apply? The same is true when a friend swears by some particular remedy for acne, or migraines, or cancer. Is he repeating a recommendation made by multiple doctors? Or did he try it once and happen to get better? What kind of evidence is reliable?

In many cases, people can’t articulate exactly how they’ve arrived at a particular belief; it’s just the product of various experiences they’ve had and things they’ve heard or read. But once you’ve studied rationality, you’ll recognize the signs of people who are more likely to have accurate beliefs: People who adjust their level of confidence to the evidence for a claim; people who actually change their minds when presented with new evidence; people who seem interested in getting the right answer rather than in defending their own egos.

Rationality saves you from bad decisions. 

Knowing about the heuristics your brain uses and how they can go wrong means you can escape some very common, and often very serious, decision-making traps.

For example, people often stick with their original career path or business plan for years after the evidence has made clear that it was a mistake, because they don’t want their previous investment to be wasted. That’s thanks to the sunk cost fallacy. Relatedly, people often allow cognitive dissonance to convince them that things aren’t so bad, because the prospect of changing course is too upsetting.
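To make the sunk-cost point concrete, a stripped-down sketch (all figures invented):

```python
# Toy sunk-cost decision: the 50,000 already spent on the original plan is
# gone either way, so only future costs and payoffs should drive the choice.
sunk_cost = 50_000  # already spent; identical under both options

stay_the_course = {"future_cost": 40_000, "payoff": 60_000}
change_plans = {"future_cost": 10_000, "payoff": 45_000}

def net_future_value(option):
    return option["payoff"] - option["future_cost"]

print(net_future_value(stay_the_course))  # 20,000
print(net_future_value(change_plans))     # 35,000 -- better, despite "wasting" the 50k
```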

And in many major life decisions, such as choosing a career, people envision one way things could play out (“I’m going to run my own lab, and live in a big city…”) – but they don’t spend much time thinking about how probable that outcome is, or what the other probable outcomes are. That’s the narrative fallacy: situations imagined in high detail seem more plausible, regardless of how probable they actually are.
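A toy version of the remedy – weighting each imagined outcome by its probability rather than by its vividness (all numbers invented):

```python
# Expected value across all the plausible outcomes, not just the vivid one.
outcomes = [
    # (description, probability, how good it would be on a 0-100 scale)
    ("run my own lab in a big city", 0.05, 95),
    ("string of temporary postdocs", 0.45, 55),
    ("leave research for industry", 0.50, 70),
]

expected_value = sum(p * quality for _, p, quality in outcomes)
print(expected_value)  # 64.5 -- a long way from the vivid 95-point scenario
```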

Rationality trains you to step back from your emotions so that they don’t cloud your judgment.

Depression, anxiety, rage, envy, and other unpleasant and self-destructive emotions tend to be fueled by what cognitive therapy calls “cognitive distortions,” irrationalities in your thinking such as jumping to conclusions based on limited evidence; focusing selectively on negatives; all-or-nothing thinking; and blaming yourself, or someone else, without reason.

Rationality breaks your habit of automatically trusting your instinctive, emotional judgments, encouraging you instead to notice the beliefs underlying your emotions and ask yourself whether those beliefs are justified.

It also trains you to notice when your beliefs about the world are being colored by what you want, or don’t want, to be true. Beliefs about your own abilities, about the motives of other people, about the likely consequences of your behavior, about what happens after you die, can be emotionally fraught. But a solid background in rationality keeps you from flinching away from the truth – about your situation, or yourself – when learning the truth can help you change it.

 

Comment author: [deleted] 26 November 2011 09:08:55PM 3 points

Yay! Word of God on the issue! (Warning: TvTropes). Good to know I wasn't too far off-base.

I can see how g and h can be considered equivalent via the emotions -> goals link. In fact, I would assume that would also make a and b pretty much equivalent, as well as c and d, e and f, etc.

Comment author: Julia_Galef 26 November 2011 10:36:15PM * 9 points

Incidentally, the filmmaker didn't capture my slide with the diagram of the revised model of rationality and emotions in ideal human* decision-making, so I've uploaded it.

The Straw Vulcan model of ideal human* decision-making: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-00-pm.png

My revised model of ideal human* decision-making: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-14-pm.png

*I realize now that I need this modifier, at least on Less Wrong!
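A toy rendering of the contrast between the two models, for anyone who prefers code to diagrams (the slides only show diagrams; the weighting scheme and every number below are illustrative assumptions):

```python
def system1_score(option):
    """Stand-in for an intuitive gut reaction, on a 0-10 scale."""
    gut_reactions = {"School X": 8, "School Y": 4}  # invented numbers
    return gut_reactions[option]

def system2_score(option):
    """Stand-in for an explicit pros-and-cons analysis, on a 0-10 scale."""
    analysis = {"School X": 5, "School Y": 7}  # invented numbers
    return analysis[option]

def straw_vulcan_decide(options):
    # Straw Vulcan model: deliberate analysis only; intuition is discarded.
    return max(options, key=system2_score)

def revised_decide(options, gut_weight=0.4):
    # Revised model: System 2 still runs the decision, but treats System 1's
    # reaction as one more piece of evidence to weigh.
    def combined(option):
        return gut_weight * system1_score(option) + (1 - gut_weight) * system2_score(option)
    return max(options, key=combined)

options = ["School X", "School Y"]
print(straw_vulcan_decide(options))  # School Y (gut reaction ignored)
print(revised_decide(options))       # School X (6.2 vs 5.8 once intuition counts)
```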

Comment author: Vladimir_Nesov 26 November 2011 07:43:23AM * 7 points

In the talk, rationality is positioned as something that decides which of System 1 or System 2 should be used in a particular situation. But that's straw System 2! What's actually happening is that your System 2 is smart enough to get input from System 1 and update on it appropriately (or conversely, as in taking ideas seriously on an intuitive level).

Comment author: Julia_Galef 26 November 2011 10:14:49PM * 3 points

Great point – that's true in many cases, such as when you're trying to decide what school to go to: you make the decision deliberatively, but take into account the data from your intuitive reactions to the schools.

But in other cases, such as chess-playing, aren't you mainly just deciding based on your System 1 judgments? (Admittedly I'm no chess player; that's just my impression of how it works.)

I agree you need to use System 2 for your meta-judgment about which system to use in a particular context, but once you've made that meta-judgment, I think there are some cases in which you make the actual judgment based on System 1.

Am I correctly understanding your point?

Comment author: [deleted] 26 November 2011 07:25:16PM 2 points

The most obvious pro of using "intuition" and "deductive reasoning" is that it would be a smaller cognitive leap for the audience to follow. For example, when you say "System 1," the audience has to translate: System 1 -> intuition -> one of the two types of reasoning.

Just saying "intuition" removes the need for this extra step.

A possible pro of using "system 1" and "system 2" is that it might allow the audience to distance themselves from any emotional reactions they might have to the ideas of intuition and deduction.

Comment author: Julia_Galef 26 November 2011 10:00:29PM 3 points

Yup, I went through the same reasoning myself -- I decided on "system 1" and "system 2" for their neutral tone, and also because they're Stanovich's preferred terms.

Comment author: lessdazed 26 November 2011 08:28:02PM * 3 points

Emotions are clearly necessary for forming the goals, rationality is simply lame without them.

What does this mean?

a) Emotions are logically necessary for forming goals, rational beings are incapacitated without emotions.
b) Emotions are logically necessary for forming goals, rational beings are incapacitated without goals.
c) Emotions are logically necessary for forming goals, rationality has no normative value to a rational being without emotions.
d) Emotions are logically necessary for forming goals, rationality has no normative value to a rational being without goals.
e) Emotions are necessary for forming goals among humans, rational humans are incapacitated without emotions.
f) Emotions are necessary for forming goals among humans, rational humans are incapacitated without goals.
g) Emotions are necessary for forming goals among humans, rationality has no normative value to humans without emotions.
h) Emotions are necessary for forming goals among humans, rationality has no normative value to humans without goals.
i) (Other.)

Comment author: Julia_Galef 26 November 2011 08:52:24PM 5 points

Good question. My intended meaning was closest to (h). (Although isn't (g) pretty much equivalent?)

Comment author: Julia_Galef 01 April 2011 08:53:58PM 8 points

Hey, thanks for the shoutout! @SilasBarta -- Yeah, I first encountered the mirror paradox in G&R, but I ended up explaining it differently than Drescher did, drawing on Gardner as well as some discussions with a friend, so I didn't end up quoting Drescher after all. I do like his explanation, though.

In response to Disguised Queries
Comment author: Julia_Galef 12 March 2010 08:01:31PM * 12 points

This was a really clarifying post for me. I had gotten to the point of noticing that "What is X?" debates were really just debates over the definition of X, but I hadn't yet taken the next step of asking why people care about how X is defined.

I think another great example of a disguised query is the recurring debate, "Is this art?" People have really widely varying definitions of "art" (e.g., some people's definition includes "aesthetically interesting," other people's definition merely requires "conceptually interesting") -- and in one sense, once both parties explain how they use the word "art," the debate should resolve pretty quickly.

But of course, since it's a disguised query, the question "Is this art?" should really be followed up with the question "Why does it matter?" As far as I can tell, the disguised query in this case is usually "does this deserve to be taken seriously?" which can be translated in practice into, "Is this the sort of thing that deserves to be exhibited in a gallery?" And that's certainly a real, non-semantic debate. But we can have that debate without ever needing to decide whether to apply the label "art" to something -- in fact, I think the debate would be much clearer if we left the word "art" out of it altogether.

I've elaborated on this topic on Rationally Speaking: http://rationallyspeaking.blogspot.com/2010/03/is-this-art-and-why-thats-wrong.html -- and I cite this LW post there. Thanks, Eliezer.

Comment author: Julia_Galef 02 January 2010 11:46:04PM 4 points

Eliezer, you wrote:

But when you're really done, you'll know you're done. Dissolving the question is an unmistakable feeling...

I'm not so sure. There have been a number of mysteries throughout history that were resolved by science, but people didn't immediately feel as if the scientific explanation really resolved the question, even though it does to us now -- the explanation of light as electromagnetic waves, for example.

I frequently find it tricky to determine whether a feeling of dissatisfaction indicates that I haven't gotten to the root of a problem, or whether it indicates that I just need time to become comfortable with the explanation. For instance, it feels to me like my moral intuitions are objectively correct rules about how people should and shouldn't behave. Yet my reason tells me that they are simply emotional reactions built into my brain by some combination of biology and conditioning. I've gotten somewhat more used to that fact over time, but it certainly didn't feel at first like it successfully explained why I feel that X is "wrong" or Y is "right."
