Comment author: ShardPhoenix 30 November 2011 04:01:19AM 20 points

A currently living person doesn't want to die, but a potentially living person doesn't yet want to live, so there's an asymmetry between the two scenarios.

Comment author: Julia_Galef 30 November 2011 05:22:18PM 3 points

I agree, and that's why my intuition pushes me towards Life Extension. But how does that fact fit into utilitarianism? And if you're diverging from utilitarianism, what are you replacing it with?

Comment author: shokwave 29 November 2011 09:43:23AM 20 points

If you want impact, use the narrative fallacy. What I mean is, use all of the other biases and fallacies you listed - tell a story about John, a guy who met a cool scientist when he was in primary school, and now his life goal is to be a scientist. He decides to work on global warming because 'what could be more important than this issue?' He expects to live in the city, be the head of a big lab... But he's not very good at global warming science (maybe he's not very good at research?), and he doesn't notice that the advice his colleagues give him isn't helping. So he sticks to his guns because he's already got a degree in global warming science, but he's always stressing about not having a job...

And so on.

And then rewind. John discovers rationality as a young adult and becomes John-prime. Compare John to John-prime, whose rationality training allows him to recognise the availability bias at work on his dream of being a scientist. Since scholarship is a virtue, he researches, interviews... and discovers that politics is a much better fit! His rationality tells him that the most important thing is improving quality of life, not global warming or power, so he donates to third-world charities and makes sure that when he runs for political office, he does so on a platform of improving social welfare and medical access. His rationality lets him evaluate advice-givers, so he sees through most of the self-serving advice - and when he finds a mentor who seems genuine, he sticks with that mentor, improving his success in politics...

And so on.

(And then the punchline: explain why this story makes the audience feel like rationality is important with a description of the narrative fallacy!)

Comment author: Julia_Galef 30 November 2011 05:10:41AM 2 points

Excellent.

Comment author: [deleted] 30 November 2011 02:09:08AM 0 points

Why does one have to be better than the other?

In response to comment by [deleted] on Life Extension versus Replacement
Comment author: Julia_Galef 30 November 2011 03:23:51AM 3 points

One doesn't have to be better than the other; whether one is better is exactly what's in dispute.

I think making this comparison is important philosophically, because of the implications our answer has for other utilitarian dilemmas, but it's also important practically, in shaping our decisions about how to allocate our efforts to better the world.

Comment author: steven0461 30 November 2011 02:11:05AM 2 points

I really need to fix my blog archive, but I discussed this in the post at the top of this page.

Comment author: Julia_Galef 30 November 2011 03:19:24AM 0 points

Thanks -- but if I'm reading your post correctly, your arguments hinge on the utility experienced in Life Extension being greater than that in Replacement. Is that right? If I stipulate that the utility is equal, would your answer change?

Comment author: [deleted] 30 November 2011 02:20:18AM 2 points

Please don't do that. OP's comment doesn't make any sense now.

In response to comment by [deleted] on Life Extension versus Replacement
Comment author: Julia_Galef 30 November 2011 02:52:41AM 1 point

Ah, true! I edited it again to include the original setup, so that people will know what Logos01 and drethelin are referring to.

Comment author: drethelin 30 November 2011 02:07:29AM 8 points

Response a) My life gets better with each year I live. I learn new things and make new friends. Two people who each live 12 years will not have had the same amount of happiness, combined, as I will have had by my 24th birthday. I see no reason why the same should not hold for even longer lifespans.
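
A minimal sketch of the arithmetic behind Response a), under the purely illustrative assumption (mine, not anything drethelin commits to) that happiness in year t of a life is proportional to t, so later years count for more:

```python
# Illustrative assumption: happiness in year t of a life equals t,
# so lifetime happiness for n years is 1 + 2 + ... + n = n(n+1)/2.
def lifetime_happiness(years: int) -> int:
    return years * (years + 1) // 2

print(lifetime_happiness(24))      # 300: one 24-year life
print(2 * lifetime_happiness(12))  # 156: two 12-year lives, summed
```

Any increasing per-year happiness curve gives the same qualitative result: one long life totals more happiness than several shorter lives of the same combined length.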

Response b) I privilege people who already exist over people who do not. A person living 800 years is more valuable to me than ten new people living 80 years each, EVEN if you say the same amount of happiness happens in both cases. I care about existing people being happy, and about not creating sad people, but I don't particularly care about creating new happy entities unless it's necessary for the perpetuation of humanity, which is something I value.

Response c) The personal response: I value my own happiness significantly more than that of other people. One year of my own life is worth more to me than one year of someone else's. If my decision were between creating ten people as happy as I am or making myself ten times happier, I would make myself ten times happier.

Finally, you don't seem to realize what is meant by caring about average utility. In your scenario, the TOTAL years lived are the same in both cases, but the AVERAGE utility per person is far lower in the second case. 80 years per person is a lot less than 800 years per person.
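
The total-versus-average split as a quick sketch, assuming for illustration only that each person's utility equals their years lived:

```python
# Illustrative only: each person's utility = years lived.
life_extension = [800]       # one person living 800 years
replacement = [80] * 10      # ten people living 80 years each

for name, lifespans in [("Life Extension", life_extension),
                        ("Replacement", replacement)]:
    total = sum(lifespans)
    average = total / len(lifespans)
    print(f"{name}: total = {total}, average = {average}")

# Life Extension: total = 800, average = 800.0
# Replacement: total = 800, average = 80.0
```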

Comment author: Julia_Galef 30 November 2011 02:19:31AM 1 point

Thanks -- I fixed the setup.

Comment author: Vladimir_Nesov 29 November 2011 06:51:46AM 0 points

"Rationality alerts you when you have a false belief that's making you worse off."

The framing of this section (not just the title) makes it look like an invitation to motivated skepticism, which is generally a bad idea.

Comment author: Julia_Galef 29 November 2011 07:03:17AM 1 point

My framing was meant to encourage you to disproportionately question beliefs which, if false, would make you worse off. But motivated skepticism is disproportionately questioning beliefs that you want to be false. That's an important difference, I think.

Are you claiming that my version is also a form of motivated skepticism (perhaps a weaker form)? Or do you think my version's fine, but that I need to make it clearer in the text how what I'm encouraging is different from motivated skepticism?

Comment author: [deleted] 26 November 2011 09:08:55PM 3 points

Yay! Word of God on the issue! (Warning: TV Tropes.) Good to know I wasn't too far off-base.

I can see how g and h can be considered equivalent using the emotions -> goals mapping. In fact, I would assume that would also make a and b pretty much equivalent, as well as c and d, e and f, etc.

Comment author: Julia_Galef 26 November 2011 10:36:15PM 9 points

Incidentally, the filmmaker didn't capture my slide with the diagram of the revised model of rationality and emotions in ideal human* decision-making, so I've uploaded it.

The Straw Vulcan model of ideal human* decision-making: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-00-pm.png

My revised model of ideal human* decision-making: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-14-pm.png

*I realize now that I need this modifier, at least on Less Wrong!

Comment author: Vladimir_Nesov 26 November 2011 07:43:23AM 7 points

In the talk, rationality is positioned as something that decides which of System 1 or System 2 should be used in a particular situation. But that's straw System 2! What's actually happening is that your System 2 is smart enough to get input from System 1 and update on it appropriately (or conversely, as in taking ideas seriously on an intuitive level).

Comment author: Julia_Galef 26 November 2011 10:14:49PM 3 points

Great point. That's true in many cases, such as when you're trying to decide which school to go to: you make the decision deliberatively, but take into account the data from your intuitive reactions to the schools.

But in other cases, such as chess-playing, aren't you mainly just deciding based on your System 1 judgments? (Admittedly I'm no chess player; that's just my impression of how it works.)

I agree you need to use System 2 for your meta-judgment about which system to use in a particular context, but once you've made that meta-judgment, I think there are some cases in which you make the actual judgment based on System 1.

Am I correctly understanding your point?

Comment author: [deleted] 26 November 2011 07:25:16PM 2 points

The most obvious pro of using "intuition" and "deductive reasoning" is that it requires a smaller cognitive leap for the audience to follow. For example, when you say "System 1", the audience has to translate: System 1 -> intuition -> one of the two types of reasoning.

Just saying "intuition" removes the need for this extra step.

A possible pro of using "system 1" and "system 2" is that it might allow the audience to distance themselves from any emotional reactions they might have to the ideas of intuition and deduction.

Comment author: Julia_Galef 26 November 2011 10:00:29PM 3 points

Yup, I went through the same reasoning myself -- I decided on "system 1" and "system 2" for their neutral tone, and also because they're Stanovich's preferred terms.
