Wei_Dai comments on Exterminating life is rational - Less Wrong

Post author: PhilGoetz 06 August 2009 04:17PM


Comments (272)


Comment author: Wei_Dai 08 August 2009 11:25:01PM 5 points

Omega's a bastard. So what?

That's fine, I just didn't know if that detail had some implication that I was missing.

WHAT? Are you honestly sure you're THAT much less altruistic than I am?

Yes, I'm pretty sure, although I leave open the possibility that I may encounter an argument in the future that would persuade me to change my mind. My understanding is that most people have preferences like mine, so I'm surprised that you're so surprised.

It seems that I had missed the earlier posts on bounded vs. unbounded utility functions. I'll follow up there to avoid retreading old ground.

Comment author: Eliezer_Yudkowsky 09 August 2009 06:20:39PM 10 points

Yes, I'm pretty sure, although I leave open the possibility that I may encounter an argument in the future that would persuade me to change my mind. My understanding is that most people have preferences like mine, so I'm surprised that you're so surprised.

I'm shocked; I hadn't thought that most people had preferences like yours - or at least that they would verbally express such preferences, their "real" preferences being a separate moral issue beyond that. I would have thought that it would be mainly psychopaths, the Rand-damaged, and a few unfortunate moral philosophers with mistaken metaethics who would decline that offer.

I guess I would follow up with these questions: (1) When you see someone else hurting, or attend a friend's funeral, do you feel sad; (2) are you more viscerally afraid of your own death than the strength of that emotion, if comparing two single cases; (3) do you decline to multiply out of a deliberate belief that all events after your own death ought to have zero utility to you, even if they feel sad when you think about them now; or (4) do you just generally want to leave the intuitive judgment (2) with its innate lack of multiplication undisturbed?

Or if I'm asking the wrong questions here, then what is going on? I would expect most humans to instinctively feel that their whole tribe, to say nothing of the entire rest of reality, was worth something; and I would expect a rationalist to understand that if their own life does not literally have lexicographic priority (i.e., lives of others have infinitesimal=0 value in the utility function) then the multiplication factor here is overwhelming; and I would also expect you, Wei Dai, to not mistakenly believe that you were rationally forced to be lexicographically selfish regardless of your feelings... so I'm really not clear on what could be going on here.
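The multiplication point above can be made concrete with a toy expected-utility comparison. Every number here is an illustrative assumption, not a value from the thread; the only claim is structural: unless others' lives get literally zero weight, any tiny positive weight times an astronomical number of beneficiaries dominates.

```python
# Toy expected-utility sketch of the wager discussed in this thread:
# accept a 60% chance of death in exchange for healing everyone else.
# All weights and counts below are illustrative assumptions.

own_life = 1.0            # utility of surviving (normalization choice)
weight_per_other = 1e-12  # tiny but nonzero altruistic weight per life
others_healed = 1e24      # stand-in for "the rest of reality"

u_refuse = own_life  # certain survival, no one healed
u_accept = 0.4 * own_life + weight_per_other * others_healed

print(u_accept > u_refuse)  # prints True: the 1e12 altruistic term swamps 1.0
```

The point of the sketch is that only a utility function giving others exactly zero (or infinitesimal) weight - lexicographic selfishness - makes `u_refuse` win.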

I guess my most important question would be: Do you feel that way, or are you deciding that way? In the former case, I might just need to make a movie showing one individual after another being healed, and after you'd seen enough of them, you would agree - the visceral emotional force having become great enough. In the latter case I'm not sure what's going on.

PS again: Would you accept a 60% probability of death in exchange for healing the rest of reality?

Comment author: Wei_Dai 09 August 2009 09:57:02PM 8 points

I guess I would follow up with these questions: (1) When you see someone else hurting, or attend a friend's funeral, do you feel sad; (2) are you more viscerally afraid of your own death than the strength of that emotion, if comparing two single cases; (3) do you decline to multiply out of a deliberate belief that all events after your own death ought to have zero utility to you, even if they feel sad when you think about them now; or (4) do you just generally want to leave the intuitive judgment (2) with its innate lack of multiplication undisturbed?

1: Yes. 2: Yes. 3: No. 4: I see a number of reasons not to do straight multiplication:

  • Straight multiplication leads to an absurd degree of unconcern for oneself, given that the number of potential people is astronomical. It means, for example, that you can't watch a movie for enjoyment, unless that somehow increases your productivity for saving the world. (In the least convenient world, watching movies uses up time without increasing productivity.)
  • No one has proposed a form of utilitarianism that is free from paradoxes (e.g., the Repugnant Conclusion).
  • My current position resembles the "Proximity argument" from Revisiting torture vs. dust specks:

Proximity argument: don't ask me to value strangers equally to friends and relatives. If each additional person matters 1% less than the previous one, then even an infinite number of people getting dust specks in their eyes adds up to a finite and not especially large amount of suffering.

This agrees with my intuitive judgment and also seems to have relatively few philosophical problems, compared to valuing everyone equally without any kind of discounting.
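The 1%-per-person discount in the quoted argument is just a geometric series, and its convergence can be checked in a few lines. The normalization "one dust speck = 1 unit of suffering" is a hypothetical choice for illustration, not something from the thread.

```python
# Sketch of the proximity argument's discounting: if each additional
# person's suffering counts 1% less than the previous person's, the total
# is a geometric series that converges to 100x one person's suffering,
# no matter how many people are affected.

def discounted_total(per_person=1.0, discount=0.99, n_people=10**6):
    # Partial sum of the geometric series: per_person * sum_{k=0}^{n-1} discount**k
    return per_person * (1 - discount**n_people) / (1 - discount)

print(discounted_total(n_people=10))       # small crowd: well under the limit
print(discounted_total(n_people=10**6))    # approaches 1/(1-0.99) = 100
```

So under this discounting, even arbitrarily many dust specks total at most 100 "units" - a finite and not especially large amount, as the quote says.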

I guess my most important question would be: Do you feel that way, or are you deciding that way?

My last bullet above already answered this, but I'll repeat for clarification: it's both.

PS again: Would you accept a 60% probability of death in exchange for healing the rest of reality?

This should be clear from my answers above as well, but yes.

Comment author: cousin_it 12 August 2009 01:57:26PM 5 points

Oh, 'ello. Glad to see somebody still remembers the proximity argument. But it's adapted to our world where you generally cannot kill a million distant people to make one close relative happy. If we move to a world where Omegas regularly ask people difficult questions, a lot of people adopting proximity reasoning will cause a huge tragedy of the commons.

About Eliezer's question, I'd exchange my life for a reliable 0.001 chance of healing reality, because I can't imagine living meaningfully after being offered such a wager and refusing it. Can't imagine how I'd look other LW users in the eye, that's for sure.

Comment author: Wei_Dai 13 August 2009 09:23:26AM 6 points

Can't imagine how I'd look other LW users in the eye, that's for sure.

I publicly rejected the offer, and I don't feel like a pariah here. I wonder what the actual degree of altruism among LW users is. Should we set up a poll and gather some evidence?

Comment author: Vladimir_Nesov 12 August 2009 02:34:19PM 2 points

Cooperation is a different consideration from preference. You can prefer only to keep your own "body" in certain dynamics, no matter what happens to the rest of the world, and still benefit the most from, roughly speaking, helping other agents. Which can include occasional self-sacrifice a la counterfactual mugging.

Comment author: conchis 12 August 2009 02:22:10PM 3 points

No one has proposed a form of utilitarianism that is free from paradoxes (e.g., the Repugnant Conclusion).

I'd be interested to know what you think of Critical-Level Utilitarianism and Population-Relative Betterness as ways of avoiding the Repugnant Conclusion and other problems.