Kawoomba comments on Giving What We Can, 80,000 Hours, and Meta-Charity - Less Wrong

44 Post author: wdmacaskill 15 November 2012 08:34PM




Comment author: Kawoomba 03 December 2012 07:11:28AM 0 points [-]

if you attach higher utility to your child's life than mine, then even if my child has a higher chance of survival you will choose your child and leave mine to die.

Not true as a general statement, not if you're maximizing your expected utility gain.
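The expected-utility point can be made concrete with a toy calculation (all numbers here are hypothetical, purely for illustration, not from the thread): attaching higher utility to your own child's life does not by itself settle the choice once survival probabilities are factored in.

```python
# Toy sketch of expected-utility maximization in the rescue choice.
# Numbers are hypothetical: my child gets higher utility but has a
# lower chance of being saved.

def expected_utility(p_survival, utility):
    """Expected utility gain from attempting one rescue."""
    return p_survival * utility

u_own, p_own = 10.0, 0.2      # my child: valued more, slim chance
u_other, p_other = 6.0, 0.9   # other child: valued less, good chance

choice = ("own" if expected_utility(p_own, u_own) > expected_utility(p_other, u_other)
          else "other")
print(choice)  # "other": 0.2 * 10 = 2.0 < 0.9 * 6 = 5.4
```

With these particular numbers the expected-utility maximizer saves the other child despite valuing their own more; flip the probabilities closer together and the answer flips back, which is exactly why the quoted claim fails "as a general statement".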

Also, "if"? One often attaches utility based on ... attachment. Do you think there's more than, say, 0.01 parents per 100 that would not value their own child over some other child? Are most all parents "evil" in that regard?

Comment author: MugaSofer 04 December 2012 01:23:57PM 0 points [-]

Are most all parents "evil" in that regard?

I believe the technical term is "biased".

Comment author: Kawoomba 04 December 2012 02:22:51PM 0 points [-]

In the same way that I'm "biased" towards yogurt-flavored ice-cream. You can call any preference you have a "bias", but since we're here mostly dealing with cognitive biases (a different beast altogether), such an overloading of a preference-expression with a negatively connoted failure mode should really be avoided.

What's your basis for objecting to utility functions that are "biased" (you introduced the term "evil") in the sense of favoring your own children over random other children?

Comment author: MugaSofer 04 December 2012 02:34:49PM -2 points [-]

No, I'm claiming that parents don't actually have a special case in their utility function, they're just biased towards their kids. Since parents are known to be biased toward their kids generally, and human morality is generally consistent between individuals, this seems a reasonable hypothesis.

Comment author: Vladimir_Nesov 04 December 2012 04:02:22PM 0 points [-]

It seems like a possibility, but I don't think it's possible to clearly know that it's the case, and so it's an error to "claim" that it's the case ("claiming" sounds like an assertion of high degree of certainty). (You do say that it's a "reasonable hypothesis", but then what do you mean by "claiming"?)

Comment author: MugaSofer 10 December 2012 05:35:52PM 0 points [-]

Up until this point, I had never seen any evidence to the contrary. I'm still kinda puzzled at the amount of disagreement I'm getting ...

Comment author: Kawoomba 04 December 2012 02:52:51PM 0 points [-]

Clear preferences that are not part of their utility function? And which supposedly are evil, or "biased", with the negative connotations of "bias" included?

What about valuing specific friends, is that also not part of the utility function, or does that just apply to parents and their kids?

Are you serious that valuing your own kids over other kids is a bias to be overcome, and not typically a part of the parents' utility function?

Sorry about the incredulity, but that's the strangest apparently honestly held opinion I've read on LW in a long time. I'm probably misunderstanding your position somehow.

Comment author: MugaSofer 04 December 2012 03:04:40PM 0 points [-]

Are you serious that valuing your own kids over other kids is a bias to be overcome

In a triage situation? Yes.

Comment author: Kawoomba 04 December 2012 03:53:05PM 1 point [-]

In a triage situation? Yes.

Even if you're restricting your assertion to special cases, let's go with that.

Why should I overcome my "bias" and not save my own child, just because there is some other child with a better chance of being saved, but which I do not care about as much?

What makes that an "evil" bias, as opposed to an ubiquitous aspect of most parents' utility functions?

Comment author: BerryPick6 04 December 2012 03:55:59PM 1 point [-]

Why should I overcome my "bias" and not save my own child, just because there is some other child with a better chance of being saved, but which I do not care about as much?

Assuming that saving my child would give me X utility and saving the other child would give his parents X utility, it's just a "shut up and multiply" kind of thing...

Comment author: Vladimir_Nesov 04 December 2012 04:06:58PM *  2 points [-]

Assuming that saving my child would give me X utility and saving the other child would give his parents X utility

This assumption is excluded by Kawoomba's "but which I do not care about as much", so isn't directly relevant at this point (unless you are making a distinction between "caring" and "utility", which should be more explicit).

Comment author: BerryPick6 04 December 2012 04:12:21PM 0 points [-]

I guess I'm just not sure why Kawoomba's own utility gets special treatment over the other child's parents' utility function. Then again, your reply and my own sentence just now have me slightly confused, so I may need to think on this a bit more.

Comment author: thomblake 04 December 2012 04:11:39PM 1 point [-]

Assuming that saving my child would give me X utility and saving the other child would give his parents X utility

If you've found a way to aggregate utility across persons, I'd like to hear it.

Normally, we talk about trying to satisfy a particular utility function. If the parent values her child more than the neighbor's child, that is reflected in her utility function. What other standard are you trying to invoke?

Comment author: BerryPick6 04 December 2012 04:13:32PM 0 points [-]

Ah, this clears up things a bit for me, thank you.

Comment author: Kawoomba 04 December 2012 04:06:33PM 0 points [-]

Why would I need to aim to satisfy overall utility including others, as opposed to just that of my own family?

Is any such preference that chooses my own utility over that of others a bias, and not part of my utility function?

Is it an evil bias if I buy myself some tech toys as opposed to donating that amount to my preferred charity?

Comment author: BerryPick6 04 December 2012 04:09:31PM -2 points [-]

Why would I need to aim to satisfy overall utility including others, as opposed to just that of my own family?

What reason do you have for aiming to satisfy your own utility function, or that of your family?

Is any such preference that chooses my own utility over that of others a bias, and not part of my utility function?

I'm afraid this is a little too much lingo for me. Sorry.

Is it an evil bias if I buy myself some tech toys as opposed to donating that amount to my preferred charity?

You'd have to taboo "evil" before I can answer this question.

Comment author: Kindly 04 December 2012 06:32:07PM -1 points [-]

The nearest I can come to making sense of your claim is that it's some sort of imaginary Prisoner's Dilemma: you can cooperate by saving a random child instead of your own, and in symmetric cases other parents can cooperate by saving your child instead of theirs.

However, even if you are into counterfactual bargaining, I am pretty sure almost no other parent would cooperate here, which makes defecting a no-brainer.
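Kindly's "defecting is a no-brainer" claim is the standard dominance argument, which can be sketched with hypothetical payoffs (the numbers and labels below are illustrative assumptions, not anything stated in the thread):

```python
# Hypothetical payoff sketch of the Prisoner's-Dilemma framing.
# "cooperate" = save whichever child has the better survival chance;
# "defect"    = always save your own child.

# (my_move, their_move) -> my payoff (illustrative values only)
payoffs = {
    ("defect", "defect"): 1,        # everyone saves their own child
    ("defect", "cooperate"): 3,     # others sometimes save mine too
    ("cooperate", "defect"): 0,     # I sometimes save theirs, never repaid
    ("cooperate", "cooperate"): 2,  # mutual cooperation beats mutual defection
}

def best_response(their_move):
    """My payoff-maximizing move given what the other parents do."""
    return max(("defect", "cooperate"), key=lambda m: payoffs[(m, their_move)])

# Defection dominates: it is the best response whichever way others play,
# which is why the expectation that others won't cooperate settles the matter.
print(best_response("defect"), best_response("cooperate"))
```

Counterfactual bargaining, as Kindly notes, would only change this if enough other parents could be relied on to cooperate in the symmetric cases.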

I suppose to be fair I should imagine a world in which every parent is brainwashed into valuing other children's lives as much as their own (I am pretty sure it would take brainwashing). In this case (assuming you escaped the brainwashing so it's still a legitimate decision) saving the other child might be the right thing to do. At that point, though, you're arguably not optimizing for humans anymore.

Comment author: MugaSofer 10 December 2012 06:13:47PM *  0 points [-]

My assertion is that all humans share utility - which is the standard assumption in ethics, and seems obviously true - and that parents are biased towards their children (for simple evopsych reasons,) leading them to choose their child when, objectively, their own ethics dictates they choose the other. The example given was that of a triage situation; you can only choose one, and need to decide who has the greater chance of survival.

Comment author: Kawoomba 12 December 2012 09:04:44AM 0 points [-]

Your moral philosophy in so far as it affects your actions is by definition already part of your utility function.

It makes no sense to say "my utility function dictates I want to do X, but because my own ethics says otherwise, I should do otherwise", it's a contradictio in terminis.

We should be very careful with ethical assumptions that seem "obviously true". Especially when they are not ("true" here can only mean "commonly held"; it wouldn't make sense otherwise): parents choosing their own child over other children is an example of following a different ethical compass, one valuing their own children over others. You can neither claim that those parents are confused about their own utility function, nor that they are "wrong". Your proposed "obviously true" ethical assumption is also based on "evopsych". You're trying to elevate an extreme altruist approach above others and calling it obviously true. For you, maybe; for the vast majority of e.g. parents? Not so much.

There is no epistemological truth in terminal values.

Comment author: MugaSofer 12 December 2012 09:35:54AM 0 points [-]

parents choosing their own child over other children is an example of following a different ethical compass, one valuing their own children over others. You can neither claim that those parents are confused about their own utility function, nor that they are "wrong".

No.

Humans regularly act against their own ethics, whether due to misinformation or bias, akrasia, or cached thoughts about morality.

... are you seriously suggesting that, say, racists are right about what they want? How then do they change when confronted with evidence that other races are, well, people? Perhaps I have misunderstood your point.