MugaSofer comments on Giving What We Can, 80,000 Hours, and Meta-Charity - Less Wrong
Consider: if you attach higher utility to your child's life than mine, then even if my child has a higher chance of survival you will choose your child and leave mine to die.
Not true as a general statement, not if you're maximizing your expected utility gain.
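A toy calculation makes the point concrete. All numbers and names here are illustrative (nothing in the thread specifies them): even a parent who weights their own child's life ten times higher can be pushed the other way once survival odds enter the expectation.

```python
# Hedged sketch: expected-utility comparison with made-up numbers.
# Attaching higher utility to your own child does NOT automatically
# mean an expected-utility maximizer saves your child.

def expected_utility(p_survival, utility):
    """Expected utility of a rescue attempt: P(success) * value of success."""
    return p_survival * utility

u_mine, u_other = 10.0, 1.0   # you value your child's life 10x more (assumed)
p_mine, p_other = 0.05, 0.9   # but the other child's rescue odds are far better

ev_mine = expected_utility(p_mine, u_mine)    # 0.05 * 10.0 = 0.5
ev_other = expected_utility(p_other, u_other) # 0.90 *  1.0 = 0.9

best = "other" if ev_other > ev_mine else "mine"
print(best)  # "other": the expectation overrides the raw utility weighting
```

With a less lopsided gap in survival chances, of course, the 10x weighting dominates and the maximizer picks its own child; the claim is only that the general statement fails, not that it fails in every case.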
Also, "if"? One often attaches utility based on ... attachment. Do you think more than, say, 1 parent in 10,000 would not value their own child over some other child? Are almost all parents "evil" in that regard?
I believe the technical term is "biased".
In the same way that I'm "biased" towards yogurt-flavored ice-cream. You can call any preference you have a "bias", but since we're here mostly dealing with cognitive biases (a different beast altogether), such an overloading of a preference-expression with a negatively connoted failure-mode should really be avoided.
What's your basis for objecting against utility functions that are "biased" (you introduced the term "evil") in the sense of favoring your own children over random other children?
No, I'm claiming that parents don't actually have a special case in their utility function; they're just biased towards their kids. Since parents are known to be biased toward their kids generally, and human morality is generally consistent between individuals, this seems a reasonable hypothesis.
It seems like a possibility, but I don't think it's possible to clearly know that it's the case, and so it's an error to "claim" that it's the case ("claiming" sounds like an assertion of high degree of certainty). (You do say that it's a "reasonable hypothesis", but then what do you mean by "claiming"?)
Up until this point, I had never seen any evidence to the contrary. I'm still kinda puzzled at the amount of disagreement I'm getting ...
Clear preferences that are not part of their utility function? And which supposedly are evil, or "biased", with the negative connotations of "bias" included?
What about valuing specific friends, is that also not part of the utility function, or does that just apply to parents and their kids?
Are you serious that valuing your own kids over other kids is a bias to be overcome, and not typically a part of the parents' utility function?
Sorry about the incredulity, but that's the strangest apparently honestly held opinion I've read on LW in a long time. I'm probably misunderstanding your position somehow.
In a triage situation? Yes.
Even if you're restricting your assertion to special cases, let's go with that.
Why should I overcome my "bias" and not save my own child, just because there is some other child with a better chance of being saved, but which I do not care about as much?
What makes that an "evil" bias, as opposed to an ubiquitous aspect of most parents' utility functions?
Assuming that saving my child would give me X utility and saving the other child would give his parents X utility, it's just a "shut up and multiply" kind of thing...
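Under that (contested) assumption, the "shut up and multiply" step is a one-line aggregation. The numbers below are hypothetical, chosen only to show the structure: if each set of parents gets the same utility X from their own child's rescue, the X cancels and the aggregate comparison reduces to survival probabilities alone.

```python
# Hedged sketch of the aggregation, assuming (as the comment does)
# that saving my child is worth X to me and saving the other child
# is worth the same X to their parents. All numbers are illustrative.

X = 100.0                    # assumed utility either set of parents receives
p_mine, p_other = 0.4, 0.7   # hypothetical rescue odds for each child

total_if_save_mine = p_mine * X    # expected aggregate utility ~ 40.0
total_if_save_other = p_other * X  # expected aggregate utility ~ 70.0

choice = ("save the other child"
          if total_if_save_other > total_if_save_mine
          else "save my child")
print(choice)  # prints "save the other child": X cancels, odds decide
```

The disagreement downthread is precisely about this symmetry assumption: if the rescuer's own utility function assigns different values to the two children, the X does not cancel and the calculation changes.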
This assumption is excluded by Kawoomba's "but which I do not care about as much", so isn't directly relevant at this point (unless you are making a distinction between "caring" and "utility", which should be more explicit).
If you've found a way to aggregate utility across persons, I'd like to hear it.
Normally, we talk about trying to satisfy a particular utility function. If the parent values her child more than the neighbor's child, that is reflected in her utility function. What other standard are you trying to invoke?
Why would I need to aim to satisfy overall utility including others, as opposed to just that of my own family?
Is any such preference that chooses my own utility over that of others a bias, and not part of my utility function?
Is it an evil bias if I buy myself some tech toys as opposed to donating that amount to my preferred charity?
The nearest I can come to making sense of your claim is that it's some sort of imaginary Prisoner's Dilemma: you can cooperate by saving a random child instead of your own, and in symmetric cases other parents can cooperate by saving your child instead of theirs.
However, even if you are into counterfactual bargaining, I am pretty sure almost no other parent would cooperate here, which makes defecting a no-brainer.
I suppose to be fair I should imagine a world in which every parent is brainwashed into valuing other children's lives as much as their own (I am pretty sure it would take brainwashing). In this case (assuming you escaped the brainwashing so it's still a legitimate decision) saving the other child might be the right thing to do. At that point, though, you're arguably not optimizing for humans anymore.
My assertion is that all humans share utility - which is the standard assumption in ethics, and seems obviously true - and that parents are biased towards their children (for simple evopsych reasons), leading them to choose their child when, objectively, their own ethics dictates they choose the other. The example given was that of a triage situation; you can only choose one, and need to decide who has the greater chance of survival.
Your moral philosophy in so far as it affects your actions is by definition already part of your utility function.
It makes no sense to say "my utility function dictates I want to do X, but because my own ethics says otherwise, I should do otherwise", it's a contradictio in terminis.
We should be very careful with ethical assumptions that seem "obviously true" - especially when they are not (where "true" can only mean "common", since anything stronger wouldn't make sense). Parents choosing their own child over other children is an example of following a different ethical compass, one that values their own children over others. You can neither claim that those parents are confused about their own utility function, nor that they are "wrong". Your proposed "obviously true" ethical assumption is also based on "evopsych". You're trying to elevate an extreme altruist approach above others and calling it obviously true. For you, maybe; for the vast majority of, e.g., parents? Not so much.
There is no epistemological truth in terminal values.
Another situation that has some parallels and may be relevant to the discussion.
Helping starving kids is Good - that's well understood. However, my upbringing and current gut feeling say that this is not unconditional. In particular, feeding starving kids is Good if you can afford it; but feeding other starving kids in a way that causes your own kids to starve is not good, and would be considered evil and socially unacceptable. That is, the goodness of resource redistribution depends on resource scarcity, and hurting your in-group is forbidden even with good intentions.
It may be caused by the fact that I was partially brought up by people who actually experienced starvation and had relatives starve to death (the WW2 aftermath and all that). But I'd guess their opinion is more fact-based than mine, and they have definitely put more thought into it than I have; so until/unless I analyze it more, I should probably accept that prior.
That is so - though it depends on the actual chances; a "much higher chance of survival" is different from a merely "higher chance of survival".
But my point is that:
a) I might [currently thinking] rationally desire that all of my in-group would adopt such a belief mode - I would have higher chances of survival if those close to me prefer me to a random stranger. And "belief-sets that we want our neighbors to have" are correlated with what we define as "good".
b) As far as I understand, Homo sapiens generally do have such an attitude - per evolutionary psychology research and actual observations of mothers/caretakers who have had to choose between kids in fires etc.
c) Duty may be a relevant factor/emotion. Even if the values were perfectly identical (say, the kids involved were twins of a third party), if one was entrusted to me or I had casually agreed to watch him, I'd be strongly compelled to save that one first, even if the chances of survival would (to an extent) suggest otherwise. And for my own kids, naturally, I have a duty to take care of them unlike 99.999% of other kids - even if I didn't love them, I'd still have that duty.
My point is that duty, while worth encouraging throughout society, is screened off by most utilitarian calculations; as such, it is a bias if, rationally, the other choice is superior.