MugaSofer comments on Giving What We Can, 80,000 Hours, and Meta-Charity - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (182)
I believe the technical term is "biased".
In the same way that I'm "biased" towards yogurt-flavored ice-cream. You can call any preference you have a "bias", but since we're here mostly dealing with cognitive biases (a different beast altogether), such overloading of an expression of preference with a negatively connoted failure mode should really be avoided.
What's your basis for objecting to utility functions that are "biased" (you introduced the term "evil") in the sense of favoring your own children over random other children?
No, I'm claiming that parents don't actually have a special case in their utility function, they're just biased towards their kids. Since parents are known to be biased toward their kids generally, and human morality is generally consistent between individuals, this seems a reasonable hypothesis.
It seems like a possibility, but I don't think it's possible to clearly know that it's the case, and so it's an error to "claim" that it's the case ("claiming" sounds like an assertion of high degree of certainty). (You do say that it's a "reasonable hypothesis", but then what do you mean by "claiming"?)
Up until this point, I had never seen any evidence to the contrary. I'm still kinda puzzled at the amount of disagreement I'm getting ...
Clear preferences that are not part of their utility function? And which supposedly are evil, or "biased", with the negative connotations of "bias" included?
What about valuing specific friends, is that also not part of the utility function, or does that just apply to parents and their kids?
Are you serious that valuing your own kids over other kids is a bias to be overcome, and not typically a part of the parents' utility function?
Sorry about the incredulity, but that's the strangest apparently honestly held opinion I've read on LW in a long time. I'm probably misunderstanding your position somehow.
In a triage situation? Yes.
Even if you're restricting your assertion to special cases, let's go with that.
Why should I overcome my "bias" and not save my own child, just because there is some other child with a better chance of being saved, but which I do not care about as much?
What makes that an "evil" bias, as opposed to a ubiquitous aspect of most parents' utility functions?
Assuming that saving my child would give me X utility and saving the other child would give his parents X utility, it's just a "shut up and multiply" kind of thing...
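To make the multiplication concrete, here's a toy version with made-up survival odds (the numbers are mine, not from the thread):

```python
# Toy "shut up and multiply" triage comparison. The probabilities and
# the equal-utility assumption are illustrative, not from the discussion.
p_save_mine = 0.4    # assumed chance my child survives if I choose them
p_save_other = 0.7   # assumed chance the other child survives if chosen
u = 1.0              # assumed identical utility of a saved child

choice = "mine" if p_save_mine * u >= p_save_other * u else "other"
print(choice)  # -> "other": once the utilities are equal, the odds decide
```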
This assumption is excluded by Kawoomba's "but which I do not care about as much", so isn't directly relevant at this point (unless you are making a distinction between "caring" and "utility", which should be more explicit).
I guess I'm just not sure why Kawoomba's own utility gets special treatment over the other child's parents' utility function. Then again, your reply and my own sentence just now have me slightly confused, so I may need to think on this a bit more.
Taboo "utility function", and "Kawoomba cares about Kawoomba's utility function" would resolve into the tautologous "Kawoomba is motivated by whatever it is that motivates Kawoomba". The subtler problem is that it's not a given that Kawoomba knows what motivates Kawoomba, so claims with certainty about what that is or isn't (including those made by Kawoomba) may be unfounded. To the extent "utility function" refers to idealized extrapolated volition, rather than present desires, people won't already have good understanding of even their own "utility function".
If you've found a way to aggregate utility across persons, I'd like to hear it.
Normally, we talk about trying to satisfy a particular utility function. If the parent values her child more than the neighbor's child, that is reflected in her utility function. What other standard are you trying to invoke?
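A toy sketch of what "reflected in her utility function" could look like (the weights are illustrative assumptions, not a claim about any actual parent's values):

```python
# Toy utility function in which valuing one's own child more is just
# an explicit weight inside the function, not a bias sitting outside it.
def parent_utility(own_child_saved: bool, other_child_saved: bool,
                   own_weight: float = 10.0, other_weight: float = 1.0) -> float:
    return own_weight * own_child_saved + other_weight * other_child_saved

# With these assumed weights, saving one's own child at 40% odds beats
# saving the other child at 70% odds: 0.4 * 10.0 > 0.7 * 1.0.
print(parent_utility(True, False))   # 10.0
print(parent_utility(False, True))   # 1.0
```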
Ah, this clears up things a bit for me, thank you.
Why would I need to aim to satisfy overall utility including others, as opposed to just that of my own family?
Is any such preference that chooses my own utility over that of others a bias, and not part of my utility function?
Is it an evil bias if I buy myself some tech toys as opposed to donating that amount to my preferred charity?
What reason do you have for aiming to satisfy your own utility function, or your family's?
I'm afraid this is a little too much lingo for me. Sorry.
You'd have to taboo "evil" before I can answer this question.
Um, it's my utility function, that which I aim to maximize and that which already incorporates my e.g. altruistic desires. Postulating "other preferences" that can overrule my utility function would be a contradiction in terms.
The other two questions were more aimed at MugaSofer, who was the one differentiating between preference as a "bias" and as part of your utility function, and who introduced the whole "evil" thing.
The nearest I can come to making sense of your claim is that it's some sort of imaginary Prisoner's Dilemma: you can cooperate by saving a random child instead of your own, and in symmetric cases other parents can cooperate by saving your child instead of theirs.
However, even if you are into counterfactual bargaining, I am pretty sure almost no other parent would cooperate here, which makes defecting a no-brainer (toy payoffs are sketched below).
I suppose to be fair I should imagine a world in which every parent is brainwashed into valuing other children's lives as much as their own (I am pretty sure it would take brainwashing). In this case (assuming you escaped the brainwashing so it's still a legitimate decision) saving the other child might be the right thing to do. At that point, though, you're arguably not optimizing for humans anymore.
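For concreteness, here's one toy payoff assignment under which the framing above really is a Prisoner's Dilemma; all numbers are assumptions:

```python
# Toy payoffs from one parent's perspective in the imagined bargain:
# "C" = save the other parent's child, "D" = save your own.
# Ordering satisfies the Prisoner's Dilemma condition T > R > P > S.
payoffs = {
    ("D", "D"): 3,  # P: everyone saves their own child
    ("D", "C"): 5,  # T: your child saved either way
    ("C", "D"): 0,  # S: your child lost, no reciprocity
    ("C", "C"): 4,  # R: reciprocity, at better overall survival odds
}

# If almost no other parent plays "C", your payoff is payoffs[(move, "D")],
# and "D" dominates: 3 > 0.
for move in ("C", "D"):
    print(move, payoffs[(move, "D")])
```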
My assertion is that all humans share utility - which is the standard assumption in ethics, and seems obviously true - and that parents are biased towards their children (for simple evopsych reasons,) leading them to choose their child when, objectively, their own ethics dictates they choose the other. The example given was that of a triage situation; you can only choose one, and need to decide who has the greater chance of survival.
Your moral philosophy in so far as it affects your actions is by definition already part of your utility function.
It makes no sense to say "my utility function dictates I want to do X, but because my own ethics says otherwise, I should do otherwise", it's a contradictio in terminis.
We should be very careful with ethical assumptions that seem "obviously true", especially when they are not ("true" here can only mean "commonly held"; it wouldn't make sense otherwise). Parents choosing their own child over other children is an example of following a different ethical compass, one that values their own children over others. You can neither claim that those parents are confused about their own utility function nor that they are "wrong". Your proposed "obviously true" ethical assumption is also based on "evopsych". You're trying to elevate an extreme altruist approach above others and calling it obviously true. For you, maybe; for the vast majority of, e.g., parents? Not so much.
There is no epistemological truth in terminal values.
No.
Humans regularly act against their own ethics, whether due to misinformation or bias, akrasia, or cached thoughts about morality.
... are you seriously suggesting that, say, racists are right about what they want? How then do they change when confronted with evidence that other races are, well, people? Perhaps I have misunderstood your point.
It seems obviously true that the moralities people implement are often internally inconsistent. It also seems obviously true that people can talk about imperatives they feel derive from one horn or the other of an inconsistent moral system, without either lying or being wrong as such.
The inconsistency might resolve itself with new information, but it's going to inform any statements we make about the moral system it exists in until that information arrives.
I am saying that the statement "a racist wants that which he/she wants" is tautologically true. There is no objective "right" or "wrong" when comparing utility functions, there is just "this utility function values X and Y, this other utility function values X and Z, they are compatible in respect to X, they are incompatible in respect to Y".
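In toy form, that comparison is just set overlap (X, Y, Z as placeholders from above):

```python
# Toy comparison of two value sets: no "right" or "wrong", just
# overlap and conflict between what each function values.
values_a = {"X", "Y"}
values_b = {"X", "Z"}

print(values_a & values_b)  # {'X'}: compatible with respect to X
print(values_a ^ values_b)  # {'Y', 'Z'} (order may vary): incompatible here
```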
Certainly what we value changes all the time. But that's just change, it's not becoming "less wrong" or "wronger". Instead, it may be "more (/less) compatible with commonly shared elements of western utility functions" (which still fluctuate across time and culture, and species).