Viliam_Bur comments on Privileging the Question - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
All the examples of privileged questions given are disguised manifestations of moral uncertainty.
One is the struggle between a morality that favors equality and one with a certain set of values surrounding purity and/or respect for religious authority.
Another is the struggle between individual autonomy and harm avoidance.
A third is the struggle between in-group preference and the lack thereof.
The questions themselves are unimportant... but the deeper moral undercurrent that causes those questions to be privileged is important. If someone is against gay marriage and stem cells, how do you expect them to react to transhumanist memes, life extension, and AI?
When society makes a decision about the morality of gay marriage and stem cells, it has also gone part of the way toward making a decision about AI, since a lot of the same moral circuitry is going to be involved.
Side comment: Can anyone find an example of a "privileged" question which isn't a disguised moral struggle?
Isn't moral struggle part of how mindkilling feels from the inside?
Also, compare these two questions:
a) Should gay marriage be legal?
b) How do we optimize society for more long-term utility for people of any sexual orientation?
Only the first one could get media attention. And it's not because the second one is less moral.
You can't even ask this question until you arrive at utilitarianism as a moral philosophy. A person with moral objections to homosexual marriage isn't a utilitarian by definition, since they care about additional things (purity, respect for authority, etc.) that have nothing to do with increasing everyone's utility.
When you ask "how to maximize utility", you have already assumed that the moral struggle between harm/care and purity has been settled in favor of harm/care. Otherwise, you would be asking about how to maximize utility while also keeping people from "defiling" themselves.
As mare-of-night reminded us elsewhere in-thread, even Clippy is a utilitarian. There's nothing special about paperclips or purity that prevents them from being included in someone's definition of utility.
On the other hand, even if your post boils down to "my definition of utility is the correct global definition", that's no more wrong than Viliam_Bur's treating "utility for people" as a well-defined term without billions of undetermined coefficients.
So the original question was:
Under classical preference utilitarianism, you try to maximize everyone's utility and conveniently ignore the problems of putting two utility functions into one equation, and the problems you mention.
Continuing to conveniently ignore that problem, I implicitly assume that we agree that the positive utility generated by removing restrictions on homosexuality outweighs the negative utility generated by violating purity boundaries, when applied over the entire population.
We still include the purity thing in the calculations of course. For example, I could in principle argue that the negative utility from allowing sex in public probably outweighs the positive utility generated from the removal of the restriction, hence our public obscenity laws.
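The "putting two utility functions into one equation" problem this comment sets aside can be sketched as the standard weighted-sum aggregation from the textbooks (this is a generic illustration, not anything specified in the thread; the weights and the interpersonal scaling of each person's utility are exactly the undetermined coefficients mentioned above):

```latex
% Aggregate (social) utility over a population of n people:
% choose the policy x that maximizes
W(x) = \sum_{i=1}^{n} w_i \, u_i(x)
% u_i : person i's utility function -- there is no canonical common
%       scale on which the different u_i can be compared
% w_i : person i's weight; the egalitarian assumption sets w_i = 1
%       for all i, while the unequal-weights variant raised later
%       in the thread leaves the w_i free
```

On this sketch, "including purity in the calculation" just means the purity-violation disutility enters each \(u_i\) for the people who feel it, rather than acting as a separate side constraint.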
That ignores the possibility that there is a reason those purity boundaries were there in the first place.
I've seen this before, but I can't say I find it a compelling argument. If an institution was put in place for good reason, then at least someone, somewhere, would remember why it was put there and could give a compelling argument for it. If no one can do so, the risk of some hidden drawback that the original lawmaker could have foreseen seems too small to count.
I mean, this argument does apply when you are acting alone, on some question that neither you nor anyone you come into contact with knows anything about...but it doesn't apply to something like this.
How do utilitarians decide to draw the boundary at the whole human race rather than some smaller set of humans?
I'm not sure if I understand your question...
Utilitarians who choose to draw the line around the whole of the human race do so because they believe they ought to value the whole of the human race.
Is that a deontological standard?
The reason I asked is that, in principle, you could have utilitarianism based on some group smaller than the human race.
For some people, probably. Let's take a step back.
Morality comes from the "heart". It's made of feelings. Utilitarianism (like much of what falls under moral philosophy) is one of many attempts to make a consistent set of rules to describe inconsistent feelings. The purpose of making a consistent set of rules is 1) to convince others of the morality of an action, and 2) to satisfy our moral aversion to hypocrisy and our craving for moral consistency.
Keeping those aims in mind, drawing the line across all humans, sentient beings, etc has the following benefits:
1) The creators might feel that the equation better describes the way they feel when it factors in all humans. They might hold it as a deontological standard to care about all humans, or they might feel a sense of fairness, or they might have empathy for everyone, etc.
2) Drawing the line across all humans allows you to use the utilitarian standard to negotiate compromises with any arbitrary human you come across. Many humans, having the feelings described in [1], will instinctively accept utilitarianism as a valid way to think about things.
There are plenty of things that are problematic here, but that is why utilitarianism defaults to include the whole human race. As with all things moral, that's just an arbitrary choice on our part, and we could easily have done it a different way. We can restrict it to a smaller subset of humans, we can broaden it to non-human things which seem agent-like enough to be worth describing with a utility function, etc. Many utilitarians include animals, for example.
People use feelings/System1 to do morality. That doesn't make it an oracle. Thinking might be more accurate.
If you don't know how to solve a problem, you guess. But that doesn't mean anything goes. Would anyone include rocks in the Circle? Probably not, since rocks don't have feelings, values, or preferences. So there seem to be some constraints.
You could also, in principle, have a utilitarianism that gives unequal weights to different people. I've asked around here for a reason to think that the egalitarian principle is true, but haven't yet received any responses that are up to typical Less Wrong epistemic standards.
It's a very clear Schelling point. At least until advances in uplifting/AI/brain emulation/etc. complicate the issue of what counts as a human.
You're applying moral realism here... as in, you are implying that moral facts exist objectively, outside of a human's feelings. Are you doing this intentionally?
Your alternative would be to think an aristocratic or meritocratic principle is true. (It's either equal or unequal, right?)
I think we can assume aristocracy is a dead duck along with the Divine Right of Kings and other theological relics.
Meritocracy in some form I believe has been advocated by some utilitarians. People with Oxford degrees get 10 votes. Cambridge 9. Down to the LSE with 2 votes and the common ignorant unlettered herd 1 vote...
This is kind of an epistocratic voting regime, which some think might lead to better outcomes. Alas, no one has been game to try to get such laws passed. There is little evidence that an electorate of PhDs is any less daft/ignorant/clueless/idle/indifferent on matters outside their specialty than the general public.
From a legal rights perspective, egalitarianism is surely correct. Equal treatment before the law seems a lot easier to defend than unequal treatment.
But put something up that assumes a dis-egalitarian principle and see how it flies. I'd be interested to see if you can come up with something plausible that is dis-egalitarian and up to epistemic scratch...
Hint: plutocracy...
Ummmmm... do I draw the line around the whole of the human race? I'm not sure whether I do or not. I do know that there is a certain boundary (defined mostly by culture) where I get much more likely to say 'that's your problem' and become much less skeptical/cynical about preferences, although issues that seem truly serious always get the same treatment.
For some reason, choosing to accept that somebody's utility function might be very different from your own feels kind of like abandoning them from the inside. (Subjective!).
Considering many of them profess to include other kinds of intelligence, at least in theory ... it seems to be mostly a consistency thing. Why shouldn't I include Joe The Annoying Git?