someonewrongonthenet comments on Privileging the Question - Less Wrong

102 Post author: Qiaochu_Yuan 29 April 2013 06:30PM



Comment author: someonewrongonthenet 29 April 2013 03:43:24AM *  1 point [-]

All the examples of privileged questions given are disguised manifestations of moral uncertainty:

should gay marriage be legal?

is the struggle between a morality that favors equality, and one that has a certain set of values surrounding purity and/or respect for religious authority.

Should Congress pass stricter gun control laws?

is the struggle between individual autonomy and harm avoidance.

Should immigration policy be tightened or relaxed?

is the struggle between in-group preference and the lack thereof.

The questions themselves are unimportant...but the deeper moral undercurrent which causes those questions to be privileged is important. If someone is against gay marriage and stem cells, how do you expect them to react to transhumanist memes, life extension, and AI?

When society makes a decision about the morality of gay marriage and stem cells, it has also gone part of the way toward making a decision about AI, since a lot of the same moral circuitry is going to be involved.


Side comment: Can anyone find an example of a "privileged" question which isn't a disguised moral struggle?

Comment author: Viliam_Bur 29 April 2013 06:40:55AM 7 points [-]

Isn't moral struggle part of how mindkilling feels from the inside?

Also, compare these two questions:

a) Should gay marriage be legal?

b) How to optimize the society for more long-term utility for people of any sexual orientation?

Only the first one could get media attention. And it's not because the second one is less moral.

Comment author: someonewrongonthenet 29 April 2013 06:23:21PM *  4 points [-]

How to optimize the society for more long-term utility for people of any sexual orientation?

You can't even ask this question until you arrive at utilitarianism as a moral philosophy. A person with moral objections to homosexual marriage isn't a utilitarian by definition, since they care about additional things (purity, respect for authority, etc.) which have nothing to do with increasing everyone's utility.

When you ask "how to maximize utility", you have already assumed that the moral struggle between harm/care and purity has been settled in favor of harm/care. Otherwise, you would be asking how to maximize utility while also keeping people from "defiling" themselves.

Comment author: roystgnr 29 April 2013 11:53:27PM 2 points [-]

As mare-of-night reminded us elsewhere in-thread, even Clippy is a utilitarian. There's nothing special about paperclips or purity that prevents them from being included in someone's definition of utility.

On the other hand, even if your post boils down to "my definition of utility is the correct global definition", that's no more wrong than Viliam_Bur's treating "utility for people" as a well-defined term without billions of undetermined coefficients.

Comment author: someonewrongonthenet 30 April 2013 12:30:39AM *  2 points [-]

So the original question was:

How to optimize the society for more long-term utility for people of any sexual orientation?

Under classical preference utilitarianism, you try to maximize everyone's utility and conveniently ignore the problems of putting two utility functions into one equation, and the problems you mention.

Continuing to conveniently ignore that problem, I implicitly assume that we agree that the positive utility generated by removing restrictions on homosexuality outweighs the negative utility generated by violating purity boundaries, when applied over the entire population.

We still include the purity thing in the calculations, of course. For example, I could in principle argue that the negative utility from allowing sex in public probably outweighs the positive utility generated by the removal of the restriction, hence our public obscenity laws.

Comment author: Eugine_Nier 01 May 2013 03:02:53AM 3 points [-]

Continuing to conveniently ignore that problem, I implicitly assume that we agree that the positive utility generated by removing restrictions on homosexuality outweighs the negative utility generated by violating purity boundaries, when applied over the entire population.

That ignores the possibility that there is a reason those purity boundaries were there in the first place.

Comment author: someonewrongonthenet 01 May 2013 05:12:19AM *  -1 points [-]

I've seen this before, but I can't say I find it a compelling argument - if an institution was put in place for good reason, then at least someone, somewhere would remember why it was placed and could give a compelling argument. If no one can do so, the risk of some hidden drawback which the original lawmaker could have foreseen seems too small to count.

I mean, this argument does apply when you are acting alone, on some question that neither you nor anyone you come into contact with knows anything about...but it doesn't apply to something like this.

Comment author: NancyLebovitz 30 April 2013 08:44:30PM 0 points [-]

How do utilitarians decide to draw the boundary at the whole human race rather than some smaller set of humans?

Comment author: someonewrongonthenet 01 May 2013 05:15:18AM 0 points [-]

I'm not sure if I understand your question...

Utilitarians who choose to draw the line around the whole of the human race do so because they believe they ought to value the whole of the human race.

Comment author: NancyLebovitz 01 May 2013 05:47:21AM 1 point [-]

Utilitarians who choose to draw the line around the whole of the human race do so because they believe they ought to value the whole of the human race.

Is that a deontological standard?

The reason I asked is that, in principle, you could have utilitarianism based on some group smaller than the human race.

Comment author: someonewrongonthenet 01 May 2013 07:13:29PM *  1 point [-]

Is that a deontological standard?

For some people, probably. Let's take a step back.

Morality comes from the "heart". It's made of feelings. Utilitarianism (like much of what falls under moral philosophy) is one of many attempts to make a consistent set of rules to describe inconsistent feelings. The purpose of making a consistent set of rules is 1) to convince others of the morality of an action and 2) to satisfy our moral aversion to hypocrisy and craving for moral consistency.

Keeping those aims in mind, drawing the line across all humans, sentient beings, etc. has the following benefits:

1) The creators might feel that the equation describes the way they feel better when they factor in all humans. They might hold it as a deontological standard to care about all humans, or they might feel a sense of fairness, or they might have empathy for everyone, etc.

2) Drawing the line across all humans allows you to use the utilitarian standard to negotiate compromises with any arbitrary human you come across. Many humans, having the feelings described in [1], will instinctively accept utilitarianism as a valid way to think about things.

There are plenty of things that are problematic here, but that is why utilitarianism defaults to including the whole human race. As with all things moral, that's just an arbitrary choice on our part, and we could easily have done it a different way. We can restrict it to a smaller subset of humans, or we can broaden it to non-human things which seem agent-like enough to be worth describing with a utility function. Many utilitarians include animals, for example.

Comment author: Juno_Watt 01 May 2013 08:58:06PM 1 point [-]

Morality comes from the "heart". It's made of feelings.

People use feelings/System1 to do morality. That doesn't make it an oracle. Thinking might be more accurate.

As with all things moral, that's just an arbitrary choice on our part

If you don't know how to solve a problem, you guess. But that doesn't mean anything goes. Would anyone include rocks in the Circle? Probably not, since they don't have feelings, values, or preferences. So there seem to be some constraints.

Comment author: Jayson_Virissimo 01 May 2013 06:30:08AM *  1 point [-]

You could also, in principle, have a utilitarianism that gives unequal weights to different people. I've asked around here for a reason to think that the egalitarian principle is true, but haven't yet received any responses that are up to typical Less Wrong epistemic standards.
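The distinction being drawn here - egalitarian versus unequal-weight utilitarianism - is just a choice of coefficients in the aggregation. A minimal sketch (all names and numbers are hypothetical, purely to illustrate the structure):

```python
# A weighted sum over individual utilities. With no weights supplied,
# every person's utility counts equally (the egalitarian case);
# unequal weights give the non-egalitarian variant discussed above.

def social_welfare(utilities, weights=None):
    """Return the weighted sum of individual utilities."""
    if weights is None:
        weights = [1.0] * len(utilities)  # egalitarian default
    return sum(w * u for w, u in zip(weights, utilities))

# Egalitarian: two people, counted the same.
equal = social_welfare([3.0, 5.0])                 # 3 + 5 = 8.0

# Unequal: the first person's preferences count double.
unequal = social_welfare([3.0, 5.0], [2.0, 1.0])   # 2*3 + 1*5 = 11.0
```

The point of the sketch is only that "the egalitarian principle" corresponds to one particular, unargued-for choice of weights.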

Comment author: Eugine_Nier 02 May 2013 12:26:15AM 3 points [-]

I've asked around here for a reason to think that the egalitarian principle is true, but haven't yet received any responses that are up to typical Less Wrong epistemic standards.

It's a very clear Schelling point. At least until advances in uplifting/AI/brain emulation/etc. complicate the issue of what counts as a human.

Comment author: someonewrongonthenet 01 May 2013 07:30:40PM -1 points [-]

I've asked around here for a reason to think that the egalitarian principle is true

You're applying moral realism here...as in, you are implying that moral facts exist objectively, outside of a human's feelings. Are you doing this intentionally?

Comment author: seanwelsh77 01 May 2013 09:21:15AM -2 points [-]

Your alternative would be to think an aristocratic or meritocratic principle is true. (It's either equal or unequal, right?)

I think we can assume aristocracy is a dead duck along with the Divine Right of Kings and other theological relics.

Meritocracy in some form I believe has been advocated by some utilitarians. People with Oxford degrees get 10 votes. Cambridge 9. Down to the LSE with 2 votes and the common ignorant unlettered herd 1 vote...

This is kind of an epistemocratic voting regime which some think might lead to better outcomes. Alas, no one has been game to try to get such laws up. There is little evidence that an electorate of PhDs is any less daft/ignorant/clueless/idle/indifferent on matters outside their specialty than the general public.

From a legal rights perspective, egalitarianism is surely correct. Equal treatment before the law seems a lot easier to defend than unequal treatment.

But put something up that assumes a dis-egalitarian principle and see how it flies. I'd be interested to see if you can come up with something plausible that is dis-egalitarian and up to epistemic scratch...

Hint: plutocracy...

Comment author: ikrase 01 May 2013 06:30:34AM 0 points [-]

Ummmmm... do I draw the line around the whole of the human race? I'm not sure whether I do or not. I do know that there is a certain boundary (defined mostly by culture) where I get much more likely to say 'that's your problem' and become much less skeptical/cynical about preferences, although issues that seem truly serious always get the same treatment.

For some reason, choosing to accept that somebody's utility function might be very different from your own feels kind of like abandoning them from the inside. (Subjective!).

Comment author: MugaSofer 01 May 2013 07:04:45PM -2 points [-]

Considering many of them profess to include other kinds of intelligence, at least in theory ... it seems to be mostly a consistency thing. Why shouldn't I include Joe The Annoying Git?

Comment author: Qiaochu_Yuan 29 April 2013 06:40:59AM 0 points [-]

The questions themselves are unimportant...but the deeper moral undercurrent which causes those questions to be privileged is important.

Ask the counter-question: what do you plan to do once you've settled to your satisfaction the struggle between moral concern X and moral concern Y? Have you known yourself to change your behavior after settling such issues?

I agree that people have different opinions about the relative value of different moral concerns. What I'm pessimistic about is the value of discussing those differences by focusing on questions like the examples I gave.

Can anyone find an example of a "privileged" question which isn't a disguised moral struggle?

If you wanted to be really pessimistic about mathematics research, you could argue that most of pure math research consists of privileged questions.

Comment author: someonewrongonthenet 29 April 2013 06:49:51PM *  2 points [-]

Ask the counter-question: what do you plan to do once you've settled to your satisfaction the struggle between moral concern X and moral concern Y? Have you known yourself to change your behavior after settling such issues?

Of course! I have to change my behavior to be in accord with my new-found knowledge about my preferences. A current area of moral uncertainty for me revolves around the ethics of eating meat, which is motivating me to do research on the intelligence of various animals. As a result, the bulk of my meat consumption has shifted from more intelligent/empathetic animals (pigs) to less intelligent animals (shrimp, fish, chicken).

Through discussion, I've also influenced some friends into having more socially liberal views, thus changing the nature of their interpersonal interactions. If optimizing charity was the question that people focused on, we would still end up having the discussion about whether or not the charity should provide abortions, contraceptives, etc.

You can't escape discussing the fundamental moral questions if those moral struggles create disagreement about which action should be taken.

I agree that people have different opinions about the relative value of different moral concerns. What I'm pessimistic about is the value of discussing those differences by focusing on questions like the examples I gave.

I do think that it might be better to focus on the underlying moral values rather than the specific examples.

Comment author: ChristianKl 03 May 2013 01:03:30PM -1 points [-]

If optimizing charity was the question that people focused on, we would still end up having the discussion about whether or not the charity should provide abortions, contraceptives, etc.

Since GiveWell hasn't found any good charities that provide abortions and give out contraceptives the answer in this community is probably: "No, charity shouldn't do those things."

That's however a very different discussion from mainstream US discussion over the status of abortion.

Comment author: wedrifid 03 May 2013 01:14:15PM 3 points [-]

Since GiveWell hasn't found any good charities that provide abortions and give out contraceptives the answer in this community is probably: "No, charity shouldn't do those things."

Did an 'is' just morph into a 'should' there somehow?

Comment author: Decius 07 May 2013 09:56:57PM 2 points [-]

Since GiveWell hasn't found any good charities that provide abortions and give out contraceptives the answer in this community is probably: "No, charity shouldn't do those things."

Or "There is not an existing charity which does those things well enough to donate towards."

"Givewell hasn't found any good charities that do X" does not imply "Charity should not do X"

Comment author: someonewrongonthenet 03 May 2013 06:55:25PM *  2 points [-]

We are talking about the mainstream US here.

Qiaochu_Yuan's argument was that debates over abortion are privileged questions (discussed disproportionately to the value of answering them).

I added that while this is true with regard to the specific nature of the questions, the underlying moral uncertainty that the questions represent (faced by the US population - lesswrong is pretty settled here) is valuable for the population at large to discuss, because it affects how they behave.

Givewell isn't worrying about moral uncertainty - they've already settled approximately on utilitarianism. Not so for the rest of the population.

Comment author: Qiaochu_Yuan 29 April 2013 06:56:35PM -1 points [-]

Cool. I've been having second thoughts about eating pigs as well.

Comment author: [deleted] 03 May 2013 01:36:36PM 1 point [-]

They don't seem to pass the mirror test (which has been my criterion for such things, even if flawed).