Related to: How An Algorithm Feels From Inside, The Affect Heuristic, The Power of Positivist Thinking

I am a normative utilitarian and a descriptive emotivist: I believe utilitarianism is the correct way to resolve moral problems, but that the normal mental algorithms for resolving moral problems use emotivism.

Emotivism, aka the yay/boo theory, is the belief that moral statements, however official they may sound, are merely personal opinions of preference or dislike. Thus, "feeding the hungry is a moral duty" corresponds to "yay for feeding the hungry!" and "murdering kittens is wrong" corresponds to "boo for kitten murderers!"

Emotivism is a very nice theory of what people actually mean when they make moral statements. Billions of people around the world, even the non-religious, happily make moral statements every day without having any idea what they reduce to or feeling like they ought to reduce to anything.

Emotivism also does a remarkably good job capturing the common meanings of the words "good" and "bad". An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad", "the book was good, but the movie was bad", "atheism is good, theism is bad", "evolution is good, creationism is bad", and "dogs are good, but cats are bad". Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left". It feels like they're all just the same thing. The moral theory that captures that feeling is emotivism. Yay pizza, books, Israelis, atheists, dogs, and evolution! Boo seafood, Palestinians, movies, theists, creationism, and cats!

Remember, evolution is a crazy tinker who recycles everything. So it would not be surprising to find that our morality is a quick hack on the same machinery that runs our decisions about which food to eat or which pet to adopt. To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote "cats" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote "Palestinians" a few points. Richard Dawkins just said something especially witty, so you up-vote "atheism". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.1
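To make the metaphor concrete, here is a minimal sketch (my own illustration, with invented concepts and point values, not anything from the post itself) of what such a mental karma system might look like: every concept keeps one running score, each experience nudges it up or down, and decisions read back only the collapsed total, never the reasons behind it.

```python
# Illustrative sketch only: a toy "mental karma" system in the spirit of the
# metaphor above. Concepts and point values are invented for the example.

from collections import defaultdict

affect = defaultdict(int)  # concept -> net "karma" score

def vote(concept, points):
    """Nudge a concept's score up or down; the reason is not stored."""
    affect[concept] += points

def attitude(concept):
    """Decisions only ever see the collapsed total, not its history."""
    score = affect[concept]
    if score > 0:
        return "seek / acquire / endorse"
    if score < 0:
        return "avoid / discard / condemn"
    return "indifferent"

vote("cats", -2)          # allergic to cats
vote("Palestinians", -3)  # heard about one terrorist attack
vote("atheism", +1)       # Dawkins said something witty

print(attitude("cats"))   # "avoid / discard / condemn" -- with no record of why
```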

Remember back during the presidential election, when a McCain supporter claimed that an Obama supporter attacked her and carved a "B" on her face with a knife? This was HUGE news. All of my Republican friends started emailing me and saying "Hey, did you hear about this, this proves we've been right all along!" And all my Democratic friends were grumbling and saying how it was probably made up and how we should all just forget the whole thing.

And then it turned out it WAS all made up, and the McCain supporter had faked the whole affair. And now all of my Democratic friends started emailing me and saying "Hey, did you hear about this, it shows what those Republicans and McCain supporters are REALLY like!" and so on, and the Republicans were trying to bury it as quickly as possible.

The overwhelmingly interesting thing I noticed here was that everyone seemed to accept - not explicitly, but implicitly very much - that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash. But to an emotivist, where any bad feelings associated with Obama count against him, it sort of makes sense. All those people emailing me about this were saying: Look, here is something negative associated with Obama; downvote him!2

So this is one problem: the inputs to our mental karma system aren't always closely related to the real merit of a person/thing/idea.

Another problem: our interpretation of whether to upvote or downvote something depends on how many upvotes or downvotes it already has. Here on Less Wrong we call this an information cascade. In the mind, we call it an Affective Death Spiral.

Another problem: we are tempted to assign everything about a concept the same score. Eliezer Yudkowsky currently has 2486 karma. How good is Eliezer at philosophy? Apparently somewhere around the level it would take to get 2486 karma. How much does he know about economics? Somewhere around level 2486 would be my guess. How well does he write? Probably well enough to get 2486 karma. Translated into mental terms, this looks like the Halo Effect. Yes, we can pick apart our analyses in greater detail; having read Eliezer's posts, I know he's better at some things than others. But that 2486 number is going to cause anchoring-and-adjustment issues even so.

But the big problem, the world-breaking problem, is that sticking everything good and bad about something into one big bin and making decisions based on whether it's a net positive or a net negative is an unsubtle, leaky heuristic completely unsuitable for complicated problems.

Take gun control. Are guns good or bad? My gut-level emotivist response is: bad. They're loud and scary and dangerous and they shoot people and often kill them. It is very tempting to say: guns are bad, therefore we should have fewer of them, therefore gun control. I'm not saying gun control is therefore wrong: reversed stupidity is not intelligence. I'm just saying that before you can rationally consider whether or not gun control is wrong, you need to get past this mode of thinking about the problem.

In the hope of leaning on theism less often as their stock example of irrationality, a bunch of Less Wrongers have agreed that the War on Drugs would make a good replacement. So, why is the War on Drugs so popular? I think it's because drugs are obviously BAD. They addict people, break up their families, destroy their health, drive them into poverty, and eventually kill them. If we've got to have a category "drugs"3, and we've got to call it either "good" or "bad", then "bad" is clearly the way to go. And if drugs are bad, getting rid of them would be good! Right?

So how do we avoid all of these problems?

I said at the very beginning that I think we should switch to solving moral problems through utilitarianism. But we can't do that directly. If we ask utilitarianism "Are drugs good or bad?" it returns: CATEGORY ERROR. Good for it.

Utilitarianism can only be applied to states, actions, or decisions, and it can only return a comparative result. Want to know whether stopping or diverting the trolley in the Trolley Problem would be better? Utilitarianism can tell you. That's because it's a decision between two alternatives (alternate way of looking at it: two possible actions; or two possible states) and all you need to do is figure out which of the two is higher utility.
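A toy sketch of that comparative structure, with made-up numbers (this is my illustration, not anything from the post): utilitarianism takes candidate actions or states and returns an ordering over them, nothing more.

```python
# Illustrative only: the utilities below are invented, and real utilitarian
# reasoning is far harder. The point is the shape of the question: the
# function compares actions; it has nothing to say about a bare concept.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Trolley problem with toy numbers: utility of -1 per death.
do_nothing = [(1.0, -5.0)]  # trolley kills five
divert     = [(1.0, -1.0)]  # trolley kills one

if expected_utility(divert) > expected_utility(do_nothing):
    print("divert")         # the higher-utility action wins the comparison
else:
    print("do nothing")

# Asking expected_utility("trolleys") is the analogue of the CATEGORY ERROR
# above: only actions or states can be compared, not free-floating concepts.
```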

When people say "Utilitarianism says slavery is bad" or "Utilitarianism says murder is wrong" - well, a utilitarian would endorse those statements over their opposites, but it takes a lot of interpretation first. What utilitarianism properly says is "In this particular situation, the action of freeing the slaves leads to a higher utility state than not doing so" and possibly "and the same would be true of any broadly similar situation".

But why in blue blazes can't we just go ahead and say "slavery is bad"? What could possibly go wrong?

Ask an anarchist. Taxation of X% means you're forced to work for X% of the year without getting paid. Therefore, since slavery is "being forced to work without pay", taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.4

(Again, reversed stupidity is not intelligence. There are good arguments against taxation, but this is not one of them.)

Emotivism is the native architecture of the human mind. No one can think like a utilitarian all the time. But when you are in an Irresolvable Debate, utilitarian thinking may become necessary to avoid dangling variable problems around the word "good" (cf. Islam is a religion of peace). Problems that are insoluble at the emotivist level can be reduced, simplified, and resolved on the utilitarian level with enough effort.

I've used the example before, and I'll use it again. Israel versus Palestine. One person can go on and on for months about all the reasons the Israelis are totally right and the Palestinians are completely in the wrong, and another person can go on just as long about how the Israelis are evil oppressors and the Palestinians just want freedom. And then if you ask them about an action, or a decision, or a state - they've never thought about it. They'll both answer something like "I dunno, the two-state solution or something?". And if they still disagree at this level, you can suddenly apply the full power of utilitarianism to the problem in a way that tugs sideways to all of their personal prejudices.

In general, any debate about whether something is "good" or "bad" is sketchy, and can be changed to a more useful form by converting the thing to an action and applying utilitarianism.

Footnotes:

1: It should be noted that this karma analogy can't explain our original perception of good and bad, only the system we use for combining, processing and utilizing it. My guess is that the original judgment of good or bad takes place through association with other previously determined good or bad things, down to the bottom-level ones which are programmed into the organism (i.e. pain, hunger, death), with some input from the rational centers.

2: More evidence: we tend to like the idea of "good" or "bad" being innate qualities of objects. Thus the alternative medicine practitioner who tells you that real medicine is bad, because it uses scary pungent chemicals, which are unhealthy, and alternative medicine is good, because it uses roots and plants and flowers, which everyone likes. Or fantasy books, where the Golden Sword of Holy Light can only be wielded for good, and the Dark Sword of Demonic Shadow can only be wielded for evil.

3: Of course, the battle has already been half-lost once you have a category "drugs". Eliezer once mentioned something about how considering {Adolf Hitler, Joe Stalin, John Smith} a natural category isn't going to do John Smith any good, no matter how nice a man he may be. In the category "drugs", which looks like {cocaine, heroin, LSD, marijuana}, LSD and marijuana get to play the role of John Smith.

4: And, uh, I'm sure Louis XVI would feel the same way. Sorry. I couldn't think of a better example.

Comments (137; some truncated):

This very good post! Yay Yvain! You have high karma. Please give me stock advice.

I know a guy who constructed a 10-dimensional metric space for English words, then did PCA on it. There were only 4 significant components: good-bad, calm-exciting, open-closed, basic-elaborate. They accounted for 65%, 20%, 9%, and 5% of the variance in the 10-dimensional space, leaving 1% for everything else. This means that we need only 8 adjectives in English 99% of the time.
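For what it's worth, the mechanics of that kind of analysis are easy to sketch (hypothetical data and code, not the commenter's actual study): rate many words on several scales, run PCA, and read off the fraction of variance each component explains.

```python
# Hypothetical sketch of the kind of analysis described above -- random
# stand-in data, not the actual word ratings. With real semantic-differential
# ratings the first component typically comes out as an evaluation
# (good-bad) axis.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_words, n_scales = 1000, 10
ratings = rng.normal(size=(n_words, n_scales))  # placeholder for word ratings

pca = PCA()
pca.fit(ratings)

for i, share in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"component {i}: {share:.1%} of variance explained")
```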

So it would not be surprising to find that our morality is a quick hack on the same machinery that runs our decisions about which food to eat or which pet to adopt.

This could be explored more deeply in another post.

9Scott Alexander
Sorry, I didn't see this until today. Can you give me a link to some more formal description of this? I don't understand how you would use a ten dimensional metric space to capture English words without reducing them to a few broad variables, which seems to be what he's claiming as a result.
4hylleddin
This is a long time after the fact, but I found this.
8jmmcd
Awesome
1Peter_de_Blanc
Are you talking about Alexei Samsonovich? I saw a very similar experiment that he did.
0nazgulnarsil
I agree that it could use more exploration. I suspect that many of our biases stem from simple preference ranking errors.
1FlakAttack
I'm pretty sure I actually saw this in a philosophy textbook, which would mean there are likely observations or studies on the subject.
-9taiwanjohn
MBlume220

Of course, the battle has already been half-lost once you have a category "drugs".

Especially since the category itself is determined by governmental fiat. I once saw an ad for employment at Philip Morris with a footnote to the effect that Philip Morris is a "drug-free workplace". I'm sure they've plenty of nicotine and caffeine there, they're simply using 'drugs' to mean "things to which the federal government has already said 'boo'"

Eliezer Yudkowsky currently has 2486 karma.

Ah, the good old days!

pizza is good, seafood is bad

When I say something is good or bad ("yay doggies!") it's usually a kind of shorthand:

pizza is good == pizza tastes good and is fun to make and share

seafood is bad == most cheap seafood is reprocessed offcuts and gave me food poisoning once

yay doggies == I find canine companions to be beneficial for my exercise routine, useful for home security and fun to play with.

I suspect when most people use the words 'good' and 'bad' they are using just this kind of linguistic compression. Or is your point that once a 'good' label is assigned we just increment its goodness index and forget the detailed reasoning that led us to it? Sorry, the post was an interesting read but I'm not sure what you want me to conclude.

4jimrandomh
Exactly that. We may be able to recall our reasoning if we try to, but we're likely to throw in a few extra false justifications on top, and to forget about the other side.
1andrewc
OK, 'compression' is the wrong analogy as it implies that we don't lose any information. I'm not sure this is always a bad thing. I might have use of a particular theorem. Being the careful sort, I work through the proof. Satisfied, I add the theorem to my grab bag of tricks (yay product rule!). In a couple of weeks (hours even...) I have forgotten the details of the proof, but I have enough confidence in my own upvote of the theorem to keep using it. The details are no longer relevant unless some other evidence comes along that brings the theorem, and thus the 'proof' into question.
6Paul Crowley
This drives me crazy when it happens to me.
* Someone: "Shall we invite X?"
* Me: "No, X is bad news. I can't remember at all how I came to this conclusion, but I recently observed something and firmly set a bad news flag against X."
6arthurlewis
Those kinds of flags are the only way I can remember what I like. My memory is poor enough that I lose most details about books and movies within a few months, but if I really liked something, that 5-Yay rating sticks around for years. Hmm, I guess that's why part of my brain still thinks Moulin Rouge, which I saw on a very enjoyable date, and never really had the urge to actually watch again, is one of my favorite movies. Compression seems a fine analogy to me, as long as we're talking about mp3's and flv's, rather than zip's and tar's.
0[anonymous]
tar's are archived, not compressed. tar.gz's are compressed.
2whpearson
I think of it as memoisation rather than compression.
0AndySimpson
It may be useful shorthand to say "X is good", but when we forget the specific boundaries of that statement and only remember the shorthand, it becomes a liability. When we decide that the statement "Bayes' Theorem is valid, true, and useful in updating probabilities" collapses into "Bayes' Theorem is good," we invite the abuse of Bayes' Theorem. So I wouldn't say it's always a bad thing, but I'd say it introduces unnecessary ambiguity and contributes to sub-optimal moral reasoning.
3janos
Do you have some good examples of abuse of Bayes' theorem?
0AndySimpson
That is a good question for a statistician, and I am not a statistician. One thing that leaps to mind, however, is two-boxing on Newcomb's Problem using assumptions about the prior probability of box B containing $1,000,000. Some new work using math that I don't begin to understand suggests that either response to Newcomb's problem is defensible using Bayesian nets. There could be more trivial cases, too, where a person inputs unreasonable prior probabilities and uses cargo-cult statistics to support some assertion. Also, it's struck me that a frequentist statistician might call most Bayesian uses of the theorem "abuses." I'm not sure those are really good examples, but I hope they're satisfying.
0SoullessAutomaton
I suspect it's more likely that we won't remember it at all; we'd simply increase the association between the thing and goodness and, if looking for a reason, will rationalize one on the spot. Our minds are very good at coming up with explanations but not good at remembering details. Of course, if your values and knowledge haven't changed significantly, you'll likely confabulate something very similar to the original reasoning; but as the distance increases between the points of decision and rationalization, the accuracy is likely to drop.

I'm taking an entire course called "Weird Forms of Consequentialism", so please clarify - when you say "utilitarianism", do you speak here of direct, actual-consequence, evaluative, hedonic, maximizing, aggregative, total, universal, equal, agent-neutral consequentialism?

9Scott Alexander
Uh.....er....maybe! I'm familiar with Bentham, Mill, Singer, Eliezer, and random snippets of utilitarian theory I picked up here and there. I'm not confident enough with my taxonomy to use quite so many adjectives with confidence. I will add that article to the list of things to read. I agree that your course sounds awesome. If you hear anything particularly enlightening, please turn it into an LW post.
6Alicorn
I may well do that! Thank you for asking. Edit: I just made a post linking something called two-tier consequentialism to a post of Eliezer's.
0[anonymous]
Seconded.
5Eliezer Yudkowsky
This sounds like an awesome course.

It is one. I am taking it for its awesomeness in spite of the professor being a mean person who considered it appropriate to schedule the class from seven to nine-thirty in the evening. (His scheduling decision and his meanness are separate qualities.)

0[anonymous]
Could you give us the course website?

The overwhelmingly interesting thing I noticed here was that everyone seemed to accept - not explicitly, but implicitly very much - that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash.

It would be Bayesian evidence of the right sign. But its magnitude would be vanishingly tiny.
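To put a rough number on "vanishingly tiny" (invented figures, purely for illustration): suppose each supporter has some small chance of committing a violent incident, slightly higher if the candidate really is bad, and ask how much observing at least one incident among millions of supporters shifts the odds.

```python
# Toy Bayes calculation with invented numbers. With millions of supporters,
# at least one incident is near-certain under either hypothesis, so observing
# one barely distinguishes them.

import math

n_supporters = 10_000_000
p_incident_if_bad  = 2e-6   # per-supporter chance of violence, "bad" candidate
p_incident_if_good = 1e-6   # per-supporter chance of violence, "good" candidate

def p_at_least_one(p):
    return 1 - (1 - p) ** n_supporters

likelihood_ratio = p_at_least_one(p_incident_if_bad) / p_at_least_one(p_incident_if_good)
print(likelihood_ratio)             # ~1.00005: the posterior odds barely move
print(math.log2(likelihood_ratio))  # ~7e-5 bits of evidence
```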

0AshwinV
Considering how many ways either outcome could result, I'm not really sure how P(supporter carves a B | Obama is evil) would actually measure out.

Another problem: our interpretation of whether to upvote or downvote something depends on how many upvotes or downvotes it already has. Here on Less Wrong we call this an information cascade. In the mind, we call it an Affective Death Spiral.

Upvoted because of this bit. It's obvious in retrospect, but I hadn't made the connection between the two concepts previously.

We sometimes do it the opposite way on LW: We'll upvote something that we wouldn't normally if it has < 0 points, because we're seeking its appropriate level rather than just voting.

I don't know that anyone downvotes something because they think it's too popular. I have refrained from voting for things that I thought had enough upvotes.

1randallsquared
I don't think I have on LW, but on reddit I have downvoted things that seemed too popular, though I technically agreed with them, so it does happen.
-4rabidchicken
troll (){ Downvoted because it was already Upvoted. I hate being controlled by Affective death spirals. }

I saw a presentation where someone took thousands of English words, placed them in a high-dimensional space based on, I think, what other words they co-occurred with, ran PCA on this space, and analyzed the top 4 resulting dimensions. The top dimension was "good/bad".

8jmmcd
You already said so in this very thread :)
1Unnamed
This sounds like semantic differential research. The standard finding is three dimensions: good-bad (evaluation), strong-weak (potency), and active-passive (activity).
1AdeleneDawner
Do you remember the other three?
0MaxNanasy
See here

"Considered harmful" considered harmful.

[anonymous]30

How good is Eliezer at philosophy? Apparently somewhere around the level it would take to get 2486 karma.

Clearly this means I'm better at philosophy than Eliezer (2009). But to be serious, this reminds me how I need to value the karma scores of articles differently according to when they were made. The effects are pretty big. Completely irrelevant discussion threads now routinely get over 20 votes, while some insightful old writing hovers around 10 or 15.

"If we ask utilitarianism "Are drugs good or bad?" it returns: CATEGORY ERROR. Good for it."

It wouldn't be contradictory for someone to assign high utility to the presence of drugs and low utility to their absence. What you really mean is that, upon reflection, most people would not do this.

Emotivism has its problems. Notably, you can't use 'yay' and 'boo' exclamations in arguments, and they can't be reasons.

"Should I eat this apple?" Becomes simply "how do I feel about eating this apple" (or otherwise it's simply meaningless). But really there are considerations that go into the answer other than mere feelings (for example, is the apple poisonous?).

Because utilitarianism has a theory of right action and a theory of value, I don't think it's compatible with emotivism. But I haven't read much in the literature detailing this particular question, as I don't read much currently about utilitarianism.

9Scott Alexander
Well, what's interesting about that comment is that our beliefs about our own justifications and actions are usually educated guesses and not privileged knowledge. Or consider Eliezer's post about the guy who said he didn't respect Eliezer's ideas because Eliezer didn't have a Ph.D, and then when Eliezer found a Ph.D who agreed with him, the guy didn't believe him either. My guess would be that we see the apple is poisonous and "downvote" it heavily. Then someone asks what we think of the apple, we note the downvotes, and we say it's bad. Then the person asks why we think it's bad, and our unconscious supplies whatever rationale it thinks is most plausible and feeds it to us. Which is probably that it's poisonous. See also: footnote 1
pjeby180

Then the person asks why we think it's bad, and our unconscious supplies whatever rationale it thinks is most plausible and feeds it to us.

Don't blame the unconscious. It only makes up explanations when you ask for them.

My first lesson in this was when I was 17 years old, at my first programming job in the USA. I hadn't been working there very long, maybe only a week or two, and I said something or other that I hadn't thought through -- essentially making up an explanation.

The boss reprimanded me, and told me of something he called "Counter man syndrome", wherein a person behind a counter comes to believe that they know things they don't know, because, after all, they're the person behind the counter. So they can't just answer a question with "I don't know"... and thus they make something up, without really paying attention to the fact that they're making it up. Pretty soon, they don't know the difference between the facts and their own bullshit.

From then on, I never believed my own made-up explanations... at least not in the field of computers. Instead, I considered them as hypotheses.

So, it's not only a learnable skill, it can be learned quickly, at least by a 17-year-old. ;-)

5vizikahn
When I had a job behind a counter, one of the rules was: "We don't sell 'I don't know'". We were encouraged to look things up as hard as possible, but it's easy to see how this turns into making things up. I'm going to use the term Counter man syndrome from now on.
0Scott Alexander
I think we're talking about subtly different things here. You're talking about explanations of external events, I'm talking about explanations for your own mind states, ie why am I sad right now. I don't like blaming the "unconscious" or even using the word - it sounds too Freudian - but there aren't any other good terms that mean the same thing.
3pjeby
I'm pointing out that there is actually no difference between the two. Your "explainer" (I call it the Speculator, myself) just makes stuff up with no concern for the truth. All it cares about are plausibility and good self-image reflection. I don't see the Speculator as entirely unconscious, though. In fact, most of us tend to identify with the Speculator, and view its thoughts as our own. Or I suppose, you might say that the Speculator is a tool that we can choose to think with... and we tend to reach for it by default. Sometimes I refer to the other-than-conscious, or to non-conscious processes. But finer distinctions are useful at times, so I also refer to the Savant (non-verbal, sensory-oriented, single-stepping, abstraction/negation-free) and the Speculator (verbal, projecting, abstracting, etc.) I suppose it's open to question whether the Speculator is really "other-than-conscious", in that it sounds like a conscious entity, and we consciously tend to identify with it, in the absence of e.g. meditative or contemplative training.
0SoullessAutomaton
What makes you think the mental systems to construct either explanation would be different? Especially given the research showing that we have dedicated mental systems devoted to rationalizing observed events.
4orthonormal
Right. I think that most people hold the belief that their system of valuations is internally consistent (i.e. that you can't have two different descriptions of the same thing that are both complete, accurate, and assign different valences), which requires them (in theory) to confront moral arguments. I think of basic moral valuations as being one other facet of human perception: the complicated process by which we interpret sensory data to get a mental representation of objects, persons, actions, etc. It seems that one of the things our mental representation often includes is a little XML tag indicating moral valuation. The general problem is that these don't generally form a coherent system, which is why intelligent people throughout the ages have been trying to convince themselves to bite certain bullets. Your conscious idea of what consistent moral landscape lies behind these bits of 'data' inevitably conflicts with your immediate reactions at some point.
2conchis
I may be misinterpreting, but I wonder whether Yvain's use of the word "emotivism" here is leading people astray. He doesn't seem to be committing himself to emotivism as a metaethical theory of what it means to say something is good, as much as an empirical claim about most people's moral psychology (that is, what's going on in their brains when they say things like "X is good"). The empirical claim and the normative commitment to utilitarianism don't seem incompatible. (And the empirical claim is one that seems to be backed up by recent work in moral psychology.)

Ask an anarchist. Taxation of X% means you're forced to work for X% of the year without getting paid. Therefore, since slavery is "being forced to work without pay" taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.4

Nitpick: Under most current systems of taxation, you choose how much to work, and then lose a certain percentage of your income to taxes. A slave does not have the power to choose how much (or whether) to work. This is generally considered a relevant difference between taxation and slavery.

3Annoyance
See the draft. See also the varied attempts to mandate 'community service' or 'national service' for high school students. One who is not a slave is not necessarily a free man.

One who is not a slave is not necessarily a free man.

Indeed. One may be a woman. Or a turtle.

8CronoDAS
In general, "because that person is a minor" is one of the few remaining justifications for denying someone civil rights that people still consider valid. Try comparing the status of a 15-year-old in the United States today with that of a black man or white woman in the in the United States of 1790 and see if you come up with any interesting similarities.
0Peter_Twieg
So if the slave were allowed to choose his own level of effort, he would no longer be a slave? I think you have a point with what you're saying (and I'm predisposed against believing that the taxation/slavery analogy has meaning), but I don't think being a slave is incompatible with some autonomy.
6CronoDAS
I think we'd better kill this discussion before it turns into an "is it a blegg or rube" debate. The original anarchist's argument falls into at least one of the fallacies on that page, and I suspect my nitpick might do so as well.

I would venture that emotivism can be a way of setting up short-run incentives for the achievement of sub-goals. If we think "Bayesian insights are good," we can derive some psychological satisfaction from things which, in themselves, do not have direct personal consequences.

By attaching "goodness" to things too far outside our feedback loops, like "ending hunger," we get things like counterproductive aid spending. By attaching "goodness" too strongly to subgoals close to individual feedback loops, like "publishing papers," we get a flood of inconsequential academic articles at the expense of general knowledge.

3SoullessAutomaton
This seems related to the tendency to gradually reify instrumental values as terminal values. e.g., "reading posts on Less Wrong helps me find better ways to accomplish my goals therefore is good" becomes "reading posts on Less Wrong is good, therefore it is a valid end goal in itself". Is that what you're getting at?

To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma.

It's not outrageous at all, actually. Affective asynchrony shows that we have independent ratings of goodness and badness, just like LW votes... but on the outside of the system, all that shows is the result of combining them.

That is, we can readily see how someone "votes" on different things in their environment, but not what inputs are being summed. And when we look at ourselves, we expect to find a single "score" on a thing.

The main differe... (read more)

Emotivism also does a remarkably good job capturing the common meanings of the words "good" and "bad". An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad", "the book was good, but the movie was bad", "atheism is good, theism is bad", "evolution is good, creationism is bad", and "dogs are good, but cats are bad". Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be person

... (read more)
0blacktrance
Seconded, and I'm not a native English speaker either, although in my case I think they feel different because of how much I talk about ethics.

I remember coming to the sudden realization that I don't have to sort people into the "good" box or the "bad" box--people are going to have a set of traits and actions which I would divide into both, and therefore the whole person won't fit into either. I don't remember what triggered the epiphany, but I remember that it felt very liberating. I no longer had to be confused or frustrated when someone who usually annoyed me did something I approved of, or someone I liked made a choice I disagreed with.

So, you see, this idea already has a high karma score for me, so I'm upvoting it. ;)

An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad" [...] Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left".

To be annoying, "good" does have... (read more)

1thomblake
No, there isn't. Depending on context, you can use 'righteous' but it doesn't quite mean the same thing. For what it's worth, some ethicists such as myself make no distinction between 'moral' good and 'quality' good - utilitarians (especially economists) basically don't either, most of the time. Sidgwick defines ethics as "the study of what one has most reason to do or want", and that can apply equally well to 'buying good vs. bad chairs' and 'making good vs bad decisions'
5pangloss
This reminds me of a Peter Geach quote: "The moral philosophers known as Objectivists would admit all that I have said as regards the ordinary uses of the terms good and bad; but they allege that there is an essentially different, predicative use of the terms in such utterances as pleasure is good and preferring inclination to duty is bad, and that this use alone is of philosophical importance. The ordinary uses of good and bad are for Objectivists just a complex tangle of ambiguities. I read an article once by an Objectivist exposing these ambiguities and the baneful effects they have on philosophers not forewarned of them. One philosopher who was so misled was Aristotle; Aristotle, indeed, did not talk English, but by a remarkable coincidence ἀγαθός had ambiguities quite parallel to those of good. Such coincidences are, of course, possible; puns are sometimes translatable. But it is also possible that the uses of ἀγαθός and good run parallel because they express one and the same concept; that this is a philosophically important concept, in which Aristotle did well to be interested; and that the apparent dissolution of this concept into a mass of ambiguities results from trying to assimilate it to the concepts expressed by ordinary predicative adjectives."
1PhilGoetz
He knows that. He's pointing out the flaws with that model.
2MrHen
This is from his article. Speaking for myself, when I use the word "good" I use it in several different ways in much the same way I do when I use the word "right".
0Relsqui
I think the point was that we do use the word in multiple ways, but those ways don't feel as different as the separate meanings of "right." The concepts are similar enough that people conflate them. If you never do this, that's awesome, but the post posits that many people do, and I agree with it.

...that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash.

Uh, to a Bayesian this would be relevant evidence. Or do I misunderstand something?

3PhilGoetz
A Hitler supporter acting violently is evidence against Hitler. But it takes a lot of them to reach significance.
2MBlume
A single Hitler supporter acting violently isn't much evidence against Hitler. Thousands of apparently sane individuals committing horrors is pretty damning though.
3Paul Crowley
I haven't done the math, but I would have thought that a hundred incidents would be more than a hundred times as much evidence as one, because it says that it's not just the unsurprising lunatic fringe of your supporters who are up for violence.
2Eliezer Yudkowsky
I don't think that's possible, unless the first incident makes it conditionally less likely that the second incident will occur unless Hitler is ungood. Unless you mean, "the total information that a sum of one incident has occurred is less than a hundredth the evidence than the total information that a sum of a hundred incidents have occurred", in which case I agree, because in the former case you're also getting the information on all the people who didn't commit violent acts.
1Paul Crowley
That wasn't what I had in mind (and what I did have in mind is pretty straightforward to express and test mathematically, so I'll do that later today) but it's a possibility worth taking seriously: are you the sort of organisation that responds to reports of violence with a memo saying "don't go carving a backwards B on people"?
1SoullessAutomaton
Assuming the prior probability of politically-motivated violent incidents to be greater than zero, X incidents where X/(number of supporters) is roughly equal to the incidence for the entire population offers very little evidence of anything, so X*100 is trivially more than a hundred times the evidence.
0FlakAttack
I guess the question being asked here is whether those Hitler supporters acting so violently should affect your decision on whether to support Hitler or not. Rationally speaking, it should not, because his supporters and the man himself are two separate things, but the initial response will likely be to assign both things to the same category and have both be affected by the negative perception of the supporters. I think if you use examples that are less confrontational or biased you can get the message across better. Hitler is usually not a useful subject for examples or comparisons.
3mattnewport
To a Bayesian, all evidence is relevant. These two pieces of evidence would seem to have very low weights though. Do you think the weights would be significant?
0cousin_it
If I were a McCain supporter, the rumor's turning out to be false would've carried significant weight for me. You?
4SoullessAutomaton
Assigning significant weight to this event (on either side) is likely a combination of sensationalist national mass media and availability heuristic bias. Uncoordinated behavior of individual followers reflects very weakly on organizations or their leaders. Without any indication of wider trends or mass behavior, the evidence would be weighted so little as to demand disregarding by a bounded rationalist.

Yes, I was being dumb. Sorry.

Edit: stop with the upvotes already!

Yes, I was being dumb. Sorry.

I see you've been upvoted anyways so I'm likely not the only one, but I want to personally thank you for this. People being more willing to admit that they made a mistake and carry on is an excellent feature of Less Wrong and extremely rare in most online communities.

1John_Maxwell
I disagree that it is extremely rare. I've seen a good number of apologies reading reddit, and I think it might be bad to upvote them because it could lead to the motives of any apologizer becoming suspect.
1Eliezer Yudkowsky
Voted up because it asked not to be upvoted.
0rabidchicken
Hey, that's my line.
0loqi
Voted randomly because it references a vote cast on the basis of vote-reference.
-3Paul Crowley
I'll probably get downvoted for this, but please don't upvote this comment. EDIT: OK, looks like that wasn't as funny as I thought, lesson learned!
0whpearson
I left the parent alone.

Excellent post. Upvoted! (Literally.)

5evtujo
Can we rename the vote up and vote down buttons as "yay" and "boo"? Perhaps that can be a profile option... :)
-3Document
Are you generally not literal when you say "upvoted"?
1komponisto
Um, did you miss the following paragraph (emphasis added)?: And...the rest of the post? Upvoting/karma as a metaphor was the whole point! In such a context, it was perfectly sensible (and even, I daresay, slightly witty) of me to append "literally" to the above comment. (Honestly, did I really need to explain this?)
0Document
No and yes, respectively. In my defense, your comment is 64th in New order, so it's not like it was closely juxtaposed with that paragraph.
-4komponisto
That wasn't just some random paragraph; it was the whole freaking point of the post! It introduced a conceit that was continued throughout the whole rest of the article! Before accusing me of hindsight bias (or the illusion of transparency, which is what I think you really meant), you might have noticed this reply, which should have put its parent into context immediately, or so I would have thought.
0[anonymous]
Did notice it and it didn't. Sorry.

Is the usual definition of utilitarianism taken to weight the outcomes for all people equally? While utilitarian arguments often lead to conclusions I agree with, I can't endorse a moral system that seems to say I should be indifferent to a choice between my sister being shot and a serial killer being shot. Is there a standard utilitarian position on such dilemmas?

4gjm
I fear you may be thinking "serial killer: karma -937; my sister: karma +2764". A utilitarian would say: consider what that person is likely to do in the future. The serial killer might murder dozens more people, or might get caught and rot in jail. Your sister will most likely do neither. And consider how other people will feel about the deaths. The serial killer is likely to have more enemies, fewer friends, fewer close friends. So the net utility change from shooting the serial killer is much less negative (or even more positive) than from shooting your sister, and you need not (should not) be indifferent between those. In general, utilitarianism gets results that resemble those of intuitive morality, but it tends to get them indirectly. Or perhaps it would be better to say: Intuitive morality gets results that resemble those of utilitarianism, but it gets them via short-cuts and heuristics, so that things that tend to do badly in utilitarian terms feel like they're labelled "bad".
7mattnewport
In a least convenient possible world, where the serial killer really enjoys killing people and only kills people who have no friends and family and won't be missed and are quite depressed, would it ever be conceivable that utilitarianism would imply indifference to the choice?
2gjm
It's certainly possible in principle that it might end up that way. A utilitarian would say: Our moral intuitions are formed by our experience of "normal" situations; in situations as weirdly abnormal as you'd need to make utilitarianism favour saving the serial killer at the expense of an ordinary upright citizen, or to make slavery a good thing overall, or whatever, we shouldn't trust our intuition.
0mattnewport
And this is the crux of my problem with utilitarianism I guess. I just don't see any good reason to prefer it over my intuition when the two are in conflict.
1randallsquared
Even though your intuition might be wrong in outlying cases, it's still a better use of your resources not to think through every case, so I'd agree that using your intuition is better than using reasoned utilitarianism for most decisions for most people. It's better to strictly adhere to an almost-right moral system than to spend significant resources on working out arbitrarily-close-to-right moral solutions, for sufficiently high values of "almost-right", in other words. In addition to the inherent efficiency benefit, this will make you more predictable to others, lowering your transaction costs in interactions with them.
0mattnewport
My problem is a bit more fundamental than that. If the premise of utilitarianism is that it is morally/ethically right for me to provide equal weighting to all people's utility in my own utility function then I dispute the premise, not the procedure for working out the correct thing to do given the premise. The fact that utilitarianism can lead to moral/ethical decisions that conflict with my intuitions seems to me a reason to question the premises of utilitarianism rather than to question my intuitions.
3Virge
Your intuitions will be biased to favoring a sibling over a stranger. Evolution has seen to that, i.e. kin selection. Utilitarianism tries to maximize utility for all, regardless of relatedness. Even if you adjust the weightings for individuals based on likelihood of particular individuals having a greater impact on overall utility, you don't (in general) get weightings that will match your intuitions. I think it is unreasonable to expect your moral intuitions to ever approximate utilitarianism (or vice versa) unless you are making moral decisions about people you don't know at all. In reality, the money I spend on my two cats could be spent improving the happiness of many humans - humans that I don't know at all who are living a long way away from me. Clearly I don't apply utilitarianism to my moral decision to keep pets. I am still confused about how much I should let utilitarianism shift my emotionally-based lifestyle decisions.
0Matt_Simpson
I think you are construing the term "utilitarianism" too narrowly. The only reason you should be a utilitarian is if you intrinsically value the utility functions of other people. However, you don't have to value the entire thing for the label to be appropriate. You still care about a large part of that murderer's utility function, I assume, as well as that of non-murderers. Not classical utilitarianism, but the term still seems appropriate.
0mattnewport
Utilitarianism seems a fairly unuseful ethical system if the utility function is subjective, either because individuals get to pick and choose which parts of others' utility functions to respect or because individuals are allowed to choose subjective weights for others' utilities. It would seem to degenerate into an impractical-to-implement system for everybody just justifying what they feel like doing anyway.
0Matt_Simpson
Well, assuming you get to make up your own utility function, yes. However, I don't think this is the case. It seems more likely that we are born with utility functions or, rather, something we can construct a coherent utility function out of. Given the psychological unity of mankind, there is likely to be a lot of similarities in these utility functions across the species.
0mattnewport
Didn't you just suggest that we don't have to value the entirety of a murderer's utility function? There are certainly similarities between individual's utility functions but they are not identical. That still doesn't address the differential weighting issue either. It's fairly clear that most people do in fact put greater weight on the utility of their family and friends than on that of strangers. I believe that is perfectly ethical and moral but it conflicts with a conception of utilitarianism that requires equal weights for all humans. If weights are not equal then utility is not universal and so utilitarianism does not provide a unique 'right' answer in the face of any ethical dilemma and so seems to me to be of limited value.
0Virge
If you choose to reject any system that doesn't provide a "unique 'right' answer" then you're going to reject every system so far devised. Have you read Greene's The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it? However, I agree with you that any form of utilitarianism that has to have different weights when applied by different people is highly problematic. So we're left with:
* Pure selfless utilitarianism conflicts with our natural intuitions about morality when our friends and relatives are involved.
* Untrained intuitive morality results in favoring humans unequally based on relationships and will appear unfair from a 3rd party viewpoint.
You can train yourself to some extent to find a utilitarian position more intuitive. If you work with just about any consistent system for long enough, it'll start to feel more natural. I doubt that anyone who has any social or familial connections can be a perfect utilitarian all the time: there are always times when family or friends take priority over the rest of the world.
0mattnewport
It seems to me that utilitarianism is trying to answer the wrong question. I don't think there's anything inherently wrong with individuals simply trying their best to satisfy their own unique utility functions (which generally include some concern for the utility functions of others but not equal concern for all others). I see morality and ethics as to a large extent not theoretical questions about what is 'right' but as empirical questions about what moral and ethical decision processes produce an evolutionarily stable strategy for co-existing with other agents with different goals. On my view of morality it's accepted that different agents will have different utilities for different outcomes and that there is not in general one outcome which all agents will agree is optimal. Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal. It is not a problem of achieving an outcome that all agents can agree is optimal. For humans, biological and cultural evolution have equipped us with a set of rules and heuristics for the resolution of conflicts of interest that have worked well enough to get us to where we are today. My interest in morality/ethics is in improving the process, not in some mythical quest for what is 'right'. I haven't, but I've seen it mentioned before so I should check it out at some point. To be honest the title put me off when I first saw it linked because it makes it sound like it's aimed at someone who still holds the naive view of morality that it's about doing what is 'right'.
1Virge
I think we're in agreement here. For me the difficult questions arise when we try to take one universalizable moral principle and try to apply it at every level of organization, from the personal "what should I be doing with my time and energy at this moment?" to the public "what should person A be permitted/obliged to do?" I was thinking about raising the question of utilitarianism being difficult to institute as an ESS when writing my previous post. To a certain extent, we (in democratic cultures with independent judiciary) train our intuitions to accept the idea of fairness as we grow up. Our parents, kindergarten and school teachers do their best to instill certain values. The fact that racism and sexism can become entrenched during formative years suggests to me that the equality and fairness principles I've grown up with can also be trained. We share a psychological architecture, but there is enough flexibility that we can train our moral intuitions (to some extent). Utilitarianism is in principle universalizable, but is it practically universalizable at all decision levels? What training (or brainwashing) and threats of defector punishment would we need to implement to completely override our natural nepotism? To me this seems like an impractical goal. I've been somewhat confused by the idea of anyone wanting to make all their decisions on utilitarian principles (even at the expense of familial obligations), so I wondered if I've been erecting an extreme utilitarian strawman. I think I have, and I'm seeing a glimmer of a solution to the confusion. Given that we all have relationships we value, and to force ourselves to ignore those relationships in our daily activities represents negative utility, we cannot maximize utility with a moral system that requires everyone to treat everyone else as equal at all times and in all decisions. Any genuine utilitarian calculation must account for everyone's emotional satisfaction from relationship activities. (I fee
0Paul Crowley
I have skimmed it and will return to it ASAP. Thank you very much for recommending it!
1Kingreaper
Yes. But if the "serial killer" is actually someone who enjoys helping others who want to commit suicide (and who won't harm anyone when they do), are they really a bad person at all? Is shooting them really better than shooting a random person?
1SoullessAutomaton
Also, would the verdict on this question change if the people he killed had attempted but failed at suicide, or wanted to suicide but lacked the willpower to?
1Kaj_Sotala
There isn't a standard utilitarian position on such dilemmas, because there is no such thing as standard utilitarianism. Utilitarianism is a meta-ethical system, not an ethical system. It specifies the general framework by which you think about morality, but not the details. There are plenty of variations of utilitarianism - negative or positive utilitarianism, average or total utilitarianism, and so on. And there is nothing to prevent you from specifying that, in your utility function, your family members are treated preferentially to everybody else.
1steven0461
Utilitarianism is an incompletely specified ethical (not meta-ethical) system, but part of what it does specify is that everyone gets equal weight. If you're treating your family members preferentially, you may be maximizing your utility, but you're not following "utilitarianism" in that word's standard meaning.
3Paul Crowley
The SEP agrees with you:
3conchis
I'd put a slight gloss on this. The problem is that that "utilitarianism", as used in much of the literature, does seem to have more than one standard meaning. In the narrow (classical) utilitarian sense, steven0461 and the SEP are absolutely right to insist that it imposes equal weights. However, there's definitely a literature that uses the term in a more general sense, which includes weighted utilitarianism as a possibility. Contra Kaj, however, even this sense does seem to exclude agent-relative weights. As much of this literature is in economics, perhaps it's non-standard in philosophy. It does, however, have a fairly long pedigree.
0[anonymous]
I was actually uneasy about making the comment because I had a vague recollection that that might be true, but I'm not sure a definition under which "maximize Kim Jong-Il's welfare" counts as a form of utilitarianism is a good definition.
0Kaj_Sotala
Utilitarianism that includes animals vs. utilitarianism that doesn't include animals. If some people can give more / less weight to a somewhat arbitrarily defined group of subjects (animals), it doesn't seem much of a stretch to also allow some people to weight another arbitrarily chosen group (family members) more (or less). Classical utilitarianism is more strictly defined, but as you point out, we're not talking about just classical utilitarianism here.
1conchis
I don't think that's a very good example of agent-relativity. Those who would argue that only humans matter seldom (if ever) do so on the basis of agent-relative concerns: it's not that I am supposed to have a special obligation to humans because I'm human; it's that only humans are supposed to matter at all. In any event, the point wasn't that agent relative weights don't make sense, it's that they're not part of a standard definition of utilitarianism, even in a broad sense. I still think that's accurate characterization of professional usage, but if you have specific examples to the contrary, I'd be open to changing my mind. Gratuitous nitpick: humans are animals too.
1Kaj_Sotala
You may be right. But we're inching pretty close towards arguing by definition now. So to avoid that, let me rephrase my original response to mattnewport's question: You're right, by most interpretations utilitarianism does weigh everybody equally. However, if that's the only thing in utilitarianism that you disagree with, and like the ethical system otherwise, then go ahead and adopt as your moral system a utilitarianism-derived one that differs from normal utilitarianism only in that you weight your family more than others. It may not be utilitarianism, but why should you care about what your moral system is called?
1conchis
I completely agree with your reframing. I (mistakenly) thought your original point was a definitional one, and that we had been discussing definitions the entire time. Apologies.
0Kaj_Sotala
No problem. It happens.
2MBlume
For just a moment I was thinking "How is the Somebody Else's Problem field involved?"
0AndySimpson
In utilitarianism, sometimes some animals can be more equal than others. It's just that their lives must be of greater utility for some reason. I think sentimental distinctions between people would be rejected by most utilitarians as a reason to consider them more important.
0Peter_Twieg
Utilitarianism doesn't describe how you should feel, it simply describes "the good". It's very possible that if accepting utilitarianism's implications is so abhorrent to you that the world would be a worse place because you do it (because you're unhappy, or because embracing utilitarianism might actually make you worse at promoting utility), then by all means... don't endorse it, at least not at some given level you find repugnant. This is what Derek Parfit labels a "self-effacing" philosophy, I believe. There are a variety of approaches to actually being a practicing utilitarian, however. Obviously we don't have the computational power required to properly deduce every future consequence of our actions, so at a practical level utilitarians will always support heuristics of some sort. One of these heuristics may dictate that you should always prefer serial killers to be shot over your sister for the kinds of reasons that gjm describes. This might not always lead to the right conclusion from a utilitarian perspective, but it probably wouldn't be a blameworthy one, as you did the best you could under incomplete information about the universe.

"To a Bayesian, this would be balderdash."

Um, not the 'Bayesians' here. There is a distinct failure to acknowledge that not everything is evidence regarding everything else.

If the people here wished to include the behavior of a political candidate's supporter in their evaluation of the candidate, they'd make excuses for doing so. If they wished to exclude it, they would likely pass over it in silence - or, if it were brought up, actively denigrate the idea.

Judging what is and is not evidence is an important task that has been completely ignored here.

8SoullessAutomaton
In the most literal, unbounded application of Bayesian induction, anything within the past light cone of what is being considered counts as "evidence". Clearly, an immense majority of it is all but completely independent of most propositions, but it is still evidence, however slight. Having cleared up that everything is evidence, determining the weight to give any particular piece of evidence is left as an exercise for the reader.

It seems to me that ANY moral theory is, at its root, emotive. A utilitarian in the form of "do utile things!" decides that maximizing utility feels good, and so is moral. In other words, the argument for the basic axiom of utilitarianism is "Yay utility!"

A non-emotive utilitarianism, or any consequentialist theory, could never go beyond "A implies B." That is, if people do A, the result they will get is B. Without "Yay B!" this is not an argument for doing A.

Am I missing something?

3Leonhart
If I am moved by a should-argument to an x-ism, then "Yay x-ism!" is what being moved by that argument feels like, not an additional part of the argument. Otherwise, aren't you the tortoise demanding "Yay (Yay X!)!", "Yay (Yay (Yay X!)!)!" and so on?
-1whowhowho
You seem to be assuming, without argument, that emotion is the only motivation for doing anything.
0incogn
I tend to agree with mwengler - value is not a property of physical objects or world states, but a property of an observer having unequal preferences for different possible futures. There is a risk we might be disagreeing because we are working with different interpretations of emotion. Imagine a work of fiction involving no sentient beings, not even metaphorically - can you possibly write a happy or tragic ending? Is it not first when you introduce some form of intelligence with preferences that destruction becomes bad and serenity good? And are not preferences for this over that the same as emotion?
-3mwengler
You are right, the only reason I can think of for doing anything is because I feel like it, because I want to, which is emotional. In some more detail, I think this includes doing things to avoid things I am afraid of or that I find painful, also emotional. Certainly pleasure seeking is emotional. I attribute playing sudoku to my feeling of pleasure at having my mind occupied. If you come up with something like a Kantian categorical imperative, I will tell you I don't follow categorical imperatives because I don't feel like it, and nothing in the real world of "is" seems to break when I act that way. And it does suggest to me that those who do follow a categorical imperative do it because they feel like it, the feeling of logical consistency or superiority appeals to them. Please let me know what OTHER reasons, non-emotional reasons, there are to do something.
-2whowhowho
There's no logical reason why any given entity, human or otherwise, would have to be motivated by emotion. You may be over generalising from the single example of yourself. Also, you would have to believe that highly logical, vulcan-like people are motivated by some emotion they don't show.
3Leonhart
There's a trivial "logical" reason why this could be the case - tautology - if the person you are talking to defines "emotion" as "those mental states which directly motivate behaviour". Which seems like a perfectly good starting place to me. In other words, this conversation will likely go nowhere until you taboo "emotion" so we can know what work that word does for you.
-2whowhowho
It wasn't my initial claim, and I have already pointed out that seemingly unemotional people motivate themselves somehow.
Boyi-30

Hi, I really enjoyed your essay. I also enjoyed the first half of the comments. The question it brought me to was: is there any higher utility than transformation? I was wondering if I could hear your opinion on this matter.

It seems to me that if transformation of external reality is the primary assessment of utility, then humans should rationally question their emotivism based on practical solutions. But what if the ability to transform external reality was not the primary assessment of utility? Recently I have been immersed in Confucian thinking... (read more)