Related to: How An Algorithm Feels From Inside, The Affect Heuristic, The Power of Positivist Thinking
I am a normative utilitarian and a descriptive emotivist: I believe utilitarianism is the correct way to resolve moral problems, but that the normal mental algorithms for resolving moral problems use emotivism.
Emotivism, aka the yay/boo theory, is the belief that moral statements, however official they may sound, are merely personal opinions of preference or dislike. Thus, "feeding the hungry is a moral duty" corresponds to "yay for feeding the hungry!" and "murdering kittens is wrong" corresponds to "boo for kitten murderers!"
Emotivism is a very nice theory of what people actually mean when they make moral statements. Billions of people around the world, even the non-religious, happily make moral statements every day without having any idea what they reduce to or feeling like they ought to reduce to anything.
Emotivism also does a remarkably good job capturing the common meanings of the words "good" and "bad". An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad", "the book was good, but the movie was bad", "atheism is good, theism is bad", "evolution is good, creationism is bad", and "dogs are good, but cats are bad". Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left". It feels like they're all just the same thing. The moral theory that captures that feeling is emotivism. Yay pizza, books, Israelis, atheists, dogs, and evolution! Boo seafood, Palestinians, movies, theists, creationism, and cats!
Remember, evolution is a crazy tinker who recycles everything. So it would not be surprising to find that our morality is a quick hack on the same machinery that runs our decisions about which food to eat or which pet to adopt. To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote "cats" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote "Palestinians" a few points. Richard Dawkins just said something especially witty, so you up-vote "atheism". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.1
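To make the karma metaphor concrete, here is a minimal sketch of the heuristic in Python. Everything in it (the concept names, the point values, the sign-check decision rule) is hypothetical, chosen only to illustrate the shape of the algorithm:

```python
from collections import defaultdict

# Hypothetical mental "karma" table: one running score per concept.
karma = defaultdict(int)

def observe(concept, affect):
    """Nudge a concept's score by how an experience felt,
    not by how much evidence it actually carries."""
    karma[concept] += affect

def act_toward(concept):
    """Decisions reduce to a sign check on the net score."""
    return "seek/endorse" if karma[concept] > 0 else "avoid/condemn"

observe("cats", -2)          # allergic reaction
observe("Palestinians", -3)  # one terrorist attack on the news
observe("atheism", +1)       # Dawkins said something witty

print(act_toward("cats"))    # -> "avoid/condemn"
```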
Remember back during the presidential election, when a McCain supporter claimed that an Obama supporter attacked her and carved a "B" on her face with a knife? This was HUGE news. All of my Republican friends started emailing me and saying "Hey, did you hear about this, this proves we've been right all along!" And all my Democratic friends were grumbling and saying how it was probably made up and how we should all just forget the whole thing.
And then it turned out it WAS all made up, and the McCain supporter had faked the whole affair. And now all of my Democratic friends started emailing me and saying "Hey, did you hear about this, it shows what those Republicans and McCain supporters are REALLY like!" and so on, and the Republicans were trying to bury it as quickly as possible.
The overwhelmingly interesting thing I noticed here was that everyone seemed to accept - not explicitly, but implicitly very much - that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash. But to an emotivist, where any bad feelings associated with Obama count against him, it sort of makes sense. All those people emailing me about this were saying: Look, here is something negative associated with Obama; downvote him!2
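To put rough numbers on why the Bayesian shrugs (these rates are invented purely for illustration): with millions of supporters on each side, one person behaving badly amounts to a likelihood ratio of approximately one.

```python
# Hypothetical per-supporter rates of violent behavior under the
# hypothesis "Obama's movement is rotten" vs. "it's ordinary".
# Both numbers are made up; the point is how close they are.
p_violent_if_rotten   = 0.00011
p_violent_if_ordinary = 0.00010

likelihood_ratio = p_violent_if_rotten / p_violent_if_ordinary
print(likelihood_ratio)  # ~1.1 -- a rounding error, not vindication
```

The emotivist update, by contrast, ignores base rates entirely: negative affect arrives, the score goes down, done.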
So this is one problem: the inputs to our mental karma system aren't always closely related to the real merit of a person/thing/idea.
Another problem: our interpretation of whether to upvote or downvote something depends on how many upvotes or downvotes it already has. Here on Less Wrong we call this an information cascade. In the mind, we call it an Affective Death Spiral.
Another problem: we are tempted to assign everything about a concept the same score. Eliezer Yudkowsky currently has 2486 karma. How good is Eliezer at philosophy? Apparently somewhere around the level it would take to get 2486 karma. How much does he know about economics? Somewhere around level 2486 would be my guess. How well does he write? Probably well enough to get 2486 karma. Translated into mental terms, this looks like the Halo Effect. Yes, we can pick apart our analyses in greater detail; having read Eliezer's posts, I know he's better at some things than others. But that 2486 number is going to cause anchoring-and-adjustment issues even so.
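Both of these last two problems fall out of the same toy model, extended with a score-dependent update rule (again, the numbers are hypothetical):

```python
def update(score, new_evidence):
    # Affective death spiral: the weight we give new evidence depends
    # on the score the concept already has, so high scores feed themselves.
    bias = 0.1 * score
    return score + new_evidence + bias

score = 5.0
for _ in range(10):
    score = update(score, +1.0)   # mildly positive inputs...
print(round(score, 1))            # ...runaway output (~28.9)

# Halo effect: one scalar answers every unrelated question.
def how_good_at(concept_score, skill):
    return concept_score  # philosophy? economics? writing? same number
```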
But the big problem, the world-breaking problem, is that sticking everything good and bad about something into one big bin and making decisions based on whether it's a net positive or a net negative is an unsubtle, leaky heuristic completely unsuitable for complicated problems.
Take gun control. Are guns good or bad? My gut-level emotivist response is: bad. They're loud and scary and dangerous and they shoot people and often kill them. It is very tempting to say: guns are bad, therefore we should have fewer of them, therefore gun control. I'm not saying gun control is therefore wrong: reversed stupidity is not intelligence. I'm just saying that before you can rationally consider whether or not gun control is wrong, you need to get past this mode of thinking about the problem.
In the hopes of invoking theism as an example less often, a bunch of Less Wrongers have agreed that the War on Drugs would make a good stock example of irrationality. So, why is the War on Drugs so popular? I think it's because drugs are obviously BAD. They addict people, break up their families, destroy their health, drive them into poverty, and eventually kill them. If we've got to have a category "drugs"3, and we've got to call it either "good" or "bad", then "bad" is clearly the way to go. And if drugs are bad, getting rid of them would be good! Right?
So how do we avoid all of these problems?
I said at the very beginning that I think we should switch to solving moral problems through utilitarianism. But we can't do that directly. If we ask utilitarianism "Are drugs good or bad?" it returns: CATEGORY ERROR. Good for it.
Utilitarianism can only be applied to states, actions, or decisions, and it can only return a comparative result. Want to know whether stopping or diverting the trolley in the Trolley Problem would be better? Utilitarianism can tell you. That's because it's a decision between two alternatives (alternate way of looking at it: two possible actions; or two possible states) and all you need to do is figure out which of the two is higher utility.
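A toy version of this interface, in code (the utility function and its numbers are invented; the point is the type signature, which only accepts comparisons between fully specified outcomes):

```python
def utility(state):
    """Hypothetical utility function over concrete world-states."""
    return state["lives_saved"] * 10 - state["harms"]

def better_action(outcome_a, outcome_b):
    """Utilitarianism compares two outcomes; that is all it does."""
    return "A" if utility(outcome_a) > utility(outcome_b) else "B"

def is_good(thing):
    raise TypeError("CATEGORY ERROR: utilitarianism ranks decisions "
                    "between states; it does not rate things in isolation")

# The trolley problem as a two-way comparison:
stay_the_course = {"lives_saved": 0, "harms": 5}
divert          = {"lives_saved": 4, "harms": 1}
print(better_action(divert, stay_the_course))  # -> "A": divert
```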
When people say "Utilitarianism says slavery is bad" or "Utilitarianism says murder is wrong" - well, a utilitarian would endorse those statements over their opposites, but it takes a lot of interpretation first. What utilitarianism properly says is "In this particular situation, the action of freeing the slaves leads to a higher utility state than not doing so" and possibly "and the same would be true of any broadly similar situation".
But why in blue blazes can't we just go ahead and say "slavery is bad"? What could possibly go wrong?
Ask an anarchist. Taxation of X% means you're forced to work for X% of the year without getting paid. Therefore, since slavery is "being forced to work without pay", taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.4
(again, reversed stupidity is not intelligence. There are good arguments against taxation. But this is not one of them.)
Emotivism is the native architecture of the human mind. No one can think like a utilitarian all the time. But when you are in an Irresolvable Debate, utilitarian thinking may become necessary to avoid dangling variable problems around the word "good" (cf. Islam is a religion of peace). Problems that are insoluble at the emotivist level can be reduced, simplified, and resolved on the utilitarian level with enough effort.
I've used the example before, and I'll use it again. Israel versus Palestine. One person can go on and on for months about all the reasons the Israelis are totally right and the Palestinians are completely in the wrong, and another person can go on just as long about how the Israelis are evil oppressors and the Palestinians just want freedom. And then if you ask them about an action, or a decision, or a state - they've never thought about it. They'll both answer something like "I dunno, the two-state solution or something?". And if they still disagree at this level, you can suddenly apply the full power of utilitarianism to the problem in a way that tugs sideways to all of their personal prejudices.
In general, any debate about whether something is "good" or "bad" is sketchy, and can be changed to a more useful form by converting the thing to an action and applying utilitarianism.
Footnotes:
1: It should be noted that this karma analogy can't explain our original perception of good and bad, only the system we use for combining, processing, and utilizing it. My guess is that the original judgment of good or bad takes place through association with other previously determined good or bad things, down to the bottom-level ones programmed into the organism (i.e. pain, hunger, death), with some input from the rational centers.
2: More evidence: we tend to like the idea of "good" or "bad" being innate qualities of objects. Thus the alternative medicine practitioner who tells you that real medicine is bad, because it uses scary pungent chemicals, which are unhealthy, and alternative medicine is good, because it uses roots and plants and flowers, which everyone likes. Or fantasy books, where the Golden Sword of Holy Light can only be wielded for good, and the Dark Sword of Demonic Shadow can only be wielded for evil.
3: Of course, the battle has already been half-lost once you have a category "drugs". Eliezer once mentioned something about how considering {Adolf Hitler, Joe Stalin, John Smith} a natural category isn't going to do John Smith any good, no matter how nice a man he may be. In the category "drugs", which looks like {cocaine, heroin, LSD, marijuana}, LSD and marijuana get to play the role of John Smith.
4: And, uh, I'm sure Louis XVI would feel the same way. Sorry. I couldn't think of a better example.