This very good post! Yay Yvain! You have high karma. Please give me stock advice.
I know a guy who constructed a 10-dimensional metric space for English words, then did PCA on it. There were only 4 significant components: good-bad, calm-exciting, open-closed, basic-elaborate. They accounted for 65%, 20%, 9%, and 5% of the variance in the 10-dimensional space, leaving 1% for everything else. This means that we need only 8 adjectives in English 99% of the time.
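For concreteness, here is a toy sketch of that kind of analysis. Everything in it is a stand-in (random placeholder data, an assumed scikit-learn pipeline), since the original analysis isn't available:

    # Toy sketch: embed words in a 10-dimensional space, run PCA, and
    # inspect how much variance each component explains. The random data
    # below is a placeholder for real word vectors.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    word_vectors = rng.normal(size=(5000, 10))  # stand-in for real embeddings

    pca = PCA(n_components=10).fit(word_vectors)
    print(pca.explained_variance_ratio_)
    # On the real word data, the claim above is that the top 4 components
    # would show roughly [0.65, 0.20, 0.09, 0.05], leaving ~1% for the rest.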
So it would not be surprising to find that our morality is a quick hack on the same machinery that runs our decisions about which food to eat or which pet to adopt.
This could be explored more deeply in another post.
Of course, the battle has already been half-lost once you have a category "drugs".
Especially since the category itself is determined by governmental fiat. I once saw an ad for employment at Philip Morris with a footnote to the effect that Philip Morris is a "drug-free workplace". I'm sure they have plenty of nicotine and caffeine there; they're simply using 'drugs' to mean "things to which the federal government has already said 'boo'".
pizza is good, seafood is bad
When I say something is good or bad ("yay doggies!") it's usually a kind of shorthand:
pizza is good == pizza tastes good and is fun to make and share
seafood is bad == most cheap seafood is reprocessed offcuts and gave me food poisoning once
yay doggies == I find canine companions to be beneficial for my exercise routine, useful for home security and fun to play with.
I suspect when most people use the words 'good' and 'bad' they are using just this kind of linguistic compression. Or is your point that once a 'good' label is assigned we just increment its goodness index and forget the detailed reasoning that led us to it? Sorry, the post was an interesting read, but I'm not sure what you want me to conclude.
I'm taking an entire course called "Weird Forms of Consequentialism", so please clarify - when you say "utilitarianism", do you speak here of direct, actual-consequence, evaluative, hedonic, maximizing, aggregative, total, universal, equal, agent-neutral consequentialism?
It is one. I am taking it for its awesomeness in spite of the professor being a mean person who considered it appropriate to schedule the class from seven to nine-thirty in the evening. (His scheduling decision and his meanness are separate qualities.)
The overwhelmingly interesting thing I noticed here was that everyone seemed to accept - not explicitly, but implicitly very much - that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash.
It would be Bayesian evidence of the right sign. But its magnitude would be vanishingly tiny.
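A toy Bayes calculation, with invented probabilities, makes the point concrete:

    # Invented numbers: suppose a "bad" candidate's supporters are slightly
    # more likely to act violently than a "good" candidate's supporters.
    import math

    p_violent_given_bad = 1.1e-4   # assumed, per supporter
    p_violent_given_good = 1.0e-4  # assumed, per supporter

    likelihood_ratio = p_violent_given_bad / p_violent_given_good  # 1.1
    log_odds_shift = math.log(likelihood_ratio)  # ~0.095 nats

    # Starting from 50/50, one violent supporter moves the posterior to:
    posterior = likelihood_ratio / (1 + likelihood_ratio)
    print(round(posterior, 3))  # 0.524 -- right sign, tiny magnitude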
Another problem: our interpretation of whether to upvote or downvote something depends on how many upvotes or downvotes it already has. Here on Less Wrong we call this an information cascade. In the mind, we call it an Affective Death Spiral.
Upvoted because of this bit. It's obvious in retrospect, but I hadn't made the connection between the two concepts previously.
We sometimes do it the opposite way on LW: We'll upvote something that we wouldn't normally if it has < 0 points, because we're seeking its appropriate level rather than just voting.
I don't know that anyone downvotes something because they think it's too popular. I have refrained from voting for things that I thought had enough upvotes.
I saw a presentation where someone took thousands of English words, placed them in a high-dimensional space based on, I think, what other words they co-occurred with, ran PCA on this space, and analyzed the top 4 resulting dimensions. The top dimension was "good/bad".
How good is Eliezer at philosophy? Apparently somewhere around the level it would take to get 2486 karma.
Clearly this means I'm better at philosophy than Eliezer (2009). But to be serious, this reminds me that I need to value the karma scores of articles differently according to when they were made. The effects are pretty big. Completely irrelevant discussion threads now routinely get over 20 votes, while some insightful old writing hovers around 10 or 15.
"If we ask utilitarianism "Are drugs good or bad?" it returns: CATEGORY ERROR. Good for it."
It wouldn't be contradictory for someone to assign high utility to the presence of drugs and low utility to their absence. What you really mean is that, upon reflection, most people would not do this.
Emotivism has its problems. Notably, you can't use 'yay' and 'boo' exclamations in arguments, and they can't be reasons.
"Should I eat this apple?" Becomes simply "how do I feel about eating this apple" (or otherwise it's simply meaningless). But really there are considerations that go into the answer other than mere feelings (for example, is the apple poisonous?).
Because utilitarianism has a theory of right action and a theory of value, I don't think it's compatible with emotivism. But I haven't read much in the literature detailing this particular question, as I don't read much currently about utilitarianism.
Then the person asks why we think it's bad, and our unconscious supplies whatever rationale it thinks is most plausible and feeds it to us.
Don't blame the unconscious. It only makes up explanations when you ask for them.
My first lesson in this was when I was 17 years old, at my first programming job in the USA. I hadn't been working there very long, maybe only a week or two, and I said something or other that I hadn't thought through -- essentially making up an explanation.
The boss reprimanded me, and told me of something he called "Counter man syndrome", wherein a person behind a counter comes to believe that they know things they don't know, because, after all, they're the person behind the counter. So they can't just answer a question with "I don't know"... and thus they make something up, without really paying attention to the fact that they're making it up. Pretty soon, they don't know the difference between the facts and their own bullshit.
From then on, I never believed my own made-up explanations... at least not in the field of computers. Instead, I considered them as hypotheses.
So, it's not only a learnable skill, it can be learned quickly, at least by a 17-year-old. ;-)
Ask an anarchist. Taxation of X% means you're forced to work for X% of the year without getting paid. Therefore, since slavery is "being forced to work without pay", taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.
Nitpick: Under most current systems of taxation, you choose how much to work, and then lose a certain percentage of your income to taxes. A slave does not have the power to choose how much (or whether) to work. This is generally considered a relevant difference between taxation and slavery.
One who is not a slave is not necessarily a free man.
Indeed. One may be a woman. Or a turtle.
I would venture that emotivism can be a way of setting up short-run incentives for the achievement of sub-goals. If we think "Bayesian insights are good," we can derive some psychological satisfaction from things which, in themselves, do not have direct personal consequences.
By attaching "goodness" to things too far outside our feedback loops, like "ending hunger," we get things like counterproductive aid spending. By attaching "goodness" too strongly to subgoals close to individual feedback loops, like "publishing papers," we get a flood of inconsequential academic articles at the expense of general knowledge.
To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma.
It's not outrageous at all, actually. Affective asynchrony shows that we have independent ratings of goodness and badness, just like LW votes... but on the outside of the system, all that shows is the result of combining them.
That is, we can readily see how someone "votes" on different things in their environment, but not which inputs are being summed. And when we look at ourselves, we expect to find a single "score" on a thing.
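As a sketch of that asymmetry (the class and its names are invented here, not anything from the post):

    # Positive and negative reactions tallied separately; observers only
    # ever see the combined score, as with a karma total.
    class AffectScore:
        def __init__(self):
            self._yay = 0  # independent "goodness" rating
            self._boo = 0  # independent "badness" rating

        def upvote(self, n=1):
            self._yay += n

        def downvote(self, n=1):
            self._boo += n

        @property
        def net(self):
            # The only externally visible quantity.
            return self._yay - self._boo

    cats = AffectScore()
    cats.upvote(5)    # fun to play with
    cats.downvote(7)  # allergies
    print(cats.net)   # -2: the mixed inputs are invisible from outside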
...Emotivism also does a remarkably good job capturing the common meanings of the words "good" and "bad". An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad", "the book was good, but the movie was bad", "atheism is good, theism is bad", "evolution is good, creationism is bad", and "dogs are good, but cats are bad". Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences.
I remember coming to the sudden realization that I don't have to sort people into the "good" box or the "bad" box--people are going to have a set of traits and actions which I would divide into both, and therefore the whole person won't fit into either. I don't remember what triggered the epiphany, but I remember that it felt very liberating. I no longer had to be confused or frustrated when someone who usually annoyed me did something I approved of, or someone I liked made a choice I disagreed with.
So, you see, this idea already has a high karma score for me, so I'm upvoting it. ;)
An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad" [...] Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left".
To be annoying, "good" does have...
...that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash.
Uh, to a Bayesian this would be relevant evidence. Or do I misunderstand something?
Yes, I was being dumb. Sorry.
I see you've been upvoted anyway, so I'm likely not the only one, but I want to personally thank you for this. People being more willing to admit that they made a mistake and carry on is an excellent feature of Less Wrong and extremely rare in most online communities.
Is the usual definition of utilitarianism taken to weight the outcomes for all people equally? While utilitarian arguments often lead to conclusions I agree with, I can't endorse a moral system that seems to say I should be indifferent to a choice between my sister being shot and a serial killer being shot. Is there a standard utilitarian position on such dilemmas?
"To a Bayesian, this would be balderdash."
Um, not the 'Bayesians' here. There is a distinct failure to acknowledge that not everything is evidence regarding everything else.
If the people here wished to include the behavior of a political candidate's supporter in their evaluation of the candidate, they'd make excuses for doing so. If they wished to exclude it, they would likely pass over it in silence - or, if it were brought up, actively denigrate the idea.
Judging what is and is not evidence is an important task that has been completely ignored here.
It seems to me that ANY moral theory is, at its root, emotive. A utilitarian in the form of "do utile things!" decides that maximizing utility feels good, and so is moral. In other words, the argument for the basic axiom of utilitarianism is "Yay utility!"
A non-emotive utilitarianism, or any consequentialist theory, could never go beyond "A implies B." That is, if people do A, the result they will get is B. Without "Yay B!" this is not an argument for doing A.
Am I missing something?
Hi, I really enjoyed your essay. I also enjoyed the first half of the comments. The question it brought me to was: is there no higher utility than transformation? I was wondering if I could hear your opinion on this matter.
It seems to me that if transformation of external reality is the primary assessment of utility, then humans should rationally question their emotivism based on practical solutions. But what if the ability to transform external reality were not the primary assessment of utility? Recently I have been immersed in Confucian thinking...
Related to: How An Algorithm Feels From Inside, The Affect Heuristic, The Power of Positivist Thinking
I am a normative utilitarian and a descriptive emotivist: I believe utilitarianism is the correct way to resolve moral problems, but that the normal mental algorithms for resolving moral problems use emotivism.
Emotivism, aka the yay/boo theory, is the belief that moral statements, however official they may sound, are merely personal opinions of preference or dislike. Thus, "feeding the hungry is a moral duty" corresponds to "yay for feeding the hungry!" and "murdering kittens is wrong" corresponds to "boo for kitten murderers!"
Emotivism is a very nice theory of what people actually mean when they make moral statements. Billions of people around the world, even the non-religious, happily make moral statements every day without having any idea what they reduce to or feeling like they ought to reduce to anything.
Emotivism also does a remarkably good job capturing the common meanings of the words "good" and "bad". An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad", "the book was good, but the movie was bad", "atheism is good, theism is bad", "evolution is good, creationism is bad", and "dogs are good, but cats are bad". Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left". It feels like they're all just the same thing. The moral theory that captures that feeling is emotivism. Yay pizza, books, Israelis, atheists, dogs, and evolution! Boo seafood, Palestinians, movies, theists, creationism, and cats!
Remember, evolution is a crazy tinker who recycles everything. So it would not be surprising to find that our morality is a quick hack on the same machinery that runs our decisions about which food to eat or which pet to adopt. To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote "cats" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote "Palestinians" a few points. Richard Dawkins just said something especially witty, so you up-vote "atheism". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.1
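To push the metaphor one step further, here is a toy sketch of such a scoring system (all names and numbers are invented for illustration):

    # One running score per concept, bumped by each new association, with
    # the sign of the total driving approach or avoidance.
    karma = {"cats": 0, "Palestinians": 0, "atheism": 0}

    karma["cats"] -= 2          # you're allergic
    karma["Palestinians"] -= 3  # news of a terrorist attack
    karma["atheism"] += 1       # Dawkins said something witty

    for concept, score in karma.items():
        verdict = "seek/endorse it" if score > 0 else "avoid/condemn it"
        print(concept, score, verdict)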
Remember back during the presidential election, when a McCain supporter claimed that an Obama supporter attacked her and carved a "B" on her face with a knife? This was HUGE news. All of my Republican friends started emailing me and saying "Hey, did you hear about this, this proves we've been right all along!" And all my Democratic friends were grumbling and saying how it was probably made up and how we should all just forget the whole thing.
And then it turned out it WAS all made up, and the McCain supporter had faked the whole affair. And now all of my Democratic friends started emailing me and saying "Hey, did you hear about this, it shows what those Republicans and McCain supporters are REALLY like!" and so on, and the Republicans were trying to bury it as quickly as possible.
The overwhelmingly interesting thing I noticed here was that everyone seemed to accept - not explicitly, but implicitly very much - that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash. But to an emotivist, where any bad feelings associated with Obama count against him, it sort of makes sense. All those people emailing me about this were saying: Look, here is something negative associated with Obama; downvote him!2
So this is one problem: the inputs to our mental karma system aren't always closely related to the real merit of a person/thing/idea.
Another problem: our interpretation of whether to upvote or downvote something depends on how many upvotes or downvotes it already has. Here on Less Wrong we call this an information cascade. In the mind, we call it an Affective Death Spiral.
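A toy simulation of such a cascade (every parameter here is invented): each voter's chance of upvoting rises with the score they already see, so a small early push tends to snowball.

    import random

    def run(initial_score, voters=1000, base=0.5, herding=0.02):
        score = initial_score
        for _ in range(voters):
            # Probability of an upvote grows with the current score.
            p_up = min(max(base + herding * score, 0.0), 1.0)
            score += 1 if random.random() < p_up else -1
        return score

    random.seed(0)
    print(run(+3))  # a small early push tends to snowball upward
    print(run(-3))  # a small early hit tends to snowball downward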
Another problem: we are tempted to assign everything about a concept the same score. Eliezer Yudkowsky currently has 2486 karma. How good is Eliezer at philosophy? Apparently somewhere around the level it would take to get 2486 karma. How much does he know about economics? Somewhere around level 2486 would be my guess. How well does he write? Probably well enough to get 2486 karma. Translated into mental terms, this looks like the Halo Effect. Yes, we can pick apart our analyses in greater detail; having read Eliezer's posts, I know he's better at some things than others. But that 2486 number is going to cause anchoring-and-adjustment issues even so.
But the big problem, the world-breaking problem, is that sticking everything good and bad about something into one big bin and making decisions based on whether it's a net positive or a net negative is an unsubtle, leaky heuristic completely unsuitable for complicated problems.
Take gun control. Are guns good or bad? My gut-level emotivist response is: bad. They're loud and scary and dangerous and they shoot people and often kill them. It is very tempting to say: guns are bad, therefore we should have fewer of them, therefore gun control. I'm not saying gun control is therefore wrong: reversed stupidity is not intelligence. I'm just saying that before you can rationally consider whether or not gun control is wrong, you need to get past this mode of thinking about the problem.
In the hopes of using theism less often, a bunch of Less Wrongers have agreed that the War on Drugs would make a good stock example of irrationality. So, why is the War on Drugs so popular? I think it's because drugs are obviously BAD. They addict people, break up their families, destroy their health, drive them into poverty, and eventually kill them. If we've got to have a category "drugs"3, and we've got to call it either "good" or "bad", then "bad" is clearly the way to go. And if drugs are bad, getting rid of them would be good! Right?
So how do we avoid all of these problems?
I said at the very beginning that I think we should switch to solving moral problems through utilitarianism. But we can't do that directly. If we ask utilitarianism "Are drugs good or bad?" it returns: CATEGORY ERROR. Good for it.
Utilitarianism can only be applied to states, actions, or decisions, and it can only return a comparative result. Want to know whether stopping or diverting the trolley in the Trolley Problem would be better? Utilitarianism can tell you. That's because it's a decision between two alternatives (alternate way of looking at it: two possible actions; or two possible states) and all you need to do is figure out which of the two is higher utility.
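As a minimal sketch of that comparative structure (the utility numbers below are placeholders; utilitarianism itself doesn't hand them to you):

    # Utilitarianism as a comparison between candidate actions,
    # never a verdict on a category.
    def better_action(utilities):
        # Return whichever action leads to the higher-utility state.
        return max(utilities, key=utilities.get)

    # Placeholder trolley numbers: utility = -(expected deaths).
    print(better_action({"do nothing": -5, "divert the trolley": -1}))
    # -> 'divert the trolley'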
When people say "Utilitarianism says slavery is bad" or "Utilitarianism says murder is wrong" - well, a utilitarian would endorse those statements over their opposites, but it takes a lot of interpretation first. What utilitarianism properly says is "In this particular situation, the action of freeing the slaves leads to a higher utility state than not doing so" and possibly "and the same would be true of any broadly similar situation".
But why in blue blazes can't we just go ahead and say "slavery is bad"? What could possibly go wrong?
Ask an anarchist. Taxation of X% means you're forced to work for X% of the year without getting paid. Therefore, since slavery is "being forced to work without pay", taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.4
(again, reversed stupidity is not intelligence. There are good arguments against taxation. But this is not one of them.)
Emotivism is the native architecture of the human mind. No one can think like a utilitarian all the time. But when you are in an Irresolvable Debate, utilitarian thinking may become necessary to avoid dangling variable problems around the word "good" (cf. Islam is a religion of peace). Problems that are insoluble at the emotivist level can be reduced, simplified, and resolved on the utilitarian level with enough effort.
I've used the example before, and I'll use it again. Israel versus Palestine. One person can go on and on for months about all the reasons the Israelis are totally right and the Palestinians are completely in the wrong, and another person can go on just as long about how the Israelis are evil oppressors and the Palestinians just want freedom. And then if you ask them about an action, or a decision, or a state - they've never thought about it. They'll both answer something like "I dunno, the two-state solution or something?". And if they still disagree at this level, you can suddenly apply the full power of utilitarianism to the problem in a way that tugs sideways to all of their personal prejudices.
In general, any debate about whether something is "good" or "bad" is sketchy, and can be changed to a more useful form by converting the thing to an action and applying utilitarianism.
Footnotes:
1: It should be noted that this karma analogy can't explain our original perception of good and bad, only the system we use for combining, processing, and utilizing it. My guess is that the original judgment of good or bad takes place through association with other previously determined good or bad things, down to the bottom-level ones that are programmed into the organism (i.e. pain, hunger, death), with some input from the rational centers.
2: More evidence: we tend to like the idea of "good" or "bad" being innate qualities of objects. Thus the alternative medicine practitioner who tells you that real medicine is bad, because it uses scary pungent chemicals, which are unhealthy, and alternative medicine is good, because it uses roots and plants and flowers, which everyone likes. Or fantasy books, where the Golden Sword of Holy Light can only be wielded for good, and the Dark Sword of Demonic Shadow can only be wielded for evil.
3: Of course, the battle has already been half-lost once you have a category "drugs". Eliezer once mentioned something about how considering {Adolf Hitler, Joe Stalin, John Smith} a natural category isn't going to do John Smith any good, no matter how nice a man he may be. In the category "drugs", which looks like {cocaine, heroin, LSD, marijuana}, LSD and marijuana get to play the role of John Smith.
4: And, uh, I'm sure Louis XVI would feel the same way. Sorry. I couldn't think of a better example.