
The Trouble With "Good"

83 points - Post author: Yvain 17 April 2009 02:07AM

Related to: How An Algorithm Feels From Inside, The Affect Heuristic, The Power of Positivist Thinking

I am a normative utilitarian and a descriptive emotivist: I believe utilitarianism is the correct way to resolve moral problems, but that the normal mental algorithms for resolving moral problems use emotivism.

Emotivism, aka the yay/boo theory, is the belief that moral statements, however official they may sound, are merely personal opinions of preference or dislike. Thus, "feeding the hungry is a moral duty" corresponds to "yay for feeding the hungry!" and "murdering kittens is wrong" corresponds to "boo for kitten murderers!"

Emotivism is a very nice theory of what people actually mean when they make moral statements. Billions of people around the world, even the non-religious, happily make moral statements every day without having any idea what they reduce to or feeling like they ought to reduce to anything.

Emotivism also does a remarkably good job capturing the common meanings of the words "good" and "bad". An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad", "the book was good, but the movie was bad", "atheism is good, theism is bad", "evolution is good, creationism is bad", and "dogs are good, but cats are bad". Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left". It feels like they're all just the same thing. The moral theory that captures that feeling is emotivism. Yay pizza, books, Israelis, atheists, dogs, and evolution! Boo seafood, Palestinians, movies, theists, creationism, and cats!

Remember, evolution is a crazy tinker who recycles everything. So it would not be surprising to find that our morality is a quick hack on the same machinery that runs our decisions about which food to eat or which pet to adopt. To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote "cats" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote "Palestinians" a few points. Richard Dawkins just said something especially witty, so you up-vote "atheism". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.1
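
(To make the metaphor concrete, here is a minimal sketch, in Python, of the single-score bookkeeping described above. The categories and point values are invented for illustration, not taken from the post.)

    # A toy model of the "mental karma" heuristic described above: every
    # experience nudges one running score per concept, and later decisions
    # consult only the score, not the reasons that produced it.
    scores = {}

    def vote(concept, points, reason):
        """Adjust a concept's score; the reason is promptly forgotten."""
        scores[concept] = scores.get(concept, 0) + points

    def attitude(concept):
        """High score: seek/endorse it. Low score: avoid/condemn it."""
        s = scores.get(concept, 0)
        return "seek/endorse" if s > 0 else ("avoid/condemn" if s < 0 else "indifferent")

    vote("cats", -2, "allergic to them")
    vote("Palestinians", -3, "heard about a terrorist attack")
    vote("atheism", +1, "Dawkins said something witty")

    print(attitude("cats"))     # avoid/condemn
    print(attitude("atheism"))  # seek/endorse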

Remember back during the presidential election, when a McCain supporter claimed that an Obama supporter attacked her and carved a "B" on her face with a knife? This was HUGE news. All of my Republican friends started emailing me and saying "Hey, did you hear about this, this proves we've been right all along!" And all my Democratic friends were grumbling and saying how it was probably made up and how we should all just forget the whole thing.

And then it turned out it WAS all made up, and the McCain supporter had faked the whole affair. And now all of my Democratic friends started emailing me and saying "Hey, did you hear about this, it shows what those Republicans and McCain supporters are REALLY like!" and so on, and the Republicans were trying to bury it as quickly as possible.

The overwhelmingly interesting thing I noticed here was that everyone seemed to accept - not explicitly, but implicitly very much - that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash. But to an emotivist, where any bad feelings associated with Obama count against him, it sort of makes sense. All those people emailing me about this were saying: Look, here is something negative associated with Obama; downvote him!2

So this is one problem: the inputs to our mental karma system aren't always closely related to the real merit of a person/thing/idea.

Another problem: our interpretation of whether to upvote or downvote something depends on how many upvotes or downvotes it already has. Here on Less Wrong we call this an information cascade. In the mind, we call it an Affective Death Spiral.

Another problem: we are tempted to assign everything about a concept the same score. Eliezer Yudkowsky currently has 2486 karma. How good is Eliezer at philosophy? Apparently somewhere around the level it would take to get 2486 karma. How much does he know about economics? Somewhere around level 2486 would be my guess. How well does he write? Probably well enough to get 2486 karma. Translated into mental terms, this looks like the Halo Effect. Yes, we can pick apart our analyses in greater detail; having read Eliezer's posts, I know he's better at some things than others. But that 2486 number is going to cause anchoring-and-adjustment issues even so.

But the big problem, the world-breaking problem, is that sticking everything good and bad about something into one big bin and making decisions based on whether it's a net positive or a net negative is an unsubtle, leaky heuristic completely unsuitable for complicated problems.

Take gun control. Are guns good or bad? My gut-level emotivist response is: bad. They're loud and scary and dangerous and they shoot people and often kill them. It is very tempting to say: guns are bad, therefore we should have fewer of them, therefore gun control. I'm not saying gun control is therefore wrong: reversed stupidity is not intelligence. I'm just saying that before you can rationally consider whether or not gun control is wrong, you need to get past this mode of thinking about the problem.

In the hopes of using theism less often, a bunch of Less Wrongers have agreed that the War on Drugs would make a good stock example of irrationality. So, why is the War on Drugs so popular? I think it's because drugs are obviously BAD. They addict people, break up their families, destroy their health, drive them into poverty, and eventually kill them. If we've got to have a category "drugs"3, and we've got to call it either "good" or "bad", then "bad" is clearly the way to go. And if drugs are bad, getting rid of them would be good! Right?

So how do we avoid all of these problems?

I said at the very beginning that I think we should switch to solving moral problems through utilitarianism. But we can't do that directly. If we ask utilitarianism "Are drugs good or bad?" it returns: CATEGORY ERROR. Good for it.

Utilitarianism can only be applied to states, actions, or decisions, and it can only return a comparative result. Want to know whether stopping or diverting the trolley in the Trolley Problem would be better? Utilitarianism can tell you. That's because it's a decision between two alternatives (alternate way of looking at it: two possible actions; or two possible states) and all you need to do is figure out which of the two is higher utility.
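
(A minimal sketch of what "only a comparative result" means in practice; the utility numbers below are invented purely for illustration.)

    # Utilitarianism, as described here, is a comparison over candidate
    # actions (or states), not a predicate on concepts.
    def better_action(actions, utility):
        """Return whichever candidate action leads to the higher-utility state."""
        return max(actions, key=utility)

    # Trolley problem with invented utilities: five deaths versus one death.
    trolley = {"do nothing": -5.0, "divert the trolley": -1.0}
    print(better_action(trolley, trolley.get))  # divert the trolley

    # "Are drugs good or bad?" supplies no pair of actions or states to
    # compare, so it simply doesn't fit this signature: a category error.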

When people say "Utilitarianism says slavery is bad" or "Utilitarianism says murder is wrong" - well, a utilitarian would endorse those statements over their opposites, but it takes a lot of interpretation first. What utilitarianism properly says is "In this particular situation, the action of freeing the slaves leads to a higher utility state than not doing so" and possibly "and the same would be true of any broadly similar situation".

But why in blue blazes can't we just go ahead and say "slavery is bad"? What could possibly go wrong?

Ask an anarchist. Taxation of X% means you're forced to work for X% of the year without getting paid. Therefore, since slavery is "being forced to work without pay", taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.4

(again, reversed stupidity is not intelligence. There are good arguments against taxation. But this is not one of them.)

Emotivism is the native architecture of the human mind. No one can think like a utilitarian all the time. But when you are in an Irresolvable Debate, utilitarian thinking may become necessary to avoid dangling variable problems around the word "good" (cf. Islam is a religion of peace). Problems that are insoluble at the emotivist level can be reduced, simplified, and resolved on the utilitarian level with enough effort.

I've used the example before, and I'll use it again. Israel versus Palestine. One person can go on and on for months about all the reasons the Israelis are totally right and the Palestinians are completely in the wrong, and another person can go on just as long about how the Israelis are evil oppressors and the Palestinians just want freedom. And then if you ask them about an action, or a decision, or a state - they've never thought about it. They'll both answer something like "I dunno, the two-state solution or something?". And if they still disagree at this level, you can suddenly apply the full power of utilitarianism to the problem in a way that tugs sideways to all of their personal prejudices.

In general, any debate about whether something is "good" or "bad" is sketchy, and can be changed to a more useful form by converting the thing to an action and applying utilitarianism.

Footnotes:

1: It should be noted that this karma analogy can't explain our original perception of good and bad, only the system we use for combining, processing and utilizing it. My guess is that the original judgment of good or bad takes place through association with other previously determined good or bad things, down to the bottom-level ones, which are programmed into the organism (i.e. pain, hunger, death), with some input from the rational centers.

2: More evidence: we tend to like the idea of "good" or "bad" being innate qualities of objects. Thus the alternative medicine practitioner who tells you that real medicine is bad, because it uses scary pungent chemicals, which are unhealthy, and alternative medicine is good, because it uses roots and plants and flowers, which everyone likes. Or fantasy books, where the Golden Sword of Holy Light can only be wielded for good, and the Dark Sword of Demonic Shadow can only be wielded for evil.

3: Of course, the battle has already been half-lost once you have a category "drugs". Eliezer once mentioned something about how considering {Adolf Hitler, Joe Stalin, John Smith} a natural category isn't going to do John Smith any good, no matter how nice a man he may be. In the category "drugs", which looks like {cocaine, heroin, LSD, marijuana}, LSD and marijuana get to play the role of John Smith.

4: And, uh, I'm sure Louis XVI would feel the same way. Sorry. I couldn't think of a better example.

Comments (132)

Comment author: PhilGoetz 17 April 2009 05:38:35PM *  21 points [-]

This very good post! Yay Yvain! You have high karma. Please give me stock advice.

I know a guy who constructed a 10-dimensional metric space for English words, then did PCA on it. There were only 4 significant components: good-bad, calm-exciting, open-closed, basic-elaborate. They accounted for 65%, 20%, 9%, and 5% of the variance in the 10-dimensional space, leaving 1% for everything else. This means that we need only 8 adjectives in English 99% of the time.
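
(For readers unfamiliar with the technique, a minimal sketch of how those percentages are read off with an off-the-shelf PCA; the data here is random placeholder data, so it only shows the mechanics and will not reproduce the good-bad component.)

    # Sketch of the analysis described: place items in a 10-dimensional
    # space, run PCA, and read off how much variance each component explains.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))  # placeholder for 1000 words x 10 ratings

    pca = PCA(n_components=10).fit(X)
    for i, ratio in enumerate(pca.explained_variance_ratio_, start=1):
        print(f"component {i}: {ratio:.1%} of variance")

    # In the experiment described above, the first four components reportedly
    # explained about 65%, 20%, 9%, and 5% of the variance.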

So it would not be surprising to find that our morality is a quick hack on the same machinery that runs our decisions about which food to eat or which pet to adopt.

This could be explored more deeply in another post.

Comment author: Yvain 02 May 2009 10:28:42PM 8 points [-]

Sorry, I didn't see this until today.

Can you give me a link to some more formal description of this? I don't understand how you would use a ten dimensional metric space to capture English words without reducing them to a few broad variables, which seems to be what he's claiming as a result.

Comment author: hylleddin 05 December 2013 05:51:43AM 2 points [-]

This is a long time after the fact, but I found this.

Comment author: jmmcd 09 November 2010 07:22:31AM 7 points [-]

This means that we need only 8 adjectives in English 99% of the time.

Awesome

Comment author: Peter_de_Blanc 17 April 2009 09:56:32PM *  1 point [-]

Are you talking about Alexei Samsonovich? I saw a very similar experiment that he did.

Comment author: nazgulnarsil 17 April 2009 08:38:17PM 0 points [-]

I agree that it could use more exploration. I suspect that many of our biases stem from simple preference ranking errors.

Comment author: FlakAttack 19 April 2009 06:09:25AM 1 point [-]

I'm pretty sure I actually saw this in a philosophy textbook, which would mean there are likely observations or studies on the subject.

Comment author: andrewc 17 April 2009 04:00:25AM *  9 points [-]

pizza is good, seafood is bad

When I say something is good or bad ("yay doggies!") it's usually a kind of shorthand:

pizza is good == pizza tastes good and is fun to make and share

seafood is bad == most cheap seafood is reprocessed offcuts and gave me food poisoning once

yay doggies == I find canine companions to be beneficial for my exercise routine, useful for home security and fun to play with.

I suspect when most people use the words 'good' and 'bad' they are using just this kind of linguistic compression. Or is your point that once a 'good' label is assigned we just increment its goodness index and forget the detailed reasoning that led us to it? Sorry, the post was an interesting read but I'm not sure what you want me to conclude.

Comment author: jimrandomh 17 April 2009 04:33:40AM 4 points [-]

Or is your point that once a 'good' label is assigned we just increment its goodness index and forget the detailed reasoning that led us to it?

Exactly that. We may be able to recall our reasoning if we try to, but we're likely to throw in a few extra false justifications on top, and to forget about the other side.

Comment author: andrewc 17 April 2009 06:23:24AM 2 points [-]

OK, 'compression' is the wrong analogy as it implies that we don't lose any information. I'm not sure this is always a bad thing. I might have use of a particular theorem. Being the careful sort, I work through the proof. Satisfied, I add the theorem to my grab bag of tricks (yay product rule!). In a couple of weeks (hours even...) I have forgotten the details of the proof, but I have enough confidence in my own upvote of the theorem to keep using it. The details are no longer relevant unless some other evidence comes along that brings the theorem, and thus the 'proof' into question.

Comment author: ciphergoth 17 April 2009 01:23:47PM 6 points [-]

This drives me crazy when it happens to me.

  • Someone: "Shall we invite X?"
  • Me: "No, X is bad news. I can't remember at all how I came to this conclusion, but I recently observed something and firmly set a bad news flag against X."
Comment author: arthurlewis 18 April 2009 12:09:43AM 6 points [-]

Those kinds of flags are the only way I can remember what I like. My memory is poor enough that I lose most details about books and movies within a few months, but if I really liked something, that 5-Yay rating sticks around for years.

Hmm, I guess that's why part of my brain still thinks Moulin Rouge, which I saw on a very enjoyable date, and never really had the urge to actually watch again, is one of my favorite movies.

Compression seems a fine analogy to me, as long as we're talking about mp3's and flv's, rather than zip's and tar's.

Comment author: whpearson 18 April 2009 09:40:06PM 1 point [-]

I think of it as memoisation rather than compression.

Comment author: AndySimpson 17 April 2009 11:39:24AM 0 points [-]

I'm not sure this is always a bad thing.

It may be useful shorthand to say "X is good", but when we forget the specific boundaries of that statement and only remember the shorthand, it becomes a liability. When we decide that the statement "Bayes' Theorem is valid, true, and useful in updating probabilities" collapses into "Bayes' Theorem is good," we invite the abuse of Bayes' Theorem.

So I wouldn't say it's always a bad thing, but I'd say it introduces unnecessary ambiguity and contributes to sub-optimal moral reasoning.

Comment author: janos 17 April 2009 02:22:00PM 2 points [-]

Do you have some good examples of abuse of Bayes' theorem?

Comment author: AndySimpson 17 April 2009 03:15:32PM 0 points [-]

That is a good question for a statistician, and I am not a statistician.

One thing that leaps to mind, however, is two-boxing on Newcomb's Problem using assumptions about the prior probability of box B containing $1,000,000. Some new work using math that I don't begin to understand suggests that either response to Newcomb's problem is defensible using Bayesian nets.

There could be more trivial cases, too, where a person inputs unreasonable prior probabilities and uses cargo-cult statistics to support some assertion.

Also, it's struck me that a frequentist statistician might call most Bayesian uses of the theorem "abuses."

I'm not sure those are really good examples, but I hope they're satisfying.

Comment author: SoullessAutomaton 17 April 2009 12:11:39PM 0 points [-]

Exactly that. We may be able to recall our reasoning if we try to, but we're likely to throw in a few extra false justifications on top, and to forget about the other side.

I suspect it's more likely that we won't remember it at all; we'd simply increase the association between the thing and goodness and, if looking for a reason, will rationalize one on the spot. Our minds are very good at coming up with explanations but not good at remembering details.

Of course, if your values and knowledge haven't changed significantly, you'll likely confabulate something very similar to the original reasoning; but as the distance increases between the points of decision and rationalization, the accuracy is likely to drop.

Comment author: wallowinmaya 05 June 2011 05:52:14PM *  12 points [-]

Eliezer Yudkowsky currently has 2486 karma.

Ah, the good old days!

Comment author: MBlume 17 April 2009 02:46:41AM 20 points [-]

Of course, the battle has already been half-lost once you have a category "drugs".

Especially since the category itself is determined by governmental fiat. I once saw an ad for employment at Philip Morris with a footnote to the effect that Philip Morris is a "drug-free workplace". I'm sure they've plenty of nicotine and caffeine there, they're simply using 'drugs' to mean "things to which the federal government has already said 'boo'"

Comment author: [deleted] 12 January 2014 09:35:47AM 4 points [-]

The overwhelmingly interesting thing I noticed here was that everyone seemed to accept - not explicitly, but implicitly very much - that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash.

It would be Bayesian evidence of the right sign. But its magnitude would be vanishingly tiny.
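
(A back-of-the-envelope sketch of "right sign, vanishingly tiny magnitude"; all probabilities below are invented for the sake of the arithmetic.)

    # If one supporter's bad behaviour is only marginally more likely given
    # "the candidate is bad" than given "the candidate is fine", the
    # likelihood ratio is barely above 1 and the posterior barely moves.
    import math

    p_incident_given_bad  = 0.00101  # invented
    p_incident_given_fine = 0.00100  # invented

    lr = p_incident_given_bad / p_incident_given_fine
    print(f"likelihood ratio: {lr:.3f}")             # 1.010
    print(f"evidence in bits: {math.log2(lr):.4f}")  # ~0.014 bits

    prior = 0.5
    posterior = prior * lr / (prior * lr + (1 - prior))
    print(f"posterior: {prior:.3f} -> {posterior:.4f}")  # 0.500 -> 0.5025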

Comment author: AshwinV 13 January 2015 11:32:07AM 0 points [-]

Considering how many ways either outcome could have come about, I'm not really sure how P(supporter carves a B | Obama is evil) would actually measure out.

Comment author: John_Maxwell_IV 16 August 2010 12:09:43AM 4 points [-]

"Considered harmful" considered harmful.

Comment author: John_Maxwell_IV 16 August 2010 12:12:22AM 3 points [-]

"If we ask utilitarianism "Are drugs good or bad?" it returns: CATEGORY ERROR. Good for it."

It wouldn't be contradictory for someone to assign high utility to the presence of drugs and low utility to their absence. What you really mean is that, upon reflection, most people would not do this.

Comment author: Alicorn 17 April 2009 02:41:04AM *  10 points [-]

I'm taking an entire course called "Weird Forms of Consequentialism", so please clarify - when you say "utilitarianism", do you speak here of direct, actual-consequence, evaluative, hedonic, maximizing, aggregative, total, universal, equal, agent-neutral consequentialism?

Comment author: Yvain 17 April 2009 03:34:12AM *  7 points [-]

Uh.....er....maybe!

I'm familiar with Bentham, Mill, Singer, Eliezer, and random snippets of utilitarian theory I picked up here and there. I'm not confident enough with my taxonomy to use quite so many adjectives with confidence. I will add that article to the list of things to read.

I agree that your course sounds awesome. If you hear anything particularly enlightening, please turn it into an LW post.

Comment author: Alicorn 17 April 2009 03:51:48AM *  5 points [-]

I may well do that! Thank you for asking.

Edit: I just made a post linking something called two-tier consequentialism to a post of Eliezer's.

Comment author: Eliezer_Yudkowsky 17 April 2009 03:15:15AM 3 points [-]

This sounds like an awesome course.

Comment author: Alicorn 17 April 2009 03:53:34AM 10 points [-]

It is one. I am taking it for its awesomeness in spite of the professor being a mean person who considered it appropriate to schedule the class from seven to nine-thirty in the evening. (His scheduling decision and his meanness are separate qualities.)

Comment author: [deleted] 19 January 2012 08:11:34PM 2 points [-]

How good is Eliezer at philosophy? Apparently somewhere around the level it would take to get 2486 karma.

Clearly this means I'm better at philosophy than Eliezer (2009). But to be serious, this reminds me how I need to value the karma scores of articles differently according to when they were made. The effects are pretty big. Completely irrelevant discussion threads now routinely get over 20 votes, while some insightful old writing hovers around 10 or 15.

Comment author: SirBacon 17 April 2009 05:03:33AM 2 points [-]

I would venture that emotivism can be a way of setting up short-run incentives for the achievement of sub-goals. If we think "Bayesian insights are good," we can derive some psychological satisfaction from things which, in themselves, do not have direct personal consequences.

By attaching "goodness" to things too far outside our feedback loops, like "ending hunger," we get things like counterproductive aid spending. By attaching "goodness" too strongly to subgoals close to individual feedback loops, like "publishing papers," we get a flood of inconsequential academic articles at the expense of general knowledge.

Comment author: SoullessAutomaton 17 April 2009 11:45:37AM 2 points [-]

I would venture that emotivism can be a way of setting up short-run incentives for the achievement of sub-goals. If we think "Bayesian insights are good," we can derive some psychological satisfaction from things which, in themselves, do not have direct personal consequences.

This seems related to the tendency to gradually reify instrumental values as terminal values. e.g., "reading posts on Less Wrong helps me find better ways to accomplish my goals therefore is good" becomes "reading posts on Less Wrong is good, therefore it is a valid end goal in itself". Is that what you're getting at?

Comment author: Kaj_Sotala 17 April 2009 10:50:10AM 5 points [-]

Another problem: our interpretation of whether to upvote or downvote something depends on how many upvotes or downvotes it already has. Here on Less Wrong we call this an information cascade. In the mind, we call it an Affective Death Spiral.

Upvoted because of this bit. It's obvious in retrospect, but I hadn't made the connection between the two concepts previously.

Comment author: PhilGoetz 17 April 2009 05:40:22PM 8 points [-]

We sometimes do it the opposite way on LW: We'll upvote something that we wouldn't normally if it has < 0 points, because we're seeking its appropriate level rather than just voting.

I don't know that anyone downvotes something because they think it's too popular. I have refrained from voting for things that I thought had enough upvotes.

Comment author: randallsquared 17 April 2009 09:17:17PM 1 point [-]

I don't think I have on LW, but on reddit I have downvoted things that seemed too popular, though I technically agreed with them, so it does happen.

Comment author: thomblake 17 April 2009 02:14:55AM *  3 points [-]

Emotivism has its problems. Notably, you can't use 'yay' and 'boo' exclamations in arguments, and they can't be reasons.

"Should I eat this apple?" Becomes simply "how do I feel about eating this apple" (or otherwise it's simply meaningless). But really there are considerations that go into the answer other than mere feelings (for example, is the apple poisonous?).

Because utilitarianism has a theory of right action and a theory of value, I don't think it's compatible with emotivism. But I haven't read much in the literature detailing this particular question, as I don't read much currently about utilitarianism.

Comment author: Yvain 17 April 2009 02:25:08AM *  9 points [-]

Well, what's interesting about that comment is that our beliefs about our own justifications and actions are usually educated guesses and not privileged knowledge. Or consider Eliezer's post about the guy who said he didn't respect Eliezer's ideas because Eliezer didn't have a Ph.D, and then when Eliezer found a Ph.D who agreed with him, the guy didn't believe him either.

My guess would be that we see the apple is poisonous and "downvote" it heavily. Then someone asks what we think of the apple, we note the downvotes, and we say it's bad. Then the person asks why we think it's bad, and our unconscious supplies whatever rationale it thinks is most plausible and feeds it to us. Which is probably that it's poisonous.

See also: footnote 1

Comment author: pjeby 17 April 2009 02:41:09AM 16 points [-]

Then the person asks why we think it's bad, and our unconscious supplies whatever rationale it thinks is most plausible and feeds it to us.

Don't blame the unconscious. It only makes up explanations when you ask for them.

My first lesson in this was when I was 17 years old, at my first programming job in the USA. I hadn't been working there very long, maybe only a week or two, and I said something or other that I hadn't thought through -- essentially making up an explanation.

The boss reprimanded me, and told me of something he called "Counter man syndrome", wherein a person behind a counter comes to believe that they know things they don't know, because, after all, they're the person behind the counter. So they can't just answer a question with "I don't know"... and thus they make something up, without really paying attention to the fact that they're making it up. Pretty soon, they don't know the difference between the facts and their own bullshit.

From then on, I never believed my own made-up explanations... at least not in the field of computers. Instead, I considered them as hypotheses.

So, it's not only a learnable skill, it can be learned quickly, at least by a 17-year-old. ;-)

Comment author: vizikahn 17 April 2009 10:00:59AM 3 points [-]

When I had a job behind a counter, one of the rules was: "We don't sell 'I don't know'". We were encouraged to look things up as hard as possible, but it's easy to see how this turns into making things up. I'm going to use the term Counter man syndrome from now on.

Comment author: Yvain 17 April 2009 03:31:28AM 0 points [-]

I think we're talking about subtly different things here. You're talking about explanations of external events, I'm talking about explanations for your own mind states, ie why am I sad right now.

I don't like blaming the "unconscious" or even using the word - it sounds too Freudian - but there aren't any other good terms that mean the same thing.

Comment author: pjeby 17 April 2009 06:16:37AM 3 points [-]

I think we're talking about subtly different things here. You're talking about explanations of external events, I'm talking about explanations for your own mind states, ie why am I sad right now.

I'm pointing out that there is actually no difference between the two. Your "explainer" (I call it the Speculator, myself), just makes stuff up with no concern for the truth. All it cares about are plausibility and good self-image reflection.

I don't see the Speculator as entirely unconscious, though. In fact, most of us tend to identify with the Speculator, and view its thoughts as our own. Or I suppose, you might say that the Speculator is a tool that we can choose to think with... and we tend to reach for it by default.

I don't like blaming the "unconscious" or even using the word - it sounds too Freudian - but there aren't any other good terms that mean the same thing.

Sometimes I refer to the other-than-conscious, or to non-conscious processes. But finer distinctions are useful at times, so I also refer to the Savant (non-verbal, sensory-oriented, single-stepping, abstraction/negation-free) and the Speculator (verbal, projecting, abstracting, etc.)

I suppose it's open to question whether the Speculator is really "other-than-conscious", in that it sounds like a conscious entity, and we consciously tend to identify with it, in the absence of e.g. meditative or contemplative training.

Comment author: SoullessAutomaton 17 April 2009 11:51:09AM 0 points [-]

I think we're talking about subtly different things here. You're talking about explanations of external events, I'm talking about explanations for your own mind states, ie why am I sad right now.

What makes you think the mental systems to construct either explanation would be different? Especially given the research showing that we have dedicated mental systems devoted to rationalizing observed events.

Comment author: orthonormal 17 April 2009 04:45:03PM 3 points [-]

Emotivism has its problems. Notably, you can't use 'yay' and 'boo' exclamations in arguments, and they can't be reasons.

Right. I think that most people hold the belief that their system of valuations is internally consistent (i.e. that you can't have two different descriptions of the same thing that are both complete, accurate, and assign different valences), which requires them (in theory) to confront moral arguments.

I think of basic moral valuations as being one other facet of human perception: the complicated process by which we interpret sensory data to get a mental representation of objects, persons, actions, etc. It seems that one of the things our mental representation often includes is a little XML tag indicating moral valuation.

The general problem is that these don't generally form a coherent system, which is why intelligent people throughout the ages have been trying to convince themselves to bite certain bullets. Your conscious idea of what consistent moral landscape lies behind these bits of 'data' inevitably conflicts with your immediate reactions at some point.

Comment author: conchis 17 April 2009 05:13:43PM *  1 point [-]

I may be misinterpreting, but I wonder whether Yvain's use of the word "emotivism" here is leading people astray. He doesn't seem to be committing himself to emotivism as a metaethical theory of what it means to say something is good, as much as an empirical claim about most people's moral psychology (that is, what's going on in their brains when they say things like "X is good"). The empirical claim and the normative commitment to utilitarianism don't seem incompatible. (And the empirical claim is one that seems to be backed up by recent work in moral psychology.)

Comment author: PhilGoetz 08 November 2010 10:08:55PM 3 points [-]

I saw a presentation where someone took thousands of English words, placed them in a high-dimensional space based on, I think, what other words they co-occurred with, ran PCA on this space, and analyzed the top 4 resulting dimensions. The top dimension was "good/bad".

Comment author: jmmcd 09 November 2010 07:25:36AM 5 points [-]

You already said so in this very thread :)

Comment author: Unnamed 08 November 2010 10:31:49PM 1 point [-]

This sounds like semantic differential research. The standard finding is three dimensions: good-bad (evaluation), strong-weak (potency), and active-passive (activity).

Comment author: AdeleneDawner 08 November 2010 10:09:45PM 1 point [-]

Do you remember the other three?

Comment author: MaxNanasy 10 December 2016 05:43:11AM 0 points [-]

See here

Comment author: pjeby 17 April 2009 02:46:28AM 2 points [-]

To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma.

It's not outrageous at all, actually. Affective asynchrony shows that we have independent ratings of goodness and badness, just like LW votes... but on the outside of the system, all that shows is the result of combining them.

That is, we can readily see how someone "votes" on different things in their environment, but not what inputs are being summed. And when we look at ourselves, we expect to find a single "score" on a thing.

The main difference is that in humans, upvotes and downvotes don't count the same, and a sufficiently high imbalance between the two can squelch the losing direction. On the other hand, a close match between the two results in "mixed feelings" and a low probability of acting, even if the idea really is a good one.

And good decision-making in humans requires (at minimum) examining the rationale behind any downvotes, and throwing out the irrational ones.

Comment author: Annoyance 17 April 2009 04:59:26PM 2 points [-]

"To a Bayesian, this would be balderdash."

Um, not the 'Bayesians' here. There is a distinct failure to acknowledge that not everything is evidence regarding everything else.

If the people here wished to include the behavior of a political candidate's supporter in their evaluation of the candidate, they'd make excuses for doing so. If they wished to exclude it, they would likely pass over it in silence - or, if it were brought up, actively denigrate the idea.

Judging what is and is not evidence is an important task that has been completely ignored here.

Comment author: SoullessAutomaton 17 April 2009 05:19:02PM 7 points [-]

Judging what is and is not evidence is an important task that has been completely ignored here.

In the most literal, unbounded application of Bayesian induction, anything within the past light cone of what is being considered counts as "evidence". Clearly, an immense majority of it is all but completely independent of most propositions, but it is still evidence, however slight.

Having cleared up that everything is evidence, determining the weight to give any particular piece of evidence is left as an exercise for the reader.

Comment author: CronoDAS 17 April 2009 07:30:45AM 2 points [-]

Ask an anarchist. Taxation of X% means you're forced to work for X% of the year without getting paid. Therefore, since slavery is "being forced to work without pay" taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.4

Nitpick: Under most current systems of taxation, you choose how much to work, and then lose a certain percentage of your income to taxes. A slave does not have the power to choose how much (or whether) to work. This is generally considered a relevant difference between taxation and slavery.

Comment author: Annoyance 17 April 2009 05:04:04PM 3 points [-]

See the draft. See also the varied attempts to mandate 'community service' or 'national service' for high school students.

One who is not a slave is not necessarily a free man.

Comment author: CronoDAS 17 April 2009 07:49:36PM 7 points [-]

In general, "because that person is a minor" is one of the few remaining justifications for denying someone civil rights that people still consider valid. Try comparing the status of a 15-year-old in the United States today with that of a black man or white woman in the in the United States of 1790 and see if you come up with any interesting similarities.

Comment author: conchis 17 April 2009 05:16:15PM *  12 points [-]

One who is not a slave is not necessarily a free man.

Indeed. One may be a woman. Or a turtle.

Comment author: Peter_Twieg 17 April 2009 02:43:18PM 0 points [-]

So if the slave were allowed to choose his own level of effort, he would no longer be a slave?

I think you have a point with what you're saying (and I'm predisposed against believing that the taxation/slavery analogy has meaning), but I don't think being a slave is incompatible with some autonomy.

Comment author: CronoDAS 17 April 2009 07:42:21PM *  4 points [-]

I think we'd better kill this discussion before it turns into an "is it a blegg or rube" debate - the original anarchist's argument falls into at least one of the fallacies on that page, and I suspect my nitpick might do so as well.

Comment author: komponisto 17 April 2009 06:05:42AM 1 point [-]

Excellent post. Upvoted! (Literally.)

Comment author: evtujo 19 April 2009 02:55:29AM 4 points [-]

Can we rename the vote up and vote down buttons as "yay" and "boo"? Perhaps that can be a profile option... :)

Comment author: Document 11 April 2010 09:41:31AM -2 points [-]

Are you generally not literal when you say "upvoted"?

Comment author: komponisto 11 April 2010 06:05:37PM 2 points [-]

Um, did you miss the following paragraph (emphasis added)?:

To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote "cats" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote "Palestinians" a few points. Richard Dawkins just said something especially witty, so you up-vote "atheism". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.

And...the rest of the post? Upvoting/karma as a metaphor was the whole point! In such a context, it was perfectly sensible (and even, I daresay, slightly witty) of me to append "literally" to the above comment.

(Honestly, did I really need to explain this?)

Comment author: Document 11 April 2010 07:16:01PM *  0 points [-]

No and yes, respectively. In my defense, your comment is 64th in New order, so it's not like it was closely juxtaposed with that paragraph.

Comment author: komponisto 11 April 2010 08:44:41PM -2 points [-]

That wasn't just some random paragraph; it was the whole freaking point of the post! It introduced a conceit that was continued throughout the whole rest of the article!

Before accusing me of hindsight bias (or the illusion of transparency, which is what I think you really meant), you might have noticed this reply, which should have put its parent into context immediately, or so I would have thought.

Comment author: [deleted] 12 January 2014 09:32:05AM 1 point [-]

Emotivism also does a remarkably good job capturing the common meanings of the words "good" and "bad". An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad", "the book was good, but the movie was bad", "atheism is good, theism is bad", "evolution is good, creationism is bad", and "dogs are good, but cats are bad". Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left".

They do feel like different meanings to me.

Granted, I'm not a native English speaker, but my native language also uses the same word for many (though not all) of those. (For books and movies we'd use the words for ‘beautiful’ and ‘ugly’ instead. And for weather we use ‘beautiful’ and ‘bad’.)

Comment author: blacktrance 13 January 2014 08:23:17PM 0 points [-]

Seconded, and I'm not a native English speaker either, although in my case I think they feel different because of how much I talk about ethics.

Comment author: Relsqui 04 October 2010 03:20:19AM 1 point [-]

I remember coming to the sudden realization that I don't have to sort people into the "good" box or the "bad" box--people are going to have a set of traits and actions which I would divide into both, and therefore the whole person won't fit into either. I don't remember what triggered the epiphany, but I remember that it felt very liberating. I no longer had to be confused or frustrated when someone who usually annoyed me did something I approved of, or someone I liked made a choice I disagreed with.

So, you see, this idea already has a high karma score for me, so I'm upvoting it. ;)

Comment author: cousin_it 17 April 2009 07:57:26AM 1 point [-]

...that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash.

Uh, to a Bayesian this would be relevant evidence. Or do I misunderstand something?

Comment author: mattnewport 17 April 2009 08:04:51AM 3 points [-]

To a Bayesian, all evidence is relevant. These two pieces of evidence would seem to have very low weights though. Do you think the weights would be significant?

Comment author: cousin_it 17 April 2009 10:03:26AM 0 points [-]

If I were a McCain supporter, the rumor's turning out to be false would've carried significant weight for me. You?

Comment author: SoullessAutomaton 17 April 2009 12:02:07PM 5 points [-]

Assigning significant weight to this event (on either side) is likely a combination of sensationalist national mass media and availability heuristic bias.

Uncoordinated behavior of individual followers reflects very weakly on organizations or their leaders. Without any indication of wider trends or mass behavior, the evidence would be weighted so little as to demand disregarding by a bounded rationalist.

Comment author: cousin_it 17 April 2009 04:10:03PM *  13 points [-]

Yes, I was being dumb. Sorry.

Edit: stop with the upvotes already!

Comment author: SoullessAutomaton 17 April 2009 05:52:04PM 10 points [-]

Yes, I was being dumb. Sorry.

I see you've been upvoted anyways so I'm likely not the only one, but I want to personally thank you for this. People being more willing to admit that they made a mistake and carry on is an excellent feature of Less Wrong and extremely rare in most online communities.

Comment author: John_Maxwell_IV 20 April 2009 05:13:20PM 2 points [-]

I disagree that it is extremely rare. I've seen a good number of apologies reading reddit, and I think it might be bad to upvote them because it could lead to the motives of any apologizer becoming suspect.

Comment author: Eliezer_Yudkowsky 18 April 2009 05:04:26AM 1 point [-]

Voted up because it asked not to be upvoted.

Comment author: rabidchicken 17 August 2010 06:49:08AM 0 points [-]

Hey, that's my line.

Comment author: loqi 18 April 2009 08:40:24PM 0 points [-]

Voted randomly because it references a vote cast on the basis of vote-reference.

Comment author: PhilGoetz 17 April 2009 05:41:46PM *  3 points [-]

A Hitler supporter acting violently is evidence against Hitler. But it takes a lot of them to reach significance.

Comment author: MBlume 17 April 2009 05:44:21PM 1 point [-]

A single Hitler supporter acting violently isn't much evidence against Hitler. Thousands of apparently sane individuals committing horrors is pretty damning though.

Comment author: ciphergoth 17 April 2009 06:01:50PM 2 points [-]

I haven't done the math, but I would have thought that a hundred incidents would be more than a hundred times as much evidence as one, because it says that it's not just the unsurprising lunatic fringe of your supporters who are up for violence.

Comment author: Eliezer_Yudkowsky 18 April 2009 05:06:47AM 2 points [-]

I don't think that's possible, unless the first incident makes it conditionally less likely that the second incident will occur unless Hitler is ungood.

Unless you mean, "the total information that a sum of one incident has occurred is less than a hundredth the evidence than the total information that a sum of a hundred incidents have occurred", in which case I agree, because in the former case you're also getting the information on all the people who didn't commit violent acts.

Comment author: ciphergoth 18 April 2009 11:04:58AM 1 point [-]

unless the first incident makes it conditionally less likely that the second incident will occur unless Hitler is ungood.

That wasn't what I had in mind (and what I did have in mind is pretty straightforward to express and test mathematically, so I'll do that later today) but it's a possibility worth taking seriously: are you the sort of organisation that responds to reports of violence with a memo saying "don't go carving a backwards B on people"?

Comment author: SoullessAutomaton 18 April 2009 10:01:07AM 1 point [-]

Assuming the prior probability of politically-motivated violent incidents to be greater than zero, X incidents where X/(number of supporters) is roughly equal to the incidence for the entire population offers very little evidence of anything, so X*100 is trivially more than a hundred times the evidence.

Comment author: FlakAttack 19 April 2009 06:32:59AM 0 points [-]

I guess the question being asked here is whether those Hitler supporters acting so violently should affect your decision on whether to support Hitler or not. Rationally speaking, it should not, because his supporters and the man himself are two separate things, but the initial response will likely be to assign both things to the same category and have both be affected by the negative perception of the supporters.

I think if you use examples that are less confrontational or biased you can get the message across better. Hitler is usually not a useful subject for examples or comparisons.

Comment author: mwengler 12 March 2013 03:15:01PM -1 points [-]

It seems to me that ANY moral theory is, at its root, emotive. A utilitarian in the form of "do utile things!" decides that maximizing utility feels good, and so is moral. In other words, the argument for the basic axiom of utilitarianism is "Yay utility!"

A non-emotive utilitarianism, or any consequentialist theory, could never go beyond "A implies B." That is, if people do A, the result they will get is B. Without "Yay B!" this is not an argument for doing A.

Am I missing something?

Comment author: Leonhart 12 March 2013 04:46:12PM 3 points [-]

If I am moved by a should-argument to an x-ism, then "Yay x-ism!" is what being moved by that argument feels like, not an additional part of the argument.

Otherwise, aren't you the tortoise demanding "Yay (Yay X!)!", "Yay (Yay (Yay X!)!)!" and so on?

Comment author: whowhowho 12 March 2013 03:31:16PM 0 points [-]

You seem to be assuming, without argument, that emotion is the only motivation for doing anything.

Comment author: incogn 12 March 2013 04:11:15PM 0 points [-]

I tend to agree with mwengler - value is not a property of physical objects or world states, but a property of an observer having unequal preferences for different possible futures.

There is a risk we might be disagreeing because we are working with different interpretations of emotion.

Imagine a work of fiction involving no sentient beings, not even metaphorically - can you possibly write a happy or tragic ending? Is it not first when you introduce some form of intelligence with preferences that destruction becomes bad and serenity good? And are not preferences for this over that the same as emotion?

Comment author: mwengler 12 March 2013 03:44:35PM -2 points [-]

You are right, the only reason I can think of for doing anything is because I feel like it, because I want to, which is emotional. In some more detail, I think this includes doing things to avoid things I am afraid of or that I find painful, also emotional. Certainly pleasure seeking is emotional. I attribute playing sudoku to my feeling of pleasure at having my mind occupied.

If you come up with something like a Kantian categorical imperative, I will tell you I don't follow categorical imperatives because I don't feel like it, and nothing in the real world of "is" seems to break when I act that way. And it does suggest to me that those who do follow a categorical imperative do it because they feel like it, the feeling of logical consistency or superiority appeals to them.

Please let me know what OTHER reasons, non-emotional reasons, there are to do something.

Comment author: whowhowho 12 March 2013 04:50:50PM -1 points [-]

There's no logical reason why any given entity, human or otherwise, would have to be motivated by emotion. You may be over generalising from the single example of yourself. Also, you would have to believe that highly logical, vulcan-like people are motivated by some emotion they don't show.

Comment author: Leonhart 12 March 2013 05:01:13PM 2 points [-]

There's no logical reason why any given entity, human or otherwise, would have to be motivated by emotion.

There's a trivial "logical" reason why this could be the case - tautology - if the person you are talking to defines "emotion" as "those mental states which directly motivate behaviour". Which seems like a perfectly good starting place to me.

In other words, this conversation will likely go nowhere until you taboo "emotion" so we can know what work that word does for you.

Comment author: whowhowho 12 March 2013 05:04:45PM -1 points [-]

It wasn't my initial claim, and I have already pointed out that seemingly unemotional people motivate themselves somehow.

Comment author: MrHen 17 April 2009 07:15:33PM *  0 points [-]

An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad" [...] Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left".

To be annoying, "good" does have different uses. The opposite of moral "good" is "evil" the opposite of quality "good" is "poor" and the opposite of correctness "good" is "incorrect". These opposites can all use the word "bad" but they mean completely different things.

If I say murder is bad I mean murder is evil.
If I say that pizza is bad I mean that pizza is of poor quality.
If I say a result was bad I mean that the result was incorrect.

I can not remember if there is a word that splits the moral "good" and quality "good" apart.

This has nothing to do with the majority of your post or the points made, other than to say that "Boo, murder!" means something different than "Boo, that pizza!" Trying to lump them all together is certainly plausible, but I think the distinctions are useful. If they only happen to be useful in a framework built on emotivism, fair enough.

Comment author: PhilGoetz 17 April 2009 07:31:19PM 1 point [-]

To be annoying, but "good" does have different uses. The opposite of moral "good" is "evil" the opposite of quality "good" is "poor" and the opposite of correctness "good" is "incorrect". These opposites can all use the word "bad" but they mean completely different things.

He knows that. He's pointing out the flaws with that model.

Comment author: MrHen 17 April 2009 09:32:47PM 1 point [-]

But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left". It feels like they're all just the same thing.

This is from his article. Speaking for myself, when I use the word "good" I use it in several different ways in much the same way I do when I use the word "right".

Comment author: Relsqui 04 October 2010 03:14:35AM 0 points [-]

I think the point was that we do use the word in multiple ways, but those ways don't feel as different as the separate meanings of "right." The concepts are similar enough that people conflate them. If you never do this, that's awesome, but the post posits that many people do, and I agree with it.

Comment author: thomblake 18 April 2009 07:29:47PM 0 points [-]

I can not remember if there is a word that splits the moral "good" and quality "good" apart.

No, there isn't. Depending on context, you can use 'righteous' but it doesn't quite mean the same thing.

For what it's worth, some ethicists such as myself make no distinction between 'moral' good and 'quality' good - utilitarians (especially economists) basically don't either, most of the time. Sidgwick defines ethics as "the study of what one has most reason to do or want", and that can apply equally well to 'buying good vs. bad chairs' and 'making good vs. bad decisions'.

Comment author: pangloss 21 April 2009 05:55:50AM 4 points [-]

This reminds me of a Peter Geach quote: "The moral philosophers known as Objectivists would admit all that I have said as regards the ordinary uses of the terms good and bad; but they allege that there is an essentially different, predicative use of the terms in such utterances as pleasure is good and preferring inclination to duty is bad, and that this use alone is of philosophical importance. The ordinary uses of good and bad are for Objectivists just a complex tangle of ambiguities. I read an article once by an Objectivist exposing these ambiguities and the baneful effects they have on philosophers not forewarned of them. One philosopher who was so misled was Aristotle; Aristotle, indeed, did not talk English, but by a remarkable coincidence ἀγαθός had ambiguities quite parallel to those of good. Such coincidences are, of course, possible; puns are sometimes translatable. But it is also possible that the uses of ἀγαθός and good run parallel because they express one and the same concept; that this is a philosophically important concept, in which Aristotle did well to be interested; and that the apparent dissolution of this concept into a mass of ambiguities results from trying to assimilate it to the concepts expressed by ordinary predicative adjectives."

Comment author: mattnewport 17 April 2009 07:54:36AM 0 points [-]

Is the usual definition of utilitarianism taken to weight the outcomes for all people equally? While utilitarian arguments often lead to conclusions I agree with, I can't endorse a moral system that seems to say I should be indifferent to a choice between my sister being shot and a serial killer being shot. Is there a standard utilitarian position on such dilemmas?

Comment author: gjm 17 April 2009 09:04:29AM 4 points [-]

I fear you may be thinking "serial killer: karma -937; my sister: karma +2764".

A utilitarian would say: consider what that person is likely to do in the future. The serial killer might murder dozens more people, or might get caught and rot in jail. Your sister will most likely do neither. And consider how other people will feel about the deaths. The serial killer is likely to have more enemies, fewer friends, fewer close friends. So the net utility change from shooting the serial killer is much less negative (or even more positive) than from shooting your sister, and you need not (should not) be indifferent between those.

In general, utilitarianism gets results that resemble those of intuitive morality, but it tends to get them indirectly. Or perhaps it would be better to say: Intuitive morality gets results that resemble those of utilitarianism, but it gets them via short-cuts and heuristics, so that things that tend to do badly in utilitarian terms feel like they're labelled "bad".

Comment author: mattnewport 17 April 2009 05:46:12PM 5 points [-]

In a least convenient possible world, where the serial killer really enjoys killing people and only kills people who have no friends and family and won't be missed and are quite depressed, would it ever be conceivable that utilitarianism would imply indifference to the choice?

Comment author: gjm 17 April 2009 06:51:13PM 2 points [-]

It's certainly possible in principle that it might end up that way. A utilitarian would say: Our moral intuitions are formed by our experience of "normal" situations; in situations as weirdly abnormal as you'd need to make utilitarianism favour saving the serial killer at the expense of an ordinary upright citizen, or to make slavery a good thing overall, or whatever, we shouldn't trust our intuition.

Comment author: mattnewport 17 April 2009 07:49:35PM 0 points [-]

And this is the crux of my problem with utilitarianism I guess. I just don't see any good reason to prefer it over my intuition when the two are in conflict.

Comment author: randallsquared 17 April 2009 09:27:19PM 1 point [-]

Even though your intuition might be wrong in outlying cases, it's still a better use of your resources not to think through every case, so I'd agree that using your intuition is better than using reasoned utilitarianism for most decisions for most people.

It's better to strictly adhere to an almost-right moral system than to spend significant resources on working out arbitrarily-close-to-right moral solutions, for sufficiently high values of "almost-right", in other words. In addition to the inherent efficiency benefit, this will make you more predictable to others, lowering your transaction costs in interactions with them.

Comment author: mattnewport 17 April 2009 09:35:53PM 0 points [-]

My problem is a bit more fundamental than that. If the premise of utilitarianism is that it is morally/ethically right for me to provide equal weighting to all people's utility in my own utility function then I dispute the premise, not the procedure for working out the correct thing to do given the premise. The fact that utilitarianism can lead to moral/ethical decisions that conflict with my intuitions seems to me a reason to question the premises of utilitarianism rather than to question my intuitions.

Comment author: Virge 18 April 2009 04:30:05AM 3 points [-]

Your intuitions will be biased toward favoring a sibling over a stranger. Evolution has seen to that, via kin selection.

Utilitarianism tries to maximize utility for all, regardless of relatedness. Even if you adjust the weightings for individuals based on likelihood of particular individuals having a greater impact on overall utility, you don't (in general) get weightings that will match your intuitions.

I think it is unreasonable to expect your moral intuitions to ever approximate utilitarianism (or vice versa) unless you are making moral decisions about people you don't know at all.

In reality, the money I spend on my two cats could be spent improving the happiness of many humans - humans that I don't know at all who are living a long way away from me. Clearly I don't apply utilitarianism to my moral decision to keep pets. I am still confused about how much I should let utilitarianism shift my emotionally-based lifestyle decisions.

Comment author: Matt_Simpson 18 April 2009 04:43:14AM 0 points [-]

I think you are construing the term "utilitarianism" too narrowly. The only reason you should be a utilitarian is if you intrinsically value the utility functions of other people. However, you don't have to value the entire thing for the label to be appropriate. You still care about a large part of that murderer's utility function, I assume, as well as that of non-murderers. Not classical utilitarianism, but the term still seems appropriate.

Comment author: mattnewport 18 April 2009 05:26:07AM *  0 points [-]

Utilitarianism seems a fairly unuseful ethical system if the utility function is subjective, either because individuals get to pick and choose which parts of others' utility functions to respect or because individuals are allowed to choose subjective weights for others' utilities. It would seem to degenerate into an impractical-to-implement system for everybody just justifying what they feel like doing anyway.

Comment author: Matt_Simpson 18 April 2009 05:43:55AM 0 points [-]

Well, assuming you get to make up your own utility function, yes. However, I don't think this is the case. It seems more likely that we are born with utility functions, or rather, with something we can construct a coherent utility function out of. Given the psychological unity of mankind, there are likely to be a lot of similarities in these utility functions across the species.

Comment author: Kingreaper 26 November 2010 04:25:04PM 1 point [-]

Yes. But if the "serial killer" is actually someone who enjoys helping others who want to commit suicide (and won't harm anyone else when they do), are they really a bad person at all?

Is shooting them really better than shooting a random person?

Comment author: SoullessAutomaton 17 April 2009 05:49:17PM 1 point [-]

In a least convenient possible world, where the serial killer really enjoys killing people and only kills people who have no friends and family and won't be missed and are quite depressed, would it ever be conceivable that utilitarianism would imply indifference to the choice?

Also, would the verdict on this question change if the people he killed had attempted but failed at suicide, or wanted to suicide but lacked the willpower to?

Comment author: Kaj_Sotala 17 April 2009 10:46:37AM 0 points [-]

There isn't a standard utilitarian position on such dilemmas, because there is no such thing as standard utilitarianism. Utilitarianism is a meta-ethical system, not an ethical system. It specifies the general framework by which you think about morality, but not the details.

There are plenty of variations of utilitarianism - negative or positive utilitarianism, average or total utilitarianism, and so on. And there is nothing to prevent you from specifying that, in your utility function, your family members are treated preferentially to everybody else.

Comment author: steven0461 17 April 2009 03:32:51PM *  1 point [-]

Utilitarianism is an incompletely specified ethical (not meta-ethical) system, but part of what it does specify is that everyone gets equal weight. If you're treating your family members preferentially, you may be maximizing your utility, but you're not following "utilitarianism" in that word's standard meaning.

Comment author: ciphergoth 17 April 2009 03:48:09PM 2 points [-]

The SEP agrees with you:

[...] classic utilitarianism is actually a complex combination of many distinct claims, including the following claims about the moral rightness of acts:

[...] Equal Consideration = in determining moral rightness, benefits to one person matter just as much as similar benefits to any other person (= all who count count equally).

Comment author: MBlume 17 April 2009 04:01:49PM *  2 points [-]

The SEP

For just a moment I was thinking "How is the Somebody Else's Problem field involved?"

Comment author: conchis 17 April 2009 04:16:16PM *  2 points [-]

I'd put a slight gloss on this.

The problem is that "utilitarianism", as used in much of the literature, does seem to have more than one standard meaning. In the narrow (classical) utilitarian sense, steven0461 and the SEP are absolutely right to insist that it imposes equal weights. However, there's definitely a literature that uses the term in a more general sense, which includes weighted utilitarianism as a possibility. Contra Kaj, however, even this sense does seem to exclude agent-relative weights.

As much of this literature is in economics, perhaps it's non-standard in philosophy. It does, however, have a fairly long pedigree.

Comment author: Kaj_Sotala 17 April 2009 08:36:29PM 0 points [-]

Contra Kaj, however, even this sense does seem to exclude agent-relative weights.

Utilitarianism that includes animals vs. utilitarianism that doesn't include animals. If some people can give more / less weight to a somewhat arbitrarily defined group of subjects (animals), it doesn't seem much of a stretch to also allow some people to weight another arbitrarily chosen group (family members) more (or less).

Classical utilitarianism is more strictly defined, but as you point out, we're not talking about just classical utilitarianism here.

Comment author: conchis 17 April 2009 09:09:32PM *  1 point [-]

I don't think that's a very good example of agent-relativity. Those who would argue that only humans matter seldom (if ever) do so on the basis of agent-relative concerns: it's not that I am supposed to have a special obligation to humans because I'm human; it's that only humans are supposed to matter at all.

In any event, the point wasn't that agent-relative weights don't make sense, it's that they're not part of a standard definition of utilitarianism, even in a broad sense. I still think that's an accurate characterization of professional usage, but if you have specific examples to the contrary, I'd be open to changing my mind.

Gratuitous nitpick: humans are animals too.

Comment author: Kaj_Sotala 18 April 2009 07:46:05AM *  1 point [-]

You may be right. But we're inching pretty close towards arguing by definition now. So to avoid that, let me rephrase my original response to mattnewport's question:

You're right, by most interpretations utilitarianism does weigh everybody equally. However, if that's the only thing in utilitarianism that you disagree with, and like the ethical system otherwise, then go ahead and adopt as your moral system a utilitarianism-derived one that differs from normal utilitarianism only in that you weight your family more than others. It may not be utilitarianism, but why should you care about what your moral system is called?
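For what it's worth, the distinction being discussed is easy to state in code. The following is only a sketch: the utilities and weights are invented, and the aggregation rule is the simplest possible one.

# Sketch (Python): equal-consideration aggregation vs. an agent-relative
# weighted variant. All utilities and weights are invented for illustration.
def aggregate(utilities, weights=None):
    if weights is None:
        # Classical equal consideration: everyone's weight is 1.
        weights = {person: 1.0 for person in utilities}
    return sum(weights[person] * u for person, u in utilities.items())

# Two possible outcomes, described by each person's utility in them.
outcome_a = {"sister": 10, "stranger": -5, "me": 2}
outcome_b = {"sister": -5, "stranger": 10, "me": 2}

# Equal weights: the outcomes tie, so a classical utilitarian is indifferent.
print(aggregate(outcome_a), aggregate(outcome_b))  # 7.0 7.0

# A utilitarianism-derived system that weights one's own family more highly:
family_weights = {"sister": 2.0, "stranger": 1.0, "me": 1.0}
print(aggregate(outcome_a, family_weights))  # 17.0
print(aggregate(outcome_b, family_weights))  # 2.0

Whether the weighted variant deserves the name "utilitarianism" is exactly the definitional question above; the structure of the calculation is the same either way.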

Comment author: conchis 18 April 2009 02:37:30PM *  1 point [-]

I completely agree with your reframing.

I (mistakenly) thought your original point was a definitional one, and that we had been discussing definitions the entire time. Apologies.

Comment author: Kaj_Sotala 19 April 2009 07:32:22PM 0 points [-]

No problem. It happens.

Comment author: AndySimpson 17 April 2009 03:48:25PM 0 points [-]

In utilitarianism, sometimes some animals can be more equal than others. It's just that their lives must be of greater utility for some reason. I think sentimental distinctions between people would be rejected by most utilitarians as a reason to consider them more important.

Comment author: Peter_Twieg 17 April 2009 02:40:47PM 0 points [-]

Utilitarianism doesn't describe how you should feel, it simply describes "the good". It's very possible that accepting utilitarianism's implications would be so abhorrent to you that the world would be a worse place because you did so (because you're unhappy, or because embracing utilitarianism might actually make you worse at promoting utility); if so, then by all means... don't endorse it, at least not at some given level you find repugnant. This is what Derek Parfit labels a "self-effacing" philosophy, I believe.

There are a variety of approaches to actually being a practicing utilitarian, however. Obviously we don't have the computational power required to properly deduce every future consequence of our actions, so at a practical level utilitarians will always support heuristics of some sort. One of these heuristics may dictate that you should always prefer serial killers to be shot over your sister for the kinds of reasons that gjm describes. This might not always lead to the right conclusion from a utilitarian perspective, but it probably wouldn't be a blameworthy one, as you did the best you could under incomplete information about the universe.

Comment author: Boyi 06 December 2011 02:18:24PM -2 points [-]

Hi, I really enjoyed your essay. I also enjoyed the first half of the comments. The question it brought me to was whether there is any higher utility than transformation. I was wondering if I could hear your opinion on this matter.

It seems to me that if transformation of external reality is the primary assessment of utility, then humans should rationally question their emotivism based on practical solutions. But what if the ability to transform external reality were not the primary assessment of utility? Recently I have been immersed in Confucian thinking, which places harmony at the pinnacle of importance. If you do not mind, I would like to share some thoughts from this perspective.

When faced with a problem, it seems that as humans our initial solution is to increase the complexity of our interaction with that aspect of the external world, expanding the scale, organization, and detail of our involvement with that portion of reality, in hopes of transforming it to our will. Is this logical? Yes, we have clearly demonstrated a potential to transform reality, but do any of our transformations justify the rationale that transformation will eventually lead to a utopian plateau? Or to put it another way, does the transformation of one good/bad scenario ever completely deplete the necessity for further transformation? If anything, it seems that our greatest achievements of transformation have only created an even more dire need for transformation. The creation of nuclear power/weapons was supposed to end war and provide universal energy; now we are faced with the threat of nuclear waste and global annihilation. Genetically engineering food was supposed to feed the world; in America we have created an obesity epidemic, and the modern agricultural practices of the world walk a fine line between explosive yield and ecological destruction.

I was somewhat hesitant to say it because of a perceived emotivism of this blog, but what I am questioning is the discourse of progress. Transformation is progress. You say:

"In general, any debate about whether something is "good" or "bad" is sketchy, and can be changed to a more useful form by converting the thing to an action and applying utilitarianism." But is that not soley based on a emotive value of progress?

From the harmonizing perspective, emotivism in itself contains utility, because it is in our common irrationality that humans can truly relate. If we did institutionally precede arbitrary value with a logic of transformational utility, would this not marginalize a huge portion of humanity that is not properly equipped to rationalize action in such a way? It legitimizes intellectual dominance. In my opinion this is no different than if we were to say that whoever wins an official arm wrestle or foot race has the correct values. That may seem completely absurd to you, but I would argue only because you are intellectually rather than physically dominant.

It should be noted that my argument is based on the premise that there are graduated levels of intelligence, and the level required to rationalize one potential transformation over another is sequestered from the lower tiers.

I also write under the assumption that the discourse of progress (I think I called it the utility of transformation?) is emotive, not rational in the sense that it is clearly the most effective cognitive paradigm for human evolution. Before my words come back to bite me, my concepts of "progress" and "evolution" are very different here. Progress is the power to transform external reality (niche construction); evolution is transformation of the human structure (I will not comment on whether such organic transformation is orthogenetic or not).