What is sometimes called "the 1000 Sadists problem" is a classic "problem" in utilitarianism; this post is another version of it.
Here's yet another version, which apparently comes from this guy's homework:
...Suppose that the International Society of Sadists is holding its convention in Philadelphia and in order to keep things from getting boring the entertainment committee is considering staging the event it knows would make the group the happiest, randomly selecting someone off the street and then torturing that person before the whole convention. One member of the group, however, is taking Phil. 203 this term and in order to make sure that such an act would be morally okay insists that the committee consult what a moral philosopher would say about it. In Smart's essay on utilitarianism they read that "the only reason for performing an action A rather than an alternative action B is that doing A will make mankind (or, perhaps, all sentient beings) happier than will doing B." (Smart, p. 30) This reassures them since they reason that the unhappiness which will be felt by the victim (and perhaps his or her friends and relatives) will be far outweighed by the...
(shrug) Sure, I'll bite this bullet.
Yes, if enough people are made to suffer sufficiently by virtue of my existence, and there's no way to alleviate that suffering other than my extermination, then I endorse my extermination.
To do otherwise would be unjustifiably selfish.
Which is not to say I would necessarily exterminate myself, even if I had sufficiently high confidence that this was the case... I don't always do what I endorse.
And if it's not me but some other individual or group X that has that property in that hypothetical scenario, I endorse X's extermination as well.
And, sure, if you label the group in an emotionally charged way (e.g., "Nazis exterminating Jews" as you do here), I'll feel a strong emotional aversion to that conclusion (as I do here).
It's not any of their business anyway.
If things that make me unhappy aren't my business, what is my business?
But whether your existence makes me unhappy or not, you are, of course, free not to care.
And even if you do care, you're not obligated to alleviate my unhappiness. You might care, and decide to make me more unhappy, for whatever reasons.
And, sure, we can try to kill each other as a consequence of all that.
It's not clear to me what ethical question this resolves, though.
Well, this can easily become a costly signalling issue, since the obvious comment (from the torture-over-specks supporter's perspective) would read "it is rational for the Nazis to exterminate the Jews". I would certainly not like to have to explain having written such a comment to most people. Claiming that torture is preferable to dust specks in some settings is comparatively harmless.
Given this, you probably shouldn't expect honest responses from a lot of commenters.
if you are a specker, you ought to decide that, barring changing the Nazis' terminal value of hating Jews, the rational behavior is to [harm Jews]
The use of "specker" to denote people who prefer torture to specks can be confusing.
But wouldn't that defeat the purpose, or am I missing something? I understood the offensiveness of the specific example to be the point.
Trolling usually means disrupting the flow of discussion by deliberately offensive behaviour towards other participants. It usually doesn't denote proposing a thought experiment with a possible solution that is likely to be rejected for its offensiveness. But this could perhaps be called "trolleying".
I've considered using neutral terms, but then it is just too easy to say "well, it just sucks to be you, Neptunian, my rational anti-dust-specker approach requires you to suffer!"
It's a bad sign if you feel your argument requires violating Godwin's Law in order to be effective, no?
Not strictly. It's still explicitly genocide with Venusians and Neptunians -- it's just easier to ignore that fact in the abstract. Connecting it to an actual genocide causes people to reference their existing thinking on the subject. Whether or not that existing thinking is applicable is open for debate, but the tactic's not invalid out of hand.
As discussed there, pointing out that it has this feature isn't always the worst argument in the world. If you have a coherent reason why this argument is different from other moral arguments that require Godwin's Law violations for their persuasiveness, then the conversation can go forward.
EDIT: (Parent was edited while I was replying.) If "using Jews and Nazis as your example because replacing them with Venusians and Neptunians would fail to be persuasive" isn't technically "Godwin's Law", then fine, but it's still a feature that correlates with really bad moral arguments, unless there's a relevant difference here.
It is just a logical conclusion from "dust specks". You can/must do horrible things to a small minority if the members of a large majority each benefit a little from it.
Another part of the Sequence I reject.
Imagine if humanity survives for the next billion years, expands to populate the entire galaxy, has a magnificent (peaceful, complex) civilization, and is almost uniformly miserable because it consists of multiple fundamentally incompatible subgroups. Nearly everyone is essentially undergoing constant torture, because of a strange, unfixable psychological quirk that creates a powerful aversion to certain other types of people (who are all around them).
If the only alternative to that dystopian future (besides human extinction) is to exterminate some subgroup of humanity, then that creates a dilemma: torture vs. genocide. My inclination is that near-universal misery is worse than extinction, and extinction is worse than genocide.
And that seems to be where this hypothetical is headed, if you keep applying "least convenient possible world" and ruling out all of the preferable potential alternatives (like separating the groups, or manipulating either group's genes/brains/noses to stop the aversive feelings). If you keep tailoring a hypothetical so that the only options are mass suffering, genocide, and human extinction, then the conclusion is bound to be pretty repugnant. None of those bullets is particularly appetizing, but you'll have to chew on one of them. Which bullet to bite depends on the specifics; as the degree of misery among the aversion-sufferers gets reduced from torture levels towards insignificance, at some point my preference ordering will flip.
This looks like an extension of Yvain's post on offense vs. harm-minimization, with Jews replacing salmon and unchangeable Nazis replacing electrode-implanted Brits.
The consequentialist argument, in both cases, is that if a large group of people are suffering, even if that suffering is based on some weird and unreasonable-seeming aversion, then indefinitely maintaining the status quo in which that large group of people continues to suffer is not a good option. Depending how you construct your hypothetical scenario, and how eager your audience is to play along, you can rule out all of the alternative courses of action except for ones that seem wrong.
The assumption "their terminal values are fixed to hate group X" is something akin to "this group is not human, but aliens with an arbitrary set of values that happen to mostly coincide with traditional human values, but with one exception." Which is not terribly different from "this alien race enjoys creativity and cleverness and love and other human values... but also eats babies."
Discussion of human morality only makes sense when you're talking about humans. Yes, arbitrary groups X and Y may, left to their own devices, find it rational to do all kinds of things we find heinous, but then you're moving away from morality and into straight up game theory.
Isn't it ODD that in a world of Nazis and Jews, I, who am neither, am being asked to make this decision? If I were a Nazi, I'm sure what my decision would be. If I were a Jew, I'm sure what my decision would be.
Actually, now that I think about it, this will be a huge problem if and when humanity, in need of new persons to speak to, decides to uplift animals. It is an important question to ask.
It is always rational for the quasi-Nazis to kill the quasi-Jews, from the Nazi perspective. It's just not always rational for me to kill the Jews - just because someone else wants something, doesn't mean I care.
But if I care about other people in any concrete way, you could modify the problem only slightly in order to have the Nazis suffer in some way I care about because of their hatred of the Jews. In which case, unless my utility is bounded, there is indeed some very large number that corresponds to when it's higher-utility to kill the Jews than to do nothing.
Of course, there are third options that are better, and most of them are even easier than murder, meaning that any agent like me isn't actually going to kill any Jews; they'll have, e.g., lied about doing so long before.
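To make that "very large number" explicit, here is a minimal sketch with my own toy symbols (nothing in the comments above specifies them): under an unbounded, additive utility function, suppose each of $N$ Nazis suffers some small disutility $\varepsilon > 0$ because of their hatred, and killing the Jews carries a fixed disutility $D$ to me. Killing becomes the higher-utility option exactly when

$$N \cdot \varepsilon > D, \qquad \text{i.e. } N > D/\varepsilon.$$

For any finite $D$ and nonzero $\varepsilon$ such an $N$ exists. A bounded utility function blocks the conclusion, because the aggregate of the Nazis' suffering saturates at some cap instead of growing without limit as $N$ does.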
...If you do happen to think that there is a source of morality beyond human beings... and I hear from quite a lot of people who are happy to rhapsodize on how Their-Favorite-Morality is built into the very fabric of the universe... then what if that morality tells you to kill people?
If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say "Pain Is Good"? What then?
I suspect what you mean by desire utilitarianism is what wikipedia calls preference utilitarianism, which I believe is the standard term.
Of course I wouldn't exterminate the Jews! I'm a good human being, and good human beings would never endorse a heinous action like that. Those filthy Nazis can just suck it up, nobody cares about their suffering anyway.
The mistake here is in saying that satisfying the preferences of other agents is always good in proportion to the number of agents whose preference is satisfied. While there have been serious attempts to build moral theories with that as a premise, I consider them failures, and reject this premise. Satisfying the preferences of others is only usually good, with exceptions for preferences that I strongly disendorse, independent of the tradeoffs between the preferences of different people. Also, the value of satisfying the same preference in many people grows sub-linearly with the number of people.
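To illustrate the sub-linearity claim with a toy aggregation rule (my own example, not something the above commits to): instead of valuing $N$ identical preference-satisfactions at $v \cdot N$, value them at something like

$$V(N) = v \cdot \log(1 + N) \quad \text{or, more strongly,} \quad V(N) = v \cdot \frac{N}{N + k}.$$

The logarithmic version still grows without bound, just slowly enough that astronomically many mild satisfactions are needed to outweigh one severe harm; the bounded version caps the aggregate at $v$, so no number of satisfied Nazi preferences can ever outweigh a harm worse than that cap.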
Hm.
I suppose, if LW is to be consistent, comments on negatively voted posts should incur the same karma penalty that comments on negatively voted comments do.
How important is the shape of the noses to the Jewish people?
Consider a Jew who is injured in an accident, and the best reconstruction available restores the nose to a Nazi shape and not a Jewish one. How would his family react? How different would his ability to achieve his life's goals be, and his sense of himself?
How would a Nazi react to such a Jew?
If the aspect of the Jews that the Nazis have to change is something integral to their worldview, then a repugnant conclusion becomes sort of inevitable.
Till then, pull on the rope sideways. Try to save as many people as possible.
First, I'm going to call them 'N' and 'J', because I just don't like the idea of this comment being taken out of context and appearing to refer to the real things.
Does there exist a relative proportion of N to J where extermination is superior to the status quo, under your assumptions? In theory yes. In reality, it's so big that you run into a number of practical problems first. I'm going to run through as many places where this falls down in practice as I can, even if others have mentioned some.
If the Nazis have some built-in value that determines that they hate something utterly arbitrary, then why don't we exterminate them?
What if I place zero value (or negative value, which is probably what I really do, though what I wish I did was put zero value on it) on the kind of satisfaction or peace of mind the Nazis get from knowing the Jews are suffering?
Relevant: Could Nazi Germany seeding the first modern anti-tobacco movement have resulted in an overall net gain in public utility to date?
I'm a bit confused with this torture vs. dust specks problem. Is there an additive function for qualia, so that they can be added up and compared? It would be interesting to look at the definition of such a function.
Edit: removed a bad example of qualia comparison.
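For what it's worth, the usual total-utilitarian move (a sketch of the standard definition, not a claim that such a function actually exists for qualia) is to posit a real-valued welfare level $u_i$ for each person $i$ and aggregate by summation:

$$U = \sum_{i=1}^{n} u_i,$$

with each dust speck contributing some fixed $-\varepsilon$ and the torture some finite $-T$. The specks-vs-torture argument only needs additivity and finiteness; whether a single real number can faithfully encode a quale is exactly the open question being raised here.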
If the Nazis are unable to change their terminal values, then Good|Nazi differs substantially from what we mean when we say Good. Nazis might use the same word, or it might translate as "the same." It might even be similar along many dimensions. Good|Jew might be the same as Good (they don't seem substantially different from humans), although this isn't required by the problem, but Good|Nazi ends up being something that I just don't care about in the case where we are talking about exterminating Jews.
There might be other conditions w...
You know... purposely violating Godwin's Law seems to have become an applause light around here, as if we want to demonstrate how super rational we are that we don't succumb to obvious fallacies like Nazi analogies.
One idea that I have been toying with since I read Eliezer's various posts on the complexity of value is that the best moral system might not turn out to be about maximizing satisfaction of any and all preferences, regardless of what those preferences are. Rather, it would be about increasing the satisfaction of various complex, positive human values, such as "Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc." If this is the case then it may well...
What's more important to you, your desire to prevent genocide or your desire for a simple consistent utility function?
It is taking some effort to not make a sarcastic retort to this. Please refrain from using such absurdly politically-loaded examples in future. It damages the discussion.
This is based on a discussion in #lesswrong a few months back, and I am not sure how to resolve it.
Setup: suppose the world is populated by two groups of people: one just wants to be left alone (labeled Jews), the other hates the first with a passion and wants them dead (labeled Nazis). The second group is otherwise just as "good" as the first one (loves its relatives and its country, and is known to be in general quite rational). They just can't help but hate the other guys (this condition is to forestall objections like "the Nazis ought to change their terminal values"). Maybe the shape of Jewish noses just creeps the hell out of them, or something. Let's just assume, for the sake of argument, that there is no changing that hatred.
Is it rational to exterminate the Jews to improve the Nazis' quality of life? Well, this seems like a silly question. Of course not! Now, what if there are many more Nazis than Jews? Is there a number of Nazis large enough that exterminating the Jews would be a net positive utility for the world? Umm... Not sure... I'd like to think probably not; human life is sacred! What if some day their society invents immortality? Then every death is like an extremely large (infinite?) negative utility!
Fine then, no exterminating. Just send them all to concentration camps, where they will suffer in misery and probably have a shorter lifespan than they would otherwise. This is not an ideal solution from the Nazi point of view, but it makes them feel a little bit better. And now the utilities are unquestionably comparable, so if there are billions of Nazis and only a handful of Jews, the overall suffering decreases when the Jews are sent to the camps.
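Spelled out as arithmetic (my own symbols, under the simple additive utilitarianism at issue): if each of $N$ Nazis gains a small amount $b > 0$ of well-being from the camps and each of $J$ Jews loses a large amount $c$, the policy is a net improvement exactly when

$$N \cdot b > J \cdot c,$$

which, with $J$ a handful and $N$ in the billions, holds unless the ratio $c/b$ is itself astronomically large.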
This logic is completely analogous to that in the dust specks vs. torture discussions; only my "little XML labels", to quote Eliezer, make it more emotionally charged. Thus, if you are a utilitarian anti-specker, you ought to decide that, barring changing the Nazis' terminal value of hating Jews, the rational behavior is to herd the Jews into concentration camps, or possibly even exterminate them, provided there are enough Nazis in the world who benefit from it.
This is quite a repugnant conclusion, and I don't see a way of fixing it the way the original one is fixed (to paraphrase Eliezer, "only lives worth celebrating are worth creating").
EDIT: Thanks to CronoDAS for pointing out that this is known as the 1000 Sadists problem. Once I had this term, I found that lukeprog has mentioned it on his old blog.