This is based on a discussion in #lesswrong a few months back, and I am not sure how to resolve it.
Setup: suppose the world is populated by two groups of people. One just wants to be left alone (labeled Jews); the other hates the first group with a passion and wants them dead (labeled Nazis). The second group is otherwise just as "good" as the first (they love their relatives and their country, and are known to be in general quite rational). They just can't help but hate the other guys (this condition is meant to forestall objections like "the Nazis ought to change their terminal values"). Maybe the shape of Jewish noses just creeps the hell out of them, or something. Let's assume, for the sake of argument, that there is no changing that hatred.
Is it rational to exterminate the Jews to improve the Nazis' quality of life? Well, this seems like a silly question. Of course not! Now, what if there are many more Nazis than Jews? Is there a number of Nazis large enough that exterminating the Jews would be a net positive in utility for the world? Umm... not sure... I'd like to think probably not; human life is sacred! And what if their society some day invents immortality? Then every death carries an extremely large (infinite?) negative utility!
Fine then, no extermination. Just send them all to concentration camps, where they will suffer in misery and probably have a shorter lifespan than they would otherwise. This is not an ideal solution from the Nazi point of view, but it makes them feel a little bit better. And now the utilities are unquestionably comparable, so if there are billions of Nazis and only a handful of Jews, the overall suffering decreases when the Jews are sent to the camps.
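To make the aggregation explicit (a minimal sketch of the naive utilitarian sum being assumed here; the symbols N, J, ε, and L are my own stand-ins, not anything from the original specks debate): say each of N Nazis gains a small utility ε from the camps, and each of J Jews suffers a large but finite loss L. Then the net change in world utility is

$$\Delta U = N\varepsilon - J L > 0 \quad\text{whenever}\quad N > \frac{J L}{\varepsilon},$$

so for any finite L and any nonzero ε, a sufficiently large Nazi population makes the camps come out as a net "improvement" under straight aggregation. That sign flip is all the argument below relies on.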
This logic is completely analogous to that in the dust specks vs. torture discussions; my "little XML labels", to quote Eliezer, merely make it more emotionally charged. Thus, if you are a utilitarian anti-specker, you ought to decide that, barring a change in the Nazis' terminal value of hating Jews, the rational course is to herd the Jews into concentration camps, or possibly even to exterminate them, provided there are enough Nazis in the world who benefit from it.
This is quite a repugnant conclusion, and I don't see a way of fixing it the way the original one is fixed (to paraphrase Eliezer, "only lives worth celebrating are worth creating").
EDIT: Thanks to CronoDAS for pointing out that this is known as the 1000 Sadists problem. Once I had this term, I found that lukeprog has mentioned it on his old blog.
Imagine if humanity survives for the next billion years, expands to populate the entire galaxy, has a magnificent (peaceful, complex) civilization, and is almost uniformly miserable because it consists of multiple fundamentally incompatible subgroups. Nearly everyone is essentially undergoing constant torture, because of a strange, unfixable psychological quirk that creates a powerful aversion to certain other types of people (who are all around them).
If the only alternative to that dystopian future (besides human extinction) is to exterminate some subgroup of humanity, then that creates a dilemma: torture vs. genocide. My inclination is that near-universal misery is worse than extinction, and extinction is worse than genocide.
And that seems to be where this hypothetical is headed, if you keep applying "least convenient possible world" and ruling out all of the preferable potential alternatives (like separating the groups, or manipulating either group's genes/brains/noses to stop the aversive feelings). If you keep tailoring a hypothetical so that the only options are mass suffering, genocide, and human extinction, then the conclusion is bound to be pretty repugnant. None of those bullets is particularly appetizing, but you'll have to chew on one of them. Which bullet to bite depends on the specifics; as the degree of misery among the aversion-sufferers is reduced from torture levels towards insignificance, at some point my preference ordering will flip.
I noticed something similar in another comment. CEV must weigh the opportunity cost of pursuing a particular terminal value at the expense of all the others, at least in a universe with constrained resources. This leads me to believe that CEV will suggest that the most costly terminal value (in terms of utility opportunity lost by spending time fulfilling it instead of another) be abandoned, and so on until only one is left and we become X maximizers. This might be just fine if X is still humane, but it seems like any...