This is based on a discussion in #lesswrong a few months back, and I am not sure how to resolve it.
Setup: suppose the world is populated by two groups of people: one just wants to be left alone (labeled Jews), the other hates the first with a passion and wants them dead (labeled Nazis). The second group is otherwise just as "good" as the first one (loves their relatives, loves their country, and is known to be in general quite rational). They just can't help but hate the other guys (this condition is to forestall objections like "Nazis ought to change their terminal values"). Maybe the shape of Jewish noses just creeps the hell out of them, or something. Let's just assume, for the sake of argument, that there is no changing that hatred.
Is it rational to exterminate the Jews to improve the Nazis' quality of life? Well, this seems like a silly question. Of course not! Now, what if there are many more Nazis than Jews? Is there a number large enough that exterminating the Jews would be a net positive utility for the world? Umm... Not sure... I'd like to think that probably not; human life is sacred! And what if some day their society invents immortality? Then every death is an extremely large (infinite?) negative utility!
Fine then, not exterminating. Just send them all to concentration camps, where they will suffer in misery and probably have a shorter lifespan than they would otherwise. This is not an ideal solution from the Nazi point of view, but it makes them feel a little bit better. And now the utilities are unquestionably comparable, so if there are billions of Nazis and only a handful of Jews, the overall suffering decreases when the Jews are sent to the camps.
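(To make the arithmetic explicit, with numbers I'm making up purely for illustration: if each of N Nazis gains a small amount ε of utility from the camps, and each of k Jews loses a large amount L, the naive utilitarian sum favors the camps whenever N·ε > k·L. Say N = 10^9 Nazis gaining ε = 1 util each, against k = 10^3 Jews losing L = 10^5 utils each: the Nazis' side totals 10^9 against the Jews' 10^8, so the aggregate comes out in favor of the camps even though each individual Jew loses vastly more than any individual Nazi gains.)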
This logic is completely analogous to that in the dust specks vs torture discussions, only my "little XML labels", to quote Eliezer, make it more emotionally charged. Thus, if you are a utilitarian anti-specker, you ought to decide that, barring changing the Nazis' terminal value of hating Jews, the rational behavior is to herd the Jews into concentration camps, or possibly even exterminate them, provided there are enough Nazis in the world who benefit from it.
This is quite a repugnant conclusion, and I don't see a way of fixing it the way the original one is fixed (to paraphrase Eliezer, "only lives worth celebrating are worth creating").
EDIT: Thanks to CronoDAS for pointing out that this is known as the 1000 Sadists problem. Once I had this term, I found that lukeprog has mentioned it on his old blog.
Here is a more difficult scenario:
I am a mind uploaded to a computer and I hate everyone except me. Seeing people dead would make me happy; knowing they are alive makes me suffer. (The suffering is not big enough to make my life worse than death.)
I also have another strong wish -- to have a trillion identical copies of myself. I enjoy my own company, and a trillion seems like a nice number.
What is the Friendly AI, the ruler of this universe, supposed to do?
My life is not worse than death, so there is nothing inherently unethical in me wanting to have a trillion copies of myself, if that is economically available. All those copies will be predictably happy to exist, and even happier to see their identical copies around them.
However, once my trillion identical copies exist, their total desire to see everyone else dead will outweigh the total desire of everyone else to live. So it would be utility-maximizing to kill the others.
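(Toy numbers, again purely illustrative: suppose each copy of me gets ε utils of relief from everyone else being dead, and each of the n other people gets V utils from staying alive. Before the copying, n·V easily dominates. After the copying, the comparison is 10^12·ε against n·V; with ε = 1, n = 10^10 and V = 10, that's 10^12 against 10^11, and a straight utility sum now says to kill everyone else.)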
Should the Friendly AI allow it or disallow it... and what exactly would be its true rejection?
There are lots of hippo-fighting things I could say here, but handwaving a bit to accept the thrust of your hypothetical... a strictly utilitarian FAI of course agrees to kill everyone else (2) and replace them with copies of you (1). As J_Taylor said, utility monsters are wily beasts.
I find this conclusion intuitively appalling. Repugnant, even. Which is no surprise; my ethical intuitions are not strictly utilitarian. (3)
So one question becomes, are the non-utilitarian aspects of my ethical intuitions something that can be applied on these sorts of scales...