This is based on a discussion in #lesswrong a few months back, and I am not sure how to resolve it.
Setup: suppose the world is populated by two groups of people: one just wants to be left alone (labeled Jews), while the other hates the first group with a passion and wants them dead (labeled Nazis). The second group is otherwise just as "good" as the first (they love their relatives and their country, and are generally quite rational). They just can't help but hate the other guys (this condition is to forestall objections like "the Nazis ought to change their terminal values"). Maybe the shape of Jewish noses just creeps the hell out of them, or something. Let's assume, for the sake of argument, that there is no changing that hatred.
Is it rational to exterminate the Jews to improve the Nazis' quality of life? Well, this seems like a silly question. Of course not! Now, what if there are many more Nazis than Jews? Is there a number large enough that exterminating the Jews would be a net positive utility for the world? Umm... not sure... I'd like to think probably not; human life is sacred! And what if some day their society invents immortality? Then every death is an extremely large (infinite?) negative utility!
Fine then, no extermination. Just send them all to concentration camps, where they will suffer in misery and probably have a shorter lifespan than they would otherwise. This is not an ideal solution from the Nazi point of view, but it makes them feel a little bit better. And now the utilities are unquestionably comparable, so if there are billions of Nazis and only a handful of Jews, the overall suffering decreases when the Jews are sent to the camps.
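To make the arithmetic explicit (the symbols here are just illustrative): suppose there are $N$ Nazis, each gaining some small utility $u > 0$ from the camps, and $k$ Jews, each suffering a large disutility $D \gg u$. The naive aggregate change is

$$\Delta U = N u - k D,$$

which turns positive as soon as $N > kD/u$; for any finite $D$, a large enough $N$ flips the sign.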
This logic is completely analogous to that in the dust specks vs. torture discussions; only my "little XML labels", to quote Eliezer, make it more emotionally charged. Thus, if you are a utilitarian anti-specker, you ought to conclude that, barring changing the Nazis' terminal value of hating Jews, the rational course is to herd the Jews into concentration camps, or possibly even exterminate them, provided there are enough Nazis in the world who benefit from it.
This is quite a repugnant conclusion, and I don't see a way of fixing it the way the original one is fixed (to paraphrase Eliezer, "only lives worth celebrating are worth creating").
EDIT: Thanks to CronoDAS for pointing out that this is known as the 1000 Sadists problem. Once I had this term, I found that lukeprog has mentioned it on his old blog.
Well, this seems to be a bigger debate than I thought I was getting into. It's tangential to any point I was actually trying to make, but it's interesting enough that I'll bite.
I'll try to give you a description of my point of view so that you can target it directly, as nothing you've given me so far has really put much of a dent in it. So far I just feel like I'm suffering from guilt by association: there are people out there saying "morality is defined as God's will", and as soon as I suggest it's anything other than some correlated preferences, I fall into their camp.
Consider first the moral views that you have. Now imagine you had more information and had heard some good arguments. In general, your moral views would "improve" (give or take the chance of specifically misrepresentative information or persuasive false arguments, which in the long run should be cancelled out by more information and more arguments). Imagine also that you're smarter; again, in general, your moral views should improve. You should prefer the moral views that a smarter, better-informed version of yourself would have to your current views.
Now, imagine the limit of your moral views as the amount of information you have approaches perfect information, and your intelligence approaches that of a perfectly rational Bayesian. I contend that this limit exists, and it is what I would refer to as the ideal morality. This "existence" is not the same as being somehow "woven into the fabric of the universe". Aliens could not discover it by studying physics. It "exists", but only in the sense that Aleph 1 exists, or "the largest number ever to be uniquely described by a non-potentially-self-referential statement" exists. If I don't like what it says, that is by definition either because I am misinformed or because I am stupid, so I would not wish to ignore it and stick with my own views (I'm referring here to one of Eliezer's criticisms of moral realism).
So, if I bravely assume you accept that this limit exists, I can imagine you might claim that it's still subjective, in that it's the limit of an individual person's views as their information and intelligence approach perfection. However, I also think the limit is the same for every person, for a combination of two reasons. First, as Eliezer has said, two perfect Bayesians given the same information must reach the same conclusion. As such, the only thing left to break the symmetry between two different perfectly intelligent and completely informed beings is the simple fact of their being different people. This is where I bring in the difference between morality and preference. I basically define morality as being about what's best for everyone in general, as opposed to preference, which is about what's best for yourself. Which person in the universe happens to be you should simply not be an input to morality. So: the limit is the same rational process applied to the same information, and it is not a function of which person you are; therefore it must be the same for everyone.
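To put the claim in symbols (the notation is just illustrative; nothing in the argument depends on it): let $V_p(I, r)$ be person $p$'s moral views given information $I$ and rationality $r$. The contention is that

$$M \;=\; \lim_{I \to I^{*},\; r \to r^{*}} V_p(I, r)$$

exists, where $I^{*}$ is complete information and $r^{*}$ is ideal Bayesian rationality, and that $M$ does not depend on $p$: the information and the updating procedure are the same for everyone, and once self-regarding preferences are excluded, $p$'s identity is not an input.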
Now at least you have a concrete argument to shoot at rather than some statements suggesting I fall into a particular bucket.
I'll ignore several other things I disagree with, or that are wrong, and concentrate on what I view as the big issue, because it's really big.
Note: this is the limit of my personal morals. My limit would not be the same as your limit, let alone a nonhuman's limit.