This is based on a discussion in #lesswrong a few months back, and I am not sure how to resolve it.
Setup: suppose the world is populated by two groups of people. One just wants to be left alone (labeled Jews); the other hates the first group with a passion and wants them dead (labeled Nazis). The second group is otherwise just as "good" as the first: they love their relatives and their country, and are known to be quite rational in general. They just can't help but hate the other guys (this condition is meant to forestall objections like "the Nazis ought to change their terminal values"). Maybe the shape of Jewish noses just creeps the hell out of them, or something. Let's just assume, for the sake of argument, that there is no changing that hatred.
Is it rational to exterminate the Jews to improve the Nazis' quality of life? Well, this seems like a silly question. Of course not! Now, what if there are many more Nazis than Jews? Is there a number large enough that exterminating the Jews would be a net positive utility for the world? Umm... not sure... I'd like to think probably not; human life is sacred! And what if some day their society invents immortality? Then every death is like an extremely large (infinite?) negative utility!
Fine then, no exterminating. Just send them all to concentration camps, where they will suffer in misery and probably have a shorter lifespan than they otherwise would. This is not an ideal solution from the Nazi point of view, but it makes them feel a little bit better. And now the utilities are unquestionably comparable, so if there are billions of Nazis and only a handful of Jews, the overall suffering decreases when the Jews are sent to the camps.
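To make the arithmetic behind that claim explicit (a minimal sketch, with symbols I'm introducing purely for illustration): say each of $N$ Nazis gains some small utility $\epsilon > 0$ from the camps, while each of $M$ Jews loses a much larger utility $D > 0$. A straightforward aggregating utilitarian then endorses the camps whenever

$$N \epsilon > M D,$$

and for any fixed $\epsilon$, $D$ and $M$ this holds once $N$ is large enough.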
This logic is completely analogous to that in the dust specks vs. torture discussions; only my "little XML labels", to quote Eliezer, make it more emotionally charged. Thus, if you are a utilitarian anti-specker, you ought to decide that, barring a change in the Nazis' terminal value of hating Jews, the rational behavior is to herd the Jews into concentration camps, or possibly even exterminate them, provided there are enough Nazis in the world who benefit from it.
This is quite a repugnant conclusion, and I don't see a way of fixing it the way the original one is fixed (to paraphrase Eliezer, "only lives worth celebrating are worth creating").
EDIT: Thanks to CronoDAS for pointing out that this is known as the 1000 Sadists problem. Once I had this term, I found that lukeprog has mentioned it on his old blog.
I'll ignore several other things I disagree with, or that are wrong, and concentrate on what I view as the big issue, because it's really big.
Note: this is the limit of my personal morals. My limit would not be the same as your limit, let alone a nonhuman's limit.
So aliens could discover it by studying mathematics, like a logical truth? Would they have any reason to treat it as a moral imperative? How does a logical fact or mathematical theorem become a moral imperative?
You gave that definition yourself. Then you assume without proof that those ideal morals exist and have the properties you describe. Then you claim, again without proof or even argument (beyond your definition), that they really are the best or idealized morals, for all humans at least, and that they describe universal moral obligations.
You can't just give an arbitrary definition and transform it into a moral claim without any actual argument. How is that different from me saying: I define X-Morals as "the morals achieved by all sufficiently well-informed and smart humans, which require that they greet each person they meet by hugging"? If you don't like this requirement, it's by definition because you're misinformed or stupid.
They reach the same conclusion about facts they have information about, like physical facts or logical theorems. But nobody has "information about morals". Morals are just a kind of preference. You can only have information about some particular person's morals, not about morals in themselves. So perfect Bayesians will agree about what my morals are and about what your morals are, but that doesn't mean your morals and mine are the same. Your argument is circular.
Well, first of all, that's not how everyone else uses the word "morals". Normally we would say that your morals are to do what's best for everyone, while my morals are something else. Calling your personal morals simply "morals" is equivalent to saying that my (different) morals shouldn't be called by the name "morals", or even "Daniel's morals", which is simply wrong.
As for your definition of (your) morals: you describe, roughly, utilitarianism. But people argue forever over brands of utilitarianism: average vs. total utilitarianism, different handling of utility monsters, different handling of "zero utility", different (necessarily arbitrary) weightings of whose preferences are counted (do we satisfy paperclippers?), and so on. Experimentally, people are uncomfortable with any single concrete version (each has its "repugnant conclusions"). And even if you have a version that you personally are satisfied with, that is not yet an argument for others to accept it in place of other versions (or of non-utilitarian approaches).
We obviously have different views on the subjectivity of morals, no doubt an argument that's been had many times before. The Sequences claim to have resolved it, or something, but in such a way that we both still seem to see our views as consistent with them.
To me, subjective morals of the kind you talk about clearly exist, but I don't see them as interesting in their own right. They're just preferences people have about other people's business: interesting for the reasons any preference is interesting, but no different.