On Wei_Dai's complexity of values post, Toby Ord writes:
> There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam's Razor.
>
> There are another pair of views that go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. In the Phil Papers survey for example, 56.3% of philosophers lean towards or believe realism, while only 27.7% lean towards or accept anti-realism.
The kind of moral realist position that applies Occam's razor to moral beliefs is a lot more extreme than most philosophers in the cited survey would sign up to, methinks. One such position, which I used to have some degree of belief in, is:
Strong Moral Realism: All (or perhaps just almost all) beings, whether human, alien, or AI, when given sufficient computing power and the ability to learn science and arrive at an accurate map-territory correspondence, will agree on what physical state the universe ought to be transformed into, and will therefore assist you in transforming it into that state.
But most modern philosophers who call themselves "realists" don't mean anything nearly this strong. They mean that there are moral "facts", for varying definitions of "fact" that typically fade into meaninglessness on closer examination, yielding positions that make the same empirical predictions as anti-realism.
Suppose you take up Eliezer's "realist" position: arrangements of spacetime, matter, and energy can be "good" in the sense that Eliezer has a "long-list"-style definition of goodness up his sleeve, one that settles even contested object-level moral questions such as whether abortion should be allowed. It takes any arrangement of spacetime, matter, and energy, notes to what extent it fits the criteria on Eliezer's long list, and then pronounces it good or not (possibly with a scalar rather than a binary verdict).
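To make the shape of this idea concrete, here is a minimal sketch of such a long-list evaluator. It is my own illustration, not anything Eliezer has specified: the WorldState type, the placeholder criteria, and the averaging rule are all hypothetical.

```python
from typing import Callable, List

class WorldState:
    """Hypothetical stand-in for a complete description of an
    arrangement of spacetime, matter, and energy."""
    def __init__(self, **features: bool) -> None:
        self.features = features

# Each criterion inspects a world state and returns a score in [0, 1].
Criterion = Callable[[WorldState], float]

# Placeholder entries; the real list would be enormous and
# information-theoretically complex, including verdicts on contested
# object-level questions.
LONG_LIST: List[Criterion] = [
    lambda w: float(w.features.get("sentient_flourishing", False)),
    lambda w: float(w.features.get("no_gratuitous_suffering", False)),
    lambda w: float(w.features.get("knowledge_is_valued", False)),
]

def goodness(world: WorldState) -> float:
    """Score an arrangement against every criterion on the long list,
    returning a scalar rather than a binary verdict."""
    return sum(criterion(world) for criterion in LONG_LIST) / len(LONG_LIST)

# Example: a world satisfying two of the three placeholder criteria.
print(goodness(WorldState(sentient_flourishing=True, knowledge_is_valued=True)))
# -> 0.666...
```

The point of the sketch is only that, on this view, "good" names one particular, very detailed predicate over world-states; none of the bullets below depend on the specific criteria chosen.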
This kind of "moral realism" behaves, for all intents and purposes, like anti-realism:
- You don't favor shorter long-list definitions of goodness over longer ones. The criteria for choosing the list have little to do with its length and much more to do with what a human brain emulation, modified in such-and-such ways so that it believes all and only the relevant true empirical facts, would decide once it had reached reflective moral equilibrium.
- Agents who have a different "long list" definition cannot be moved by the fact that you've declared your particular long list "true goodness".
- There would be no reason to expect alien races to have discovered the same long list defining "true goodness" as you.
- An alien with a different "long list" from yours, upon learning the causal reasons for the particular long list you have, is not going to change their long list to be more like yours.
- You don't need to use probabilities and update your long list in response to evidence; quite the opposite, you want it to remain unchanged.
I might compare the situation to Eliezer's blegg post: it may be that moral philosophers have a mental category for "fact" that is allowed to retain a value even once all of the empirically grounded surrounding questions have been settled. These would be questions such as "Would aliens also think this thing?", "Can it be discovered by an independent agent who hasn't communicated with you?", "Do we apply Occam's razor?", etc.
Moral beliefs might work better when they have a Grand Badge Of Authority attached to them. Once all the empirically falsifiable candidates for the Grand Badge Of Authority have been falsified, the only candidate left is the ungrounded category marker itself, and some people like to stick this on their object-level morals and call themselves "realists".
Personally, I prefer to call a spade a spade, but I don't want to get into an argument about the value of an ungrounded category marker. Suffice it to say that for any practical matter, the only parts of the map worth arguing about are the parts that map onto some part of the territory.
I have the same feeling, from the other direction.
I feel like I completely understand the error you're warning against in No License To Be Human; if I'm making a mistake, it's not that one. I totally get that "right", as you use it, is a rigid designator; if you changed humans, that wouldn't change what's right. Fine. The fact remains, however, that "right" is a highly specific, information-theoretically complex computation. You have to look in a narrow region of computation-space to find it. This is what makes you vulnerable to the chauvinism charge; there are lots of other computations that you didn't decide to single out and call "right", and the question is: why not? What makes this one so special? The answer is that you looked at human brains, as they happen to be constituted, and said, "This is a nice thing we've got going here; let's preserve it."
Yes, of course that doesn't constitute a general license to look at the brains of whatever species you happen to be a member of to decide what's "right"; if the Babyeaters or Pebblesorters did this, they'd get the wrong answer. But that doesn't change the fact that there's no way to convince Babyeaters or Pebblesorters to be interested in "rightness" rather than babyeating or primality. It is this lack of a totally neutral, agent-independent persuasion route that is responsible for the fundamentally relative nature of morality.
And yes, of course, it's a mistake to expect to find any argument that would convince every mind, or an ideal philosopher of perfect emptiness -- that's why moral realism is a mistake!