On Wei_Dai's complexity of values post, Toby Ord writes:
> There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam's Razor.
>
> There are another pair of views that go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. In the Phil Papers survey for example, 56.3% of philosophers lean towards or believe realism, while only 27.7% lean towards or accept anti-realism.
The kind of moral realist positions that apply Occam's razor to moral beliefs are a lot more extreme than most philosophers in the cited survey would sign up to, methinks. One such position that I used to have some degree of belief in is:
Strong Moral Realism: All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory morphism, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.
But most modern philosophers who call themselves "realists" don't mean anything nearly this strong. They mean that there are moral "facts", for varying definitions of "fact" that typically fade into meaninglessness on closer examination and end up making the same empirical predictions as antirealism.
Suppose you take up Eliezer's "realist" position: arrangements of spacetime, matter and energy can be "good" in the sense that Eliezer has a "long-list" style definition of goodness up his sleeve, one that decides even contested object-level moral questions like whether abortion should be allowed. The definition takes any arrangement of spacetime, matter and energy, notes to what extent it fits the criteria on the long list, and decrees it good or not (possibly with a scalar rather than a binary value).
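No such list has actually been written down, but the shape of the idea can be sketched. Assuming we model a world-state as a plain dictionary, a "long-list" definition of goodness is just a fixed scoring function over world-states, with hand-chosen criteria and weights (every name and criterion below is invented purely for illustration):

```python
# Hypothetical sketch, NOT anyone's actual definition: a "long-list"
# theory of goodness as a fixed, arbitrary scoring function over
# world-states. All criteria and weights here are made up.

def long_list_goodness(world_state: dict) -> float:
    """Score a world-state against a fixed list of weighted criteria.

    Returns a scalar in [0, 1]. Nothing in the procedure favors a
    shorter criteria list over a longer one.
    """
    criteria = [
        # (description, predicate, weight) -- stand-ins for a real list
        ("conscious beings exist", lambda w: w.get("conscious_beings", 0) > 0, 1.0),
        ("net suffering is low",   lambda w: w.get("suffering", 1.0) < 0.1,    2.0),
        ("knowledge is growing",   lambda w: w.get("knowledge_growth", False), 0.5),
    ]
    score = sum(weight for _, pred, weight in criteria if pred(world_state))
    return score / sum(weight for _, _, weight in criteria)

print(long_list_goodness({"conscious_beings": 10, "suffering": 0.05}))
```

The point of the sketch is that swapping in a different criteria list yields a different but equally well-defined function, and nothing internal to the procedure adjudicates between the two lists.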
This kind of "moral realism" behaves, to all intents and purposes, like antirealism.
- You don't favor shorter long-list definitions of goodness over longer ones. The criteria for choosing the list have little to do with its length, and more with what a human brain emulation with such-and-such modifications to make it believe only and all relevant true empirical facts would decide once it had reached reflective moral equilibrium.
- Agents who have a different "long list" definition cannot be moved by the fact that you've declared your particular long list "true goodness".
- There would be no reason to expect alien races to have discovered the same long list defining "true goodness" as you.
- An alien with a different "long list" than you, upon learning the causal reasons for the particular long list you have, is not going to change their long list to be more like yours.
- You don't need to use probabilities and update your long list in response to evidence; quite the opposite: you want it to remain unchanged.
I might compare the situation to Eliezer's blegg post: it may be that moral philosophers have a mental category for "fact" that seems to be allowed to have a value even once all of the empirically grounded surrounding concepts have been fixed. These might be concepts such as "would aliens also think this thing?", "can it be discovered by an independent agent who hasn't communicated with you?", "do we apply Occam's razor?", etc.
Moral beliefs might work better when they have a Grand Badge Of Authority attached to them. Once all the empirically falsifiable candidates for the Grand Badge Of Authority have been falsified, the only one left is the ungrounded category marker itself, and some people like to stick this on their object level morals and call themselves "realists".
Personally, I prefer to call a spade a spade, but I don't want to get into an argument about the value of an ungrounded category marker. Suffice it to say that for any practical matter, the only parts of the map we should argue about are parts that map onto a part of the territory.
But then morality does not have as its subject matter "Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."
Instead, its subject matter is primarily a list of ways to transform the universe into paperclips, cheesecake, needles, orgasmium, and only finally, a long way down the list, into eudaimonium.
I think this is not the subject matter that most people are talking about when they talk about morality. We should have a different name for this new subject, like "decision theory".
I think you can keep that definition: define morality and morality-human. However, at least in the metaethics sequence, it would have done a lot of good to distinguish between morality-Joe and morality-Jane even if you were eventually going to argue that the two were equivalent. Once you're finished arguing that point, however, go on using the term "morality" the way you want to.
I only say this because of my own experience. I didn't really understand the metaethics sequence when I fir…