Your first suggestion isn't an additional alternative; it's just a subdivision within 4 or 5.
Perhaps, but it seems like there's a substantive difference between those who believe there are no facts about what all intelligent beings should value and those who believe that, in addition, there are also no facts about what humans should value.
Although I repeatedly use "preferences" and "values" in this post, that was just for convenience; I didn't mean to imply that morality must have something to do with values.
Could you give an example of one of these positions put in terms that would be inclusive of both consequentialist and non-consequentialist ethical theories?
Sure. 1. Most intelligent beings in the multiverse end up sharing similar moralities. This came about because there are facts about what morals one should have. For example, suppose there are facts about what preferences one should have along with facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations (or the equival...
In this post, I list six metaethical possibilities that I think are plausible, along with some arguments or plausible stories about how/why they might be true, where that's not obvious. A lot of people seem fairly certain in their metaethical views, but I'm not, and I want to convey my uncertainty as well as some of the reasons for it.
(Note that for the purposes of this post, I'm concentrating on morality in the axiological sense (what one should value) rather than in the sense of cooperation and compromise. So alternative 1, for example, is not intended to include the possibility that most intelligent beings end up merging their preferences through some kind of grand acausal bargain.)
It may be useful to classify these possibilities using labels from academic philosophy. Here's my attempt: 1. realist + internalist 2. realist + externalist 3. relativist 4. subjectivist 5. moral anti-realist 6. normative anti-realist. (A lot of debates in metaethics concern the meaning of ordinary moral language, for example whether moral statements refer to facts or merely express attitudes. I mostly ignore such debates in the above list, because it's not clear what implications they have for the questions that I care about.)
One question LWers may have is: where does Eliezer's metaethics fit into this schema? Eliezer says that there are moral facts about what values every intelligence in the multiverse should have, but only humans are likely to discover these facts and be motivated by them. To me, Eliezer's use of language is counterintuitive, and since it seems plausible that there are facts about what everyone should value (or how each person should translate their non-preferences into preferences) that most intelligent beings can discover and be at least somewhat motivated by, I'm reserving the phrase "moral facts" for these. In my language, I think 3 or maybe 4 is probably closest to Eliezer's position.