Your first suggestion isn't an additional alternative, it's just a subdivision within 4 or 5.
I'm not sure I understand the second one. Are you trying to draw the distinction between consequentialist and non-consequentialist moralities? If so, I think that is usually considered to be a distinction in normative ethics rather than metaethics. Although I repeatedly use "preferences" and "values" in this post, that was just for convenience rather than trying to imply that morality must have something to do with values.
Perhaps, but it seems like there's a substantive difference between those who believe there are no facts about what all intelligent beings should value, and those who believe that, in addition, there are also no facts about what humans should value.
In this post, I list six metaethical possibilities that I think are plausible, along with some arguments or plausible stories about how/why they might be true, where that's not obvious. A lot of people seem fairly certain about their metaethical views, but I'm not, and I want to convey my uncertainty as well as some of the reasons for it.
(Note that for the purposes of this post, I'm concentrating on morality in the axiological sense (what one should value) rather than in the sense of cooperation and compromise. So alternative 1, for example, is not intended to include the possibility that most intelligent beings end up merging their preferences through some kind of grand acausal bargain.)
It may be useful to classify these possibilities using labels from academic philosophy. Here's my attempt: 1. realist + internalist 2. realist + externalist 3. relativist 4. subjectivist 5. moral anti-realist 6. normative anti-realist. (A lot of debates in metaethics concern the meaning of ordinary moral language, for example whether moral statements refer to facts or merely express attitudes. I mostly ignore such debates in the above list, because it's not clear what implications they have for the questions that I care about.)
One question LWers may have is, where does Eliezer's metaethics fall within this schema? Eliezer says that there are moral facts about what values every intelligence in the multiverse should have, but only humans are likely to discover these facts and be motivated by them. To me, Eliezer's use of language is counterintuitive, and since it seems plausible that there are facts about what everyone should value (or how each person should translate their non-preferences into preferences) that most intelligent beings can discover and be at least somewhat motivated by, I'm reserving the phrase "moral facts" for these. In my language, I think 3 or maybe 4 is probably closest to Eliezer's position.