Without commenting on whether this presentation matches the original metaethics sequence (with which I disagree), this summary argument seems both unsupported and unfalsifiable.
Would this be an accurate summary of what you think is the meta-ethics sequence? I feel that you captured the important bits but I also feel that we disagree on some aspects:
V(Elves, _) = Christmas spirit
V(Pebblesorters, _) = primality
V(Humans, _) = morality
If V(Humans, Alice) =/= V(Humans, _), that doesn't make morality subjective; it is rather i...
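The V notation above can be read as a two-argument function from (species, individual) to a value system, with `_` standing for the species-level aggregate. A minimal illustrative sketch, using only the examples from this thread (the individual-level entry for Alice is hypothetical, added just to show how V(Humans, Alice) can differ from V(Humans, _)):

```python
# Illustrative sketch of the V(species, individual) notation in this thread.
# V maps a (species, individual) pair to that agent's value system; the
# individual argument "_" stands for the species-level aggregate.

SPECIES_VALUES = {
    "Elves": "Christmas spirit",
    "Pebblesorters": "primality",
    "Humans": "morality",
}

# Hypothetical individual-level overrides: in general V(Humans, Alice)
# need not equal V(Humans, _).
INDIVIDUAL_VALUES = {
    ("Humans", "Alice"): "morality-as-Alice-computes-it",
}

def V(species, individual="_"):
    """Return the value system of `individual` of `species`."""
    if individual != "_" and (species, individual) in INDIVIDUAL_VALUES:
        return INDIVIDUAL_VALUES[(species, individual)]
    return SPECIES_VALUES[species]

print(V("Pebblesorters"))                   # primality
print(V("Humans", "Alice"))                 # morality-as-Alice-computes-it
print(V("Humans", "Alice") != V("Humans"))  # True
```

The point of the sketch is only that a divergence between the individual and aggregate values is representable without making the species-level value relation itself subjective.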
Unpacking "should" as "morally obligated to" is potentially helpful, insofar as you can give separate accounts of "moral" and "obligatory".
The elves are not moral. Not just because I, and humans like me, happen to disagree with them, no, certainly not. The elves aren't even trying to be moral. They don't even claim to be moral. They don't care about morality. They care about "The Christmas Spirit," which is about eggnog and stuff.
That doesn't generalise to the point that non-humans have no morality. You have m...
Morality binds and blinds. People derive moral claims from emotional and intuitive notions. It can feel good and moral to do immoral things. An objective morality has to be tied to evidence about what human wellbeing really is, not to moral intuitions that are adaptations benefiting one's ingroup, or to post hoc thought experiments about knowledge.
"There are different sets of self-consistent values." This is true, but I do not agree that all logically possible sets of self-consistent values represent moralities. For example, it would be logically possible for an animal to value nothing but killing itself; but this does not represent a morality, because such an animal cannot exist in reality in a stable manner. It cannot come into existence in a natural way (namely by evolution) at all, even if you might be able to produce one artificially. If you do produce one artificially, it will just kill itself and then it will not exist.
This is part of what I was saying about how when people use words differently they hope to accomplish different things. I speak of morality in general, not to mean "logically consistent set of values", but a set that could reasonably exist in the real world with a real intelligent being. In other words, restricting morality to human values is an indirect way of promoting the position that human values are arbitrary.
As I said, I don't think Eliezer would accept that characterization of his position, and you give one reason why he would not. But he has a more general view where only some sets of values are possible for merely accidental reasons, namely because it just happens that things cannot evolve in other ways. I would say the contrary -- it is not an accident that the value of killing yourself cannot evolve, but this is because killing yourself is bad.
And this kind of explains how "good" has to be unpacked. Good would be what tends to cause tendencies towards itself. Survival is one example, but not the only one, even if everything else will at least have to be consistent with that value. So e.g. not only is survival valued by intelligent creatures in all realistic conditions, but so is knowledge. So knowledge and survival are both good for all intelligent creatures. But since different creatures will produce their knowledge and survival in different ways, different things will be good for them in relation to these ends.
Any virulently self-reproducing meme would be another.