In You Provably Can't Trust Yourself, Eliezer tried to figure out why his audience didn't understand his meta-ethics sequence even after they had followed him through philosophy of language and quantum physics. Meta-ethics is my specialty, and I can't figure out what Eliezer's meta-ethical position is. And at least at one point, professionals like Robin Hanson and Toby Ord couldn't figure it out, either.
Part of the problem is that because Eliezer has gotten little value from professional philosophy, he writes about morality in a highly idiosyncratic way, using terms that would require reading hundreds of posts to understand. I might understand Eliezer's meta-ethics better if he would just cough up his positions on standard meta-ethical debates like cognitivism, motivation, the sources of normativity, moral epistemology, and so on. Nick Beckstead recently told me he thinks Eliezer's meta-ethical views are similar to those of Michael Smith, but I'm not seeing it.
If you think you can help me (and others) understand Eliezer's meta-ethical theory, please leave a comment!
Update: This comment by Richard Chappell made sense of Eliezer's meta-ethics for me.
OK. I'll reply here because if I reply there, you won't get the notifications.
The crux of your argument, it seems to me, is the following intuition:
This is certainly a property we would want morality to have, and one which human beings naturally assume it must have. But is that its central property? Should it turn out that nothing which looks like morality has this property, does it logically follow that all morality is dead, or is that reaction just a human impulse?
(I will note, with all the usual caveats, that believing one's moral sentiments to be universal in scope and not based on preference is a big advantage in object-level moral arguments, and that we happen to be descended from the winners of arguments about tribal politics and morality.)
If a certain set of moral impulses involves shared standards common to, say, every sane human being, then moral arguments would still work among those human beings, in exactly the way you would want them to work across all intelligent beings. Frankly, that's good enough for me. Why give baby-eating aliens in another universe veto powers over every moral intuition of yours?
Thanks for the reply -- I find this a very interesting topic. One thing I should clarify is that my view doesn't entail giving aliens "veto powers", as you put it; an alternative response is to take them to be unreasonable to intrinsically desire the eating of babies. That isn't an intrinsically desirable outcome (I take it), i.e. there is no reason to desire such a thing. Stronger still, we may think it intrinsically undesirable, so that insofar as an agent has such desires they are contrary to reason. (This requires a substantive notion of r...