In You Provably Can't Trust Yourself, Eliezer tried to figure out why his audience didn't understand his meta-ethics sequence even after they had followed him through philosophy of language and quantum physics. Meta-ethics is my specialty, and I can't figure out what Eliezer's meta-ethical position is. And at least at this point, professionals like Robin Hanson and Toby Ord couldn't figure it out, either.
Part of the problem is that because Eliezer has gotten little value from professional philosophy, he writes about morality in a highly idiosyncratic way, using terms that would require reading hundreds of posts to understand. I might understand Eliezer's meta-ethics better if he would just cough up his positions on standard meta-ethical debates like cognitivism, motivation, the sources of normativity, moral epistemology, and so on. Nick Beckstead recently told me he thinks Eliezer's meta-ethical views are similar to those of Michael Smith, but I'm not seeing it.
If you think you can help me (and others) understand Eliezer's meta-ethical theory, please leave a comment!
Update: This comment by Richard Chappell made sense of Eliezer's meta-ethics for me.
As I understand it, Eliezer has taken the position that human values are too complex for humans to reliably formalize, and that all formalizations presented so far are, or probably are, incorrect. This may explain some of your difficulty in trying to find Eliezer's preferred formalization.
One project is the descriptive one of moral psychology and moral anthropology. Because Coherent Extrapolated Volition begins with data from moral psychology and moral anthropology, that descriptive project is important for Eliezer's design of Friendly AI. Certainly, I agree with Eliezer that human values are too complex to easily formalize, because our terminal values are the product of millions of years of messy biological and cultural evolution.
"Morality" is a term usually used in speech acts to refer to a set of normative questions about what ... (read more)