In You Provably Can't Trust Yourself, Eliezer tried to figure out why his audience didn't understand his meta-ethics sequence even after they had followed him through philosophy of language and quantum physics. Meta-ethics is my specialty, and I can't figure out what Eliezer's meta-ethical position is. And at least at one point, professionals like Robin Hanson and Toby Ord couldn't figure it out, either.
Part of the problem is that because Eliezer has gotten little value from professional philosophy, he writes about morality in a highly idiosyncratic way, using terms that would require reading hundreds of posts to understand. I might understand Eliezer's meta-ethics better if he would just cough up his positions on standard meta-ethical debates like cognitivism, motivation, the sources of normativity, moral epistemology, and so on. Nick Beckstead recently told me he thinks Eliezer's meta-ethical views are similar to those of Michael Smith, but I'm not seeing it.
If you think you can help me (and others) understand Eliezer's meta-ethical theory, please leave a comment!
Update: This comment by Richard Chappell made sense of Eliezer's meta-ethics for me.
When I read the meta-ethics sequence I mostly wondered why he made it so complicated and convoluted. My own take just seems a lot simpler --- which might mean it's wrong for a simple reason, too. I'm hoping someone can help.
I see ethics as about adopting some set of axioms that define which universes are morally preferable to others, and then reasoning from those axioms to decide whether an action, given the information available, has positive expected utility.
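To make that concrete (this is my own formalisation, not anything from the sequence): if the axioms pin down a utility function U over universes, the decision rule I have in mind looks roughly like the following, where s ranges over possible outcome-universes and P(s | a) is my credence in s given action a.

```latex
% Illustrative sketch only. U is whatever utility function the chosen axioms fix;
% P(s \mid a) is my credence, given the information available, that action a leads to universe s.
\[
  EU(a) = \sum_{s} P(s \mid a)\, U(s)
\]
% "Positive expected utility" then means the action beats the default of doing nothing:
\[
  \text{take action } a \quad \text{iff} \quad EU(a) > EU(\text{do nothing})
\]
```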
So which axioms should I adopt? Well, one simple, coherent answer is "none": be entirely nihilist. I would still prefer some universes over others, as I'd still have all my normal non-moral preferences, such as appetites. But it'd be all about me, and other people's interests would only count insofar as they were instrumental to my own.
The problem is that the typical human mind has needs that are incompatible with nihilism. Nihilism thus becomes anti-strategic: it's an unlikely path to happiness. I feel the need to care about other people, and it doesn't help me to pretend I don't.[1]
So, nihilism is an anti-strategic ethical system for me to adopt, because it goes against my adapted and culturally learned intuitions about morality --- what I'll call my Emotional Moral Compass (EMC). My emotional moral compass defines my knee-jerk reactions to what's right and what's not. Unfortunately, these knee-jerk reactions are hopelessly contradictory. The strength of my emotional reaction to an injustice is heavily influenced by my mood, and can be primed easily. It doesn't scale properly. It's dominated by the connection I feel to the people involved, not by what's happening. And I know that if I took my emotional moral compass back in time, I'd almost certainly get the wrong answers to questions that now seem obvious, such as slavery.
I can't on full reflection agree to define "rightness" as whatever my emotional moral compass delivers, because I also have an emotional need for my beliefs to be internally consistent. I know that my emotional moral compass does not produce consistent judgments. It also does not reliably produce judgments that I would want other people to make. This is problematic because I have a need to believe that I'm the sort of person I would approve of if I were not me.
I really did try on nihilism and discard it, before trying to just follow my emotional moral compass and discarding that too. Now I'm roughly a preference utilitarian. I'm working on trying to codify my ideas into axioms, but it's difficult. Should I prefer universes that maximise mean weighted preferences? But then what about population differences? How do I include the future? Is there a discounting rate? The details are surprisingly tricky, which may suggest I'm on the wrong track.
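To show where it gets tricky, here is one way the codification could go (purely a sketch of candidate axioms on my part); each formula corresponds to one of the questions above.

```latex
% Purely illustrative candidate axioms, not a settled position.
% p_i = degree to which person i's preferences are satisfied in a given universe,
% w_i = a weight for person i, N = population size, \gamma = a discount factor over time t.

% "Mean weighted preferences" (an average view):
\[
  U_{\mathrm{avg}} = \frac{1}{N} \sum_{i=1}^{N} w_i \, p_i
\]

% A total view answers the population question differently:
\[
  U_{\mathrm{total}} = \sum_{i=1}^{N} w_i \, p_i
\]

% Including the future forces a choice of discount factor \gamma \in (0, 1]:
\[
  U = \sum_{t=0}^{\infty} \gamma^{t} \, U_{\mathrm{avg}}(t)
\]
```

None of these choices is obviously the right one, which is roughly what I mean by the details being tricky.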
Adopting different ethical axioms hasn't been an entirely hand-waving gesture. When I was in my "emotional moral compass" stage, I became convinced that a great many animals suffered a great deal in the meat industry. My answer to this was that eating meat still felt costless --- I have no real empathy with chickens, cows or pigs, and the magnitude of the problem left me cold (since my EMC can't do multiplication). I didn't feel guilty, so my EMC didn't compel me to do anything differently.
This dissonance got uncomfortable enough that I adopted Peter Singer's version of preference utilitarianism as an ethical system, and began to act more ethically. I set myself a deadline of six months to become vegetarian, and resolved to tithe to the charity I determined to have maximum utility once I got a post-degree job.
If ethics are based on reasoning from axioms, how do I deal with people who have different axioms from me? Well, one convenient thing is that few people adopt terrible axioms that have them preferring universes paved in paperclips or something. Usually people's ethics are just inconsistent.
A less convenient universe would present me with someone who had entirely consistent ethics based on completely different axioms that led to different judgments from mine, and maximising the resulting utility function would make the person feel happy and fulfilled. Ethical debate with this person would be fruitless, and I would have to regard them as On the Wrong Team. We want irreconcilably different things. But I couldn't say I was more "right" than they, except with special reference to my definition of "right" in preference to theirs.
[1] Would I change my psychology so that I could be satisfied with nihilism, instead of preference utilitarianism? No, but I'm making that decision based on my current values. Switching utilitarian-me for nihilist-me would just put another person On the Wrong Team, which is a negative utility move based on my present utility function. I can't want to not care while currently caring, because my current caring ensures that I care about caring.
There's also no reason to believe that it would be easier to be satisfied with this alternate psychology. Sure, satisfying my ethics requires me to eat partially against my taste preferences, and my material standard of living takes an inconsequential hit. But I gain this whole other dimension of satisfaction. In other words, I get an itch that costs a lot to scratch, but having scratched it I'm better off. A similar question would be: would I choose to have zero sexual or romantic interest, if I could? I emphatically answer no.
Isn't it a bit late for that question for any human, by the time a human can formulate the question?
You don't really have the option of adopting it, just espousing it (including to yourself). No?
You really could, all else equal, because all the (other) humans have, as you said, very similar axioms rather than terrible ones.