In You Provably Can't Trust Yourself, Eliezer tried to figure out why his audience didn't understand his meta-ethics sequence even after they had followed him through philosophy of language and quantum physics. Meta-ethics is my specialty, and I can't figure out what Eliezer's meta-ethical position is. And at least at one point, professionals like Robin Hanson and Toby Ord couldn't figure it out, either.
Part of the problem is that, because Eliezer has gotten little value from professional philosophy, he writes about morality in a highly idiosyncratic way, using terms that would require reading hundreds of posts to understand. I might understand Eliezer's meta-ethics better if he would just cough up his positions on standard meta-ethical debates like cognitivism, motivation, the sources of normativity, moral epistemology, and so on. Nick Beckstead recently told me he thinks Eliezer's meta-ethical views are similar to those of Michael Smith, but I'm not seeing it.
If you think you can help me (and others) understand Eliezer's meta-ethical theory, please leave a comment!
Update: This comment by Richard Chappell made sense of Eliezer's meta-ethics for me.
In a manner of speaking, yes. Moral facts are facts about the output of a particular computation under particular conditions, so they are "part of the natural world" essentially to whatever extent you'd say the same thing about mathematical deductions. (See Math is Subjunctively Objective, Morality as Fixed Computation, and Abstracted Idealized Dynamics.)
No. Caring about people's preferences is part of morality, and an important part, I think, but it is not the entirety of morality, or the source of morality. (I'm not sure what a "source of normativity" is; does that refer to the causal history behind someone being moved by a moral argument, or something else?)
(The "Moral facts are not written into the 'book' of the universe" bit is correct.)
See Inseparably Right and No License To Be Human. "Should" is not defined by your terminal values or preferences; although human minds (and things causally entangled with human minds) are the only places we can expect to find information about morality, morality is not defined by being found in human minds. It's the other way around: you happen to care about (prefer, terminally value) being moral. If we defined "should" such that an agent "should" do whatever satisfies its terminal values (such that pebblesorters should sort pebbles into prime heaps, etc.), then morality would be a Type 2 calculator: it would have no content; it could say anything and still be correct about the question it's being asked. I suppose you could define "should" that way, but it's not an adequate unpacking of what humans are actually thinking about when they talk about morality.
Agreed 100% with this.
Of course, it doesn't follow that what humans talk about when we talk about morality has the properties we talk about it having, or even that it exists at all, any more than analogous things follow about what humans talk about when we talk about Santa Claus or YHWH.
To say that "I happen to care about being...