In You Provably Can't Trust Yourself, Eliezer tried to figure out why his audience didn't understand his meta-ethics sequence even after they had followed him through philosophy of language and quantum physics. Meta-ethics is my specialty, and I can't figure out what Eliezer's meta-ethical position is. And at least at one point, professionals like Robin Hanson and Toby Ord couldn't figure it out, either.
Part of the problem is that because Eliezer has gotten little value from professional philosophy, he writes about morality in a highly idiosyncratic way, using terms that would require reading hundreds of posts to understand. I might understand Eliezer's meta-ethics better if he would just cough up his positions on standard meta-ethical debates like cognitivism, motivation, the sources of normativity, moral epistemology, and so on. Nick Beckstead recently told me he thinks Eliezer's meta-ethical views are similar to those of Michael Smith, but I'm not seeing it.
If you think you can help me (and others) understand Eliezer's meta-ethical theory, please leave a comment!
Update: This comment by Richard Chappell made sense of Eliezer's meta-ethics for me.
Damn. I still haven't had my "Aha!" moment on this. I'm glad that ata, at least, appears to have it, but unfortunately I don't understand ata's explanation, either.
I'll understand if you run out of patience with this exercise, but I'm hoping you won't, because if I can come to understand your meta-ethical theory, then perhaps I will be able to explain it to all the other people on Less Wrong who don't yet understand it.
Let me start by listing what I think I do understand about your views.
1. Human values are complex. As a result of evolution and memetic history, we humans value/desire/want many things, and our values cannot be compressed to any simple function. Certainly, we do not only value happiness or pleasure. I agree with this, and the neuroscience supporting your position is nicely summarized in Tim Schroeder's Three Faces of Desire. We can value damn near anything. There is no need to design an artificial agent to value only one thing, either.
2. Changing one's meta-ethics need not change one's daily moral behavior. You write about this here, and I know it to be true from personal experience. When deconverting from Christianity, I went from divine command theory to error theory in the course of about 6 months. About a year after that, I transitioned from error theory to what was then called "desire utilitarianism" (now called "desirism"). My meta-ethical views have shifted in small ways since then, and I wouldn't mind another radical transition if I can be persuaded. But I'm not sure yet that desirism and your own meta-ethical theory are in conflict.
3. Onlookers can agree that Jenny has 5 units of Fred::Sexiness, which can be specified in terms of curves, skin texture, etc. This specification need not mention Fred at all. As explained here.
4. Recursive justification can't "hit bottom" in "an ideal philosophy student of perfect emptiness"; all I can do is reflect on my mind's trustworthiness, using my current mind, in a process of something like reflective equilibrium, even though reflective coherence isn't specified as the goal.
5. Nothing is fundamentally moral. There is nothing that would have value if it existed all by itself in an isolated universe containing no valuers.
Before I go on... do I have this right so far?
1-4 yes.
5 is questionable. When you say "Nothing is fundamentally moral" can you explain what it would be like if something was fundamentally moral? If not, the term "fundamentally moral" is confused rather than untrue; it's not that we looked in the closet of fundamental morality and found it empty, but that we were confused and looking in the wrong closet.
Indeed my utility function is generally indifferent to the exact state of universes that have no observers, but this is a contingent fact about me rather than a necessary truth of ...