In You Provably Can't Trust Yourself, Eliezer tried to figure out why his audience didn't understand his meta-ethics sequence even after they had followed him through philosophy of language and quantum physics. Meta-ethics is my specialty, and I can't figure out what Eliezer's meta-ethical position is. And at least at this point, professionals like Robin Hanson and Toby Ord couldn't figure it out, either.
Part of the problem is that because Eliezer has gotten little value from professional philosophy, he writes about morality in a highly idiosyncratic way, using terms that would require reading hundreds of posts to understand. I might understand Eliezer's meta-ethics better if he would just cough up his positions on standard meta-ethical debates like cognitivism, motivation, the sources of normativity, moral epistemology, and so on. Nick Beckstead recently told me he thinks Eliezer's meta-ethical views are similar to those of Michael Smith, but I'm not seeing it.
If you think you can help me (and others) understand Eliezer's meta-ethical theory, please leave a comment!
Update: This comment by Richard Chappell made sense of Eliezer's meta-ethics for me.
I think the term "nihilism" is getting in the way here. Let's instead talk about "the zero axiom system". This is where you don't say that any universes are morally preferable to any others. They may be appetite-preferable, love-for-people-close-to-you preferable, etc.
If no universes are morally preferable, one strategy is to be as ruthlessly self-serving as possible. I predict this would fail to make most people happy, however, because most people have a desire to help others as well as themselves.
So a second strategy is to just "go with the flow" and let yourself give as much as your knee-jerk guilt or sympathy-driven reactions tell you to. You don't research charities and you still eat meat, but maybe you give to a disaster relief appeal when the people suffering are rich enough or similar enough to you to make you sympathetic.
All I'm really saying is that this second approach is also anti-strategic once you reach a certain level of self-consistency, and your desire for further self-consistency becomes strong enough to over-rule your desire for some other comforts.
I find myself in a bind where I can't care about nothing, and I can't just follow my emotional moral compass. I must instead adopt making the world a better place as a top-level goal, and work strategically to make that happen. That requires me to adopt some definition of what constitutes a better universe that isn't rooted in my self-interest. In other words, my self-interest depends on having goals that don't themselves refer to my self-interest. And those goals have to be held entirely in good faith. I can't fake this, because that would contradict my need for self-consistency.
In other words, I'm saying that someone becomes vegetarian when their need for a consistent self-image about whether they behave morally starts to over-rule the sensory, health and social benefits of eating meat. Someone starts to tithe to charity when their need for moral consistency starts to over-rule their need for an extra 10% of their income.
So you can always do the calculations about why someone did something, and take it back to their self-interest, and what strategies they're using to achieve that self-interest. Utilitarianism is just the strategy of adopting self-external goals as a way to meet your need for some self-image or guilt-reassurance. But it's powerful because it's difficult to fake: if you adopt this goal of making the world a better place, you can then start calculating.
There are some people who see the fact that this is all derivable from self-interest, and think that it means it isn't moral. They say "well okay, you just have these needs that make you do x, y or z, and those things just happen to help other people. You're still being selfish!".
This is just arguing about the meaning of "moral", and defining it in a way that I believe is actually impossible. What matters is that people are helped. What matters is the actual outcomes of your actions. If someone doesn't care what happens to other people at all, they are amoral. If someone cares only enough to give $2 to a backpacker in a koala suit once every six months, they are a very little bit moral. Someone who cares enough to sincerely try to solve problems and get things done is very moral. What matters is what's likely to happen.
I can't interpret your post as a reply to my post. Did you perhaps mean to post it somewhere else?
My fundamental question was: how is a desire to help others different from a desire to eat pizza?
You seem to be defining a broken version of the zero ethical system that arbitrarily disregards the former. That's a strawman.
If you want to say that the zero ethical system is broken, you have to say that something breaks when people try to enact their desires, including the desires to help others.