Part of the sequence: No-Nonsense Metaethics
A few months ago, I predicted that we could solve metaethics in 15 years. To most people, that was outrageously optimistic. But I've updated since then. I think much of metaethics can be solved now (depending on where you draw the boundary around the term 'metaethics'). My upcoming sequence 'No-Nonsense Metaethics' will solve the part that can be solved, and make headway on the parts that aren't yet solved. Solving the easier problems of metaethics will give us a clear and stable platform from which to solve the hard questions of morality.
Metaethics has been my target for a while now, but first I had to explain the neuroscience of pleasure and desire, and how to use intuitions for philosophy.
Luckily, Eliezer laid most of the groundwork when he explained couldness, terminal and instrumental values, the complexity of human desire and happiness, how to dissolve philosophical problems, how to taboo words and replace them with their substance, how to avoid definitional disputes, how to carve reality at its joints with our words, how an algorithm feels from the inside, the mind projection fallacy, how probability is in the mind, reductionism, determinism, free will, evolutionary psychology, how to grasp slippery things, and what you would do without morality.
Of course, Eliezer wrote his own metaethics sequence. Eliezer and I seem to have similar views on morality, but I'll be approaching the subject from a different angle, I'll be phrasing my solution differently, and I'll be covering a different spread of topics.
Why do I think much of metaethics can be solved now? We have enormous resources not available just a few years ago. The neuroscience of pleasure and desire didn't exist two decades ago. (Well, we thought dopamine was 'the pleasure chemical', but we were wrong.) Detailed models of reductionistic metaethics weren't developed until the 1980s and '90s (by Peter Railton and Frank Jackson). Reductionism has been around for a while, but there are few philosophers who relentlessly play Rationalist's Taboo. Eliezer didn't write How an Algorithm Feels from the Inside until 2008.
Our methods will be familiar ones, already used to dissolve problems ranging from free will to disease. We will play Taboo with our terms, reducing philosophical questions to scientific ones. Then we will examine the cognitive algorithms that make it feel like open questions remain.
Along the way, we will solve or dissolve the traditional problems of metaethics: moral epistemology, the role of moral intuition, the is-ought gap, matters of moral psychology, the open question argument, moral realism vs. moral anti-realism, moral cognitivism vs. non-cognitivism, and more.
You might respond, "Sure, Luke, we can do the reduce-to-algorithm thing with free will or disease, but morality is different. Morality is fundamentally normative. You can't just dissolve moral questions with Taboo-playing and reductionism and cognitive science."
Well, we're going to examine the cognitive algorithms that generate that intuition, too.
And at the end, we will see what this all means for the problem of Friendly AI.
I must note that I didn't exactly invent the position I'll be defending. After I shared my views on metaethics with many scientifically minded people in private conversation, many of them said something like "Yeah, that's basically what I think about metaethics, I've just never thought it through in so much detail and cited so much of the relevant science [e.g. recent work in neuroeconomics and the science of intuition]."
But for convenience I do need to invent a name for my theory of metaethics. I call it pluralistic moral reductionism.
Next post: What is Metaethics?