Recently a friend of mine told me that he and a few others were debating how likely it is that I've 'solved metaethics.' Others on this site have gotten the impression that I'm claiming to have made a fundamental breakthrough that I'm keeping secret for now, and that this is what my metaethics sequence is leading up to. Alas, that isn't the case. The first post in my sequence began:
A few months ago, I predicted that we could solve metaethics in 15 years. To most people, that was outrageously optimistic. But I've updated since then. I think much of metaethics can be solved now (depending on where you draw the boundary around the term 'metaethics'). My upcoming sequence 'No-Nonsense Metaethics' will solve the part that can be solved, and make headway on the parts that aren't yet solved. Solving the easier problems of metaethics will give us a clear and stable platform from which to solve the hard questions of morality.
The part I consider 'solved' is the part discussed in Conceptual Analysis and Moral Theory and Pluralistic Moral Reductionism. These posts apply the lessons of Eliezer's free will sequence and his words sequence to the subject of metaethics.
I did this because Eliezer mostly skipped this step in his own metaethics sequence, perhaps assuming that readers had already applied these lessons to solve the easy problems of metaethics, so that he could move straight to discussing the harder ones. But I think this move was a source of confusion for many LWers, so I wanted to go back and work through the details of what it looks like to solve the easy parts of metaethics with lessons learned from Eliezer's sequences.
The next part of my metaethics sequence will be devoted to "bringing us all up to speed" on several lines of research that seem relevant to solving open problems in metaethics: the literature on how human values work (in brain and behavior), the literature on extracting preferences from what human brains actually do, and the literature on value extrapolation algorithms. For the most part, these bodies of literature haven't been discussed on Less Wrong despite their apparent relevance to metaethics, so I'm trying to share them with LW myself (e.g. A Crash Course in the Neuroscience of Human Motivation).
Most of these posts won't technically be listed as part of my metaethics sequence, but I will refer to them from posts that are, drawing lessons for metaethics from them.
After "bringing us all up to speed" on these topics and perhaps a couple others, I'll use my metaethics sequence to clarify the open problems in metaethics and suggest some places we can hack away at and perhaps make progress. Thus, my metaethics sequence aims to end with something like a Polymath Project set up for collaboratively solving metaethics problems.
I hope this clarifies my intentions for my metaethics sequence.
Agreed.
Well, but I don't 'assume linguistic reductionism'. What I say is that if the intended meaning of 'ought' refers to structures in math and physics, then linguistic reductionism about normative language is correct, and if it doesn't, then normative language (using its intended meaning) fails to refer (assuming ontological reductionism is true).
Philosophers usually are, but not always. One thing I'm trying to avoid here is the 'sneaking in connotations' business performed by, in my example, Bill Craig.
No, I haven't, and I've tried to be clear about that. But perhaps I need to edit 'Pluralistic Moral Reductionism' with additional clarifications, if it still sounds like I think I've dissolved the question people are really asking. What I've dissolved are certain debates that I see some people engaged in.
Edit: Also, I should add that I'm fairly skeptical of the idea that humans share a concept of morality or normativity. I do intend to write something up on the psychology and neuroscience of mental representations and 'intuitive concepts' to explain why, but I've got several other projects stacked up with priority over that.
What would it mean to share a concept of morality or normativity, or more generally, any concept? If I think of gold as "atomic number 79" and my Aunt Joan thinks of it as "the shiny yellow heavy valuable stuff in certain pieces of jewelry" do we fail to share a concept of gold? If such divergence counts as failure to share the concept, would failure to share concepts of morality be important to metaethics? (On this last question I'm thinking: not so much.)