Recently a friend of mine told me that he and a few others were debating how likely it is that I've 'solved metaethics.' Others on this site have gotten the impression that I'm claiming to have made a fundamental breakthrough that I'm currently keeping a secret, and that's what my metaethics sequence is leading up to. Alas, it isn't the case. The first post in my sequence began:
A few months ago, I predicted that we could solve metaethics in 15 years. To most people, that was outrageously optimistic. But I've updated since then. I think much of metaethics can be solved now (depending on where you draw the boundary around the term 'metaethics'). My upcoming sequence 'No-Nonsense Metaethics' will solve the part that can be solved, and make headway on the parts that aren't yet solved. Solving the easier problems of metaethics will give us a clear and stable platform from which to solve the hard questions of morality.
The part I consider 'solved' is the part discussed in Conceptual Analysis and Moral Theory and Pluralistic Moral Reductionism. Those posts apply the lessons of Eliezer's free will sequence and his sequence on words to metaethics.
I did this because Eliezer mostly skipped this step in his metaethics sequence, perhaps assuming that readers had already applied these lessons to solve the easy problems of metaethics, so that he could move straight to the harder problems. But I think this move was a source of confusion for many LWers, so I wanted to go back and work through the details of what it looks like to solve the easy parts of metaethics with lessons learned from Eliezer's sequences.
The next part of my metaethics sequence will be devoted to "bringing us all up to speed" on several lines of research that seem relevant to solving open problems in metaethics: the literature on how human values work (in brain and behavior), the literature on extracting preferences from what human brains actually do, and the literature on value extrapolation algorithms. For the most part, these bodies of literature haven't been discussed on Less Wrong despite their apparent relevance to metaethics, so I'm trying to share them with LW myself (e.g. A Crash Course in the Neuroscience of Human Motivation).
Technically, most of these posts will not be listed as part of my metaethics sequence, but the posts that are part of it will refer to them and draw metaethical lessons from them.
After "bringing us all up to speed" on these topics and perhaps a couple others, I'll use my metaethics sequence to clarify the open problems in metaethics and suggest some places we can hack away at and perhaps make progress. Thus, my metaethics sequence aims to end with something like a Polymath Project set up for collaboratively solving metaethics problems.
I hope this clarifies my intentions for my metaethics sequence.
I'll comment on the quotes Wei selected (this isn't meant to be related to anything else here, just an isolated reaction to Wei's drawing attention to these things):
It's easy to construct all sorts of interpretations that could be said to be referents of anything else. The question is not well-defined at the level where we talk about "referring" without including more powerful means of constraining which kinds of "referring" are relevant. Correspondingly, "failing to refer" only makes sense relative to a method of interpretation, and in the case of normative value, discovering the correct method of interpretation (relevance-guidance) is more or less the same problem as discovering the referents.
We might use moral terms in a variety of ways, but maybe we should still use them in One True Way, in which case there is still One True Theory that describes it.