What would it mean to share a concept of morality or normativity, or more generally, any concept? If I think of gold as "atomic number 79" and my Aunt Joan thinks of it as "the shiny yellow heavy valuable stuff in certain pieces of jewelry," do we fail to share a concept of gold? If such divergence counts as failure to share the concept, would failure to share concepts of morality be important to metaethics? (On this last question I'm thinking: not so much.)
Yeah, I'm not sure exactly what Wei Dai and Vladimir Nesov have in mind when they talk about a shared concept of 'ought' or of 'right'. Will Sawin talks about humans having a cognitive module devoted to the processing of 'ought', which I also find implausible given the last 30 years of psychology and neuroscience. I think I have a different view (than Dai, Nesov, and Sawin) of what concepts are and how they are likely to work, but I'd have to put serious time into a post to explain it clearly. For the moment, those who are interested in the subj...
Recently a friend of mine told me that he and a few others were debating how likely it is that I've 'solved metaethics.' Others on this site have gotten the impression that I'm claiming to have made a fundamental breakthrough that I'm currently keeping a secret, and that's what my metaethics sequence is leading up to. Alas, it isn't the case. The first post in my sequence began:
The part I consider 'solved' is the part discussed in Conceptual Analysis and Moral Theory and Pluralistic Moral Reductionism. These posts represent an application of the lessons learned from Eliezer's free will sequence and his words sequence to the subject of metaethics.
I did this because Eliezer mostly skipped this step in his metaethics sequence, perhaps assuming that readers had already applied these lessons to metaethics and solved its easy problems, so that he could move straight to discussing the harder ones. But I think this move was a source of confusion for many LWers, so I wanted to go back and work through the details of what it looks like to solve the easy parts of metaethics with lessons learned from Eliezer's sequences.
The next part of my metaethics sequence will be devoted to "bringing us all up to speed" on several lines of research that seem relevant to solving open problems in metaethics: the literature on how human values work (in brain and behavior), the literature on extracting preferences from what human brains actually do, and the literature on value extrapolation algorithms. For the most part, these bodies of literature haven't been discussed on Less Wrong despite their apparent relevance to metaethics, so I'm trying to share them with LW myself (e.g. A Crash Course in the Neuroscience of Human Motivation).
Technically, most of these posts will not be listed as part of my metaethics sequence, but I will refer to them from posts that are, drawing lessons for metaethics from them.
After "bringing us all up to speed" on these topics and perhaps a couple others, I'll use my metaethics sequence to clarify the open problems in metaethics and suggest some places we can hack away at and perhaps make progress. Thus, my metaethics sequence aims to end with something like a Polymath Project set up for collaboratively solving metaethics problems.
I hope this clarifies my intentions for my metaethics sequence.