Would the existence of endorsers of moral error theory be evidence against humans having an ought-function? Most of the ones I am familiar with seem both roughly neurotypical and honest in their writings.
What experiences should we anticipate if humans have this hypothetical module? And if they do not?
I've written several posts on the cognitive science of concepts in prep for later posts in my metaethics sequence. Why?
I'm doing this because one common way of trying to solve the "Friendliness content" problem in Friendly AI theory is to analyze (via thought experiment and via cognitive science) our concept of "good" or "ought" or "right," so that we can figure out what an FAI "ought" to do, what it would be "good" for an FAI to do, or what it would be "right" for an FAI to do.
That's what Eliezer does in The Meaning of Right, that's what many other LWers do, and that's what most mainstream metaethicists do.
With my recent posts on the cognitive science of concepts, I'm trying to show that cognitive science presents a number of difficult problems for this approach.
Let me illustrate with a concrete example. Math prodigy Will Sawin once proposed to me (over the phone) that our concept of "ought" might be realized by way of something like a dedicated cognitive module. In an earlier comment, I tried to paraphrase his idea:
The reason I'm investigating the cognitive science of concepts is that I think it shows that the claims about the human brain in these last two paragraphs are probably false, as are many other claims about human brains that are implicit in certain varieties of the 'conceptual analysis' approach to value theory.