Every now and then, there are discussions or comments on LW where people talk about finding a "correct" morality, or argue that some particular morality is "mistaken". (Two recent examples: [1] [2]) I would understand this in an FAI context, where we want a specification for an AI such that it won't do something all humans would find terrible, but that's generally not the context of those discussions. Outside such a context, it sounds like people are presuming the existence of an objective morality, but I thought that folks on LW rejected that. What's up with that?
People are often wrong about what their preferences are, and most humans have roughly similar moral hardware. Not identical, but close enough that we can behave as if we all share a common moral instinct.
When you present someone with an argument and they change their mind on a moral issue, you haven't changed their underlying preferences; you've simply given them insight into what their true preferences are.
For example, if a neurotypical human said that belief in God was the reason they don't go around looting and stealing, they'd be wrong about themselves as a matter of simple fact.
This is a thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. The previous thread is approaching 500 comments.