Will_Sawin comments on A Defense of Naive Metaethics - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (294)
Do you accept the conclusion I draw from my version of this argument?
I agree with you up to this part:
I made the same argument (perhaps not very clearly) at http://lesswrong.com/lw/44i/another_argument_against_eliezers_metaethics/
But I'm confused by the rest of your argument, and don't understand what conclusion you're trying to draw apart from "CEV can't be the definition of morality". For example you say:
I don't understand why believing something to be important implies that it has a long definition.
Ah. So this is what I am saying.
If you say "I define should as [Eliezer's long list of human values]"
then I say: "That's a long definition. How did you pick that definition?"
and you say: "Well, I took whatever I thought was morally important, and put it into the definition."
In the part you quote I am arguing that (or at least claiming that) other responses to my query are wrong.
I would then continue:
"Using the long definition is obscuring what you really mean when you say 'should'. You really mean 'what's important', not [the long list of things you think are important]. So why not just define it as that?"
One more way to describe this idea. I ask, "What is morality?", and you say, "I don't know, but I use this brain thing here to figure out facts about it; it errs sometimes, but can provide limited guidance. Why do I believe this 'brain' is talking about morality? It says it does, and it doesn't know of a better tool for that purpose presently available. By the way, it's reporting that <long list of conditions> are morally relevant, and is probably right."
Where do you get "is probably right" from? I don't think you can get that if you take an outside view and consider how often a human brain is right when it reports on philosophical matters in a similar state of confusion...
Salt to taste; the specific estimate is irrelevant to my point, so long as the brain is seen as collecting at least some moral information, rather than defining the whole of morality. The level of certainty in the brain's moral judgment won't be stellar, but it will be more reliable for simpler judgments. Here, I referred to 'morally relevant', which is a rather weak matter-of-priority kind of judgment, as opposed to deciding which of the given options is better.
Beautiful. I would draw more attention to the "Why...? It says it does" bit, but that seems right.