Vladimir_Nesov comments on What is Metaethics? - Less Wrong
For the record, I think in this thread Eugine_Nier follows a useful kind of "simple truth", not making errors as a result, while some of the opponents demand sophistication in lieu of correctness.
I think we're demanding clarity and substance, not sophistication. Honestly, I feel like one of the major issues with moral discussions is that huge, sophisticated arguments can emerge without any connection to substantive reality.
I would really appreciate it if someone would taboo the words "moral", "good", "evil", "right", "wrong", "should", etc. and try to make the point using simpler concepts that have less baggage and ambiguity.
Clarity can be difficult. What do you mean by "truth"?
I mean it in precisely the sense that The Simple Truth does. Anticipation control.
That's not the point. You must use your heuristics even if you don't know how they work, and avoid demanding to know how they work or how they should work as a prerequisite to being allowed to use them. Before developing technical ideas about what it means for something to be true, or what it means for something to be right, you need to allow yourself to recognize when something is true, or is right.
I'm sorry, but if we had no knowledge of brains, cognition, and the nature of preference, then sure, I'd use my feelings of right and wrong as much as the next guy, but that doesn't make them objectively true.
Likewise, just because I intuitively feel like I have a time-continuous self, that doesn't make consciousness fundamental.
As an agent, having knowledge of what I am, and what causes my experiences, changes my simple reliance on heuristics to a more accurate scientific exploration of the truth.
Just make sure that the particular piece of knowledge you demand is indeed available, and not, say, just the thing you are trying to figure out.
(Nod)
I still think it's a pretty simple case here. Is there a set of preferences which all intelligent agents are compelled by some force to adopt? Not as far as I can tell.
Morality doesn't work like physical law either. Nobody is compelled to be rational, but people who do reason can agree about certain things. That includes moral reasoning.
I think we should move this conversation back here, out of the other post, where it really doesn't belong.
Can you clarify what you mean by this?
For what X are you saying "All agents that satisfy X must follow morality"?