byrnema comments on Strong moral realism, meta-ethics and pseudo-questions. - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
...sounds mostly good so far. Except that there's plenty of justification for thinking about morality besides "it's something we happen to think about". They're just... well... there's no other way to put this... perfectly valid, moving, compelling, heartwarming, moral justifications. They're actually better justifications than being compelled by some sort of ineffable transcendent compellingness stuff - if I've got to respond to something, those are just the sort of (logical) facts I'd want to respond to! (I think this may be the part Roko still doesn't get.) Also, the "lucky causal history" isn't luck at all, of course.
It's also quite possible that human beings, from time to time, are talking about different subject matters when they have what looks like a moral disagreement; but this is a rather drastic assumption to make in our current state of ignorance, and I feel that a sort of courtesy should be extended, to the extent of hearing out each other's arguments and proceeding on the assumption that we actually are disagreeing about something.
What about the minds that disagree with us about what they should do, and yet do care about doing what they think they should? Would your position hold that it is unlikely for them to have a different list or that they must be mistaken about the list -- that caring about what you "should" do means having the list we have?
How'd they end up with the same premises and different conclusions? Broken reasoning about implications, like the human practice of rationalization? Bad empirical pictures of the physical universe leading to poor policy? If so, that all sounds like a perfectly ordinary situation.
They care about doing what is morally right, but they have different values. The baby-eaters, for example, thought it was morally right to optimize whatever they were optimizing by eating the babies, but didn't particularly value their babies' well-being.
Er, you might have missed the ancestor of this thread. In the conflict between fundamentally different systems of preference and value (more different than those of any two humans), it's probably more confusing than helpful to use the word "should" with the other one. Thus we might introduce another word, should2, which stands in relation to the aliens' mental constitution (etc) as should stands to ours.
This distinction is very helpful, because we might (for example) conclude from our moral reasoning that we should respect their moral values, and then be surprised that they don't reciprocate, if we don't realize that that aspect of should needn't have any counterpart in should2. If you use the same word, you might waste time trying to argue that the aliens should do this or respect that, applying the kind of moral reasoning that is valid in extrapolating should; when they don't give a crap for what they should do, they're working out what they should2 do.
(This is more or less the same argument as in Moral Error and Moral Disagreement, I think.)
I'm not sure. How can there be any confusion when I say they "do care about doing what they think they should"? I clearly mean should2 here.
I think it's perfectly clear. Eliezer seems to disapprove of this usage and I think he claims that it is not clear, but I'm less sure of that.
I propose that a moral relativist is someone who likes this usage.