AnnaSalamon comments on If reductionism is the hammer, what nails are out there? - Less Wrong
That depends a lot on what you mean by "moral relativism". Certainly rationality and reductionism need not imply taking morality less seriously. I liked what Eliezer's Harry Potter had to say on the subject:
If you haven't looked at it already, you might like Eliezer's sequence on metaethics, which talks about how one can notice that our concerns are generated by our brains, and that one could design brains with different concerns, while still taking morality seriously.
I'm glad someone noticed that. It was NOT EASY to compress that entire metaethical debate down into two paragraphs of text that wouldn't distract from the main story.
It sounds a lot as if you are suggesting that there is some essence to morality which transcends "concerns are generated by our brains".
"still taking morality seriously" modifies "one [who can...]", not "brains with different concerns".
I'm not sure why you came to think there was some confusion on this point, so I will not presume to suggest where you went wrong in your reading.
One attractor in the space of moral systems that doesn't have much to do with what could be engineered into brains is the class of moral systems that are favoured by natural selection.
I read some of it, and after you mentioned it, I read some more. E.g. The Bedrock of Fairness touches on the issue of whether or not there is this moral 'essence'. I also liked Paul Graham's What You Can't Say, which discusses the way morals change.
Overall, I think the closest thing we have to a 'moral essence' is that the set of moral intuitions (however vaguely defined) is the best thing that evolutionary processes have been able to come up with. Hume's is-ought problem does not really apply, because there is no real ought.
The set of morals we ended up with is probably best summarized by the Golden Rule, which is a useful illusion in the same way that free will is; similarly, for all practical purposes we can treat it as if it were real.
[ It's an interesting thought experiment to consider whether there could be other, radically different sets of morals that would lead to the same or better evolutionary fitness, while still being 'evolutionarily feasible'. ]