This post is a half-baked idea that I'm posting here to get feedback and prompt further brainstorming. There seem to be some interesting parallels between epistemology and ethics.
Part 1: Moral Anti-Epistemology
"Anti-Epistemology" refers to bad rules of reasoning that exist not because they are useful/truth-tracking, but because they are good at preserving people's cherished beliefs about the world. But cherished beliefs don't just concern factual questions, they also very much concern moral issues. Therefore, we should expect there to be a lot of moral anti-epistemology.
Tradition as a moral argument, tu quoque, opposition to the use of thought experiments, the noncentral fallacy, slogans like "morality is from humans for humans" – all these are instances of the same general phenomenon. This is trivial and doesn't add much to the already well-known fact that humans often rationalize, but it does add the memetic perspective: moral rationalizations sometimes concern more than a single instance; they can affect the entire way people reason about morality. And as with religion or pseudoscience in the epistemology of factual claims, there could be entire memeplexes centered around moral anti-epistemology.
A complication is that metaethics is complicated: it is unclear what exactly moral reasoning is, and whether everyone is trying to do the same thing when they engage in what they think of as moral reasoning. Labelling something "moral anti-epistemology" would suggest that there is a correct way to think about morality. Is there? As long as we always make sure to clarify what it is that we're trying to accomplish, it would seem possible to differentiate between valid and invalid arguments with regard to the specified goal. And it is exactly here that moral anti-epistemology might cause trouble.
Are there reasons to assume that certain popular ethical beliefs are a result of moral anti-epistemology? Deontology comes to mind (mostly because it's my usual suspect when it comes to odd reasoning in ethics), but what is it about deontology that relies on "faulty moral reasoning", if indeed there is something about it that does? How much of it relies on the noncentral fallacy, for instance? Is Yvain's personal opinion that "much of deontology is just an attempt to formalize and justify this fallacy" correct? The perspective of moral anti-epistemology would suggest that it is the other way around: deontology might be the by-product of people applying the noncentral fallacy, which is done because it helps protect cherished beliefs. Which beliefs would those be? Perhaps the strongly felt intuition that "some things are JUST WRONG", which doesn't handle fuzzy concepts/boundaries well and therefore has to be combined with a dogmatic approach? It sounds somewhat plausible, but also really speculative.
Part 2: Memetics
A lot of people are skeptical of these memetic just-so stories. They argue that the points made are either too trivial or too speculative. I have the intuition that a memetic perspective often helps clarify things, and my thoughts about applying the concept of anti-epistemology to ethics seemed like an insight, but I have a hard time coming up with how my expectations about the world have changed because of it. What, if anything, is the value of the idea I just presented? Can I now form a prediction to test whether deontologists want primarily to formalize and justify the noncentral fallacy, or whether they instead want to justify something else by making use of the noncentral fallacy?
Anti-epistemology is a more general model of what is going on in the world than individual rationalizations are, so it should all reduce to rationalizations in the end, and it shouldn't be worrying that I don't magically find more stuff. Perhaps my expectations were too high, and I should be content with having found a way to categorize moral rationalizations, the knowledge of which will make me slightly quicker at spotting or predicting them.
Thoughts?
You're making a mistake in assuming that ethical systems are intended to do what you think they're intended to do. I'm going to make some completely unsubstantiated claims; you can evaluate them for yourself.
Point 1: The ethical systems aren't designed to be followed by the people you're talking to.
Normal people operate on internal guidance, through implicit, internalized ethics, primarily guilt; ethical systems are largely and -deliberately- a rationalization game. That's not an accident. Being a functional person means being able to manipulate the ethical system as necessary and justify the actions you would have taken anyway.
Point 2: The ethical systems aren't just there to be followed; they're there to see who follows them.
People who -do- need the ethical systems are, from a social perspective, dangerous and damaged. Ethical systems are ultimately a fallback for these kinds of people, but also a marker; "normal" people don't -need- ethics. As a rule of thumb, anybody who strictly adheres to a code of ethics is some variant of sociopath. And also as a rule of thumb, some mechanism for taking advantage of these people, who can't know any better, is going to be built into these ethical systems. It will generally take some form akin to "altruism", and is most recognizable when ethical behavior begins to be labeled as selfishness, as in variants of Buddhism where personal enlightenment is treated as selfish, or in Comtean altruism.
Point 3: The ethical systems are designed to be flexible.
People who have internal ethical systems -do- need something to deal with situations which have no ethical solution but nonetheless must be solved. Ethical systems which don't permit considerable flexibility in dealing with these situations aren't useful. But because of the sociopaths, who still need ethical systems in order to be kept in line, you can't just permit anything. This is where contradiction is useful: you can use mutually exclusive rules to justify whatever action you need to take, without worrying about any ordinary crazy person using the same contradictions to their advantage, since they're trying to follow all the rules all the time.
Point 4: Ethical systems were invented by monkeys trying to out-monkey other monkeys.
Finally, ethical systems provide a framework by which people can assert or prove their superiority, thereby improving their perceived social rank (what, you think most people here are arguing with an interest in actually getting the right answer?). A good ethical framework needs to provide room for disagreement; ambiguity and contradiction are useful here as well, especially because a large point of ethical systems is to provide a framework for justifying whatever action you happened to take. This is enhanced by perceptions of the ethical framework itself, which is why mathematicians will tend to claim utilitarianism is a great ethical system, in spite of it being a perfectly ambiguous "ethical system"; it has a superficially mathematical rigor to it, so it appears more scientific and lends itself to mathematics-based arguments.
See all the monkeys correcting you on trivial issues? Raising meaningless points that contribute nothing to anybody's understanding of anything, while giving them a basis to prove their intelligence by thinking about things you hadn't considered? They're just trying to elevate their social status, here measured in karma points. On a site called Less Wrong, descended from a site called Overcoming Bias, the vast majority of interactions are still ultimately driven by an unconscious bias for social status, although I admit the quality of the monkey-games here is at times somewhat better than elsewhere.
If you want an ethical system that is actually intended to be followed as-is, try Objectivism. There may be other ethical systems designed for sociopaths, but as a rule, most ethical systems are ultimately designed to take advantage of the people who actually try to follow them, as opposed to those who merely pay lip service to them.
One-sided. OTOH: a functional ethical system has to be able to resist too much system-gaming by individuals. Ethical systems have a social role. Communities that can't persuade any of their members to sacrifice themselves in defence of the community don't survive.