This post is a half-baked idea that I'm posting here in order to get feedback and spur further brainstorming. There seem to be some interesting parallels between epistemology and ethics.
Part 1: Moral Anti-Epistemology
"Anti-Epistemology" refers to bad rules of reasoning that exist not because they are useful/truth-tracking, but because they are good at preserving people's cherished beliefs about the world. But cherished beliefs don't just concern factual questions, they also very much concern moral issues. Therefore, we should expect there to be a lot of moral anti-epistemology.
Tradition as a moral argument, tu quoque, opposition to the use of thought experiments, the noncentral fallacy, slogans like "morality is from humans for humans" – all these are instances of the same general phenomenon. This is trivial and doesn't add much to the already well-known fact that humans often rationalize, but it does add the memetic perspective: moral rationalizations sometimes concern more than a single instance; they can affect the entire way people reason about morality. And as with religion or pseudoscience in the epistemology of factual claims, there could be entire memeplexes centered around moral anti-epistemology.
A complication is that metaethics is itself unsettled: it is unclear what exactly moral reasoning is, and whether everyone is trying to do the same thing when they engage in what they think of as moral reasoning. Labelling something "moral anti-epistemology" suggests that there is a correct way to think about morality. Is there? As long as we always make sure to clarify what it is that we're trying to accomplish, it seems possible to differentiate between valid and invalid arguments with regard to the specified goal. And this is where moral anti-epistemology might cause trouble.
Are there reasons to assume that certain popular ethical beliefs are a result of moral anti-epistemology? Deontology comes to mind (mostly because it's my usual suspect when it comes to odd reasoning in ethics), but what is it about deontology that relies on "faulty moral reasoning", if indeed something about it does? How much of it relies on the noncentral fallacy, for instance? Is Yvain's personal opinion that "much of deontology is just an attempt to formalize and justify this fallacy" correct? The perspective of moral anti-epistemology would suggest that it is the other way around: deontology might be a by-product of people applying the noncentral fallacy, which they do because it helps protect cherished beliefs. Which beliefs would those be? Perhaps the strongly felt intuition that "some things are JUST WRONG", which doesn't handle fuzzy concepts/boundaries well and therefore has to be combined with a dogmatic approach. It sounds somewhat plausible, but also really speculative.
Part 2: Memetics
A lot of people are skeptical of these memetic just-so stories. They argue that the points made are either too trivial or too speculative. I have the intuition that a memetic perspective often helps clarify things, and applying the concept of anti-epistemology to ethics seemed like an insight, but I have a hard time pinning down how my expectations about the world have changed because of it. What, if anything, is the value of the idea I just presented? Can I now form a prediction to test whether deontologists primarily want to formalize and justify the noncentral fallacy, or whether they instead want to justify something else by making use of the noncentral fallacy?
Anti-epistemology is a more general model of what is going on in the world than rationalizations are, so it should all reduce to rationalizations in the end; it shouldn't be worrying that I don't magically find more stuff. Perhaps my expectations were too high and I should be content with having found a way to categorize moral rationalizations, the knowledge of which will make me slightly quicker at spotting or predicting them.
Thoughts?
I often got this as an objection to utilitarianism, the other premise being that utilitarianism is impractical for humans. I've talked to lots of people about ethics: I took philosophy classes in high school, study philosophy at university, and have engaged in more than a hundred online discussions about ethics. The objection actually isn't that bad if you steelman it: maybe people are trying to say that they, as humans, care about many other things and would be overwhelmed by utilitarian obligations. (But there remains the question of whether they care terminally about these other things, or whether they would self-modify into a perfect utilitarian robot if given the chance.)
There could be in some cases, if people find out they didn't really believe their axiom after all. But it can just as well be that the starting assumptions really are axiomatic. I think the idea that terminal values are hardwired in the human brain, and will converge if you just give an FAI good instructions for extracting them, is mistaken. There are billions of different ways of doing the extrapolation, and they won't all produce the same output. At the end of the day, the buck does have to stop somewhere, and where else could that be than with a person who, after long reflection and an understanding of what she is doing, concludes that x are her starting assumptions and that's it?
I don't quite agree with the prominent LW opinion that human values are complex. What is complex are human moral intuitions, but no one is saying that you need to take every intuition into account equally. Humans are a very peculiar sort of agent in mind space: when you ask most people what their goal in life is, they either don't know or give you an answer that they will take back as soon as you point out some counterintuitive implications of what they just said. I imagine that many AI designs would be such that the AIs are always clearly aware of their goals, and thus feel no need to ever engage in genuine moral philosophy. Of course, people do have a utility function in the form of revealed preferences (what they would do if you placed them in all sorts of situations), but is that what we are interested in when we talk of terminal values? I don't think so! It should at least be on the table that some fraction of my brain's pandemonium of voices/intuitions is stronger than the other fractions, that this fraction makes up what I consider the rational part of my brain and the core of my moral self-identity, and that I would, upon reflection, self-modify into an efficient robot with simple values. Personally I would do this, and I don't think I'm missing anything that would imply I'm making any sort of mistake. Therefore, the view that all human values are necessarily complex seems mistaken to me.
These different epistemologies have a lot in common. The exercise is always "define your starting assumptions, then see which moves are goal-tracking and which ones aren't". Ethical thought experiments, for instance, or distinguishing instrumental values from terminal ones, are things you need to do either way if you think about what your goals are, e.g. how you would want to act in all possible decision situations.
It is often vague and lets people get away with not thinking things through. It feels to them like they have an answer, but most people would have no clue how to set the parameters for an AI that implemented their type of deontology (e.g. when dilemma situations become probabilistic, which is, of course, all the time).
It contains discussion stoppers like "rights", even though, when you taboo the term, that just means "harming is worse than not-helping". That is a weird way to draw a distinction, because when you're in pain, you primarily care about getting out of it and don't first ask what the reason for it was. Related: it gives the air of being "about the victim", but it's really more about the agent's own moral intuitions, and is thus not really other-regarding/impartial at all. This would be okay if deontologists were aware of it, but they often aren't. They object to utilitarianism on the grounds that it is "inhumane", rather than "too altruistic".
Yes, I see that now. I thought I was mainly preaching to the choir and didn't think the details of people's metaethical views would matter for the main points in my original post. It felt like I was saying something at risk of being too trivial, but maybe I should have picked better examples. I agree that this comment captures well what I was trying to get at.
The same is true of most discussions of consequentialism and utility functions.