Lukas_Gloor comments on Moral Anti-Epistemology - Less Wrong


Comment author: OrphanWilde 01 May 2015 07:29:47PM 3 points

You're making a mistake in assuming that ethical systems are intended to do what you think they're intended to do. I'm going to make some completely unsubstantiated claims; you can evaluate them for yourself.

Point 1: The ethical systems aren't designed to be followed by the people you're talking to.

Normal people operate by internal guidance: implicit, internalized ethics, primarily guilt. Explicit ethics are largely, and -deliberately-, a rationalization game. That's not an accident. Being a functional person means being able to manipulate the ethical system as necessary and justify the actions you would have taken anyway.

Point 2: The ethical systems aren't just there to be followed, they're there to see who follows them.

People who -do- need the ethical systems are, from a social perspective, dangerous and damaged. Ethical systems are ultimately a fallback for these kinds of people, but also a marker; "normal" people don't -need- ethics. As a rule of thumb, anybody who adheres strictly to a code of ethics is some variant of sociopath. Also as a rule of thumb, some mechanism for taking advantage of these people, who can't know any better, will be built into the ethical system. It will generally take a form akin to "altruism", and is most recognizable when ethical behavior begins to be labeled as selfishness, as in variants of Buddhism where personal enlightenment is treated as selfish, or in Comtean altruism.

Point 3: The ethical systems are designed to be flexible.

People who have internal ethical systems -do- need something to deal with situations that have no ethical solutions but nonetheless must be solved. Ethical systems which don't permit considerable flexibility in dealing with these situations aren't useful. But because of the sociopaths, who still need ethical systems to keep them in line, you can't just permit anything. This is where contradiction is useful: you can use mutually exclusive rules to justify whatever action you need to take, without worrying about any ordinary crazy person using the same contradictions to their advantage, since they're trying to follow all the rules all the time.

Point 4: Ethical systems were invented by monkeys trying to out-monkey other monkeys.

Finally, ethical systems provide a framework by which people can assert or prove their superiority, thereby improving their perceived social rank (what, you think most people here are arguing with an interest in actually getting the right answer?). A good ethical framework needs to provide room for disagreement; ambiguity and contradiction are useful here as well, especially because a large point of ethical systems is to provide a framework for justifying whatever action you happened to take. This is enhanced by perceptions of the ethical framework itself, which is why mathematicians tend to claim utilitarianism is a great ethical system, in spite of it being a perfectly ambiguous "ethical system"; it has a superficially mathematical rigor to it, so it appears more scientific and lends itself to mathematics-based arguments.

See all the monkeys correcting you on trivial issues? Raising meaningless points that contribute nothing to anybody's understanding of anything while giving them a basis to prove their intelligence in thinking about things you hadn't considered? They're just trying to elevate their social status, here measured by karma points. On a site called Less Wrong, descended from a site called Overcoming Bias, the vast majority of interactions are still ultimately driven by an unconscious bias for social status. Although I admit the quality of the monkey-games here is at times somewhat better than elsewhere.

If you want an ethical system that is actually intended to be followed as-is, try Objectivism. There may be other ethical systems designed for sociopaths, but as a rule, most ethical systems are ultimately designed to take advantage of the people who actually try to follow them, as opposed to those who merely pay lip service to them.

Comment author: Lukas_Gloor 02 May 2015 10:53:52AM 0 points

Good points. My entire post assumes that people are interested in figuring out what they would want to do in every conceivable decision-situation. That's what I'd call "doing ethics", but you're completely correct that many people do something very different. Now, would they keep doing what they're doing if they knew exactly what they're doing and not doing, i.e. if they were aware of the alternatives? If they were aware of concepts like agentyness? And if yes, what would that show?

I wrote down some more thoughts on this in this comment. As a general reply to your main point: Just because people act as though they are interested in x rather than y doesn't mean that they wouldn't rather choose y if they were more informed. And to me, choosing something because one is not optimally informed seems like a bias, which is why I thought the comparison/the term "moral anti-epistemology" has merit. However, under a more Panglossian interpretation of ethics, you could just say that people want to do what they do, and that this is perfectly fine. It depends on how much you value ethical reflection (there is quite a rabbit hole to go down here, actually, having to do with the question of whether terminal values are internal or chosen).

Comment author: OrphanWilde 02 May 2015 05:57:10PM 1 point

And if making people more informed in this manner makes them worse off?

Comment author: Lukas_Gloor 02 May 2015 11:29:35PM 0 points

The sad thing is that it probably will (the rationalist's burden: aspiring to be more rational makes rationalizing harder, and you can't just tweak your moral map and your map of the just world/universe to fit your desired (self-)image).

What is it that counts: revealed preferences, stated preferences, or preferences that are somehow idealized (what the person would prefer if they knew more, were smarter, etc.)? I'm not sure the last option can be pinned down in a non-arbitrary way. That would leave us with revealed preferences and stated preferences, even though stated preferences are often contradictory or incomplete. It would be confused to think that one type of preference is correct whereas the others aren't. There are simply different things going on, and you may choose to focus on one or the other. Personally, I don't intrinsically care about making people more agenty, but I care about it instrumentally, because it turns out that making people more agenty often increases their (revealed) concern for reducing suffering.

What does this make of the claim under discussion, that deontology could sometimes/often be a form of moral rationalizing? The point still stands, but with a caveat: it is only rationalizing if we are talking about (informed/complete) stated preferences. For whatever that's worth. On LW, I assume it is worth a lot to most people, but no mistake is being made if it isn't for someone.