While morality seems closely related to (a) signaling to other people that you have the same values and are trustworthy and won't defect or (b) being good to earn "points", neither of these definitions feel right to me.
I hesitate to take (a) because morality feels more like a personal, internal institution that operates for the interests of the agent. Even if the outcome serves the interests of society, and this is part of the explanation for why morality evolved, that doesn't seem to reflect how it works.
I feel that (b) misses the point: we aren't good in order to pragmatically "get points" for something. When we use the term 'morality' separately from pragmatism or cooperation, we're acknowledging that points are given based on something more subtle and complex than pragmatism or cooperation (e.g., 'God's preferences' is one handle for this). (I mean, we're good because we want to be, and 'getting points' is just a way of describing this. We wouldn't do anything unless it meant getting some kind of points, either real or abstract.)
I wrote down a hypothesis for morality a week ago and decided I would think about it later.
Nazgulnarsil wrote:
to me, morality means not disastrously/majorly subverting another's utility function for a trivial increase in my own utility.
I'm considering that being moral means not subverting one's own utility function.
Humans seem to have a lot of plasticity in choosing what their values are and what their values are about. We can think about things a certain way and develop perspectives that lead to values and actions extremely different from our initial ideas of what is moral. (For example, people presumably much like myself have torn children from their parents and sent them to starve in death camps.) It stands to reason we would need a strong internal protection system -- some system of checks and balances -- to keep our values intact.
Suppose we consider that we should always do whatever is pragmatically correct (pragmatic behavior includes altruistic, cooperative behavior) except when an action is suspected to subvert our utility function. I imagine that our utility function could be subverted if an action makes us feel hypocritical, and thus forces us to devalue a value that we had.
For example, we all value other people (especially particular people). But if we would kill someone for pragmatic reasons (that is, we have some set of reasons for wanting to do so that outweigh reasons for not wanting to), we can still decide we wouldn't kill them for this one other reason: we want to value not killing other people.
This is very subtle. We already value not killing other people, but that value has already been weighted in the decision, and we still decide we would -- pragmatically -- commit the murder. But we realize that if we commit the murder for these pragmatic reasons, even though it seems for the best given our current utility function, we can no longer pretend that we value life so much, and we may find ourselves on a slippery slope where it will be easier to kill someone in the future, because now we know this value isn't so strong.
If we do commit the murder anyway, because we are pragmatic rather than moral, then the role of guilt could be to realign and reset our values. "I killed him because I had to but I feel really bad about it; this means I really do value life."
So finally, morality could be about protecting values we have that aren't inherently stable.
[I made significant edits when moving this to the main page - so if you read it in Discussion, it's different now. It's clearer about the distinction between two different meanings of "free", and why linking one meaning of "free" with morality implies a focus on an otherworldly soul.]
It was funny to me that many people thought Crime and Punishment was advocating outcome-based justice. If you read the post carefully, nothing in it advocates outcome-based justice. I only wanted to show how people think, so I could write this post.
Talking about morality causes much confusion, because most philosophers - and most people - do not have a distinct concept of morality. At best, they have a single word that conflates two different concepts. At worst, their "morality" doesn't contain any new primitive concepts at all; it's just a macro: a shorthand for a combination of other ideas.
I think - and have, for as long as I can remember - that morality is about doing the right thing. But this is not what most people think morality is about!
Free will and morality
Kant argued that the existence of morality implies the existence of free will. Roughly: If you don't have free will, you can't be moral, because you can't be responsible for your actions.1
The Stanford Encyclopedia of Philosophy says: "Most philosophers suppose that the concept of free will is very closely connected to the concept of moral responsibility. Acting with free will, on such views, is just to satisfy the metaphysical requirement on being responsible for one's action." ("Free will" in this context refers to a mysterious philosophical phenomenological concept related to consciousness - not to whether someone pointed a gun at the agent's head.)
I was thrown for a loop when I first came across people saying that morality has something to do with free will. If morality is about doing the right thing, then free will has nothing to do with it. Yet we find Kant, and others, going on about how choices can be moral only if they are free.
The pervasive attitudes I described in Crime and Punishment threw me for the exact same loop. Committing a crime is, generally, regarded as immoral. (I am not claiming that it is immoral. I'm talking descriptively about general beliefs.) Yet people see the practical question of whether the criminal is likely to commit the same crime again as being in conflict with the "moral" question of whether the criminal had free will. If you have no free will, they say, you can do the wrong thing, and be moral; or you can do the right thing, and not be moral.
The only way this can make sense, is if morality does not mean doing the right thing. I need the term "morality" to mean a set of values, so that I can talk to people about values without confusing both of us. But Kant and company say that, without free will, implementing a set of values is not moral behavior. For them, the question of what is moral is not merely the question of what values to choose (although that may be part of it). So what is this morality thing?
Don't judge my body - judge my soul
My theory #1: Most people think that being moral means acting in a way that will earn you credit with God.
When theory #1 holds, "being moral" is shorthand for "acting in your own long-term self-interest". Which is pretty much the opposite of what we usually pretend being moral means.
My less-catchy but more-general theory #2, which includes #1 as a special case: Most people conceive of morality in a way that assumes soul-body duality. This also covers people who don't believe in a God who rewards and punishes in the afterlife, but still believe in a soul that can be virtuous or unvirtuous independently of how virtuous the body it is encased in is.
Moral behavior is intentional, but need not be free
Why we should separate the concepts of "morality" and "free will"
- Conflating them isn't parsimonious. It confuses the question of figuring out what values are good, and what behaviors are good, with the philosophical problem of free will. Each of these problems is difficult enough on its own!
- Conflating them is inconsistent with our other definitions. People map questions about what is right and wrong onto questions about morality. They will get garbage out of their thinking if that concept, internally, is about something different. They end up believing there are no objective morals - not necessarily because they've thought it through logically, but because their conflicting definitions make them incapable of coherent thought on the subject.
- It implies that morality is impossible without free will. Since a lot of people on LW don't believe in free will, they would conclude that they don't believe in morality if they subscribed to Kant's view.
- When questions of blame and credit take center stage, people lose the capacity to think about values. This is demonstrated by some Christians who talk a lot about morality, but assume, without even noticing they're doing it, that "moral" is a macro for "God said do this". They fail to notice that they have encoded two concepts into one word, and never get past the first concept.
For morality to be about oughtness, so that we are able to reason about values, we need to divorce it completely from free will. Free will is still an interesting and possibly important problem. But we shouldn't mix it in together with the already-difficult-enough problem of what actions and values are moral.

1. I am making the most-favorable re-interpretation. Kant's argument is worse, as it takes a nonsensical detour from morality, through rationality, back to free will.
2. This is the preferred theory under, um, Goetz's Cognitive Razor: Prefer the explanation for someone's behavior that supposes the least internal complexity of them.