My current hypothesis is that most of the purpose of evolving morality is signaling that you are predictably non-defecting enough to deal with. This is not very well worked out - but it does predict that if you take it to edge cases, build syllogisms from stated moral beliefs, or otherwise overextend it, it'll just get weird (because the core is to project that you are a non-defecting player - that's the only bit that gets tested against the world), and I think observation shows plenty of this (e.g. 1, 2).
Talking about morality causes much confusion, because most philosophers - and most people - do not have a distinct concept of morality. ...
I think - and have, for as long as I can remember - that morality is about doing the right thing. But this is not what most people think morality is about!
And more in this vein. I really dislike this post. The author proclaims that he is shocked, shocked that other people are wrong while he himself is right. Then he proceeds to analyze why almost everyone else got it wrong, without once trying to justify his own position using any argument other than professed astonishment that any thinking person could disagree.
Downvoted.
There was a case in my local area where a teenager beat another teenager to death with a bat. On another blog, some commenters were saying that since his brain wasn't fully developed yet (based on full brain development being attained close to 30), he shouldn't be held to adult standards (namely sentencing standards). This was troubling to me, because while I don't advocate the cruelty of our current prison system, I do worry about the message that lax sentencing sends. The comments seem to naturally allow for adult freedom (the kids were all unsupervised...
When you see free will being made a precondition for moral behavior, it means that the speaker is not concerned with doing the right thing. They are concerned with winning virtue points.
I think that "free will" can be understood as either itself an everyday concept, or else a philosopher's way of talking about and possibly distorting an everyday concept. The term has two components which we can talk about separately.
A "willed" act is a deliberate act, done consciously, intentionally. It is consciously chosen from among other possib...
Whether an agent is moral and whether an action is moral are fundamentally different questions, operating on different types. There are three domains in which we can ask moral questions: outcomes, actions, and agents. Whether actions are moral is about doing the right thing, as we originally thought. Whether a person or agent is moral, on the other hand, is a prediction of whether that agent will make moral decisions in the future.
An immoral decision is evidence that the agent who made it is immoral. However, there are some things that might screen off thi...
Ultimately, your claim appears to be, "The punitive part of morality is inappropriate. It is based on free will. Therefore, free will is irrelevant to morality." I admit you don't phrase it that way, but with your only concern being lack of literal coercion and likelihood of reoffense, your sense of morality seems to be inconsistent with people's actual beliefs.
You will find very few people who will say that a soldier acting in response to PTSD deserves the exact same sentence as a sociopath acting out of a sadistic desire to kill, even if each i...
Most people think that being moral means acting in a way that will earn you credit with God.
That, but I think there are some reciprocal after-effects that also come into play. What I mean is that when you view what being moral implies with respect to one's religion, you get what you suggested -- being moral makes heaven (or whatever) more likely.
A very interesting effect I've noticed going the other way is that religion lets you discuss morality in far, far, far more "lofty" terms than what a non-theistic individual might come...
Having an inoperable brain condition would not affect how we use a person's actions to predict whether they are likely to do similar things in the future.
I've always viewed there as being a third theory of morality: people who do bad things are more likely to do other bad things. If my friend lies to me, they're more likely to lie to me in the future. But they're also more likely to steal from me, assault me, etc.
A brain defect (such as compulsive lying) therefore needs to be accounted for - the person is likely to commit domain-specific actions like ...
I'd just like to point out a little flaw in your construction of other people's morality, and offer what I think is a better model for understanding this issue.
First, I wouldn't say that people have a morality that agrees with God. They have a God that agrees with their morality. Reading Bible passages to people is unlikely to wobble their moral compass; they'll just say those no longer apply or you're taking them too literally or some such. God isn't so much a source of morality as a post hoc rationalization of a deeper impulse.
Second, this whole syste...
Without Kant's "nonsensical" detour through rationality, you don't understand his position at all. There is no particular agreement on what "free will" means, and Kant chose to stick fairly closely to one particular line of thought on the subject. He maintained that you're only really free when you act rationally, which means that you're only really free when you do the right thing. Kant also held that a being with the capacity for rationality should be treated as if free even if you had little reason to think they were being rationa...
As I see it, there are:
I have used one taboo word here: "best". But we'll assume everyone at least broadly agrees on its definition (least death and pain, most fun, etc.).
People can then start applying other taboo-able words, which may arbitrarily apply to one of the above concepts, or to a confused mixture of them.
Right on. Free will is nonsense but morality is important. I see moral questions as questions that do not have a clear-cut answer that can be found by consulting some rules (religious or not). We have to figure out what the right thing to do is. And we will be judged by how well we do it.
I believe the "free will" thing is there because without it, you could talk about whether or not a rock is moral. You could just as well ask whether or not the universe is moral.
I consider morality to be an aspect of the universe (a universe with happier people is better, even if nobody's responsible), so I don't see any importance of free will.
A cognitive agent with intentions sounds like it's at least in the same conceptual neighborhood as free will. Perhaps free will has roughly the same role in their models of moral action as intentions do in your model.
If a tornado kills someone we don't say that it acted immorally but if a man does we do (typically). What's the difference between the man and the tornado? While the tornado was just a force of nature, it seems like there's some sense in which the man was an active agent, some way in which the man (unlike the tornado) had control of his act...
That's because morality is a property of a cognitive agent, not a holistic property of the agent and its environment.
I don't understand this sentence. Morality is a property of a system that can be explained in terms of its parts. A cognitive agent is also a system of parts, parts which on their own do not exhibit morality.
If something is judged to be beautiful then the pattern that identifies beauty is in the mind of the agent and exhibited by the object that is deemed beautiful. If the agent ceases to be then the beautiful object does still exhibit ...
So you (or perhaps some extrapolated version of you) would say that a thermostat in a human's house set to 65 degrees F is moral, because it does the right thing, while a thermostat set to 115 is immoral because it does the wrong thing. Meanwhile one of those free will people would say that a thermostat is neither moral nor immoral, it is just a thermostat.
The main difference seems to be the importance of "moral responsibility," which, yes, is mixed up with god, but more importantly is a key part of human emotions, mostly emotions dealing with p...
If you want an alternative to the word 'morality' that means what you want 'morality' to mean, I have found good results using the phrase "right-and-wrong-ness".
Do note that this often takes a turn through intuitionism, and it can be hard to drag less-clear thinkers out of that mire.
While morality seems closely related to (a) signaling to other people that you have the same values and are trustworthy and won't defect or (b) being good to earn "points", neither of these definitions feels right to me.
I hesitate to take (a) because morality feels more like a personal, internal institution that operates for the interests of the agent. Even if the outcome is in the interests of society, and this is some explanation for why it evolved, that doesn't seem to reflect how it works.
I feel that (b) seems to miss the point: we are...
I think I'm on the same page with you re Kant. Tell me if I've understood the other ideas you're advancing in this post:
The problem of understanding morality just is the problem of understanding which actions are moral.
An action is moral only if (but not if and only if) it was intended to be moral.
Did I miss the point?
But computers, machines, even thermostats, can have intentions ascribed to them.
Can you spell out what you mean by this? Are intentions something a thermostat has intrinsically, or something that I can ascribe to it?
Reading Kant (okay, mostly reading about Kant), it seemed to me that he was not even interested in the question, "What is the right thing to do?" What he was interested in was really, "How can I get into heaven?"
I don't think Kant thought about getting to the afterlife. My impression of Kant is that he was essentially agnostic about both God and the afterlife (although he considered them to be a very interrelated pair of questions) but thought it was healthier for individuals and society to believe in them.
Yet people see the practical question of whether the criminal is likely to commit the same crime again, as being in conflict with the "moral" question of whether the criminal had free will. If you have no free will, they say, you can do the wrong thing, and be moral; or you can do the right thing, and not be moral.
The only way this can make sense, is if morality does not mean doing the right thing.
"Moral" and "legal" mean different things anyway. It makes sense that someone did the legally wrong thing, but were not culpa...
It seems like the right perspective for thinking about things goes something like this:
Facts about the world can be good or bad. It is good, for instance, when people are happy and healthy, and bad when they are not.
These are all pretty much equally bad, ...
Separating concepts is itself a moral action. Moral actions should relate to moral agents. Most of the moral agents who use these concepts aren't here on LessWrong. They include the kind of people who hear "free will is an illusion" from a subjectively credible source and mope around for the rest of their lives.
"What happens then when agents’ self-efficacy is undermined? It is not that their basic desires and drives are defeated. It is rather, I suggest, that they become skeptical that they can control those desires; and in the face of that skeptici...
To me, morality means not disastrously/majorly subverting another's utility function for a trivial increase in my own utility.
Edit: I wish the downvoters would give me some concrete objections.
[I made significant edits when moving this to the main page - so if you read it in Discussion, it's different now. It's clearer about the distinction between two different meanings of "free", and why linking one meaning of "free" with morality implies a focus on an otherworldly soul.]
It was funny to me that many people thought Crime and Punishment was advocating outcome-based justice. If you read the post carefully, nothing in it advocates outcome-based justice. I only wanted to show how people think, so I could write this post.
Talking about morality causes much confusion, because most philosophers - and most people - do not have a distinct concept of morality. At best, they have just one word that composes two different concepts. At worst, their "morality" doesn't contain any new primitive concepts at all; it's just a macro: a shorthand for a combination of other ideas.
I think - and have, for as long as I can remember - that morality is about doing the right thing. But this is not what most people think morality is about!
Free will and morality
Kant argued that the existence of morality implies the existence of free will. Roughly: If you don't have free will, you can't be moral, because you can't be responsible for your actions.1
The Stanford Encyclopedia of Philosophy says: "Most philosophers suppose that the concept of free will is very closely connected to the concept of moral responsibility. Acting with free will, on such views, is just to satisfy the metaphysical requirement on being responsible for one's action." ("Free will" in this context refers to a mysterious philosophical phenomenological concept related to consciousness - not to whether someone pointed a gun at the agent's head.)
I was thrown for a loop when I first came across people saying that morality has something to do with free will. If morality is about doing the right thing, then free will has nothing to do with it. Yet we find Kant, and others, going on about how choices can be moral only if they are free.
The pervasive attitudes I described in Crime and Punishment threw me for the exact same loop. Committing a crime is, generally, regarded as immoral. (I am not claiming that it is immoral. I'm talking descriptively about general beliefs.) Yet people see the practical question of whether the criminal is likely to commit the same crime again, as being in conflict with the "moral" question of whether the criminal had free will. If you have no free will, they say, you can do the wrong thing, and be moral; or you can do the right thing, and not be moral.
The only way this can make sense, is if morality does not mean doing the right thing. I need the term "morality" to mean a set of values, so that I can talk to people about values without confusing both of us. But Kant and company say that, without free will, implementing a set of values is not moral behavior. For them, the question of what is moral is not merely the question of what values to choose (although that may be part of it). So what is this morality thing?
Don't judge my body - judge my soul
My theory #1: Most people think that being moral means acting in a way that will earn you credit with God.
When theory #1 holds, "being moral" is shorthand for "acting in your own long-term self-interest". Which is pretty much the opposite of what we usually pretend being moral means.
My less-catchy but more-general theory #2, which includes #1 as a special case: Most people conceive of morality in a way that assumes soul-body duality. This also includes people who don't believe in a God who rewards and punishes in the afterlife, but who still believe in a soul that can be virtuous or unvirtuous independent of the virtue of the body it is encased in.
Moral behavior is intentional, but need not be free
Why we should separate the concepts of "morality" and "free will"
1. I am making the most-favorable re-interpretation. Kant's argument is worse, as it takes a nonsensical detour from morality, through rationality, back to free will.
2. This is the preferred theory under, um, Goetz's Cognitive Razor: Prefer the explanation for someone's behavior that supposes the least internal complexity of them.