jimrandomh comments on Separate morality from free will - Less Wrong

6 Post author: PhilGoetz 10 April 2011 02:35AM

Comment author: jimrandomh 08 April 2011 12:13:30PM 5 points

Whether an agent is moral and whether an action is moral are fundamentally different questions, operating on different types. There are three domains in which we can ask moral questions: outcomes, actions, and agents. Whether an action is moral is the familiar question of whether it was the right thing to do. Whether a person or agent is moral, on the other hand, is a prediction of whether that agent will make moral decisions in the future.

An immoral decision is evidence that the agent who made it is immoral. However, some things can screen off this evidence, which is what Kant was (confusedly) talking about. For example, suppose Dr. Evil points a mind-control ray at someone and makes them do evil things, and the ray is then destroyed. The things they did while under its influence have no bearing on whether they're a moral or immoral person, because those acts have no predictive value. On the other hand, if someone did something bad because the atoms in their brain were arranged in the wrong way, and their atoms are still arranged that way, that's evidence that they're immoral; but if they were to volunteer for a procedure that rearranges their brain so that they won't do bad things anymore, then after the procedure they'll be a moral person.
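The evidential claim here can be made precise with Bayes' rule. Below is a toy sketch with made-up numbers; the probabilities and the `posterior_immoral` helper are illustrative assumptions, not anything from the comment itself:

```python
# Toy model of "screening off": an observed bad act raises P(agent is immoral)
# via Bayes' rule, *unless* an alternative cause (e.g. the mind-control ray)
# fully explains it, in which case the act carries no evidence about the
# agent's own dispositions. All numbers here are hypothetical.

def posterior_immoral(prior, p_bad_given_immoral, p_bad_given_moral):
    """P(immoral | bad act) by Bayes' rule."""
    num = prior * p_bad_given_immoral
    den = num + (1 - prior) * p_bad_given_moral
    return num / den

# Normally, a bad act is strong evidence of immorality:
p = posterior_immoral(prior=0.1, p_bad_given_immoral=0.9, p_bad_given_moral=0.05)
# p rises from the 0.1 prior to roughly 0.67

# Under the mind-control ray, moral and immoral agents act badly with equal
# probability, so the likelihood ratio is 1 and the posterior equals the
# prior: the act has no predictive value about the agent.
p_screened = posterior_immoral(prior=0.1, p_bad_given_immoral=1.0, p_bad_given_moral=1.0)
```

This is the same reason the post-procedure brain carries no stain from its old arrangement: once the cause of the bad act no longer exists, the act stops being evidence about future behavior.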

Strengthening a moral agent or weakening an immoral agent has positive outcome-utility. Good actions by an agent and good outcomes causally connected to an agent's actions are evidence that they're agent-moral, and conversely bad actions and bad outcomes causally connected to an agent's actions are evidence that they're agent-immoral. But these are only evidence; they are not agent-morality itself.

Comment author: PhilGoetz 08 April 2011 04:19:15PM 1 point

> Whether an agent is moral and whether an action is moral are fundamentally different questions, operating on different types.

They're not as different as the majority view makes them out to be. A moral agent is one that uses decision processes that systematically produce moral actions. Period. Whereas the majority view is that a moral agent is not one whose decision processes are structured to produce moral actions, but one who has a virtuous free will. A rational extension of this view would be to say that someone who has a decision process that consistently produces immoral actions can still be moral if their free will is very strong and very virtuous, and manages to counterbalance their decision process.

The example above about a mind control ray has to do with changing the locus of intentionality controlling a person. It doesn't have to do with the philosophical problem of free will. Does Dr. Evil have free will? It doesn't matter, for the purposes of determining whether his cognitive processes consistently produce immoral actions.

Comment author: jimrandomh 08 April 2011 04:39:57PM 1 point

> A moral agent is one that uses decision processes that systematically produce moral actions. Period.

It's more complicated than that, because agent-morality is a scale, not a boolean, and how morally a person acts depends on the circumstances they're placed in. So a judgment of how moral someone is must have some predictive aspect.

Suppose you have agents X and Y, and scenarios A and B. X will do good in scenario A but will do evil in scenario B, while Y will do the opposite. Now if I tell you that scenario A will happen, then you should conclude that X is a better person than Y; but if I instead tell you that scenario B will happen, then you should conclude that Y is a better person than X.
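The X and Y example amounts to a small expected-value computation. A sketch of that reading (the +1/−1 act-values and the `expected_morality` helper are assumptions for illustration, not part of the comment):

```python
# Agent-morality as a prediction of future behavior, conditional on which
# scenario occurs. Agents map scenarios to act-values: +1 for doing good,
# -1 for doing evil.
agents = {
    "X": {"A": +1, "B": -1},  # X does good in scenario A, evil in B
    "Y": {"A": -1, "B": +1},  # Y does the opposite
}

def expected_morality(agent, p_scenario):
    """Expected act-value, weighted by how likely each scenario is."""
    return sum(p * agents[agent][s] for s, p in p_scenario.items())

# If scenario A is certain, X comes out the better person...
assert expected_morality("X", {"A": 1.0, "B": 0.0}) > expected_morality("Y", {"A": 1.0, "B": 0.0})
# ...but if scenario B is certain, the judgment flips.
assert expected_morality("Y", {"A": 0.0, "B": 1.0}) > expected_morality("X", {"A": 0.0, "B": 1.0})
```

The point survives intermediate cases too: with scenario probabilities strictly between 0 and 1, the comparison between X and Y depends continuously on those probabilities, which is why agent-morality is a scale rather than a boolean.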

> The example above about a mind control ray has to do with changing the locus of intentionality controlling a person.

I don't think "locus of intentionality" is the right way to think about this (except perhaps as a simplified model that reduces to conditioning on circumstances). In a society where mind control rays were common, but some people were immune, we would say that people who are immune are more moral than people who aren't. In the society we actually have, we say that those who refuse in the Milgram experiment are more moral, and that people who refuse to do evil under the threat of force are more moral, and I don't think a "locus of intentionality" model handles these cases cleanly.