One of the criteria moral philosophers use to assess the credibility and power of a moral theory is "applicability": that is, how easy is it for humans to implement a moral rule? For example, if a rule said "donate 23 hours a day to charity", it would be impossible for humans to fulfill.

This led me to start thinking about whether we want to pursue "the moral theoretical truth", should such a truth exist, or whether we want to find the most applicable and practical set of rules, such that reasonably instrumentally rational (human) agents could figure out what is best in any given situation.

I feel like this is loosely analogous to the map-territory distinction. For example, the best thing to do in situation X might be A. A may be so difficult or require so much sacrifice that B might be preferable, even if the overall outcome is not as good. This reminds me of how Eliezer says that the map is not the territory, but you can't fold up the territory and put it in your pocket.

I'd love to be able to understand this issue a little better. If anyone has any thoughts, ideas or evidence, I'd appreciate hearing them.

Thanks,

Jeremy

Any moral system that tells you to devote 23 hours a day to any one activity isn't so much inapplicable as wrong. Consequentialist morality at least must incorporate strategy.

As Hitchens would say, born sick, but commanded to be well.

If your morality is telling you to do something it's physically impossible for you to do, tell it that it has been overruled by reality, and should try again.

Ought implies can.

-Kant's Law

I guess what I meant is, what happens if what is right is not doable. This has been addressed below though. Thank you!

Whether something is doable is irrelevant when it comes to determining whether it is right.

A separate question is what we should do, which is different from what is right. We should definitely do the most right thing we possibly can, but just because we can't do something does not mean that it is any less right.

A real example: There's nothing we can realistically do to stop much of the suffering undergone by wild animals through the predatory instinct. Yet the suffering of prey is very real and has ethical implications. Here we see something which has moral standing even though there appears to be nothing we can do to help the situation (beyond some trivial amount).

Shmi

Maybe consider the relationship between consequentialism (theory) and deontology (practice): the rules of the latter can be considered pre-calculated shortcuts to the former. For example, "do not kill" and other commandments are widely applicable shortcuts for most real-world consequentialist calculations, though they obviously fail in some cases. An example from religious ethics: you ought to donate some of your income to charity (through church), but how much? A tithe (1/10) of your material and/or financial revenue is a rule that makes it workable in practice in many cases, without an undue burden.

Of course, with time the rules of deontological ethics tend to become "imperatives" due to lost purposes, and "practice" becomes "theory".
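If it helps to picture the "pre-calculated shortcuts" idea, here is a loose sketch in Python. This is my own analogy, not anything from the comment, and the rule names and structure are invented: a deontological rule behaves like a cached answer to a consequentialist calculation that is too expensive to rerun for every situation.

```python
# Loose illustration only: deontological rules as cached answers to an
# expensive consequentialist calculation. Rule names are invented.

PRECOMPUTED_RULES = {
    "kill": "forbidden",                  # widely applicable shortcut
    "donate_tenth_of_income": "required",
}

def full_consequentialist_evaluation(act):
    """Stand-in for the expensive case-by-case weighing of consequences."""
    ...  # placeholder: this is the hard part the shortcuts let you skip

def evaluate(act):
    # Use the cached rule when one applies; fall back to the full calculation
    # otherwise. "Lost purposes" is keeping the cache after forgetting that
    # the fallback (the original calculation) is what justified it.
    if act in PRECOMPUTED_RULES:
        return PRECOMPUTED_RULES[act]
    return full_consequentialist_evaluation(act)
```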

For example, the best thing to do in situation X might be A. A may be so difficult or require so much sacrifice, that B might be preferable, even if the overall outcome is not as good.

Maybe I'm reading this wrong, but it seems like A is the "commonsense" interpretation of what 'morality' means. I honestly don't know what you mean by B, though. If the overall outcome of B is not as good as A, in what way does it make sense to say we should prefer B?

Further, plenty of contemporary moral philosophers deny that "applicability" (I believe the phil-jargon word is "demandingness") has any relevance to morality. See Singer, or better yet, Shelly Kagan's book The Limits of Morality for a more in-depth discussion of this.

I'll make it more explicit with an example. Here is a possible moral declaration: "give all your free time to charity". Here is another: "you ought to provide your friend's child with a university education if your friend cannot afford it, but you can (barely)".

These seem very harsh. Let's consider two scenarios: 1) you can do it, but it would leave you very unhappy and financially or mentally impoverished.

2) you cannot do it, because such demands, taken to their logical conclusion, result in awful outcomes for you.

If 1, then I suppose that should be considered in the calculation, and so my question is irrelevant to consequentialism.

If 2, then it seems like the best action is impossible. By "B" I meant the second best action, say giving some time to charity, or donating some books to your friend's child.

Do we want to promote a theory that says "the very best thing is right, everything else is wrong", or "the best thing that 'makes sense' is still considered good, even if, were it possible, another action would be better"?

I realize that 'makes sense' carries a ton of baggage and is very vague. I'm having some difficulty articulating myself.

As for applicability, thanks, I will look at those.

Ah, I see. I'm pretty sure you've run up against the "ought implies can" issue, not the issue of demandingness. IIRC, this is a contested principle, but I don't really know much about it other than Kant originally endorsing it. I think the first part of Larks' answer gives you a good idea of what consequentialists would say in response to this issue.

[anonymous]

Do we want to promote a theory that says "the very best thing is right, everything else is wrong",

No. That just means the better your imagination gets, the less you do.

Consequentialism solves all of this:

  1. Give each possible world a "goodness" or "awesomeness" or "rightness" number (utility)
  2. Figure out the probability distribution over possible outcomes of each action you could take.
  3. Choose the action that has highest mean awesomeness.

If something is impossible, it won't be reachable from the action set and therefore won't come into it. If something is bad, but nothing you can do will change it, it will cancel out. If some outcome is not actually preferable to some other outcome, you will have marked it as such in your utility assignment. If something good also comes with something worse, the utility of that possibility should reflect that. Etcetera.

In practice, you don't actually compute this, because it is uncomputable. Instead you follow simple rules that get you good results, like "don't throw away money", "don't kill people", and "feed yourself" (notice how the rules are justified by appealing to their expected consequences, though).
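For a toy option set, though, the three steps can be spelled out directly. Here is a minimal sketch in Python, purely illustrative: the actions, outcome probabilities, and utilities are made-up numbers, not anything from the thread.

```python
# Toy sketch of the three-step procedure above. All names and numbers are invented.

def expected_utility(action, outcome_probs, utility):
    """Steps 1-2: weight each outcome's 'awesomeness' by its probability given the action."""
    return sum(p * utility[outcome] for outcome, p in outcome_probs(action).items())

def best_action(actions, outcome_probs, utility):
    """Step 3: choose the action with the highest mean awesomeness."""
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Impossible actions simply never appear in the option set.
actions = ["donate_some_time", "do_nothing"]
utility = {"good_world": 10, "neutral_world": 0}

def outcome_probs(action):
    if action == "donate_some_time":
        return {"good_world": 0.7, "neutral_world": 0.3}
    return {"good_world": 0.1, "neutral_world": 0.9}

print(best_action(actions, outcome_probs, utility))  # -> donate_some_time
```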

Thank you. As I understand it, "Consequentialism" means the idea that you should optimize outcomes.... It is a theory of right action. It requires a theory of "goodness" to go along with it. So, you're saying that "awesomeness" or "utility" is what is to be measured or approximated. Is that utilitarianism?

[anonymous]

So, you're saying that "awesomeness" or "utility" is what is to be measured or approximated. Is that utilitarianism?

No.

There are two different concepts that "utility" refers to. VNM utility is "that for which the calculus of expectation is legitimate", i.e. it encodes your preferences, with no implication about what those preferences may be, except that they behave sensibly under uncertainty.

Utilitarian utility is an older (I think) concept referring to a particular assignment of utilities involving a sum of people's individual utilities, possibly computed from happiness or something. I think utilitarianism is wrong, but that's just me.

I was referring to VNM utility, so you are correct that we also need a theory of goodness to assign utilities. See my "morality is awesome" post for a half-baked but practically useful solution to that problem.

Got it. Much appreciated.

[anonymous]

No problem. Glad to have someone curious asking questions and trying to learn!

Consequentialism is a method for choosing an action from the set of possible actions. If "the best action is impossible" it shouldn't have been in the option set in the first place.

However, I think you might like to look into scalar consequentialism.

Plain Scalar Consequentialism: Of any two things a person might do at any given moment, one is better than another to the extent that its overall consequences are better than the other's overall consequences.

Thank you!

This led me to start thinking about whether we want to pursue "the moral theoretical truth", should such a truth exist, or whether we want to find the most applicable and practical set of rules, such that reasonably instrumentally rational (human) agents could figure out what is best in any given situation.

Both? The latter needs to be judged by how closely it approximates the former. There are lots of moral rules that are easy to implement but not useful, e.g. "don't do anything ever." There's a tradeoff that needs to be navigated between ease of implementation and accuracy of approximation to the Real Thing.

So, figure out the theoretical correct action, and then approximate it to the best of your ability?

If you figured out the theoretically correct action, you wouldn't need to approximate it. I mean figure out the theoretically correct moral theory, then approximate it to the best of your ability. You're not approximating the output of an algorithm, you're approximating an algorithm (e.g. because the correct algorithm requires too much data, or time, or rationality...).

That's a great way of saying it. Thanks a lot!

Are you -intending- to deconstruct rule utilitarianism back into act utilitarianism here, or is that just me misunderstanding what you're getting at?

ETA: I think it's just me. Retracting.

[This comment is no longer endorsed by its author]

I certainly do not think that is what I was doing. Really, I guess I want to understand the kind of normative theories people on here think are correct (and why), under a specific criterion of assessment. I think many people will take a consequentialist perspective on this site (tentatively, I do too, but I am not confident in my convictions yet).

On a more meta-ethical level, I'm wondering how important the criterion of applicability is to a moral theory (for real humans, now). I'm more interested in understanding the question "how should we act, and how do we know?" rather than "what is the best theoretical action?". (Of course, I may be begging the question by assuming there is a difference between the two.)