DanArmak comments on Reasonable Requirements of any Moral Theory - Less Wrong

-1 Post author: TheSurvivalMachine 10 October 2016 08:48PM

Comment author: DanArmak 10 October 2016 09:54:20PM 0 points [-]

The author says a moral theory should:

  • "Cover how one should act in all situations" (instead of dealing only with 'moral' ones)
  • Contain no contradictions
  • "Cover all situations in which somebody should perform an action, even if this “somebody” isn’t a human being"

In other words, a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals. Not what anyone else means by "moral theory".

Comment author: cunning_moralist 11 October 2016 07:18:30AM 1 point [-]

The author is far from alone in his view that both a complete rightness criterion and a consistent decision method must be required of all serious moral theories.

Among hedonistic utilitarians it's quite normal to demand both completeness, to include all (human) situations, and consistency, to avoid contradictions. The author simply describes what's normal among consequentialists, who, after all, are more or less the rational ones. ;-) There's one interesting exception, though! The demand to include all situations, including the non-human ones, is radical, and quite a hard challenge for hedonistic utilitarians, who do have problems with the bloodthirsty predators of the jungle.

Comment author: UmamiSalami 11 October 2016 06:33:50PM 0 points [-]

Among hedonistic utilitarians it's quite normal to demand both completeness

Utilitarianism provides no guidance on many decisions: any decision where the available actions produce the same utility.

Even if it is a complete theory, I don't think that completeness is demanded of the theory; rather it's merely a tenet of it. I can't think of any good a priori reasons to expect a theory to be complete in the first place.

Comment author: cunning_moralist 12 October 2016 12:11:58PM 1 point [-]

In practice, two different actions seldom produce exactly the same utility, but even if they did it wouldn’t be any problem. To say that you may choose either of two actions when it doesn’t matter which one you choose, since they have the same value, isn’t to give “no guidance”. Consequentialists want to maximize intrinsic value, and both of these actions do just that.

Of course hedonistic utilitarianism doesn’t require completeness, which, by the way, isn’t one of its tenets either. But since it is complete, which of course is better than being incomplete, it’s normal for hedonistic utilitarians to hold the metaethical view that a proper moral theory should answer all of the question: “Which actions ought to be performed?” What would be the good of answering it only incompletely?

Comment author: UmamiSalami 17 October 2016 11:40:27PM *  0 points [-]

To say that you may chose any one of two actions when it doesn’t matter which one you chose since they have the same value, isn’t to give “no guidance”.

That proves my point. It's no different from how most moral theories respond to questions like "which shirt do I wear?" So this 'completeness criterion' has to be made so weak as to be uninteresting.

Comment author: DanArmak 11 October 2016 11:32:50AM *  0 points [-]

I'm confused. Is it normal to regard all possible acts and decisions as morally significant, and to call a universal decision theory a moral theory?

What meaning does the word "moral" even have at that point?

Comment author: cunning_moralist 12 October 2016 12:14:23PM 1 point [-]

Nobody is calling “a universal decision theory a moral theory”. According to hedonistic utilitarianism, and indeed all consequentialism, all actions are morally significant.

‘Moral’ refers to opinions about which actions ought to be performed.

Comment author: DanArmak 12 October 2016 01:32:43PM 1 point [-]

So "morals" is used to mean the same as "values" or "goals" or "preferences". It's not how I'm used to encountering the word, and it's confusing in comparison to how it's used in other contexts. Humans have separate moral and a-moral desires (and beliefs, emotions, judgments, etc) and when discussing human behavior, as opposed to idealized or artificial behavior, the distinction is useful.

Of course every field or community is allowed to redefine existing terminology, and many do. But now, whenever I encounter the word "moral", I'll have to remind myself I may be misunderstanding the intended meaning (in either direction).

Comment author: UmamiSalami 17 October 2016 11:54:24PM 0 points [-]

In other words, a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals. Not what anyone else means by "moral theory".

When people talk about moral theories they refer to systems which describe the way that one ought to act or the type of person that one ought to be. Sure, some moral theories can be called "a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals," but I don't see how that changes anything about the definition of a moral theory.