
army1987 comments on Open Thread, October 1-15, 2012 - Less Wrong Discussion

Post author: David_Gerard 01 October 2012 05:54AM




Comment author: [deleted] 01 October 2012 06:02:07PM *  1 point

What's the name of the idea that morality is a scalar rather than a binary property (i.e., rather than asking whether A is moral, one should ask whether A is more moral or less moral than B)? I'm pretty sure I recently saw a discussion of that somewhere in a SEP article, linked to from a comment on LW, but I can't find it now -- and I've been searching for a while.

EDIT: Larks nailed it.

Comment author: Larks 02 October 2012 02:53:02PM 3 points

Scalar is the right word. Scalar consequentialism is a thing. It's possible the comment you're thinking of was one of mine; I've introduced a fair few people to this (IMHO) superior version of consequentialism.

Comment author: [deleted] 02 October 2012 03:51:59PM *  0 points

Thank you! That's the one I was thinking of. For some reason, I incorrectly remembered that it was on the SEP.

EDIT: Why, when I failed to find it on the SEP, did I assume I'd misremembered the name and try different search keys, rather than suspecting I'd misremembered the site and searching Google with the same keys?

Comment author: RomeoStevens 02 October 2012 01:27:52AM 2 points

utilitarianism...?

Comment author: [deleted] 02 October 2012 08:29:12AM *  1 point

IIRC, that discussion was in the context of utilitarianism/consequentialism: "[word A] consequentialism" was the moral system in which the action that maximizes expected utility is moral and any other action is immoral, whereas "[word B] consequentialism" was the moral system in which an action is more moral than another if it has higher expected utility, even if neither reaches the upper bound, or something like that.

EDIT: on looking at http://plato.stanford.edu/entries/consequentialism/, “[word A]” is “maximizing”.

Comment author: pragmatist 02 October 2012 01:48:26PM *  2 points

The right way to understand the difference between maximizing and satisficing consequentialism is not that the maximizing version treats morality as a binary and the satisficing version treats it as a scalar. Most proponents of maximizing consequentialism will also agree that the morality of an act is a matter of degree, so that giving a small fraction of your disposable income to charity is more moral than giving nothing at all, but less moral than giving to the point of declining marginal (aggregate) utility.

The distinction between maximizing and satisficing versions of utilitarianism lies in their conception of moral obligation. Maximizers think that moral agents have an obligation to maximize aggregate utility, and that one is morally culpable for knowingly choosing a non-maximizing action. Satisficers think that the obligation is only to cross a certain threshold of utility generated; there is no obligation to generate utility beyond that, and any utility generated beyond the threshold is supererogatory.

One way to think about it is to think of a graded scale of moral wrongness. For a maximizer, the moral wrongness of an act steadily decreases as the aggregate utility generated increases, but the wrongness only hits zero when utility is maximized. For the satisficer, the moral wrongness also decreases monotonically as utility generated increases, but it hits zero a lot faster, when the threshold is reached. As utility generated increases beyond that, the moral wrongness stays at zero. However, I suspect that most satisficers would say that the moral rightness of the act continues to increase even after the threshold is crossed, so on their conception the wrongness and rightness of an act (in so far as they can be quantified) don't have to sum to a constant value.
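The graded-scale picture above can be made concrete with a small sketch. This is only an illustration of the two wrongness curves described in the comment; the linear shape, the utility scale, and the threshold value are assumptions chosen for clarity, not anything the comment commits to:

```python
def maximizing_wrongness(utility, max_utility):
    """Maximizer: wrongness decreases as utility generated increases,
    hitting zero only when utility is maximized."""
    return max_utility - utility

def satisficing_wrongness(utility, threshold):
    """Satisficer: wrongness also decreases monotonically, but hits
    zero as soon as the threshold is crossed; anything beyond the
    threshold is supererogatory and stays at zero wrongness."""
    return max(threshold - utility, 0)

# Hypothetical numbers: acts generating 0..10 units of utility,
# maximum achievable = 10, satisficing threshold = 6.
for u in [0, 4, 6, 8, 10]:
    print(u, maximizing_wrongness(u, 10), satisficing_wrongness(u, 6))
```

Note that for the maximizer, wrongness is still positive at u = 8 (a good but non-maximizing act), while for the satisficer it is already zero, which is exactly the disagreement the comment describes.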

Comment author: DaFranker 01 October 2012 06:36:05PM 0 points

I don't think this is what you're looking for, but just in case: the SEP article on The Repugnant Conclusion discusses moral systems quite a bit, so it might mention the article or name the idea you're looking for at some point, though I don't remember whether it does. I do remember that the article was entertaining, at least.