
TheAncientGeek comments on High energy ethics and general moral relativity - Less Wrong Discussion

8 Post author: maxikov 21 June 2015 08:34PM


Comment author: TheAncientGeek 23 June 2015 12:57:39PM, 3 points

[Utilitarianism is] very good. It's more or less reliably better than anything else. 

That's a sweeping claim. A number of people have made similar points, but I'll weigh in anyway:

It's pretty nearly the case that there is nothing to judge an ethical theory by except intuition, and utilitarianism fares badly by that measure. (One can also judge a theory by how motivating it is, how consistent it is, and so on. These considerations might even make us go against direct intuition, but there is no point in a consistent and/or motivating system that is basically wrong.)

One problem with utilitarianism is that it tries to aggregate individual values, making it unable to handle the kinds of values that are only definable at the group level, such as equality, liberty and fraternity.

Since it focuses on outcomes, it is also blind to the intention or level of deliberateness behind an act. Nothing could be more out of line with everyday practice, where "I didn't mean to" is a perfectly good excuse, for all that it doesn't change any outcomes.

Furthermore, it has problems with obligation and motivation. The claim that the greatest good is the happiness of the greatest number has intuitive force for some, but regarded as an obligation it implies that one must sacrifice oneself until one is no longer happier or better off than anyone else: it is highly demanding. On the other hand, it is not clear where the obligation comes from, since the is-ought gap has not been closed. In the negative case, utilitarianism merely suggests morally worthy actions, without making them obligatory on anyone. It has only two non-arbitrary points at which to set a level of obligation: zero and the maximum.

Even if the bullet is bitten, and it is accepted that "maximum possible altruism is obligatory", the usual link between obligations and punishments is broken: it would mean that almost everyone is failing their obligations, yet few receive any punishment (or even social disapproval).

That's without even getting on to the problems arising from mathematically aggregating preferences, such as utility monsters, the repugnant conclusion, etc.