Utilitarianism sometimes supports weird things: killing lone backpackers for their organs, sacrificing all the world's happiness to a single utility monster, creating zillions of humans living at near-subsistence level to maximize total utility, or killing all but a handful of them to maximize average utility. It also supports gay rights, and has been supporting them since 1785, when saying that there's nothing wrong with having gay sex was pretty much in the same category as saying that there's nothing wrong with killing backpackers. This makes one wonder: if, despite all the disgust towards them a few centuries ago, gay rights have been inside humanity's coherent extrapolated volition all along, then perhaps our descendants will eventually conclude that killing the backpacker was the right choice all along, and only the bullet-biting extremists of our time were getting it right.

As a matter of fact, as a friend of mine pointed out, you don't even need to fast-forward a few centuries - there are, or were, ethical systems actually in use in some cultures (e.g. bushido in pre-Meiji-restoration Japan) that are obsessed with honor and survivor's guilt. They would approve of killing the backpacker, or of letting them kill themselves - this being an honorable death, while living on and letting five other people die would be dishonorable - on non-utilitarian grounds, and would actually alieve that this is the right choice. Perhaps they were right all along, and Western civilization bulldozed through them, effectively destroying such cultures, not because of superior (non-utilitarian) ethics but for whatever other reasons things happen in history. In that case there's no need to try to fix utilitarianism, lest it suggest killing backpackers, because it's not broken - we are - and our descendants will figure that out.

We've seen this in physics: an elegant low-Kolmogorov-complexity model predicted that weird things happen on the subatomic level, and we built huge particle accelerators just to confirm - yep, that's exactly what happens, in spite of all your intuitions. Perhaps smashing utilitarianism with high-energy problems only breaks our intuitions, while utilitarianism itself is just fine.

But let's talk about relativity. In 1916 Karl Schwarzschild solved the newly discovered Einstein field equations and thus predicted black holes. At the time this was thought to be a mere curiosity, and perhaps GIGO, until in the 1960s people realized that yes, contra all intuitions, this is in fact a thing. But here's the thing: black holes were actually first predicted by John Michell in 1783. You can easily check it: if you substitute the speed of light into the classical formula for escape velocity, you get the Schwarzschild radius. Michell knew the radius and mass of the Sun, as well as the gravitational constant, precisely enough to get the order of magnitude and the first digit right when providing an example of such an object. If we had somehow never discovered general relativity, but had managed to build telescopes good enough to observe the stars orbiting the emptiness we now call Sagittarius A*, it would be very tempting to say: "See? We predicted this centuries ago, and however crazy it seemed, we now know it's true. That's what happens when you stick to the robust theories, shut up, and calculate - you stay centuries ahead of the curve."
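
For the record, here is that check as a minimal sketch in modern notation (which is not how Michell wrote it): take the classical escape velocity, set it equal to $c$, and solve for the radius:

$$
\frac{1}{2}mv_{\mathrm{esc}}^2 = \frac{GMm}{r} \;\Rightarrow\; v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}}, \qquad v_{\mathrm{esc}} = c \;\Rightarrow\; r = \frac{2GM}{c^2},
$$

which is exactly the Schwarzschild radius $r_s = 2GM/c^2$ of general relativity.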

We now know that Newtonian mechanics aren't true, although they're close to the truth when you plug in non-astronomical numbers (and even some astronomical ones). A star 500 times the size of the Sun and of the same density, however, is very much astronomical. It is sheer coincidence that in this exact formula the relativistic terms work out in precisely the way needed to give the same solution for the escape velocity as classical mechanics does. It would have been enough for Michell to imagine that his dark star rotates - a thing that Newtonian mechanics say doesn't matter, although it does - to change the category of this prediction from "miraculously correct" to "expectedly incorrect". This doesn't mean that Newtonian mechanics weren't a breakthrough, better than any single theory existing at the time. But it does mean that it would have been premature for people in the pre-relativity era to invest in building a starship designed to go at ten times the speed of light even if they could - although that's where "shut up and calculate" could have led them.
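
Michell's roughly-500× figure also falls out of the same formula. As a back-of-the-envelope sketch using modern values (not his exact numbers): at fixed density $\rho$ the mass grows as $r^3$, so escape velocity grows linearly with radius,

$$
v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}} = \sqrt{\frac{8\pi G\rho}{3}}\,r \;\propto\; r,
$$

and since the Sun's surface escape velocity is about $618$ km/s, a star of solar density traps light once its radius is about $c / (618\ \mathrm{km/s}) \approx 485$ times the Sun's.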

And that's where I think we are with utilitarianism. It's very good. It's more or less reliably better than anything else. And it managed to make ethical predictions so far-fetched (funnily enough, about as far-fetched as the prediction of dark stars) that it's tempting to conclude that the only reason it keeps making crazy predictions is that we haven't yet realized they're not crazy. But ethically speaking, we live in the world where Sagittarius A* has been discovered and general relativity hasn't. The actual 42-ish ethical system will probably converge to utilitarianism when you plug in non-extreme numbers (small numbers of people, non-permanent risks and gains, non-taboo topics). But just because it converged to utilitarianism on one taboo (at the time) topic, keeping utilitarianism centuries ahead of the moral curve, doesn't mean it will do the same for others.


If you're using the argument that utilitarianism supported gay rights before it was cool to do so, then I feel the need to point out that I skimmed that article, and Bentham says that out of necrophilia, bestiality, homosexuality and masturbation, masturbation is the most damaging to health.

The impropriety then may consist either in making use of an object

  1. Of the proper species but at an improper time: for instance, after death.

  2. Of an object of the proper species and sex, and at a proper time, but in an improper part.

  3. Of an object of the proper species but the wrong sex. This is distinguished from the rest by the name of paederasty.

  4. Of a wrong species.

  5. In procuring this sensation by one's self without the help of any other sensitive object.

...

Of all irregularities of the venereal appetite, that which is the most incontestably pernicious is one which no legislator seems ever to have made an attempt to punish. I mean the sort of impurity which a person of either sex may be guilty of by themselves. This is often of the most serious consequence to the health and lasting happiness of those who are led to practise it.

You can't just cherry-pick his support for gay rights to argue that he supported modern sexual norms 200 years early.


Read your own article - I already knew what was wrong with it, because Scott Alexander misunderstood it in the same way on Slate Star Codex. Bentham believes that all homosexuality is pedophiliac and that homosexuals will therefore have to marry women in order to have relationships that are long-lasting and where their partner reciprocates their affection. Therefore, homosexuality doesn't harm straight marriage. That does conclude that homosexuality is harmless, but not by a chain of reasoning we would approve of.

[Utilitarianism is] very good. It's more or less reliably better than anything else.

That's a sweeping claim. A number of people have made similar points, but I'll weigh in anyway:

It's pretty nearly the case that there is nothing to judge an ethical theory by except intuition, and utilitarianism fares badly by that measure. (One can also judge a theory by how motivating it is, how consistent it is, and so on. These considerations might even make us go against direct intuition, but there is no point in a consistent and/or motivating system that is basically wrong.)

One problem with utilitarianism is that it tries to aggregate individual values, making it unable to handle the kinds of values that are only definable at the group level, such as equality, liberty and fraternity.

Since it focuses on outcomes, it is also blind to the intention or level of deliberateness behind an act. Nothing could be more out of line with everyday practice, where "I didn't mean to" is a perfectly good excuse, for all that it doesn't change any outcomes.

Furthermore, it has problems with obligation and motivation. The claim that the greatest good is the happiness of the greatest number has intuitive force for some, but regarded as an obligation it implies one must sacrifice oneself until one is no longer happier or better off than anyone else - it is highly demanding. On the other hand, it is not clear where the obligation comes from, since the is-ought gap has not been closed. In the negative case, utilitarianism merely suggests morally worthy actions, without making them obligatory on anyone. It has only two non-arbitrary points at which to set a level of obligation: zero and the maximum.

Even if the bullet is bitten, and it is accepted that “maximum possible altruism is obligatory”, the usual link between obligations and punishments is broken. It would mean that almost everyone is failing their obligations, but few are getting any punishment (or even social disapproval).

That's without even getting on to the problems arising from mathematically aggregating preferences, such as utility monsters, repugnant conclusions, etc.

I think you've pretty much stated the exact opposite of my own moral-epistemological worldview.

I don't like the analogy with physics. Physical theories get tested against external reality in a way that makes them fundamentally different from ethical theories.

If you want to analogize between ethics and science, I want to compare it to the foundations of mathematics. So utilitarianism isn't relativity, it's ZFC. Even though ZFC proves PA is a consistent and true theory of the natural numbers, it's a huge mistake for a human to base their trust in PA on that!

There is almost no argument or evidence that can convince me to put more trust in ZFC than I do in PA. I don't think I'm wrong.

I trust low-energy moral conclusions more than I will ever trust abstract metaethical foundational theories. I think it is a mistake to look for low-complexity foundations and reason from them. I think the best we can do is seek reflective equilibrium.

Now, that being said, I don't think it's wrong to study abstract metaethical theories, to ask what their consequences are, and even to believe them a little bit. The analogy with math still holds here. We study the heck out of ZFC. We even believe it more than a little at this point. But we don't believe it more than we believe the intermediate value theorem.

PS: I also don't think "shut up and calculate" is something you can actually do under utilitarianism, because there are good utilitarian arguments for obeying deontological rules and being virtuous, and pretty much every ethical debate that anyone has ever had can be rephrased as a debate about what terms should go in the utility function and what the most effective way to maximize it is.

PA has a big advantage over object-level ethics: it never suggested things like "every tenth or so number should be considered impure and treated as zero in calculations", while object-level ethics did. The closest thing I can think of in mathematics, where everyone believed X and then it turned out not to be X at all, was the belief that no algorithm could take every elementary integral, or prove it non-elementary. But even that was a within-system statement, not a meta-statement, and it has an objective truth value. Systems as a whole, however, don't necessarily have one. Thus, in ethics, either individual humans or society as a whole need a mechanism for discarding ethical systems for good, which isn't that big of an issue for math. And the solution to this problem seems to be meta-ethics.

What about all the angst people had over things like irrational numbers, infinitesimals, non-smooth functions, infinite cardinalities, and non-Euclidean geometries?

I think what you're saying about needing some way to change our minds is a good point though. And I certainly wouldn't say that every single object-level belief I hold is more secure than every meta belief. I'll even grant you that for certain decisions, like how to set public health policy, some sort of QALY-based shut up and calculate approach is the right way to go.

But I don't think that's the way to change our minds about something like how we deal with homosexuality, either on a descriptive or a normative level. Nobody read Bentham and said, "you know what guys, I don't think being gay actually costs any utils! I guess it's fine". And if they had, it would have been bad moral epistemology.

If you put yourself in the mind of an average Victorian, "don't be gay" sits very securely in your web of belief. It's bolstered by what you think about virtue, religion, deontology, and even health. And what you think about those things is more or less consistent with and confirmed by what you think about everything else. It's like moral-epistemic PageRank. The "don't be gay" node has strongly weighted edges from the strongest cluster of nodes in your belief system, and they all point at each other. Compared to those nodes, meta-level stuff like utilitarianism is in a distant and unimportant backwater region of the graph. If anything, an arrow from utilitarianism to "being gay is ok" looks to you like a reason not to take utilitarianism too seriously.

In order to change your mind about homosexuality, you need to change your mind about everything. You need to move all that moral PageRank to totally different regions of the graph. And picking a meta-theory to rule them all and assigning it a massive weight seems like a crazy, reckless way to do that. If you're doing that, you're basically saying you prioritize meta-ethical consistency over all the object-level things you actually care about. It seems to me the only sane way to update is to slowly alter the object-level stuff as you learn new facts or discover inconsistencies in what you value, and to try to maintain as much reflective consistency as you can while you do it.
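
For what it's worth, the PageRank metaphor can be made literal. Here is a toy sketch - the node names come from the paragraph above, but the graph structure and all weights are made up purely for illustration, and this is just the standard power-iteration algorithm, not anything the commenter specified:

```python
# Toy "moral-epistemic PageRank": standard power iteration on a small,
# entirely hypothetical belief graph.

edges = {
    "virtue":          ["don't be gay", "religion"],
    "religion":        ["don't be gay", "virtue", "deontology"],
    "deontology":      ["don't be gay", "religion"],
    "health beliefs":  ["don't be gay", "virtue"],
    "don't be gay":    ["virtue", "religion"],      # the cluster points at itself
    "utilitarianism":  ["being gay is ok"],         # a lone edge from the periphery
    "being gay is ok": [],
}

def pagerank(edges, damping=0.85, iters=100):
    nodes = list(edges)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in edges.items():
            if outs:
                for m in outs:
                    new[m] += damping * rank[n] / len(outs)
            else:
                # Dangling node: spread its rank uniformly, as standard PageRank does.
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank

for node, score in sorted(pagerank(edges).items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {node}")
```

On a graph like this, "don't be gay" ends up with a high score simply because the dominant cluster keeps feeding rank into it, while "being gay is ok" stays in the periphery - which is the point about why a single meta-level edge changes little.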

PS. I guess I kind of made it sound like I believe the Whig theory of moral history, where modern Western values are the clearly true scion of Victorian values, and if we could just tell them what we know and walk them through the arguments, we could convince the Victorians that we were right, even by their own standards. I'm undecided on that, and I'll admit it might be the case that we just fundamentally disagree on values, and that "moral progress" is a random walk. Or not. Or it's a mix. I have no idea.

Utilitarianism does not support anything in particular in the abstract, since it always depends on the resulting utilities, which can be different in different circumstances. So it is especially unreasonable to argue for utilitarianism on the grounds that it supports various liberties such as gay rights. Rights normally express something like a deontological claim that other people should leave me alone, and such a thing can never be supported in the abstract by utilitarianism. In particular, it would not support gay rights if too many people are offended by them, which was likely true in the past.

TL;DR: Once in a while a wild extrapolation of an earlier limited model turns out to match a later, more comprehensive one. This happens in ethics, as well as in physics. Occurrences like that are amplified by selection bias and should be treated with caution.

(Also, a bunch of applause lights for utilitarianism.)

I agree with the first paragraph of the summary, but as for the second - my point is against turning on applause lights for utilitarianism, whether on the grounds of such occurrences or on any grounds whatsoever. And I also observe that ethics hasn't moved as far from Bentham as physics has from Newton, which I regard as meta-evidence that the existing models are probably insufficient at best.

my point is against turning on applause lights for utilitarianism

yet the OP states

And that's where I think we are with utilitarianism. It's very good. It's more or less reliably better than anything else.

This seems like a normative statement that only makes sense once you have a preference for utilitarianism.

I think this was mainly addressed to people who think it's the end of every question on the subject. In that context, it's toning down.

Also, "I approve of X" cannot be an attempt to shroud X in a positive halo by surrounding it by applause lights.

Utilitarianism is useful in a narrow range where we have a good utility function. The problem is easiest when the different options offer the same kind of utility. For example, if every option paid out in dollars or assets with a known dollar value, then utilitarianism provides a good solution.
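
A minimal sketch of that easy case - all options, probabilities and payouts below are made up for illustration. When everything is denominated in dollars, "maximize utility" collapses into maximizing expected value:

```python
# When every option pays out in the same currency, utilitarian calculation
# reduces to expected-value maximization over hypothetical outcomes.

options = {
    "safe bond":   [(1.00, 105.0)],                     # (probability, payout in $)
    "risky stock": [(0.60, 180.0), (0.40, 20.0)],
    "lottery":     [(0.001, 50_000.0), (0.999, 0.0)],
}

def expected_value(outcomes):
    return sum(p * payout for p, payout in outcomes)

for name, outcomes in options.items():
    print(f"{name}: E[$] = {expected_value(outcomes):.2f}")

best = max(options, key=lambda name: expected_value(options[name]))
print("choose:", best)
```

The hard part described next is precisely what this sketch assumes away: a single, agreed-upon unit in which every outcome can be priced.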

But when it comes to harder problems, utilitarianism runs into trouble. The solution strongly depends on the initial choice of utility function. However, we have no apparatus to reliably measure utility. You can easily use utilitarianism to extend other moral systems by trying to devise a utility function that provides the appropriate conclusions in known cases. If I do this for Confucianism, I'm being as utilitarian as someone doing it for Enlightenment teachings.

There is insufficient basis for making such a comparison. It's highly questionable that an ethical system can be "right" in the same way that a physical theory can be "right". There is an obvious standard by which to evaluate the rightness of a scientific theory: just check whether its factual claims accurately describe reality. The "system-is" must match the "reality-is". But a moral system is made out of oughts, not descriptive statements. The "system-ought" should match... exactly what? We don't even know, or aren't able to talk about, our "reality-oughts" without reference to either our intuitions or our moral system. If the latter, any moral system is self-referential and thus has no necessary grounding in reality; if the former, then our foundational morality is our system of moral intuitions, and an ethical system like utilitarianism merely describes or formalises it, and may be superfluous. And the entire thesis of your post is that "reality-oughts" may turn out to fly in the face of our intuitions. This undermines the only basis there is for solving the is-ought problem.

The reason you expect some morally unintuitive prescriptions to prevail seems to rely on choosing the systemically-consistent way out of extreme moral dilemmas, however repugnant it may be. Now (I should mention I'm a total pleb in physics, please contradict me if this is wrong) we generally know reality to be self-consistent by necessity, and we aspire towards building self-consistent physical models of the world, at the expense of intuitions. Doing otherwise is (?) including magic as a feature in our model of the world. In the moral realm, to accept inconsistency would be to accept hypocrisy as a necessity, which is emotionally unpalatable just like physical-system inconsistency is confusing. But it is not obvious that morality is ultimately self-consistent rather than tragic. Personally I incline towards the tragedy hypothesis. Bending over backwards for self-consistency seems to be a mistake, as evidenced by repugnant conclusions of one sort or another. The fact that your moral system pits consistency values against object-level values in extreme ethical dilemmas seems to be a strike against it rather than against those object-level values.

About utilitarianism specifically: if you have your zeitgeist-detection goggles on, it's easy to see utilitarianism as a product of its contemporary biases, shaped by a heavily economic worldview. Utility can be described as moral currency in some respects. It introduces even more glitches and absurdities than its economic counterpart, because it's a totalising ethical notion - one that aims to touch every aspect of human existence instead of being confined to the economic realm. Utility is a quantitative approach to value that attempts to collapse qualitatively different values into one common currency of how much satisfaction can be extracted from any of them. My go-to example for this is Yudkowsky's torture vs. dust specks, which fails to distinguish between bad and evil (nuances are, apparently, for unenlightened pre-moderns), upping the amount of badness to arbitrary levels until it supposedly surpasses evil. This kind of mindset is, on the most charitable understanding, a useful framework for a policy-maker who has to direct finite resources to alleviating either a common and slight health problem (say, common colds or allergies) or a rare and deadly disease - again, a problem that is economic in nature, that has a dollar value. Utilitarianism is also popular around here for being more amenable to computational (AI) applications than other ethical systems. Beyond that, to hail it as the ultimate moral system is excessive and unwarranted.

I like this post, with the possibly-heterodox interpretation that by "utilitarianism" you mean "utilitarianism as we currently conceive it". In particular, I think we're mostly working with a Newtonian utility function that probably gives the wrong answer regarding things like wireheading (unlike the Newtonian physicists, we know this isn't right, but the successor isn't quite ready).


The actual 42-ish ethical system

42-ish? (The one a city-sized computer will come up with after millions of years of calculation, which is unfortunately completely useless because the question it's the answer to was underspecified?)