[epistemic status: attempt to quickly summarize how I feel about efforts to promote more diverse moral foundations, push back against cost-benefit comparisons as "utilitarianism", or split the difference between this notion of "utilitarianism" and rival approaches]

 

I think human value is incredibly complicated; and I don't think it's strictly about "harm/care" (to use Haidt's term), or happiness, or suffering. Indeed, I suspect a utopian society would value transhuman versions of "beauty", "sanctity", etc. that sometimes have nothing to do with optimizing anyone's subjective experience.

And yet, in everyday decision-making and applied ethics, I think the utilitarians, effective altruists, welfare-maximizers, etc. are basically always right. In deciding on an ideal initial response to COVID-19, or an ideal budget breakdown for an aid program, or anything else that affects large numbers of people, it would be unimaginably foolish to try to find a "balance" between the simple "prevent as much disability and death as you can" goal and some other moral framework.

Why?

Well, I think the world is pretty messed up. I think that noticing this makes the case for the SLLRNUETSSTAWMC view pretty trivial. (SLLRNUETSSTAWMC = "superficially looks like (reflective, non-two-boxing) utilitarianism even though strictly speaking things are way more complicated".)

Imagine that you lived in a world dominated by a bunch of political coalitions with organizing principles like "all other moral concerns should be subordinate to making everything smell as much like rancid meat as possible" and "our overriding duty is to make everything smell as much like isoamyl acetate as possible".

Imagine further that this has caused people who care about things like insecticide-treated bednets and world peace to self-identify as "oriented toward harm/care", in contrast to the people who are "oriented toward smell".

The take-away from this shouldn't be "there's nothing good or valuable about making things smell nicer". The take-away should be "the people who agitate about smell in today's world are largely optimizing for totally the wrong smells, and their level of concern for smell is radically out of step with what's actually going on in the world". So out of step, in fact, that if you literally just completely ignore smell in all your altruistic activities (at least until after we've ended disease, warfare, hunger, etc.), you'll do way, way better at improving the future than if you tried to optimize some compromise between the coalitions' views.

Or, to quote from Feeling Moral (critiquing those who base humanitarian decisions on "which option feels more just and righteous and pure" rather than "which option actually helps others the most"):

You know what? This isn’t about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn’t even a feather in the scales, when a life is at stake. Just shut up and multiply.

And, from The "Intuitions" Behind "Utilitarianism":

I don't say that morality should always be simple.  I've already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up.  I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination.  And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize - that the valuation of this one event is more complex than I know. 

But that's for one event.  When it comes to multiplying by quantities and probabilities, complication is to be avoided - at least if you care more about the destination than the journey.  When you've reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as "Shut up and multiply."

Where music is concerned, I care about the journey.

When lives are at stake, I shut up and multiply.

It is more important that lives be saved, than that we conform to any particular ritual in saving them.  And the optimal path to that destination is governed by laws that are simple, because they are math.

Or: there's a difference between cognitive operations that feel affectively "cold", versus acting in ways that are actually "cold-blooded". If your child is dying of cancer and you're staying up late night after night googling research papers to try to figure out how to save their life, the process may not feel as vital and alive as fighting off a hungry lion to save your child, but... who cares how vital and alive it feels?

I'll care about that stuff after the world's immediate catastrophes have been solved. And during my off hours, when I'm actually listening to music.

11 comments

Maybe moral entrepreneurism implies that values expand and shrink in accordance with the handicap principle.

[-]TAG

And yet, in everyday decision-making and applied ethics, I think the utilitarians, effective altruists, welfare-maximizers, etc. are basically always right. In deciding on an ideal initial response to COVID-19, or an ideal budget breakdown for an aid program, or anything else that affects large numbers of people, it would be unimaginably foolish to try to find a “balance” between the simple “prevent as much disability and death as you can” goal and some other moral framework.

The standard objections to utilitarianism insist that you should find a balance between diffuse benefits and concentrated harms. In fact, the contrarian approach to COVID is to place the burden on older and less healthy people so that the rest don't have to endure lockdowns. And that is typically given a utilitarian justification! So the conventional approach is less utilitarian -- it compromises utilitarianism with a notion of equal rights, instead of offsetting the lives of older people by the number of QALYs they have left.

[-][anonymous]

offsetting the lives of older people by the number of QALYs they have left

Arguably, by failing to account for this we are simply doing the math wrong. The problem with the second part -- "the rest don't have to endure lockdowns" -- is that you need to somehow equate years spent in lockdown with years lost when an older person dies. If someone had n QALYs, how many years of lockdown equal that? That seems to be a value judgement: if we knew the ratio we could make the right decision, but how does someone decide what the ratio is?
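To make the shape of that trade-off concrete, here is a minimal sketch with the contested exchange ratio left as an explicit free parameter (every number is a made-up placeholder, not an estimate):

```python
# Toy comparison of QALYs saved by a lockdown vs. QALYs it costs.
# Every number here is a hypothetical placeholder, not an estimate.

deaths_averted = 50_000          # deaths the lockdown is assumed to prevent
qalys_per_death = 8              # assumed remaining QALYs of a typical victim
people_locked_down = 30_000_000  # population subject to the lockdown
lockdown_years = 0.5             # duration of the lockdown

qalys_saved = deaths_averted * qalys_per_death

# The contested value judgement: how many QALYs does one person-year of
# lockdown cost? The verdict flips depending on this single ratio.
for qaly_cost_per_lockdown_year in (0.01, 0.05):
    qalys_lost = people_locked_down * lockdown_years * qaly_cost_per_lockdown_year
    verdict = "net-positive" if qalys_saved > qalys_lost else "net-negative"
    print(f"ratio {qaly_cost_per_lockdown_year}: {qalys_saved:,.0f} QALYs saved vs. "
          f"{qalys_lost:,.0f} lost -> lockdown looks {verdict}")
```

With these placeholders the verdict flips on the choice of ratio alone, which is exactly the value judgement in question.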

[-]TAG

If you assume that we are doing utilitarianism, then we might be getting the maths wrong. But the same evidence could mean we are not doing utilitarianism. And there is other evidence that we are not doing utilitarianism, such as the existence of laws and rights.

[-][anonymous]

Oh sure. I meant, ok, utilitarianism has a small advantage over other frameworks. Notably, that it's actually correct. Saying you care about something like, say, "human rights" and acting according to some list of "principles" doesn't produce the optimal outcome for the thing you said you care about. The optimal outcome comes from whatever action is predicted (based on unbiased past data) to maximize the actual utility of, say, those human rights.

The advantage of other ethical frameworks is simply that you might work out the math and figure out that going on a killing spree of, say, FDA members maximizes utility. But you might be wrong. Sure, new FDA members might actually listen to evidence and approve additional COVID vaccines, but there may be extremely complex, impossible-to-model side effects. (I am assuming that the "you" doing this is a dictator like Stalin, so you are not personally going to suffer any consequence for purging the FDA.) From a utilitarian perspective it's correct -- even a 50% chance to save 100k lives would be worth 1,000 deaths -- but the new bureaucrats might kill even more people (by, for example, giving you reports full of pseudoscience and lying to you, which is usually what happens in dictatorial regimes).
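As a back-of-the-envelope version of that comparison, using only the hypothetical figures from the paragraph above:

```python
# Naive expected-value comparison for the hypothetical purge above.
# All numbers are the hypotheticals from the comment; only the arithmetic is real.

p_success = 0.5                  # assumed chance the purge actually saves lives
lives_saved_if_success = 100_000
lives_cost = 1_000               # deaths the purge itself causes

expected_lives_saved = p_success * lives_saved_if_success   # 50,000
net_expected_lives = expected_lives_saved - lives_cost      # 49,000

print(f"expected lives saved: {expected_lives_saved:,.0f}")
print(f"net expected lives:   {net_expected_lives:,.0f}")
# The whole dispute is whether this naive calculation omits hard-to-model
# downstream effects (the "new bureaucrats might kill even more people" worry).
```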

Let me try an example:

rational consequentialism: "I have reviewed a large amount of the data, and using a rational algorithm, determined that the best action with respect to consequences for my well-being is to murder my grandmother".

rational utilitarianism: "I have reviewed a large amount of the data, and using a rational algorithm, determined that the best action with respect to good consequences for the majority of my fellow humans is to murder my grandmother".

[assorted other ethical frameworks]: it's wrong to murder your grandmother because it goes against principle #n.  it's wrong to murder your grandmother because a fair poll of your community members would be against it.  It's wrong to murder your grandmother because the law says it is.

I think I have it right.

Note that for vehicle autonomy, the very same situation can and will come up. "using a rational algorithm, trained on a large amount of data, the best action with respect to consequences for the well-being of the driver is to accelerate at maximum throttle into traffic, evading the cross traffic, to prevent a collision with the out-of-control truck about to squash this car".

[assorted other ethical frameworks]: it's wrong to accelerate into traffic because it endangers other drivers.  It's wrong to accelerate into traffic because the law says so.

[-]TAG

Oh sure. I meant, ok, utilitarianism has a small advantage over other frameworks. Notably, that it's actually correct.

That has never been shown.

Saying you care about something like, say, "human rights" and acting according to some list of "principles" doesn't produce the optimal outcome for the thing you said you care about.

That's question begging. You have to define "optimal outcome" as greatest utility to come to that conclusion. If you define optimal outcome as "greatest utility without violating rights", then it turns out utilitarianism isn't correct.

The advantage of other ethical frameworks is simply that you might work out the math and figure out that going on a killing spree of, say, FDA members maximizes utility. But you might be wrong.

If you can't calculate utility, then you aren't doing utilitarianism.

Like every other defender of utilitarianism, you have switched to defending rule consequentialism.

[-][anonymous]

Consequentialism is a superset of utilitarianism: "only the consequences matter" vs. "we must seek good consequences for the greatest number".

In practice they are identical for actors with good intentions. Under both ethical frameworks, the most despicable action is allowed, and is the right thing to do, IF, based on the data, it will result in the best predicted outcome.

I have inserted two assumptions: we don't know ahead of time the consequences of an action, merely what we predict they will be; and some consequences are so indirect they can't be modeled, so we are forced to ignore them.

By DEFINITION though you cannot take an action better, in a real universe with limited knowledge and cognition, than the action predicted to have the "best" outcomes.

Let me try an example:

rational consequentialism: "I have reviewed a large amount of the data, and using a rational algorithm, determined that the best action with respect to consequences for my well-being is to murder my grandmother".

rational utilitarianism: "I have reviewed a large amount of the data, and using a rational algorithm, determined that the best action with respect to good consequences for the majority of my fellow humans is to murder my grandmother".

[assorted other ethical frameworks]: it's wrong to murder your grandmother because it goes against principle #n. it's wrong to murder your grandmother because a fair poll of your community members would be against it. It's wrong to murder your grandmother because the law says it is.

I think I have it right.

Note that for vehicle autonomy, the very same situation can and will come up. "using a rational algorithm, trained on a large amount of data, the best action with respect to consequences for the well-being of the driver is to accelerate at maximum throttle into traffic, evading the cross traffic, to prevent a collision with the out-of-control truck about to squash this car".

[assorted other ethical frameworks]: it's wrong to accelerate into traffic because it endangers other drivers. It's wrong to accelerate into traffic because the law says so.

You can sort of see how I feel on this. While I also feel a 'shudder' at the thought of murdering someone's grandmother, ultimately, if you actually want to do the greatest good for the greatest number - if your goal is to actually achieve whatever your principles are, versus merely giving the appearance of doing so - it appears pretty clear what algorithm you have to use.
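A minimal sketch of the bare algorithm being pointed at here; `predict_outcome` and `utility` are hypothetical placeholders, and in practice they carry all of the difficulty the rest of this thread argues about:

```python
# "Pick the action whose predicted outcome has the highest utility."
# `predict_outcome` and `utility` are hypothetical placeholders standing in
# for the hard parts: a world-model and a value function.

def choose_action(actions, predict_outcome, utility):
    """Return the action whose predicted outcome scores highest."""
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Trivial demo with made-up labels and utilities.
outcomes = {"donate": "lives saved", "hoard": "status quo"}
utilities = {"lives saved": 10.0, "status quo": 0.0}
print(choose_action(["donate", "hoard"], outcomes.get, utilities.get))  # -> donate
```

Everything contentious lives inside `predict_outcome` and `utility`; the selection loop itself is trivial.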

[-]TAG

It's not a simple choice between doing the best thing versus doing something else, because you can't calculate the best thing. You are using heuristics, not algorithms.

Consequentialism is a superset of utilitarianism: "only the consequences matter" vs. "we must seek good consequences for the greatest number".

There are multiple forms of utilitarianism and of non-utilitarian consequentialism. In my previous comments I was talking about rule consequentialism.

In practice they are identical for actors with good intentions

Rule consequentialism is a substantively different theory from utilitarianism, notably giving a different answer to the trolley problem.

I have inserted two assumptions: we don't know ahead of time the consequences of an action, merely what we predict they will be

One of the ways rule consequentialism differs from utilitarianism is in how it deals with that limitation. RC suggests following rules that generally lead to good consequences ("don't push the fat man, because killing people generally leads to bad consequences"). If utilitarianism means following your own judgement, however flawed, it will give worse results than following rules, for sufficiently good rules. Utilitarianism as the claim that you should always do what is best is straightforward as a theoretical claim, but much more complex practically... and ethics is practical.

By DEFINITION though you cannot take an action better, in a real universe with limited knowledge and cognition, than the action predicted to have the “best” outcomes

You cannot take an action better, in a real universe with limited knowledge and cognition, than the action predicted to have the “best” outcomes by a perfect predictor. But if you are an imperfect predictor, you can do better than your own judgement.
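Here is a toy simulation of that claim, under loudly-stated assumptions: case-by-case judgement is modeled as an unbiased but noisy estimate of each action's true utility, while the rule always delivers an action that is merely close to optimal. Every parameter is invented purely for illustration:

```python
import random

# Toy model of "imperfect predictor vs. a good rule", purely illustrative.
# Each decision offers N candidate actions with true utilities in [0, 10].
# - "Judgement" estimates each utility with noise and picks the estimated best.
# - "Rule" prescribes an action that is close to (but short of) the true best,
#   standing in for a rule distilled by better predictors than you.
# Every parameter is a made-up assumption chosen only to illustrate the point.

random.seed(0)
N_ACTIONS = 10
NOISE_SD = 6.0    # how badly the imperfect predictor misjudges each action
RULE_GAP = 1.0    # how far the rule's prescribed action falls short of optimal
TRIALS = 100_000

judgement_total = 0.0
rule_total = 0.0

for _ in range(TRIALS):
    true_utils = [random.uniform(0.0, 10.0) for _ in range(N_ACTIONS)]

    # Case-by-case judgement: argmax over noisy, unbiased utility estimates.
    estimates = [u + random.gauss(0.0, NOISE_SD) for u in true_utils]
    judgement_total += true_utils[estimates.index(max(estimates))]

    # Rule-following: take an action known to be near-optimal in general.
    rule_total += max(true_utils) - RULE_GAP

print(f"average utility, case-by-case judgement: {judgement_total / TRIALS:.2f}")
print(f"average utility, following the rule:     {rule_total / TRIALS:.2f}")
# With noisy enough estimates, the rule beats your own judgement on average;
# shrink NOISE_SD and the comparison flips back.
```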

it’s wrong to murder your grandmother because it goes against principle #n.

Rationalists love to portray deontology as a matter of blindly following arbitrary rules ... But where does principle N come from? Maybe it was formulated by a better predictor than you ... maybe it is a distillation of human experience over the ages.

it’s wrong to murder your grandmother because a fair poll of your community members would be against it.

Is that supposed to be obviously wrong? Why shouldn't collective judgement be better than individual judgement?

It’s wrong to murder your grandmother because the law says it is.

Is that supposed to be obviously wrong? Why shouldn't the law be a formalisation of collective judgement?

it appears pretty clear what algorithm you have to use.

You can't compute the algorithm that gives the best answer, so it is unclear what approximation you should use instead, and also unclear how much you should be relying on your own judgement.

[-][anonymous]

You can't compute the algorithm that gives the best answer, so it is unclear what approximation you should use instead, and also unclear how much you should be relying on your own judgement.

This I think is our point of divergence. I am not talking about "using your own judgement". I am talking about collecting sufficient data and using an algorithm of some type. Also, you then validate your predictor's accuracy (how best to go about this is somewhat debated at the moment).

Note that the sophistication of the predictor you need depends on the difficulty of the problem. Modeling a falling rock? A second- or third-order curve fit should match the data to within the margin of observation error, and whatever method you use to validate your predictor should show it is nearly perfect. Modeling the consequences of murdering your grandmother? Fair enough, I will concede that current methods can't do this.
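For the falling-rock half of that contrast, the "fit a predictor, then check it against observation error" step looks roughly like this sketch (the data are simulated and the noise level is an assumption):

```python
import numpy as np

# Sketch of "fit a predictor, then validate it against observation error"
# for the falling-rock case. The data are simulated; the noise level is an
# assumption standing in for measurement error.

rng = np.random.default_rng(0)
g = 9.81                        # m/s^2
t = np.linspace(0.0, 2.0, 21)   # observation times (s)
noise_sd = 0.05                 # assumed measurement noise (m)
heights = 0.5 * g * t**2 + rng.normal(0.0, noise_sd, t.shape)

# Second-order curve fit, as suggested above.
coeffs = np.polyfit(t, heights, deg=2)
predicted = np.polyval(coeffs, t)

residual_sd = np.std(heights - predicted)
print(f"fitted quadratic coefficient: {coeffs[0]:.3f} (true value {0.5 * g:.3f})")
print(f"residual spread: {residual_sd:.3f} m vs. assumed noise {noise_sd} m")
```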

However, if you can, then it's the correct system to use. As an example, consider whether or not the police should be encouraged to kill the moment they feel threatened. From all of these examples - the law, a community poll, etc. - the consensus opinion of the community appears to disagree with the data collected from European countries, where police are not encouraged to kill, and they kill far fewer people without a corresponding increase in police casualties. Massive numbers of people in the legislature and the community are just wrong.

[-]TAG

This I think is our point of divergence. I am not talking about “using your own judgement”. I am talking about collecting sufficient data and using an algorithm of some type

Who is it that you are talking about collecting the data?

However, if you can, then it's the correct system to use

That just means that utilitarianism is theoretically correct, in the sense that it gives the right answer given all the data and infinite compute. I've already addressed that: ethics is practical. It's intrinsically about how to solve real-world problems.

As an example, consider whether or not the police should be encouraged to kill the moment they feel threatened. From all of these examples - the law, a community poll, etc. - the consensus opinion of the community appears to disagree with the data collected from European countries, where police are not encouraged to kill, and they kill far fewer people without a corresponding increase in police casualties. Massive numbers of people in the legislature and the community are just wrong.

The claim that ethics needs a deontological component is compatible with the claim that some existing deontological systems are flawed from the consequentialist point of view. That is a way that rule consequentialism differs from absolute deontology. Unfortunately, the rationalsphere keeps criticising absolute deontology as though it's the only kind.

And wanting to replace flawed rules with better rules isn't at all the same as wanting to abandon rules altogether. Rule consequentialism isn't utilitarianism, and utilitarianism isn't just basing ethics on consequences. It's noticeable that a lot of people who say they are utilitarians aren't in the business of breaking laws or wanting to abolish all laws, for all that they insist that deontology is crazy.

[-]TAG

You know what? This isn’t about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more

Is that objective worth, or a feeling you have?