Some thought experiments follow this template:

  1. We have a moral intuition
  2. We work out what this intuition implies
  3. We check how we feel about this implication, and it feels counter-intuitive

Some people then bite the bullet on (3). But bullets sometimes (always?) have a counter-bullet.

You can reverse these thought experiments: take ~(3) as your starting moral intuition, and then derive ~(1), which will be counter-intuitive.
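
Schematically (my notation, not part of the original template): if intuition (1) plus the bridging step (2) entails (3), then keeping (2) while rejecting (3) commits you to rejecting (1):

$$\big(P_1 \wedge P_2 \Rightarrow P_3\big) \;\text{ entails }\; \big(\neg P_3 \wedge P_2 \Rightarrow \neg P_1\big)$$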

For example, you can start with:

  1. I would care about saving a drowning person even if it came at the cost of ruining my suit
  2. There are a lot of metaphorically drowning people in the world
  3. Therefore I should donate all my money to effective poverty alleviation charities

This is called "shut up and multiply".

But you can also use the reverse:

  1. I don't want to donate all my money to effective poverty alleviation charities
  2. Saving a drowning person would cost even more, since it would also ruin my suit
  3. Therefore I shouldn't save a drowning person

This is called "shut up and divide" (also related: Boredom vs. Scope Insensitivity).

Step (2) might be eliminating a relevant feature that generates the counter-intuition, or it might be opening our eyes to something we weren't seeing. And maybe for some thought experiments you find both the assumption and the conclusion intuitive (or both counterintuitive). But that's not the point of this post.

Here I'm just interested in seeing what the reverses of ethical thought experiments look like. I'll post more examples as answers. I would like to know which other ethical thought experiments have this pattern -- that is, thought experiments that start from an intuition and derive a counter-intuition, and which can be reversed to instead conclude that the initial assumption was the wrong one.

Update: As I was writing some of these, I realized that some ethical thought experiments are presented as a clash of intuitions (so the "reverse" is part of the original presentation), whereas others seem to be trying to persuade the reader to bite the bullet on a certain counter-intuition, and omit the reverse version.


9 Answers

Mati_Roy

The violinist

Original:

  1. We should save the violinist
  2. Fetuses are like violinists
  3. Therefore we should save fetuses

Reverse:

  1. We don't care about fetuses
  2. Violinists are like fetuses
  3. Therefore we don't care about violinists (metaphorically)

Olomana

I would like to know which other ethical thought experiments have this pattern...

Isn't the answer just "all of them"? An implication and its contrapositive are logically equivalent.

If (if X then Y) then (if ~Y then ~X).  Any intuitive dissonance between X and Y is preserved by negating them into ~X and ~Y.
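
For reference, this is just the contraposition equivalence from propositional logic, visible by rewriting the implication in material form (standard logic, nothing specific to this post):

$$(X \to Y) \;\equiv\; (\neg X \vee Y) \;\equiv\; (Y \vee \neg X) \;\equiv\; (\neg Y \to \neg X)$$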

Yeah that makes sense

Dagon

Many of these calculations get more consistent if you bite just one fairly large bullet: sub-linear scaling (I generally go with logarithmic) of value.  Saving a marginal person at the cost of ruining a marginal suit is a value comparison, and the value of both people and suits can vary pretty widely based on context.  

The hardest part of this acceptance is that human lives are neither infinite nor incomparable in value. I also recommend accepting that value is personal and relative (each agent has a different utility function, with different coefficients for the value of categories and of individual others), but that may not be fully necessary to resolve the simple examples you've given so far.
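
To illustrate one possible reading of that (treating "sub-linear scaling" as sub-linear aggregation of many identical small harms; the numbers and the log1p form are my own illustrative assumptions, not Dagon's), here is a toy sketch in Python:

```python
import math

# Toy comparison: total disvalue of many identical small harms vs. one big harm,
# under linear aggregation and under a hypothetical logarithmic aggregation.
dust_speck = 1e-9   # disvalue of one speck, in arbitrary units
torture = 1.0       # disvalue of 50 years of torture, in the same units
n_specks = 10**100  # a very large (but finite) number of specks

linear_total = n_specks * dust_speck           # ~1e91: dwarfs the torture
log_total = math.log1p(n_specks) * dust_speck  # ~2.3e-7: does not

print(linear_total > torture)  # True  -> linear aggregation says the specks are worse
print(log_total > torture)     # False -> logarithmic aggregation says the torture is worse
```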

Mati_Roy

Infanticide

Original:

  1. We don't care about killing a baby before birth
  2. A baby 1 minute after birth is almost the same as a baby 1 minute before birth
  3. Therefore we don't care about killing a 1-minute-old baby

Reversed:

  1. We care about killing a 1-minute-old baby
  2. A baby 1 minute after birth is almost the same as a baby 1 minute before birth
  3. Therefore we care about killing a baby before birth

Isn't the original argument here just the Sorites "paradox"?

  1. We don't care about killing a single fertilized human cell
  2. A human of any age is almost the same as a human of that age minus one minute
  3. Therefore we don't care about killing a human of any age

This proves too much. No ethical system I'm familiar with holds that because (physical) things change gradually over time, no moral rule can distinguish two things.

Mati_Roy
Ah, I actually had just come up with that one (I'm now realizing "original" wasn't the right word for it) -- thanks for bringing up this "paradox"!

Mati_Roy

The Non-identity problem

Original:

  1. We only care about things if they are bad/good for someone
  2. Using a lot of resources isn't bad for any specific person in the future; it just changes who lives in the future
  3. Therefore we don't mind that people in the future are having a bad time because of our consumption

Reversed:

  1. We care that people in the future are having a bad time because of our consumption
  2. Consuming isn't bad for specific people in the future, it just changes who lives in the future
  3. Therefore we don't only care about things if they are bad/good for someone, but also about what kind of lives we bring into existence

Mati_Roy

Dust specks vs torture

I feel like this one was presented as a clash of 2 intuitions, so the "reversed" version is also part of the original presentation.

Original:

  1. We prefer X people experiencing Y pain to 1,000 people experiencing 2*Y pain, AND this preference holds for all X, Y in the real numbers
  2. This can be chained together multiple times (see the sketch below)
  3. Therefore we prefer 1 person experiencing 50 years of torture to a googolplex people having specks of dust in their eyes

Reversed:

  1. We prefer a googolplex people having specks of dust in their eyes to 1 person experiencing 50 years of torture
  2. There's some threshold of pain above which we care lexically more
  3. Therefore we care more about 1 person experiencing slightly more pain than this threshold than about a large number of people experiencing slightly less pain than it

keyword to search: lexical threshold negative hedonistic utilitarianism
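
To make the chaining in (2) concrete (reading the step as trading X people at pain Y for 1,000*X people at pain Y/2, per the correction in the comments below), here is a rough back-of-the-envelope sketch; the numeric cutoff for "dust-speck-level" pain is an arbitrary assumption of mine:

```python
# Chain: start from 1 person at torture-level pain, and at each step trade
# N people at pain P for 1,000*N people at pain P/2. By premise (1), each
# earlier state in the chain is preferred to the next, so by transitivity the
# starting state (torture) is preferred to the final one (many dust specks).
torture_pain = 1.0       # arbitrary units
dust_speck_pain = 1e-12  # arbitrary cutoff for "barely noticeable" pain

people, pain, steps = 1, torture_pain, 0
while pain > dust_speck_pain:
    people *= 1000
    pain /= 2
    steps += 1

print(steps)                 # 40 halvings suffice
print(len(str(people)) - 1)  # ~10^120 people -- far fewer than a googolplex (10^(10^100))
```

Since a googolplex people with dust specks is, on the same premises, at least as bad as ~10^120 people with dust specks, the chain delivers the stated conclusion.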

The original 1 seems pretty clearly false here if X >> 1000 for basically any value of Y.

Mati_Roy
Whoops, I meant 1,000*X
seed
And Y/2 pain, probably? (Or the conclusion doesn't follow.)
Mati_Roy
Oops, right!
Mati_Roy
Ahhh, yep, thanks

Mati_Roy

Experience machine

Original:

  1. We only care about our happiness
  2. A hypothetical happiness machine could bring us the most happiness
  3. Therefore we want to live in the happiness machine

Reversed:

  1. We don't want to live in a happiness machine
  2. A happiness machine only brings us happiness
  3. Therefore we care about other things than happiness

Mati_Roy

Trolley problem / transplant

Original:

  1. We want to take actions to save more people
  2. Survival lotteries save more people just like pulling the lever does
  3. Therefore we support survival lotteries

Reversed:

  1. We don't support survival lotteries
  2. Pulling the lever is an action that changes who dies, just like the survival lottery does
  3. Therefore we don't support pulling the lever

Could do the same with pulling a lever vs pushing a person

Mati_Roy

Utility monster

Original:

  1. We care about increasing happiness
  2. If there were a being with by far the highest capacity for happiness, giving it resources might be the best way to increase happiness, even at the cost of everyone else
  3. Therefore we care about the utility monster the most (which violates the egalitarian intuition)

Reversed:

  1. We care about each being equally
  2. If there were a being with by far the highest capacity for happiness, we still wouldn't give them more resources
  3. We don't care about increasing total happiness

2 comments

This type of argument is called "proof by contradiction". You start by supposing X is true. Then you do a bunch of logic which assumes X is true. If, at the end, you prove something false, then X is false. Proofs by contradiction are frequently used in mathematics, where (compared to morality) it's easy to ensure your logic remains ironclad.

I feel like this is something different; X isn't proven true or false here -- we just prove that if X then Y, and then also if ~Y then ~X