I was discussing utilitarianism, charitable giving, and related ideas with someone today, and I came up with this hybrid of the trolley problem (particularly the fat man variation) and the article by Scott Alexander/Yvain about using dead children as a unit of currency. It's not terribly original, and I'd be surprised if no one on LW had thought of it before.

You are offered a magical box. If you press the button on the box, one person somewhere in the world will die, you get $6,000, and $4,000 is donated to one of the top-rated charities on GiveWell.org. At the $800-per-life-saved figure, this charity gift would save five lives, which is a net gain of four lives and $6,000 to you. Is it moral to press the button?
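To make the arithmetic explicit, here is a quick sketch; both cost-per-life figures are rough assumptions about charity effectiveness (see the edit below), not settled numbers:

```python
# Net effect of one button press, under an assumed cost-per-life-saved.
def press_outcome(payout=6000, donation=4000, cost_per_life=800):
    lives_saved = donation / cost_per_life
    net_lives = lives_saved - 1  # one random person dies per press
    return lives_saved, net_lives, payout

print(press_outcome(cost_per_life=800))   # (5.0, 4.0, 6000)
print(press_outcome(cost_per_life=2000))  # (2.0, 1.0, 6000)
```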

All of the usual responses to the trolley problem apply. To wit: it's good to have heuristics like "don't kill." There are arguments about establishing Schelling points with regard to not killing people. (The Schelling point argument doesn't work as well in a case like this, with anonymity, privacy, and randomization of the person who gets killed.) Eliezer argued that for a human, being in the trolley problem is extraordinarily unlikely, and that while he would be willing to acknowledge that killing the fat man would be appropriate for an AI in that situation, it would not be for a human.

There are also plenty of arguments against giving to charity. See here for some discussion of this on LessWrong.

I feel that the advantage of my dilemma is that, in the original, extreme altruism faces a great deal of motivated cognition against it, because it implies that you should be giving much of your income to charity. In this dilemma, you want the $6,000, and so are inclined to be less skeptical of the charity's effectiveness.

Possible use: Present this first, then argue for extreme altruism. This would annoy people, but as far as I can tell, pretty much everyone gets defensive and comes up with a rationalization for their selfishness when you bring up altruism anyway.

What would you people do?

EDIT: The $800 figure is probably out of date; $2,000 is probably more accurate. However, it's easy to simply increase the amount of money at stake in the thought experiment.

Edit 2: I fixed some swapped-around values, as kindly pointed out by Vaniver.

Reversal test: If this miracle of people dying and corresponding sums of money magically appearing in charity funds was commonplace, what debate would follow a hypothetical technology that terminates the miracle?

It's possible to buy life insurance and specify a charity as the beneficiary.

Note that it could not be so commonplace as to reduce the marginal value of money in the coffers of these charities.

Or, perhaps, it has been going on long enough that the last batch of people saved have had time to breed. If so, the button-pushers will struggle to keep up with the exponential growth.

Perhaps goods with innate value, such as difficult-to-synthesise medicines, are spontaneously generated instead of money.

Note that the Reversal Test is written with the assumption of consequentialism, where there's an ideal value for some trait of the universe, whereas the whole point of the trolley problem is that the only objection is deontological, assuming the hypothetical pure case where there are no unintended consequences.

However, the Reversal Test of things like "prevent people from pulling the lever" is still useful if you want to make deontologists question the action/inaction distinction.

I can't hear about pushing buttons for money without thinking of this funny video.

The original, full-length movie sucked something fierce. This version is a lot better.

The obvious solution being to say, "I will continue to press the button unless you give me 100 million dollars."

I will willingly admit that I would press the shit out of that button.

Bravo! Button pressers unite.

Serious question:

I know a variety of ways to make extra money that would be considered unethical by deontological standards. Most of them involve dark arts manipulation and deception. Should I permit myself to raise money unethically if it increases the amount I can donate to efficient charity?

I know a variety of ways to make extra money that would be considered unethical by deontological standards. Most of them involve dark arts manipulation and deception.

Sounds like marketing. Or business in general. Welcome to the real world.

Should I permit myself to raise money unethically if it increases the amount I can donate to efficient charity?

I don't publicly advocate that you do any illegal stuff. Because publicly advocating illegal stuff is probably illegal itself or at least would make me look very suspicious.

But hey, when it comes to buttons... Press away!

You don't think pushing a killing button violates laws against murder?

Highly unlikely. A random unknown person, probably outside of your jurisdiction "will die"? We don't even have reason to believe the button press on the magical box causes the death rather than being associated via a newcomblike prediction. This is a test of ethics, not much of a legal problem at all.

It seems like, in the event that such buttons (paying out money exclusively to the person pushing) became widespread and easily available, governments ought to band together to prevent the pressing of those buttons, and the only reason they might fail to do so would be coordination problems (or possibly the difficulty of proving that the buttons kill people), not any serious position that button-pushing is OK. If they failed to do so (keeping in mind these are buttons that don't also do the charity thing), that would inevitably result in the total extermination of the human race (assuming the buttons paid out goods with inherent value, so that the collapse of society and the shortage of other humans wouldn't interfere with pressing them).

However I agree with your point that this is about ethics, not law.

So if the state of Georgia offers me $20,000 to execute Troy Davis, I can take the money, donate $10,000 to an efficient charity, and enjoy the remaining $10,000 with a clean conscience?

I don't believe I said that anywhere, much less here. In the immediate context, my comment was ethically agnostic and made only an estimate of legal consequences.

What is it about moral questions that makes people so desperate to play reference class tennis?

As with most trolley problems, I have to separate out "what's the right thing to do?" from "what would I do?"

In the situation as you described it, with the added proviso that there is no other unstated-but-relevant gotcha, the right thing to do is press the button. (By "gotcha" here I mean, for example, it doesn't turn out that by pressing the button I also cause millions of other people to suffer, and you just didn't mention that part.)

Were I in that situation, of course, I would be solving an entirely different problem, roughly statable as "I am given a box with a button on it and am told, by a not-particularly-reliable source, that these various conditions apply, yadda yadda." I would probably conclude that I'm in some kind of an ethical research lab, and try to decide how likely it was that the researchers would actually kill someone, and how likely it was that they would actually give me money, and how likely it was that video of me pressing the "kill a random person" button would appear on YouTube, etc. Not really sure what I would ultimately conclude.

If I were in the situation and somehow convinced the situation was as you described it and no gotchas existed, I probably wouldn't press the button (despite believing that pressing the button was the right thing to do) because I'd fear the discomfort caused by my irrational conscience, especially if the person who died turned out to be someone I cared about. But it's hard to say; my actual response to emotional blackmail of this sort is historically very unpredictable, and I might just say "fuck it" and press the button and take the money.

the right thing to do is press the button.

Why? Do we really need more people on this planet? I would be more likely to press the button in a net-neutral case (one saved, one dies, more money for me), provided your other conditions (not an experiment, not a joke, full anonymity, etc.) hold.

Alternative rephrasing: $4000 dollars is given to your choice of either one of the top-rated charities for saving lives, or one of the top-rated charities for distributing birth control (or something else that reduces population growth).

That means a pure reduction on both sides in the number of people on the planet and, assuming there are currently too many people on the planet, a net reduction in suffering in the long run, as there are fewer people competing with each other, plus the good it does in the short run to women who don't have to go through unwanted pregnancies and raising the children, with all the benefits associated with that (like being able to devote more resources to their other children, or possibly pursuing careers further, or the like).

I'm not prepared at the moment to have a serious discussion about whether extending lives is better than terminating them, but I certainly agree that if the charities which receive the cash are going to do something wrong with it then pressing the button that gives them the cash is correspondingly wrong and I ought not do it.

As for the actual question: not all lives are equal, and the sort of life you can extend for $2,000 (or your old $800 figure) is probably not worth .2-.5 times as much as a life chosen at random across the Earth (.2 and .5 being the break-even ratios when the $4,000 saves five or two lives, respectively), and the difference is probably larger than the few thousand dollars you pick up.

That's a good answer. However, I can just change the dollar values to increase it to 30 people saved, or whatever. I wouldn't call it an LCPW (least convenient possible world) issue, but it's an easy modification to the thought experiment.

However, I can just change the dollar values to increase it to 30 people saved, or whatever.

Sure, but it seems to me that the effectiveness of the box is (should be) a critical factor in deciding whether or not to use it. If the number of people saved is small enough, it seems like the box shouldn't be used; if the number of people saved is big enough, it seems like the box should be used. The point at which we switch from not using it to using it is a number we should be willing and able to calculate before we use the box.
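A minimal sketch of that calculation, assuming the only input is how much a charity-saved life is worth relative to the random life the button takes (that relative value is the made-up knob here):

```python
# Break-even lives saved per press, if a charity-saved life is worth only
# some fraction of the (random) life the button takes.
def break_even_lives(relative_value):
    return 1 / relative_value

for rv in (1.0, 0.5, 0.2):
    print(f"relative value {rv}: press only if > {break_even_lives(rv):g} lives saved")
```

On the original numbers (five lives per press), pressing wins whenever the relative value is above 0.2, which is where the .2-.5 range above comes from.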

LCPW wouldn't have this issue.

It's not clear to me that that objection is meaningful in this case. I'm not ducking the question; I'm making the statement that lives have dollar values associated with them, and you need to use those values for judgments rather than counting heads. If, in the LCPW, everyone were equally valuable, then my approach gives the answer I would expect it to give: it's better for 5 people to live than 1.

I'm comfortable with how this reasoning extends. Consider a real-world example: instead of a magic box, this is an actual policy question: "Should we run a polluting power plant which is personally profitable and powers a hospital, thus saving lives, even though we know the pollution will kill someone?" Put a dollar cost on the life lost due to pollution, put dollar values on the lives saved by the hospital, add the values to your profit and subtract the life lost, and you have a cost-benefit analysis.
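A minimal sketch of that cost-benefit framing; every number below is invented for illustration:

```python
# All figures are hypothetical, for illustration only.
VSL = 7_000_000            # assumed dollar value of a statistical life
lives_saved_per_year = 3   # by the hospital the plant powers
lives_lost_per_year = 1    # to the plant's pollution
profit_per_year = 2_000_000

net = lives_saved_per_year * VSL + profit_per_year - lives_lost_per_year * VSL
print(f"net benefit: ${net:,} per year")  # positive => run the plant, on this analysis
```

A pollution tax of lives_lost_per_year * VSL per year would force that last term onto the operator's own books, which is the policy point made just below.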

Rather than being a vague ethical question, that highlights some practical policy advice. Pollution should be taxed, so that the cost of the life lost due to pollution is forced to be part of my calculation. That's also advice I live by, whereas I don't know if I would push fat men towards trains: I think that the optimal level of pollution is not 0.

How about adding and subtracting "years of averagely happy life"?

People have different willingness and ability to pay for life and happiness, so you still ought to adjust by that. But if you have an option that makes you richer and on net increases the cost-adjusted happiness/life of other people, then that sounds like an option worth considering.

Where'd you get the $800 per life saved figure? GiveWell.org says of their top charity:

We estimate that just under $2,000 spent on LLIN distributions saves a life. This does not include other benefits of ITNs. And this doesn't count donated money that is not spent on LLIN distributions.

I got it from Yvain's dead babies article. If that number is wrong, feel free to substitute other monetary values of your choice.

I think you have the 4,000 and 6,000 numbers swapped, or your description later is mistaken.

I don't think so. What did you think was my mistake?

All emphasis mine. First part:

you get $4000, and $6,000 is donated to one of the top rated charities on GiveWell.org.

Later part:

In this dilemma, you want the $6,000

Oh yeah, you're right. Thanks.

I read it the same way, I think. You say initially that you pocket 4k and 6k is given to charity. Later you talk as if you are pocketing the 6k. My guess is that the mistake is in the original description, since you later say that five lives are saved at $800 per life.

You are offered a magical box. If you press the button on the box, one person somewhere in the world will die, you get $4000, and $6,000 is donated to one of the top rated charities on GiveWell.org. According to the $800 per life saved figure, this charity gift would save five lives, which is a net gain of four lives and $6,000 to you. Is it moral to press the button?

Yes. Sure, it increases the population, which is a shame, but I can do plenty of good with that $6k (including buying delicious cookies and/or donating to the SIAI).

Increasing the population is not a shame if the average human is wealth-producing. Which they are.

This is true IFF you value wealth above all other measures.

If you value net-happiness, for example, it's not true.

I think if you value net-happiness it's still not a shame, although it may be a shame if you value median-preference-satisfaction.

There are arguments that valuing net-happiness IN OUR CURRENT WORLD means you'd want to increase the human population.

However, in an arbitrary world, where wealth-production correlates with human population, there's no reason to assume that net-happiness would also correlate with wealth-production.

IOW: his conclusion (it's not a shame) has a truth value that depends on your value system, but his reasoning is true only if you have one very specific value system (you value near-future wealth-production as your terminal value).

Or, for that matter, if you value probability of human life not going extinct.

Wrong.

Subsequently, Kahneman and Deaton have found that while life satisfaction, a judgment about how one's life is going overall, does continue to rise with income, the quality of subjective experience improves until an annual income of about $75K and then plateaus. They conclude that "high income buys life satisfaction but not happiness [i.e., subjective experiential quality], and that low income is associated both with low life evaluation and low emotional well-being."

What's average world income? About $8K per year! The typical experience of a human being on Earth is "low life evaluation and low emotional well-being" due to too little money. How many times does global GDP need to double in order to put the average person at Kahneman's $75K hedonic max-out point? Three and change. But life satisfaction ain't worth nothin', and it keeps rising. And, of course, rising income doesn't just correlate with rising happiness, but with better health, greater longevity, more and better education, increased freedom to choose the sort of life one wants, and so on. If it's imperative to improve the health, welfare, and possibilities of humanity, growth is imperative.

Why Economic Growth Totally Is Imperative
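For what it's worth, the "three and change" figure follows from the quoted numbers: log2(75,000 / 8,000) ≈ 3.23 doublings of average income.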

This quote does not mean "anyone earning less than happiness-plateau-level money has a life not worth living".

Do you know how the $75,000 figure was reached? More specifically, I'm wondering where the holder of the $75,000 is located. Is it specifically an amount of $75K at which life satisfaction plateaus, or is it a certain standard of living that would require more or less funds if the random plateaued person were in a different city? If they both earn the same amount, does someone from Oklahoma City have an equivalent amount of satisfaction to someone in Palo Alto?

If you press the button on the box, one person somewhere in the world will die, you get $6000, and $4,000 is donated to one of the top rated charities on GiveWell.org.

Remove the randomisation of the victim and you have the perfect artefact for a Death Note rewrite.

I would have problems pushing the button just because of popular culture cached thoughts about what would happen after I pushed the button.

I think the problem here is that our intuitive moral judgement decides to deny the premise. How do you make a magic box with the properties you state without being inherently amoral? How would you practically arrange for someone, somewhere to die when a button is pressed, unless you were someone who inherently either liked killing people or didn't care about it? Our intuition is that there has to be a better way to save lives than making a deal with the devil...

Our moral sense reacts to the social situation. You all know the fat man variation, where you are offered the option of pushing a fat man off a bridge. This seems morally repellent to most people. Make one small change - ask the fat man to jump off the bridge instead of pushing him off - and the whole dilemma changes dramatically in its moral nature.

without being inherently amoral

You mean immoral.

Thought experiments are always fairly unrealistic, which is one way of explaining why you wouldn't save the five lives in the original trolley problem: there's no way you'd be sure enough that the situation was as clear-cut as it is presented in the thought experiment. This is the reason why we use the "no killing" rule, and also why people are uncomfortable with the "kill one person" option.

You didn't say if you'd press the button.

There's a more practical one, or at least one that doesn't require being deliberately set up by a diabolical figure, that's quite similar to the trolley one. Details are at http://www.friesian.com/valley/dilemmas.htm, but briefly, and edited to be more clear-cut:

An underwater tunnel is being constructed despite an almost certain loss of several lives. At a critical moment when a fitting must be lowered into place, a workman is trapped in a section of the partly laid tunnel. If it is lowered, it will surely crush the trapped workman to death. Yet, if it is not and a time-consuming rescue of the workman is attempted, the tunnel will have to be abandoned and the whole project begun anew. Ten workmen have already died in the project as a result of anticipated and unavoidable conditions in the building of the tunnel. What should be done? Was it a mistake to begin the tunnel in the first place? But don't we take such risks all the time?

The strong temptation here is to say 'we shouldn't build the tunnel', but I don't think that's a practical response.

Again, the intuition here is to deny the premise. Why should this delay result in scrapping the project? Why not just hoist the section back up, nip in, grab the worker, and lower it back down? Since it hasn't been lowered all the way, presumably it's still attached to the crane.

That said, if one accepts the premise, and accepts that it's really necessary to construct the tunnel for whatever reason, and worth the certain loss of lives, then yes, it's most practical to crush the guy and move on with the project.

As for the button - having read some short story or other about a case like this where the person killed turns out to be the button-pusher's wife, I would hesitate to push the button unless I knew it was a truly random process. Moral considerations aside, if it's going to kill my mother or something I would certainly not press it, not even if it saves five other lives.

That said, if one accepts the premise, and accepts that it's really necessary to construct the tunnel for whatever reason, and worth the certain loss of lives, then yes, it's most practical to crush the guy and move on with the project.

I would question the practicality, given that it has rather significant externalities with respect to the effect on all the construction workers on the project. It seems like a situation where inefficiency could be useful for cooperation between the workers. ("Leave no man behind!")

Moral considerations aside, if it's going to kill my mother or something I would certainly not press it, not even if it saves five other lives.

If it's someone I care about, I'm not just going to not press it; I'm going to destroy the button device and all others like it that I can find!

I don't see how exactly a tunnel would have such a critical piece that failure to land it at the critical time would require the project to be started anew. Such circumstances are actively avoided in engineering.

It is really interesting that people who try to make up such "kill to save a life" scenarios invariably end up with some major error in how something works, which they try to disguise as a trivial low-level detail we are urged to ignore. Normally, if you aren't trying to trick someone into a fallacy, it is quite easy to come up with a thought experiment that does not feature tunnels built card-house style, which must be abandoned and started afresh over a failure to lower one piece in time.

There's a very simple scenario for you guys to ponder: you have $100,000; you can donate $10,000 to charity without a noticeable dip in your quality of life, and that could easily save someone's life for a significant timespan. Very realistic, happens all the time, and is likely happening right now to you personally.

You don't donate.

Nonetheless you spend inordinate time conjecturing scenarios where it'd be moral to kill someone, instead of, say, working at some job for the same time, making money, and donating it to charity.

Ponder this for a while; do some introspection with regard to your own actions. Are you a moral being that can be trusted with choosing the path of action that's best for the common good? Hell no, and neither am I. Are you even trying to do moral stuff correctly? No evidence of that happening, either. If you ask me to explain that kill-1-to-save-N scenario-inventing behaviour, I'd say that probably some routine deep inside is simply interested in coming up with an advance rationalization for homicide for money, or the like, to broaden one's, hmm, let's say, opportunities. For this reason, rather than coming up with realistic scenarios, people come up with faulty models where killing is justified, because deep inside they are working for the purpose of justifying a killing using a faulty model.

I wouldn't press the button. Basically the reason is that the maker of the magic box is responsible for his own moral actions, and I don't want to have any part of it.

I got some results I didn't quite expect while thinking about this.

I assumed that if it was good to press the button once, it would be good to consider the effects of pressing the button a lot, so I tried to come up with a scenario that was equivalent on a larger scale:

You're being given an opportunity to break a regulatory tie to approve a massive new mosquito net factory in China.

The mosquito net factory is equal to pressing the button 100,000 times a year. Assume you personally couldn't press it that many times without getting carpal tunnel.

So it will stop 500,000 malaria deaths a year (there are actually more than that many: http://www.who.int/mediacentre/factsheets/fs094/en/), and not only that, but if you approve it, you'll receive a lucrative stock gift, which will pay you 600 MILLION dollars a year. You'll have a golden parachute for the rest of your life! For the purposes of this scenario, that's legal, according to new stock regulations.

Okay, yes, it is VERY polluting. But China doesn't have any air quality standards for these pollutants, so again, it's all legal, and the projected deaths are only 100,000 people a year, mostly in China.

But still, we're saving a net 400,000 people's lives every year! And we know it's going to be at least one year before the circumstances change.

And hey, if you want to use your money to invest in better air pollution scrubbers for the plant, go right ahead! I don't have the cost-benefit ratios on that right now, since I just had all of my analytics people generate the numbers for the factory.
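For reference, the scenario's numbers are just the button's per-press numbers scaled up; a quick sketch, using the post's $800-per-life figure:

```python
# The factory as 100,000 button presses per year (the scenario's stipulation).
PRESSES_PER_YEAR = 100_000
LIVES_SAVED_PER_PRESS = 5   # $4,000 donated at an assumed $800 per life
DEATHS_PER_PRESS = 1
PAYOUT_PER_PRESS = 6_000    # dollars to you

print(PRESSES_PER_YEAR * LIVES_SAVED_PER_PRESS)  # 500000 malaria deaths averted/year
print(PRESSES_PER_YEAR * DEATHS_PER_PRESS)       # 100000 pollution deaths/year
print(PRESSES_PER_YEAR * PAYOUT_PER_PRESS)       # 600000000 dollars/year
```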

It seems like the right answer to the scaled-up problem is "Shouldn't I run, or have run for me, the cost-benefit analysis on the air pollution scrubbers BEFORE making a decision which costs at least 100,000 deaths?" It would also seem likely that the result would be that I could install scrubbers at some price below 600,000,000 dollars a year, and then I go home happy with no deaths and the remaining money.

But if I attempt to run the cost-benefit analysis on the magic box, it occurs to me that the default assumption is that there IS no cost-benefit ratio which applies to saving those people. It feels like the implied result is that they're just magically executed with no protection. Even if they're a billionaire who has invested in cryonics, is in perfect health, and is waiting in a hospital with doctors and cryonicists, they're just permanently dead and can't be saved.

However, that assumption isn't necessarily TRUE. There's no evidence that those people are unsavable on the small scale, just that they likely are in an isomorphic large-scale version where the deaths actually have a non-magical cause.

So the first thing I would probably have to do is attempt to figure out: "What is the cause of the mystery deaths from the box, is it mitigable, and at what price other than not pressing the button?"

Assuming the likely answer is "The world is inconvenient; Omega will execute those people, and Omega can't be stopped," I have a feeling I would end up paralyzed by "But isn't there a way to save everyone?" Except I'm not paralyzed by that: if I were, I would already be paralyzed, because I already face decisions like these, and I don't feel paralyzed. But then again, I don't usually face Omega, either.

So on the large scale: build the mosquito net factory, build the air scrubbers with my personal money, save everyone, make money. On the small scale, "building the scrubbers" is essentially "kill Omega, and use his box without him going around executing people." But killing Omega is assumed impossible. (A potential further complication: Omega may need to be alive for the box to work.)

This makes it seem like my actual answer is "Shove your box, Omega; I'm going to make money off an environmentally safe, for-profit mosquito net factory using your technology, and save everyone." I'm not sure if I should change that answer or not, or if it even makes sense. But it appears to be my current answer.

I'll try to think about this and see if I come up with a better answer.

Considering that the people GiveWell saves have a life expectancy greater than 1/4 that of the average person, and that they are also probably younger than the average person, and that the theory of happiness set points predicts that they aren't significantly less happy, I would press the button. But only if I was sure nobody would find out, and that the person killed wouldn't be anybody I cared about, and that the situation was what it looked like (see TheOtherDave's comment). Also I might be too scared to do so based on cultural norms, but I hope not.

If I were to assume that all human lives had equal utility, I'd press the button, many times. But I don't think that lives-saved is a very good utility metric. If a person is saved from, say, malaria and malnutrition, and then grows up poor and uneducated and becomes a subsistence farmer, contracts HIV and dies, leaving behind several starving children, I am completely unabashed about assigning a lower utility to their life than someone living in healthier circumstances.

As the scenario is formulated, I don't think there's enough money to make me confident that the scales tip in favor of pressing it. But I'd do it for more money, or a guarantee that the person dying would be in similarly poor circumstances to the people being helped by the charities.

As posed, I'm not sure what I would do, or whether pressing the button is moral.

"You are offered a magical box. If you press the button on the box, one person somewhere in the world will die.."

Well, I don't believe in real magic (I do believe in magicians who do clever tricks), so the question is immediately hypothetical.

But leaving that aside, there is the question of the mechanism by which the person dies. If it was simply that aid was diverted from that person to another person, then I would probably have no problem pressing the button (I might question whether I deserve the cash for myself and refuse it).

If instead the person is to be executed, then I would have a problem.

If the person is to die by means of magic, well, I don't believe the person who is telling me this, so it gets complicated.

If it was simply that aid was diverted from that person to another person, then I would probably have no problem pressing the button

Good point. If it were an expected life lost vs. 4 expected lives saved, by taking $x from SCI and giving 4*$x to AMF, nobody would have a problem pressing the button. So it might come down to intuitions about risk tolerance.
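A worked version of that reframing, assuming (purely for illustration) the same $2,000-per-life figure for both charities; SCI's and AMF's real cost-effectiveness estimates differ:

```python
# Assumed: $2,000 per statistical life for both SCI and AMF (illustrative only).
COST_PER_LIFE = 2_000
x = COST_PER_LIFE                              # dollars taken from SCI
expected_lives_lost = x / COST_PER_LIFE        # 1.0
expected_lives_saved = 4 * x / COST_PER_LIFE   # 4.0, from giving 4*x to AMF
print(expected_lives_saved - expected_lives_lost)  # net +3.0 expected lives
```

The structure matches the magic box; the only change is that the certain death becomes a statistical one, which is where the risk-tolerance intuition does its work.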

Two things.

First, it is worth having the distinction between what is permissible to do and what is obligatory to do. One might plausibly think that in this case, it is permissible to push the button but not obligatory to do so.

Second, an interesting follow-up question, I think, is the point at which people tip. If you are like me, then you might balk at pressing the button for such small dollar amounts, but also, if you are like me, you probably have a dollar value at which you would flip to thinking that pressing the button is permissible. I actually think that at some value, pressing the button becomes obligatory. But what are those values? Can rational agents with the same evidence disagree about them?

First, it is worth having the distinction between what is permissible to do and what is obligatory to do. One might plausibly think that in this case, it is permissible to push the button but not obligatory to do so.

If you get an answer of "permissible but not obligatory", then you aren't finished; you've only concluded that it isn't overwhelmingly slanted in one direction, but you still need a decision.

Why? In terms of algorithms, this might just be a place where you want to flip a coin. Or do you think that admissible decision procedures should always give the same answer to the same question? (If so, I'd love to know why you think that.)

But also, depending on whether you think rational agents with the same evidence can disagree about the button, you might think that "permissible-not-obligatory" is a worthwhile social category, even if you don't think it ever obtains for an individual. That is, you might want a set of laws that allow such acts but do not punish people if they choose not to perform such acts.

The life expectancy of those "saved lives" is fairly low, though (and often the quality of life, too).

edit: to make a practical example, if you are saving 1 life by giving out a lot of mosquito nets, the life you saved is still usually a malnourished child who's dying of hunger (and is going to die of something else if malaria doesn't get him first). It's not as easy to save a life as you would think. Or rather, it is easy to "save a life" in the very specific abstract sense. One glucose injection "saves the life" of a starving child, for a day.