A trolley problem is a thought experiment used increasingly often in philosophy to probe people's moral beliefs and spark debate about them. Here's an example from Wikipedia:

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you - your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

I believe trolley problems are fundamentally flawed - at best a waste of time, and at worst a source of really sloppy thinking. Here are four reasons why:

1. It assumes perfect information about outcomes.

2. It ignores the global secondary effects that local choices create.

3. It ignores real human nature - which would be to freeze and be indecisive.

4. It usually gives you two choices and no alternatives, and in real life, there are always alternatives.

First, trolley problems contain perfect information about outcomes - which is rarely the case in real life, where you're making choices based on imperfect information. You don't know for sure what would happen as a result of your actions.

Second, everything creates secondary effects. If putting people involuntarily in harm's way to save others were an acceptable result, suddenly we'd all have to be really careful in any emergency. Imagine living in a world where anyone would be comfortable ending your life to save other people nearby - you'd not only have to be constantly checking your surroundings, but also be constantly on guard against do-gooders willing to push you onto the tracks.

Third, it ignores human nature, which is to freeze up when bad things happen unless you're explicitly trained to react. In real life, most people would freeze or panic rather than act. To get over that, first responders, soldiers, medics, police, and firefighters go through training. That training includes dealing with questionable circumstances and how to evaluate them, so you don't have a society where your trained personnel act randomly in emergencies.

Fourth, it gives you two choices and no alternatives. I firmly reject this - I think there are almost always alternative ways to get there from here if you open your mind to it. Once you start thinking that your only choice is to push the one guy in front of the trolley or to stand there doing nothing, your mind is closed to all other alternatives.

At best, this means trolley problems are just a harmless waste of time. But I don't think they're harmless.

I think "trolley problem" type thinking is commonly used in real life to advocate and justify bad policy.

Here's how it goes:

Activist says, "We've got to take from this rich fat cat and give it to these poor people, or the poor people will starve and die. If you take the money, the fat cat will buy less cars and yachts, and the poor people will become much more successful and happy."

You'll see all the flaws I described above in that statement.

First, it assumes perfect information. The activist says that taking more money will lead to fewer yachts and cars - useless consumption. He doesn't consider that people might first cut their charity budget, or their investment budget, or something else. Higher-tax jurisdictions, like those in Northern Europe, have very low levels of charitable giving. They also have relatively low levels of capital investment.

Second, it ignores secondary effects. The activist assumes he can milk the cow and the cow won't mind. In reality, people start spending their time on minimizing their tax burden instead of doing productive work. It ripples through society.

Third, it ignores human nature. Saying "the fat cat won't miss it" is false - everyone is loss averse. 

Fourth, the biggest problem of all, it gives two choices and no alternatives. "Tax the fat cat, or the poor people starve" - is there no other way to encourage charitable giving? Could we give charity visas where anyone giving $500,000 in philanthropy to the poor can get fast-track residency into the USA? Could we give larger tax breaks to people who choose to take care of distant relatives as a dependent? Are there other ways? Once the debate gets constrained to, "We must do this, or starvation is the result" you've got problems.

And I think these poor-quality thoughts on policy are a direct descendant of trolley problems. It's the same line of thinking - it assumes perfect information, ignores secondary effects, ignores human nature, and gives two choices while leaving no other alternatives. That's not real life. That's sloppy thinking.

Edit: This is being very poorly received so far... well, it was quickly voted up to +3, and now it's down to -2, which means controversial but generally negative reception.

Do people disagree? I understand trolley problems are an established part of critical thinking in philosophy; however, I think they're flawed, and I wanted to highlight those flaws.

The best counterargument I see right now is that the value of a trolley problem is that it reduces everything to just the moral decision. That's an interesting point; however, I think you could come up with better hypotheticals that don't suffer from the flaws I listed. Or perhaps the particular politics example isn't popular? You could substitute similar arguments about the prohibition of alcohol, and perhaps I ought to have done that to make it less controversial. In any event, I welcome discussion and disagreement.

Questions for you: I think that trolley problems contain perfect information about outcomes in advance of them happening, ignore secondary effects, ignore human nature, and give artificially false constraints. Do you agree with that part? I think that's pretty much fact. Now, I think that's bad. Agree/disagree there? Okay, finally, I think this kind of thinking seeps over into politics, and it's likewise bad there. Agree/disagree? I know this is a bit of a controversial argument since trolley problems are common in philosophy, but I'd encourage you to have a think on what I wrote and agree, disagree, and otherwise discuss.

113 comments

There are psychologists, following in the footsteps of Stanley Milgram, who set up situations, candid-camera style, to see what people actually do. This is very different from asking hypothetical questions.

Taking the trolley problem at face value, we recognise it as the problem of military command. Five regiments are encircled. Staring at the map the general realises that if he commits a sixth regiment to battle at a key point the first five regiments will break out and survive to fight another day, but the sixth regiment will be trapped and annihilated. Of course the general sends the sixth regiment into battle. The trolley problem is set up to be unproblematic. Sacrifice one to save five? Yes!

So what is it probing? Why do we have difficulty with the pencil and paper exercise when the real life answer is clear cut? We are not in fact generals, chosen for our moral courage and licensed to take tough decisions. We are middle-class wankers playing a social game. If we are answering the trolley problem, rather than asking it, we have been trapped into playing a game of "Heads I win, tails you lose."

The way the game works is that the hypothetical setup is unreasonable, so the... (read more)

1Relsqui
I'm not sure which way you intended it, but I find this a good argument against them. I rarely choose to invent artificial conflict for fun, and never by putting someone else in an uncomfortable position.
[-][anonymous]130

Your argument could be phrased as:

  1. trolley problems are a philosophical tool to help in debate about moral beliefs.
  2. people sometimes use these tools out of context
  3. therefore trolley problems are "a waste of time at best"

This doesn't follow. They're only a waste of time at best if they are never used in context, or are inefficient even then, and you didn't discuss that at all.

You should have phrased that as: Even if trolley problems are good at testing moral intuitions in theory, discussing them might make people prone to these errors in real life moral thinking.

Your argument could be phrased as ...

My argument is that putting forward a hypothetical situation with perfect foresight, ignoring secondary effects, ignoring human nature, and constraining decisions to two options leads to bad thinking.

You should have phrased that as: Even if trolley problems are good at testing moral intuitions in theory, discussing them might make people prone to these errors in real life moral thinking.

On the contrary - I don't think trolley problems are good at testing moral intuitions in theory.

8[anonymous]
Yes, that is what you argue for. But an argument doesn't contain only observations; it needs a setup where you put the argument in context and a conclusion where you show how your observations relate to your setup. Your setup is that trolley problems are a theoretical tool, but your observations all come from real-life situations; the two simply don't match, and that diminishes the quality of your argument. And that is what you have in your setup but then don't substantiate. That is what doesn't follow from your observations.

I think trolley problems suffer from a different type of oversimplification.

Suppose in your system of ethics the correct action in this sort of situation depends on why the various different people got tied to the various bits of track, or on why 'you' ended up being in the situation where you get to control the direction of the trolley.

In that case, the trolley problem has abstracted away the information you need (and would normally have in the real world) to choose the right action.

(Or if you have a formulation which explicitly mentions the 'mad philosopher' and you take that bit seriously, then the question becomes an odd corner case rather than a simplifying thought experiment.)

6byrnema
Exactly. Context is very important. You can't just count deaths. For example, the scenario AlanCrowe gave above has an obvious answer because the military has a clear context: soldiers have already committed their lives and themselves to being 'one of a number'. Based on the limited information of this trolley problem, I think my answer would have to consider that the entire universe would be a better place if 5 people died by being run over by an unwitting machine than if 1 person died because he was deliberately pushed by one of his fellows. Taking the constraints of the trolley problem at face value, one action a person might consider is asking the fat man to jump. If asked, ethically, the man should probably say yes. Given that, I am not sure it would be ethical to ask him. Finally, since the fat man could anticipate your asking, it might be most moral, then, to prevent him from jumping. Thus over the course of a comment, I have surprised myself with the position that not only should you not push the man, you should prevent him from jumping if it occurs to him to do so. (That is, if his decision is impulsive, rather than a life commitment of self-sacrifice. I would not prevent a monk or a marine from saving the 5 persons.)
0erratio
But if he does decide to jump, you have no way to know whether it's because he anticipated your asking or whether he came to that decision independently of you.
0byrnema
Yeah, preventing the man from jumping given a probability that he really, desperately wants to do it might be the only moral dilemma. In the movie, 'A Trolley Problem', he should threaten to kill me if I try to prevent him. Or I should precommit to killing all the people he saves if he saves them, so he must kill me to secure the 5 lives. This would be a voluntary sacrifice of my life to prevent an involuntary sacrifice of life. I suppose 5 people should try to prevent him. If he kills all five of us, he really wanted to do it. (I'm not sure exactly where this line of reasoning became inane, but at some point it did.)
0Relsqui
Attempting to prevent him might clarify it. If it's easy to prevent him, he may have just assumed you'd ask. If it's not, it may have been his own idea.

The thrust of your argument appears to be that:

  1. Trolley problems are idealised.
  2. Idealisation can be a dark art rhetorical technique in discussion of the real world.
  3. Boo trolley problems!

There are a number of issues.

First and foremost, reversed stupidity is not intelligence. Even if you are granted the substance of your criticisms of the activist's position, this does not argue per se against trolley problems as dilemmas. The fact that they share features with a "Bad Thing" does not inherently make them bad.

Secondly, the whole point of cons... (read more)

6Relsqui
This is dangerous, in the real world. If you say "of these two options, I prefer X," I would expect that to be misinterpreted by non-literal-minded people as "I support X." In any real-world situation, I think it's actually smarter and more useful to say something like, "This is the wrong choice--there's also the option of Z" without associating yourself with one of the options you don't actually support. Similarly: Personally, I'm wary in general of the suggestion that I "should" intrinsically have a preference about something. I reserve the right not to have a preference worth expressing and being held to until I've thought seriously about the question, and I may not have thought seriously about the question yet. If I understand correctly, the original poster's point was that trolley problems do not adequately map to reality, and therefore thinking seriously about them in that way is not worth the trouble.
6lionhearted (Sebastian Marshall)
This is strange - this is the second comment that summarizes an argument I'm not actually making and then argues against the made-up summary. My argument isn't against idealization - which would be an argument against any sort of generalized hypothetical and against the majority of fiction ever made. No, my argument is that trolley problems do not map to reality very well, and thus time spent on them is potentially conducive to sloppy thinking. The four problems I listed were perfect foresight, ignoring secondary effects, ignoring human nature, and constraining decisions to two options - these all lead to a lower quality of thinking than a better-constructed question would. There's a host of real-world, realistic dilemmas you could use in place of a (flawed) trolley problem: layoffs/redundancies to try to make a company more profitable or keep the ship running as is (like Jack Welch at GE), military problems like fighting a retreating defensive action, policing problems like profiling, what burden of proof to require in a courtroom, a doctor being asked for performance-enhancing drugs with potentially fatal consequences... there are plenty of real-world, reality-based situations to use for dilemmas, and we would be better off for using them.
0Joanna Morningstar
From your own summary: Which is to say they are idealised problems; they are trued dilemmas. Your remaining argument is fully general against any idealisation or truing of a problem that can also be used rhetorically. This is (I think) what Tordmor's summary is getting at; mine is doing the same. So, I clearly disagree, and further you fail to actually establish this "badness". It is not problematic to think about simplified problems. The trolley problems demonstrate that instinctual ethics are sensitive to whether you have to "act" in some sense. I consider that a bug. The problem is that finding these bugs is harder in "real world" situations; people can avoid the actual point of the dilemma by appealing for more options. In the examples you give, there is no similar pair of problems. The point isn't the utilitarianism in a single trolley problem; it's that when two tracks are replaced by a (canonically larger) person on the bridge and 5 workers further down, people change their answers. You don't establish this claim (I disagree). It is worth observing that the standard third "trolley" problem is 5 organ recipients and one healthy potential donor for all. The point is to establish that real world situations have more complexity -- your four problems. The point of the trolley problems is to draw attention to the fact that the H.Sap inbuilt ethics is distinctly suboptimal in some circumstances. Your putative "better" dilemmas don't make that clear. Failing to note and account for these bugs is precisely "sloppy thinking". Being inconsistent in action on the basis of the varying descriptions of identical situations seems to be "sloppy thinking". Failing on Newcomb's problem is "sloppy thinking". Taking an "Activists" hypothetical as a true description of the world is "sloppy thinking". Knowing that the hardware you use is buggy? Not so much.
-3Relsqui
If the mistaken summaries are similar to each other, this may mean that the post did not get across the point you wanted it to get across.
1lionhearted (Sebastian Marshall)
Nah, they were totally different summaries. Both used words I didn't say and that don't map at all to arguments I made... it's like they read something that's not there. That, or people mis-summarizing for argument's sake? Either way, it's up to me to get the point across clearly. I thought this was a fairly simple, straightforward post, but apparently not.
[-]knb80

I don't think trolley problems are used to argue for policies. Rather, the point of trolley problems is to reveal that the way humans normally do moral reasoning is not shut-up-and-multiply utilitarianism.

Activist says, "We've got to take from this rich fat cat and give it to these poor people, or the poor people will starve and die. If you take the money, the fat cat will buy less cars and yachts, and the poor people will become much more successful and happy."

While activists may try to trot out utilitarian justifications for their politica... (read more)

I think you are looking at the Trolley Problem out of context.

The Trolley Problem isn't supposed to represent a real-world situation. It's a simplified thought experiment designed to illustrate the variability of morality in slightly differing scenarios. They don't offer solutions to moral questions; they highlight the problems.

3lionhearted (Sebastian Marshall)
I understand the supposed purpose of trolley problems, but I think they're conducive to poor-quality thinking nonetheless. Right, but I think there are better ways of going about it. I wanted to keep the post brief and information-dense, so I didn't list alternative problems, but there are a number you could use based on real history. For instance, a city is about to be lost in war, and the military commander is going through his options - do you order some men to stay behind and fight to the death to cover the retreat of the others, ask for volunteers to do it, draw lots? Try to have everyone retreat, even though you think there's a larger chance your whole force could be destroyed? If some defenders stay, does the commander lead the sacrificial defensive force himself or lead the retreat? Etc., etc. That sort of example would include imperfect information, secondary effects, human nature, and many different options. I think trolley problems are constructed so poorly that they're conducive to poor-quality thought. There are plenty of examples you could use to discuss hard choices that don't suffer from those problems.
9Matt_Stevenson
I would compare the trolley problem to a hypothetical physics problem. Just like a physicist will assume a frictionless surface and no air resistance, the trolley problem is important because it discards everything else. It is a reductionist attempt at exploring moral thought.
0lionhearted (Sebastian Marshall)
Interesting thought, but it wouldn't be difficult to take the time to make situations more lifelike and realistic. There are plenty of real-life situations that let you explore moral thought without the flaws listed above.
0Relsqui
It isn't necessarily difficult for a good physicist to factor in friction and air resistance, either. But those are distractions, unnecessarily drawing effort and attention away from the specific force actually being studied. That's what the trolley problem also tries to do: create a simplified environment so that a single variable can be examined.

But physicists don't ignore friction when performing experiments; they do so only in teaching. If philosophers used trolley problems only to teach ethics ("Push one fat philosopher onto the tracks, to save two drug addicts.") or to teach metaethics ("An adherent of virtue ethics probably wouldn't push") then I doubt that lionhearted would complain.

But we have psychologists using trolley problems to perform experiments (or, if from Harvard, to publish papers in which they claim to have conducted experiments). That is what I understand lionhearted to be objecting to.

3Relsqui
Excellent point; conceded. (I haven't made up my mind yet about whether I agree with the thesis of the post, so I'm making arguments for both sides as I think of them and seeing which ones get refuted.)
1NancyLebovitz
Nitpick: I think you're implying that no philosophers are drug addicts. Suppose that both the people on the bridge are sufficiently heavy to stop the trolley. Should one of them sacrifice themself, or are both obligated to try to preserve their lives by fighting not to be thrown off?
0Perplexed
Sorry. What I meant to suggest is that drug addicts are thin.
0[anonymous]
Physicists ignore friction when teaching, when thinking, and when performing experiments. Doing so reduces confusion, and allows for greater understanding of the effects of friction once attention is turned to it. The fact that the analogous situation in moral philosophy increases confusion is revealing.
8Perplexed
Yes. It reveals that physicists understand their subject well enough to know what can profitably be ignored ... but moral philosophers do not.
0[anonymous]
I don't disagree.
0Matt_Stevenson
I think a better example than frictionless surfaces and no air resistance would be idealized symmetries. Once something like Coulomb's Law was postulated, physicists would imagine the implications of charges on infinite wires and planes to make interesting predictions. We use the trolley problem and its variations as thought experiments in order to make predictions we can test further with MRIs and the like. So a publication on interesting trolley problem results would be like a theoretical physics paper showing relativity predicts some property of black holes.
8djcb
The trolley-problem is interesting in that it's a very simple way to show how most people's morals are not based on some framework like consequentialism (any flavor) or deontology or virtue ethics or... but are based on some vague intuitions that are not very consistent - with the ethical frameworks used post-hoc for rationalization. The problem could be complicated (made more realistic) by adding unknowns, probabilities and so on, but would that bring any new insights?

This is actually very similar to an intuition I've had about this problem. The difference is that I compare it to a different scenario, and regard it not as a reason to reject the trolley problem, but as a reason to justify the optimality of not pushing the innocent bystander onto the tracks.

You compared it to wealth-redistribution attempting to optimize total utility, while I think a better comparison is discrimination law, something that matters close to me. (Edit: sorry for awkward phrasing, keeping it there because it was quoted.) In short, just as mil... (read more)

-1Tenek
I've been rereading this comment for the past 10 minutes and I have no idea whether this is an (attempted) arm's-length assessment of discrimination law (I say attempted because of the "matters close to me" acknowledgement) or the bitter result of the author being turned down for a job. At first glance it looks like the latter, but this is exactly the sort of situation where I would expect to see someone make a completely rational analysis and not pay any attention to how it's going to come across to someone who doesn't know you're not just another bigot. (I call it Larry Summers Syndrome. http://en.wikipedia.org/wiki/Lawrence_Summers#Differences_between_the_sexes ) It's one thing to talk about "risk profiles" or "incentives" in general terms, but when you actually want to implement something, it becomes a particular incentive, and there is no a priori reason to assume the cost will outweigh the benefit. When you concentrate on the existence of a cost (or benefit) and ignore the magnitude, you start making statements like "[the Bush tax cuts] increased revenue, because of the vibrancy of these tax cuts in the economy". Similarly, if you try to transfer utility from group A to group B, group A is going to be upset and try to minimize their loss - that doesn't mean that group A is going to completely negate it, or that group B is going to be worse off.
4SilasBarta
Now I don't know what you're trying to say about me. My views on this don't count because you think I was turned down for a job and blamed discrimination law for it? Huh? In any case, I agree that the costs of A's reaction don't necessarily negate the benefit. What I criticize is models that view such utility shifting as a one-shot enterprise without complications, which is a sadly common belief. What's more common -- just from my personal experience -- is that attempts to nobly shift utility this way result in making life more kafkaesque. For example, when you ban IQ tests in employment screening, you don't get employers happily ignoring the information value of IQ tests -- rather, they just fob it off to a university, who will gladly give the IQ test and pass on a weaker measure of ability. If you mandate benefits, employers don't simply continue their hiring practices exactly as before but give workers more utility; rather, they cut back in other ways. I believe I'm impacted by this mentality, because I would much rather be told, you don't qualify because of ____ than have to follow some complicated, unspoken signaling dance (i.e. the relative significance of having a network to having ability) that avoids officially-banned screening methods. Yes, I've been turned down for jobs, but a) I've been gainfully employed in my field for 5 years, and b) my concern is not with being turned down, but with it being harder to find opportunities that could result in being turned down in the first place. What would be a non-bigoted way to make the point I just did?
-2Tenek
"I have no idea what criteria were used when I'm rejected for a job, and I'm not even seeing the jobs that never get posted because it's easier to hire someone you know than go through the formal process and jump through all its hoops." Maybe. I don't think your views don't count - I was hoping that I'd gone to sufficient lengths to point out that while it might have just been bitterness, there was a substantial chance it wasn't. Maybe I underestimated the LW rationalist:racist ratio... actually, probably by a huge margin. %#$@. So what would happen if you traded the kafkaesque life for the officially-banned screening methods? Would you rather have twice the number of job opportunities and lose 3/4 of them right away because you're ? Or would you rather that other people get rejected for them, if you don't personally have many of the 'undesirable' attributes? Finally, let's go to story mode. A friend of mine applied for a job. They weren't allowed to ask her about her religion. But she has a name commonly found among members of a particular one. She got the job, and became the only employee (out of a couple dozen) not sharing the faith of the rest of them. So I guess they took a guess at her religion based on her name, and chose using that metric. I have no idea whether this is a success or failure of antidiscrimination laws. Without them, she'd have had no chance. With them, they tried anyways. But at least it was possible, even if she didn't fit in... and quit a few years later, when her husband got cancer and they blamed it on her not praying.
5SilasBarta
At risk of exhibiting confirmation bias, your anecdote makes my point for me. With the ban on discrimination, your friend got misleading signals of which employers she would like, and her employer relied on weaker signals of things they can't ask about, leading to an incorrect inference -- and later, a disastrous mismatch. So, far from eliminating the pernicious effects of bigotry, the law made people waste effort trying to route their efforts around it. Had there been no law, people could openly communicate their preferences in both directions, without having to go through the complication of sending weaker signals because they can't send the banned ones. And with less "noise", the cost of bigotted preferences becomes clearer. Explanation I don't see why you're using this as an example of why anti-discrimination laws are good. Certainly, you can pick the numbers to reach the conclusion that you want. But averaging over all possibilities, and weighting by likelihood, yes, I would prefer the environment that makes all job applicants not a potential albatross for employers, for the same reason I would prefer not to be exempt from lawsuits -- yes, there's a narrow benefit to laws saying "you can't discriminate against people named Silas", and to a personal exemption from lawsuits. But realistically, the way people respond to this will more than eliminate the benefit. (In case it's not clear -- do you see a hazard in associating with a stranger when you're guaranteed to have no legal recourse against what they do?) So yes, I would rather have the option to most clearly communicate preferences, than have to dance around them, which results in situations where someone can be turned down for a job because of false beliefs that they can't even refute. This remains true, even and especially if I'm in a less fortunate end of the applicant pool (which I have been, like everyone else who has been under 14/16/18 at some point in their life). (Btw, I'm not the one modding yo
3Tenek
I'm not using it as an example of why they're good. I'm offering it as an example because it's relevant to the topic. Adding a cost to circumvent the law makes you less likely to do so, though. If you keep hiring people who are decidedly suboptimal because you have to use a lousy approximation of whatever characteristic you want, you might give up on it. I get that you would rather, given that you're going to be rejected for your age/skin color/gender/etc., be told why. But if you want to reduce the use of those criteria, then banning them will stop the people who care a small amount (i.e. not enough to bother getting around it).

I believe trolley problems are fundamentally flawed - at best a waste of time, and at worst a source of really sloppy thinking. Here are four reasons why:

  1. It assumes perfect information about outcomes.
  2. It ignores the global secondary effects that local choices create.
  3. It ignores real human nature - which would be to freeze and be indecisive.
  4. It usually gives you two choices and no alternatives, and in real life, there are always alternatives.

Note that these properties are characteristic of most thought experiments, not just the trolley problem.

Take (3),... (read more)

-2lionhearted (Sebastian Marshall)
I partially agree. But the point is, in any emergency situation, you're going to default to your training if you're acting. Thus, individual moral intuitions give way to a host of other concerns, and to a body of history, literature, and tradition of the particular discipline (whether it be emergency first response, engineering, soldiering, policing, surgery, or any other life-or-death field). If you're going to spend the thought cycles, much better to use a real discipline. Here's one - there are two run-down apartment buildings with roughly 200 people in them. Mortars were fired off the rooftops the night before, killing ~20 innocent civilians. The next day, military troops raid the buildings, arrest everyone, find a cache of weapons, and strongly suspect the people using them are among the 200 arrested. Everyone says they don't know who did it. What do you do with those people? It addresses the same questions a trolley problem does, except it doesn't have the flaws a trolley problem has.
7jimrandomh
Except that it has a different problem: trying to answer the question quickly derails into complex real-world issues, but you can't reliably predict which real-world issue it will derail into. If you use that example and try to talk about whether it's okay to punish innocents to save others from being mortared, some readers will want to talk about fingerprints and forensics, some will want to talk about how poverty caused the situation in the first place, and some will want to talk about anti-mortar defense hardware. A trolley problem focuses the conversation in a way that real world problems can't, and when talking about philosophical issues that're confusing to begin with, that focus is something you can't do without.
2lionhearted (Sebastian Marshall)
Thank you for replying. The downvotes without reply are confusing - I'm not sure exactly what people take issue with, whether they disagree on particular grounds or just dislike the point viscerally. Trolley problems do that, but at some expense that I believe can lead to poor-quality thought - constraining a situation to two possible decisions with predetermined outcomes. While a little messier, I think forcing people to actually think through a variety of scenarios and be creative is healthier, and you can still get at ethical systems. If you wanted to make it much simpler, there are still ways to do so without being forced into a constrained situation with predetermined outcomes. That's the issue I have - the idea that someone can tell you, "Here's your two options, and here's the outcomes from them" - I think this potentially primes people to listen to false dichotomies later, like in politics. Maybe I'm mistaken, but I don't think so. At least, this is worth considering. Any time you get a false dichotomy with 20/20 foresight presented to you, I think "That's a false dichotomy and you're claiming 20/20 foresight" would be a good answer. Considering how often even highly educated and smart people fall for false dichotomies and believe someone who claims to know what the outcomes will be with certainty in advance, I believe this is a legitimate concern.
0prase
Trolley problems aren't conceived as a model of an emergency situation. The emergency part is there mainly to emphasise the restricted choice. To push or not to push, there is no time for anything else. You can easily imagine a contrived trolley scenario with plenty of time to decide. I don't understand the analogy between trolley problems and your mortar scenario.

Related - Philippa Foot, renowned philosopher and unknown anthropologist. Foot died earlier this month. She was the originator of the trolley problem, in 1967.

The point of using perfect information problems is that they should be simpler to handle. If a moral system can't handle the perfect information problems then it certainly can't handle the more complicated problems where there is a lack of perfect information. In this regard, this is similar to looking at Newcomb's Problem. The problem itself will never come up in that form. But if a decision theory can't give a coherent response to Newcomb's then there's a problem.

JoshuaZ:

The point of using perfect information problems is that they should be simpler to handle. If a moral system can't handle the perfect information problems then it certainly can't handle the more complicated problems where there is a lack of perfect information.

Suppose however that system A gets somewhat confused on the simple perfect-information problem, while system B handles it with perfect clarity -- but when realistic complications are introduced, system B ends up being far more confused and inadequate than A, which maintains roughly the same level of confusion. In this situation, analysis based on the simple problem will suggest a wrong conclusion about the overall merits of A and B.

I believe that this is in fact the case with utilitarianism versus virtue ethics. Utilitarianism will give you clear and unambiguous answers in unrealistic simple problems with perfect-information, perfectly predictable consequences, and an intuitively obvious way to sum and compare utilities. Virtue ethics might get somewhat confused and arbitrary in these situations, but it's not much worse for real-world problems -- in which utilitarianism is usually impossible to apply in a coherent and sensible way.

2[anonymous]
Someone who claims to be confused about the trolley problem with clearly enumerated options and outcomes, but not confused about a real world problem with options and outcomes that are difficult to enumerate and predict, is being dishonest about his level of confusion. A virtue ethicist should be able to tell me whether pushing the fat man in front of the train is more virtuous, less virtuous, or as virtuous as letting the five other folks die.
5Vladimir_M
I think you misunderstood my comment, and in any case, that's a non sequitur, because the problem is not only with the complexity, but also the artificiality of the situation. I'll try to state my position more clearly. Let's divide moral problems into three categories, based on: (a) how plausible the situation is in reality, and (b) whether the problem is unrealistically oversimplified in terms of knowledge, predictability, and inter-personal utility comparisons:

  1. Plausible scenario, realistically complex.
  2. Implausible scenario, realistically complex.
  3. Implausible scenario, oversimplified.

(The fourth logical possibility is not realistic, since any plausible scenario will feature realistic complications.) For example, trolley problems are in category (3), while problems that appear in reality are always in categories (1) and (2), and overwhelmingly in (1). My claim is that utilitarianism provides an exact methodology for working with type 3 problems, but it completely fails for types 1 and 2, practically without exception. On the other hand, virtue ethics turns out to be more fuzzy and subjective when compared with utilitarianism in type 3 problems (though it still handles them tolerably well), but unlike utilitarianism, it is also capable of handling types 1 and 2, and it usually handles the first (and most important) type extremely well. Therefore, it is fallacious to make general conclusions about the merits of these approaches from thought experiments with type 3 problems.
2[anonymous]
I am not a utilitarian. I agree with this similar statement: communities of people committed to being virtuous have good outcomes (as evaluated by Sewing-Machine). I do not agree with this similar statement: communities of people committed to being virtuous are less confused about morality than I am.

Trolley problems appear not just in philosophy - some psychologists are using them in experiments as well. Here is one recent example.

In this case, at least, I think that many of your objections to the trolley problem don't apply. The researchers really are not interested in the ethics of deciding to sacrifice a fat man, they are interested in how the decision to sacrifice might change when the decision maker is on various drugs. And they already have brain imaging results for the trolley problem - so of course they would want to use the same problem in this experiment.

Isn't The Least Convenient Possible World directly relevant here? I'm surprised it hasn't been mentioned yet.

8shokwave
It occurs to me that the Least Convenient World principle, and its applications in producing trolley problems, is actually a dangerous idea. The best response in any situation that looks like a trolley problem is to figure out how to defuse the situation. So maybe you can change the tracks so the trolley runs down a different line; maybe you can derail it with a rock on the tracks; maybe you can warn the five people or somehow rescue them; perhaps you could even jump onto the trolley and apply the brakes. These options are surely less feasible than using the fat man's body, but the cost of the 'fat man' course of action is one life. (Naively, if the expected outcome of the third way is less than 1 life lost, the third way is preferable.) This is a little bit like that Mad Psychologist joke. The trolley problems tend to forbid this kind of thinking, and the Least Convenient Possible World works to defeat this kind of thinking. But I think that this third-way thinking is important, that when faced with being gored by the left horn or the right horn of the bull, you ought to choose to leap between the horns over the bull's head, and that if you force people to answer this trolley problem with X or Y but never Z, they will stop looking for Zs in the real world. Alternatively, read conchis's post, as it is far more succinct and far less emotive.
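Purely as an illustration of the "naive" expected-value comparison in the parenthetical above - the success probabilities, and the assumption that a failed third way still costs all five lives, are invented for this sketch and are not part of the comment:

```python
# Toy comparison, not part of the original comment: pushing is assumed to cost
# exactly 1 life with certainty, while a "third way" (derailing, warning, braking)
# saves everyone with probability p and otherwise lets all 5 die.

def expected_deaths_third_way(p_success: float) -> float:
    """Expected deaths if the third option saves all five with probability p_success."""
    return (1 - p_success) * 5

for p in (0.5, 0.8, 0.9):
    deaths = expected_deaths_third_way(p)
    verdict = "prefer the third way" if deaths < 1 else "prefer pushing"
    print(f"p = {p:.1f}: {deaths:.1f} expected deaths vs. 1.0 for pushing -> {verdict}")
```

On these made-up numbers the third way wins only once its success probability exceeds 0.8, which is one way of cashing out "less feasible but possibly still preferable".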
0NancyLebovitz
I don't know if your alternatives are that much less plausible than thinking you can throw someone who weighs a good bit more than you do and is presumably resisting, and have them land with sufficient precision to stop the trolley.
0shokwave
I rather think they are more plausible and will save lives more surely than the fat man's corpse, but the thought experiment strongly implies that the fat man course of action will surely succeed - and I wanted to make my point without breaking any of the rules of the thought experiment, so as not to distract critics from the central argument.
-3prase
I strongly suspect that since many people don't like whatever conclusions can be inferred from trolley problems, they try to dismiss trolley problems as "dangerous". If I find something really dangerous, it is the willingness to label uncomfortable ideas as dangerous when there are no better arguments around. The historical set of "dangerous" ideas includes heliocentrism, evolution, atheism, legal homosexuality. Actually, nobody has yet demonstrated that in reality, people who are used to thinking about trolley problems or other simplified thought experiments are more prone to bad thinking.
  1. It assumes perfect information about outcomes.
  2. It ignores the global secondary effects that local choices create.
  3. It ignores real human nature - which would be to freeze and be indecisive.
  4. It usually gives you two choices and no alternatives, and in real life, there are always alternatives.

I broadly agree with this, but there's another reason trolley problems are flawed. Namely, it is hard to deconvolute one's judgment of impracticality (a la 4) from one's judgment of moral impermissibility. Pushing a fat guy is just such an implausibly stupid way to st... (read more)

3ata
I wonder if it's better or worse to construct problems that are implausible from the very start, instead of being potentially realistic up to a certain point where you're asked to suspend disbelief. (Similar to how we do decision problems here, with Omega being portrayed as a superintelligence from another galaxy who is nearly omniscient and whose sole goal appears to be giving people confusing decision problems. IIRC, conventional treatments of decision theory often portray the Predictor as a human and do not explain why his predictions tend to be accurate, only specifying that he has previously been right 99% or 100% of the time. I suspect that format tends to encourage people to make excuses not to answer the real question.) So, suppose instead of the traditional trolley problem, we say "An invincible demon appears before you with a hostage tied to a chair, and he gives you a gun. He tells you that you can shoot the hostage in the head or untie her and set her free, and that if and only if you set her free, he will go off and kill five other people at random. What do you do?" Does that make it better or worse, in terms of your ability to separate the implausibility of the situation from your ability to come to a moral judgment?
8Perplexed
Your version adds an irrelevancy - the possible moral agency of the demon provides an out for the test subject: "It is not my fault those 5 people died; the demon did it." It is much more difficult to shift moral responsibility to the trolley.
3ata
Good point, though that still tests whether a person thinks of "moral agency" as a relevant factor in deciding what to do.
0hacksoncode
I'm not sure why it's perceived as more difficult. The trolley didn't just appear magically on the tracks. Someone put it there and set it moving (or negligently allowed it to).
0TheOtherDave
Well, I perceive it as more difficult because of my intuitions about how culpability travels up a causal chain. For example, if someone dies because of a bullet fired into their heart from a gun shot by a hand controlled by a brain B following an instruction given by agent A, my judgment of culpability travels unattenuated through the bullet and the gun and the hand. To what degree it grounds out in B and A depends on to what degree I consider B autonomous... if B is an advanced ballistics-targeting computer I might be willing to call it a brain but still unwilling to hold it culpable for the death, for example. Either way, the bulk of the culpability grounds out there. I may go further and look at the social structures and contingent history that led to A and B (and the hand, bullet, gun, heart, etc.) being the way they are, but that will at best be in addition to the initial judgment, and I probably won't bother. Similarly, if five people are hit by a trolley that rolled down a track that agent A chose not to stop, my intuitions of culpability ground out in A. Again, I may go further and look at the train switching systems and so on and so forth, but that will be in addition to the initial judgment, and I probably won't bother. I find it helpful to remember that intuitions about culpability are distinct from beliefs about responsibility.
4Alicorn
Nitpick: why can't I leave the hostage tied to the chair without shooting her?
3ata
I begin to suspect that it's impossible to come up with a moral dilemma so implausibly simplified that nobody can possibly find a way to nitpick it. :P (Though that one was just sloppiness on my part, I admit.)
0Relsqui
Or untie her and then shoot her? ;)
0Relsqui
Well, if you were a superintelligence from another galaxy who was nearly omniscient, what would YOU do with it?
[-]ata120

Make paperclips, of course.

0William
Fondly regard creation.
2torekp
Problem #6: the situations are almost invariably underspecified. (Problem 2 is a special case of this.) The moral judgments elicited depend on features that are not explicit, about which the reader can only make assumptions. Such as, how did the five people get on the tracks? Kidnapped and tied up by Dick Dastardly? Do they work for the railroad (and might they then also be responsible for the maintenance of the trolley)? And so on. When a researcher uses contrived problems to test people's moral intuitions, it would help to include a free-form question inviting the respondent to say what other information they need to form a moral judgment. That way, the next time the "trolley problem" is trotted out, the researchers will be in a better position to understand which features make a difference to the moral verdicts. ETA: didn't see MatthewW's similar point until after I replied.

Yes, trolley problems are simple and reality is complex. We know this. The point of a trolley problem is not to provide a complete model for decision making in general, but to extract single data points about our preferences. Imperfect information must be dealt with using probability distributions and expected utility; secondary effects must be included in our expected utility calculations. Indecision is irrelevant, in the same sense that someone facing Omega's problem with a time limit might end up with zero boxes. And of course, we do have to spen... (read more)
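A minimal sketch of what folding imperfect information and secondary effects into an expected-utility calculation might look like; every probability and utility number below is an invented illustration, not something taken from the comment or the original post:

```python
# Illustrative only: each outcome is (probability, direct utility, secondary-effect utility).
# The probabilities, the utility scale, and the crude "secondary effect" penalty are
# all assumptions made up for this sketch.

ACTIONS = {
    "push the fat man": [
        (0.7, -1.0, -0.5),  # push works: one death, plus some erosion of trust in bystanders
        (0.3, -6.0, -0.5),  # push fails: six deaths, same secondary effect
    ],
    "do nothing": [
        (1.0, -5.0, 0.0),   # trolley hits the five
    ],
}

def expected_utility(outcomes):
    """Probability-weighted sum of direct utility plus the secondary-effect term."""
    return sum(p * (direct + secondary) for p, direct, secondary in outcomes)

for action, outcomes in ACTIONS.items():
    print(f"{action}: EU = {expected_utility(outcomes):.2f}")
```

On these numbers pushing comes out at -3.00 against -5.00 for doing nothing, but the only point is the structure: uncertainty enters as a distribution over outcomes and secondary effects enter as extra utility terms, which is what the comment is gesturing at.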

2fche
I think the point is that such a mind-killing problem doesn't answer anything. A reasonable person may say "both puzzle options are awful, I don't want to play", but that doesn't mean that the same person can't give a moral argument for action or inaction in a more realistic situation.
[-]taw30

Here's a better, shorter version of your post:

  • Trolley problems make a lot of sense in deontological ethics, to test supposedly universal moral rules in extreme situations.
  • Trolley problems do not make much sense in consequentialist ethics, as optimal action for a consequentialist can differ drastically between messy complicated real world and idealized world of thought experiments.

Also, politics is the mind-killer; don't use examples like that if you can help it.

No vote; post was at +2 and that seems appropriate to me.

Trolley problems have four weaknesses: true.

It's bad that trolley problems have weaknesses: sure, but you didn't propose an alternative way of forcing people to reason about thorny moral problems. Criticizing trolley problems without proposing an alternative is like criticizing liberal democracy without proposing an alternative -- easy, valid, and pointless.

Trolley thinking seeps into politics: highly unlikely; most people-thoughts about politics are had by people who remember nothing from any philos... (read more)

2[anonymous]
With that comment, I have to ask what you thought about the Less Wrong post about how critics do matter, even when they don't always have alternatives.
2Mass_Driver
You mean you missed it?
2Relsqui
Got a citation, or just a snarky opinion?
0wnoise
Most people don't have philosophy classes.
0Mass_Driver
Just a snarky opinion. But if you disagree, stop 10 people on any street that isn't in a college town and politely ask them if they know what a trolley problem is. Bet you 2 or fewer say "yes." Alternatively, ask them if they know of a way of pushing people to give an answer to a precise ethical question without dodging the dilemma. Bet you 1 or fewer can name such a method.
1Paul Crowley
I found this comment really useful, thanks!

I think that trolley problems contain perfect information about outcomes in advance of them happening, ...

True.

... ignore secondary effects, ...

Depends on what you mean. The problem is stated with a simple question: do you push the lever / fat man? You are not instructed to ignore whatever effects it may have. A trolley problem may be stated with some additional remark like "nobody will ever learn about your choice", which can implicitly suggest ignoring some possible real-world effects, but that isn't inherently present in every trolley ... (read more)

If you can come up with better hypotheticals, please do present one. The trolley problem is most likely used so often because there are no good alternatives.

Also, you seem to have missed the point of the trolley problem as it has been presented. The main point in thinking about it is that you notice that a) you have ethical intuitions, and b) they can flip to the polar opposite if the starting position is altered even a bit. That's just healthy experimentation that's supposed to provoke thoughts. I can't see how changing this example would prevent people from justifying their thoughts in politics by misguided simplifications.

An obvious third alternative to the trolley problem: If you yourself are fat enough to stop the trolley and save the 5 people, then you jump on the tracks yourself; you don't push the other guy.

But if you're not fat enough, then yes, of course you push the fat guy onto the tracks, without hesitation. And you plead guilty to his murder. And you go to jail. One person dying and one person going to jail is preferable to 5 people dying. Or, if you're so afraid of going to jail that you would rather die, then you can also jump onto the tracks after pushing the f... (read more)

6Vladimir_Nesov
To yourself, your own life could be significantly more important than the lives of others, so the tradeoff is different: you can't easily argue that the moral worth of 5 people is greater than the moral worth of your own life, whereas when you consider 6 arbitrary people, it does obviously follow that the (expected) moral worth of 5 people is greater than the moral worth of 1.
1Richard_Kennaway
If that's just a matter of personal utility functions then the problem vanishes into tautology. You act in accordance with whatever that function says; or in everyday language, you do whatever you want -- you will anyway. If not: You can very easily argue that, and most moral systems do, including utilitarianism. (Eg. greater love hath no man etc.) You may personally not like the conclusion that you should jump in front of the trolley, but that doesn't change the calculation, it just means you're not doing the shutting up part. Or as it is written, "You gotta do what you gotta do".
3Vladimir_Nesov
Not at all necessarily, as you can also err. It's not easy to figure out what you want. Hence difficult to argue, as compared to 5>1 arbitrary people. But not as easily, and I don't even agree it's a correct conclusion, while for 5>1 arbitrary people I'd guess almost everyone agrees.
-2Richard_Kennaway
That's just the part where one fails to shut up while calculating. No version of moral utilitarianism that I've seen distinguishes the agent making the decision from any other. If the calculation obliges you to push the fat man, it equally obliges you to jump if you're the fat man. If a surgeon should harvest a healthy person's organs to save five other people, the healthy person should volunteer for the sacrifice. Religions typically enjoin self-sacrifice. The only prominent systems I can think of that take the opposite line are Objectivism and Nietzsche's writings, and nobody has boomed them here.
5Vladimir_Nesov
Just from knowing that A and B are some randomly chosen fruits, I can't make a value judgment and declare that I prefer A to B, because states of knowledge are identical. But if it's known that A=apple and B=kiwi, then I can well prefer A to B. Likewise, it's not possible to have a preference between two people based on identical states of knowledge about them, but it's possible to do so if we know more. People generally prefer themselves to relatives and friends to similar strangers to distant strangers.
-2Richard_Kennaway
The trolley problem isn't about what people generally prefer, but about exploring moral principles and intuitions with a thought experiment. Moral systems, with the exceptions I noted, generally do not prefer the agent. Beyond the agent, they usually do prefer close people to distant ones (Christianity being an exception here), but in the trolley problem, all of the people are distant to the agent.
2Vladimir_Nesov
What's that about, if not what people prefer?
0Richard_Kennaway
I already pointed out that most moral principles do not specially favour the agent, while most people's preferences do. Nobody wants to be the one who dies that others may live, yet some people have made that decision. Whatever moral principles and intuitions are, therefore, they are something different from "what people prefer". But I am fairly sure you know all this already, and I am at a loss to see where you are going with this.
2Vladimir_Nesov
Was that a good decision (not a rhetorical question)? Who judges? I understand that the aggregated preference of humanity has a neutral point of view, and so in any given situation prefers the lives of 5 given normal people to the life of 1 given normal person. But is there any good reason to be interested in this valuation in making your own decisions? Note that having a preference for your own life over the lives of others could still lead to decisions similar to those you'd expect from a neutral-point-of-view preference. Through logical correlation of decisions made by different people, your decision to follow a given principle makes other people follow it in similar situations, which might benefit you enough for the causal effect of (say) losing your own life to be outweighed by the acausal effect of having your life saved counterfactually. This would be exactly the case where one personally prefers to die so that others may live (so that others could've died so that you could've lived). It's not all about preference: even perfectly selfish agents would choose to self-sacrifice, given some assumptions.
NihilCredo
Acausal relationships between human agents are astronomically overestimated on LW.
Vladimir_Nesov
That was a normative note, not a descriptive one. If all people acted according to a better decision theory, their actions would (presumably - I still don't have a good understanding of this) look like they had a neutral point of view, despite their preferences remaining self-centered. Of course, if most people act as they actually do, then any given person won't have enough acausal control over others.
NihilCredo
Fair enough. The only small note I'd like to add is that the phrase "if all people acted according to a [sufficiently] better decision theory" does not seem to quite convey how distant from reality - or just realism - such a proposition is. It's less in the ballpark of "if everyone had IQ 230" than in that of "if everyone uploaded and then took the time to thoroughly grok and rewrite their own code".
Vladimir_Nesov
I don't think that's true, as people can be as simple (in given situations) as they wish to be, thus allowing others to model them, if that's desirable. If you are precommitted to choosing option A no matter what, it doesn't matter that you have a brain with a hundred billion neurons; you can be modeled as easily as a constant answer.
NihilCredo
You cannot precommit "no matter what" in real life. If you are an agent at all - if your variable appears in the problem - that means you can renege on your precommitment, even if it means a terrible punishment. (But usually the punishment stays on the same order of magnitude as the importance of the choice, allowing the choice to be non-obvious - possibly the rulemaker's tribute to human scope insensitivity. Not that this condition is even that necessary since people also fail to realise the most predictable and immediate consequences of their actions on a regular basis. "X sounded like a good idea at the time", even if X is carjacking a bulldozer.)
Vladimir_Nesov
This is not a problem of IQ.
nerzhin
This implies that if you are designing an AI that is expected to encounter trolley-like problems, it should precommit to eating lots of ice cream.
PeerInfinity
Ah, but what about a scenario where the only way to save the 5 people is to sacrifice the life of someone who is thin enough to fit through a small opening? Eating ice cream would be a bad idea in that case.
shokwave
All this shows is that it's possible to construct two thought experiments which require precommitment to mutually exclusive courses of action in order to succeed. Knowing of only one, you would precommit to the correct course of action; but knowing both, what are your options? Reject the concept of a correct moral answer, reject the concept of thought experiments, reject one of the two thought experiments, or reject one of the premises of either thought experiment? I think I would reject a premise: that the course of action offered is the one and only way to help. Either that, or bite the bullet and accept that there are actual situations in which a moral system will condemn all options - almost the beginnings of a proof of the incompleteness of moral theory. Of course, it doesn't show that all possible moral theories are incomplete, just that any theory which founders on a trolley problem is potentially incomplete - but then, something tells me that given a moral theory, it wouldn't be hard to describe a trolley problem that is unsolvable in that theory.

Well, if the point of trolley problems is to gain some insight into how we form moral judgments, I suspect they don't do even that particularly well, since many respondents are going to give what they think is the approved answer, which is possibly different from what they would actually do. At best they provide insight as to why we might think of certain actions as moral or immoral.

But I sometimes see things like trolley problems used to argue that there is something wrong with people's decision...

Your "ignore secondary effects" claim is weak - trolley-type situations would happen very rarely and there'd be no point in responding.

It's bad but necessary to get to the fundamental moral issue.

I don't think it seeps into politics. Similarity does not imply causation.

Couple of typos:

  1. It the global secondary effects that local choices create.

Your sentence no verb.

We've got to take this rich fat cat and give it to these poor people

Give the cat to the poor people?

lionhearted (Sebastian Marshall)
Cheers. I fixed the first one, but missed the second one. Both fixed now.
Relsqui
Shame. I wouldn't mind getting a cat. :)

I also think that you've misunderstood the significance of the trolley problem. As it happens, I was already intending to write a new post on the trolley problem when I came across this looking for related articles, so here it is, and I hope it explains why I find this post to be a worrying type of response to the dilemma.

Ceteris paribus, I cannot imagine how many people stupid enough to be sitting about on a train track (probably violating property rights, for a start) it would take before saving them becomes worth the sacrifice of the average bystander.

jimrandomh
Least convenient possible world: replace "sitting on a train track" with "tied to a train track by a villain".
oliverbeatson
This happened to me just last week. In response to the downvote: Hmm, I wonder what fraction of people on railway tracks are there because they are reckless and what fraction are victims who are not generally at fault? I assumed my 'ceteris paribus' covered this sufficiently but perhaps villainous train-plots are more the norm than I thought. Given this, subsidising the risk of recklessly hanging around train tracks by having a policy of sacrificing innocent bystanders to stop trains will only prevent the emergence of a mechanism whereby people don't end up on train tracks, which is fairly surely, on net, suboptimal to the cause of not having people die in train accidents. Alternatively, I may have missed the point. This appeals to me as a possibility.

I voted up. Post makes good sense to me.

You make the claim that trolley problems ignore human nature and are thus conducive to sloppy thinking. It is claimed that people who know of trolley problems are more likely to be "anarchists and libertarians" and less likely to accept tyranny. Ignoring the fact that this is orthogonal to your point, I would endorse the stronger claim that people who know of trolley problems are in general better thinkers.

On the other hand, people who know of trolley problems have probably taken a university philosophy class and are thus in a totally different de...

[anonymous]

I think that trolley problems contain perfect information about outcomes in advance of them happening, ignore secondary effects, ignore human nature, and give artificially false constraints. Do you agree with that part?

"false constratins" carries a negative connotation. Here you setup your emotional argument later. This site is about rationality and you made a mistake here. Even if it is understandable and common human thinking it is that which is "sloppy thinking". Otherwise this premise is more or less correct.

Now, I think that's

...
[anonymous]

Tangent - is there any way, as on Reddit, to see how many upvotes and downvotes this has gotten? I see it yo-yoing back and forth; people don't seem neutral about this one. I'd be curious to see the exact numbers.

PlaidX

Northern Europe is really nice, though...