It appears that we live in a world where not only will most people refuse complicity in a disaster in order to save more lives, but where many people reject outright the idea that they should have any considered set of moral standards for making hard choices at all.
Er, you're attaching too much value to hypothetical philosophical questions.
I'd have thought it obvious that they're dodging the question so as to avoid the possibility of the answer being taken out of context and used against them. Lose-lose counterfactuals are usually used for entrapment. This is a common form of hazing among schoolchildren and a common tactic used against politicians, after all, so it's a non-zero possibility in the real world. It's the one real-world purpose contrived questions are usually applied to.
tl;dr: you have not given them sufficient reason to care about contrived trolley problems.
Er, you're overestimating how much value the other person attaches to hypothetical philosophical questions.
FTFY
Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms. They will insist that there must be superior alternatives, that external circumstances will absolve them from having to make a choice, or simply that they have no responsibility to address an artificial moral dilemma.
Of course we do. It would be crazy to answer such a question in a social setting if there is any possibility of avoiding it. Social adversaries will take your answer out of context and spin it to make you look bad. Honesty is not the best policy and answering such questions is nearly universally an irrational decision. Even when the questions are answered the responses should not be considered to have a significant correlation to actual behaviour.
I think I have a more plausible suggestion than the "spin it to make you look bad" explanation.
Think evolutionarily.
It absolutely sucks to be a psycho serial killer in public, if you are into making friends and acquaintances and likely to be a grandpa.
It sucks less to show that you would kill someone, especially if you would be the direct agent of the death.
It sucks less to show that you would only kill someone by omission, but not by action.
It sucks less if you show that your brain is so well tuned not to kill people, that you (truly) react disgusted even to conceive of doing it.
This is the woman I want to have a child with: the one who is not willing to say she would kill under any circumstances.
Now, you may say that in every case, I simply ignored what would happen to the five other people (the skinny ones). To which I say that your brain processes both pieces of information separately, "me killing fat guy" and "people being saved by my action", and you only need the one half to trigger all the emotions of "no way I'd kill that fat guy".
Is this a nice evolutionary story that explains a fact with hindsight? Oh yes indeed.
But what really matters is that you compare this theory with the "distortion" theory that many comments suggested. Admit it: only people who enjoy chatting rationally on a blog think it so important that their arguments will be distorted. Common folks just feel bad about killing fat guys.
then they would be better off simply providing an answer calibrated to please whoever they most desired to avoid disapproval from
No they wouldn't. Ambiguity is their ally. Both answers elicit negative responses, and they can avoid that from most people by not saying anything, so why shouldn't they shut up?
EDIT: In case it's not clear, I consider this tactic borderline Dark Arts (please note who originally said that ambiguity-ally line in HPMOR!), a purely political weapon with no role in conversations trying to be rational. I wouldn't criticize its use as a defense against some political nitwit who's trying to hurt you in front of an inexperienced audience; I would be unhappy with first-use of it as a primary political strategy.
I'd be interested in a trolley version of the Asch conformity experiment: line up a bunch of confederates and have them each give an answer, one way or another, and act respectfully to each other. Then see how the dodge rate of the real participant changes.
Then you could set it up so that one confederate tries to dodge, but is talked out of it. Etc.
Giving either response can be harmful if you are trying to avoid the disapproval of someone who fails at conservation of expected evidence. (This failure could happen even to us rationalists who are aware of the possibility, by simply not thinking about how we would interpret the alternative response we did not observe, especially if our interpretation is influenced by a clever arguer who wants us to disapprove.)
If attempting to avoid the question will also elicit a negative response, and the person really only wants to optimize their social standing, then they would be better off simply providing an answer calibrated to please whoever they most desired to avoid disapproval from.
You appear to be saying "but they could give a perfect zinger of an answer!" Yes, they could. But refusing the question - "Homey don't play that" - is quite a sensible answer in most practical circumstances, and may discourage people from continuing to try to entrap them, which may be better than answering with a perfect zinger.
If attempting to avoid the question will also elicit a negative response, and the person really only wants to optimize their social standing, then they would be better off simply providing an answer calibrated to please whoever they most desired to avoid disapproval from.
It may be easier in the short term but in the future it will come back to haunt you with sufficient probability for it to dominate your decision making. Never answer moral questions honestly, lie (to yourself first, of course). If there is no good answer to give the questioner then avoid the question. If possible, make up the question you wish they asked and answer that one instead. Don't get trapped in a hostile frame of interrogation.
Mere signaling fails to account for many of these cases.
When it comes to morality there is nothing 'mere' about signalling. Signalling accounts for all of these cases.
Rather than choosing to flip the switch, push the fat man, or remain passive, many people will reject the question outright.
Counterfactual resistance is pretty common with all thought experiments; indeed, it is the bane of undergraduate philosophy professors everywhere. We have no evidence that resistance is more common in ethical thought experiments, or in the trolley problem particularly, than in thought experiments in other subfields: brain-in-vat hypotheticals, brain-transplant/hemisphere-transplant cases, teleportation, Frankfurt cases, etc. Which is to say most of this post is in need of citations. Maybe people just don't like convoluted thought experiments! I'm not even sure it's the case that many people do refuse to answer the question- how many instances could you possibly be basing this judgment on?
Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms.
How do you know this? I'm not demanding p-values but you haven't given us a lot to go on.
The purpose of thought experiments and other forms of simulation is to teach us to do better in real life. Obviously, no simulation can be perfectly faithful to real life. But if a given simulation is not merely imperfect but actively misleading, such that training in the simulation will make your real performance worse, then rejecting the simulation is a perfectly rational thing to do.
In real life, if you think the greater good requires you to do evil, you are probably wrong. Therefore, given a thought experiment in which the greater good really does require you to do evil, rejecting the thought experiment on the grounds that it is worse than useless for training purposes is a correct answer.
The purpose of thought experiments and other forms of simulation is to teach us to do better in real life.
Not at all. That's way too broad a claim and definitely not the case for the trolley problem. The purpose of the trolley problem is to isolate and identify people's moral intuitions.
I've used the trolley problem a lot, at first to show off my knowledge of moral philosophy, but later, when I realized anyone who knows any philosophy has already heard it, to shock friends that think they have a perfect and internally consistent moral system worked out. But I add a twist, which I stole from an episode of Radiolab (which got it from the last episode of MASH), that I think makes it a lot more effective; say you're the mother of a baby in a village in Vietnam, and you're hiding with the rest of the village from the Viet Cong. Your baby starts to cry, and you know if it does they'll find you and kill the whole village. But, you could smother the baby (your baby!) and save everyone else. The size of the village can be adjusted up or down to hammer in the point. Crucially, I lie at first and say this is an actual historical event that really happened.
I usually save this one for people who smugly answer both trolley questions with "they're the same, of course I'd kill one to save 5 in each case", but it's also remarkably effective at dispelling objections of implausibility and rejection of the experiment. I'm not sure why this works so well, but I think our bias...
This is only equivalent to a trolley problem if you specify that the baby (but no one else) would be spared, should the Viet Cong find you. Otherwise, the baby is going to die anyway, unlike the lone person on the second trolley track who may live if you don't flip the switch.
I immediately thought, "Kill the baby." No hesitation.
I happen to agree with you on morality being fuzzy and inconsistent. I'm definitely not a utilitarian. I don't approve of policies of torture, for example. It's just that the village obviously matters more than a goddamn baby. The trolley problem, being more abstract, is more confusing to me.
"Remember, you can't be wrong unless you take a position. Don't fall into that trap." - Scott Adams
An implicit assertion underlying this post seems to be that the sorts of people who answer trolley problems rather than dodge them are more likely to take action effectively in situations that require doing harm in order to minimize harm.
Or am I misunderstanding you?
If you are implying that: why do you believe that?
they haven't internalized the idea that the world is inconvenient enough to call for a systematic way of dealing with problems that lack ideal solutions.
Perhaps they have had bad experience with "a systematic way of dealing with problems that lack ideal solutions."
Hard cases make bad law is a well known legal adage. There is, I think, some wisdom exhibited in resisting systematizers armed with trolley problems.
I get frustrated by this every time someone mentions the classic short story The Cold Equations (full text here). The premise of the story is a classic trolley problem (...In Space!), where a small spaceship carrying much-needed medical supplies gets a stowaway, which throws off its mass calculations. If the stowaway is not ejected into space, the ship will crash and the people on the planet will die of a plague. So the (innocent, lovable) stowaway is killed and ejected, and the day is saved. The end.
Whenever this comes up, somebody will attack the story a...
Morality is in some ways a harder problem than friendly AI. On the plus side, humans that don't control nuclear weapons aren't that powerful. On the minus side, morality has to run at the level of 7 billion single instances of a person who may have bad information.
So it needs to have heuristics that are robust against incomplete information. There's definitely an evolutionary just-so story about the penalty of publicly committing to a risky action. But even without the evolutionary social risk, there is a moral risk to permitting an interventionist murd...
Having posted lots in this thread about excellent reasons not to answer the question, I shall now pretend to be one of the students that frustrates Desrtopa so and answer. Thus cutting myself off from becoming Prime Minister, but oh well.
The key to the problem is: I don't actually know or care about any of these people. So the question is answered in terms of the consequences (legal and social) to me, not to them.
e.g. in real life, action with a negative consequence tends to attract greater penalties than lack of action. So pushing one in front to save fiv...
As an ethicist who routinely rejects trolley problems, I feel I must respond to this.
The trolley problem was first formulated by Philippa Foot as a parody of the ridiculous ethical thought experiments developed by philosophers of the time. Its purpose was to cause the reader to observe that the thought experiment is a contrived scenario that will never occur (apparently, it serves that purpose in most untrained folks), and thus serves as an indictment of how divorced reasoning about ethics in philosophy had become from the real world of ethical decision-m...
I think you are overly generalizing against people who don't like or don't understand philosophy.
even when it is posed in its most inconvenient possible forms, where they have the time to collect themselves and make a reasoned choice, but no possibility of implementing alternative solutions.
I am a conscientious "third-alternativer" on trolley problems, and to me this seems like an abuse of the least convenient possible world principle. If there is a world with no possibility of implementing alternative solutions, I will pick the outcome with the best consequences, but I don't believe there actually is a world with no possibility of altern...
I am an atheist, and I have no problems in answering questions of the type "if creationism were true, would you support its teaching in schools" or "if the Christian God exists, would you pray every day" (both answers are yes, if that matters). What's the problem with those hypotheticals? The questions are well formed, and although they are useless in the sense that their premise is almost certainly false, the answers can still reveal something about my psychology. I don't think answering such questions would turn me into a creationist.
I'd honestly find the far more plausible answer to be that people just have trouble with truly direct, unambiguous communication. My own experience is that either I'm very bad at such communication, or else other people are very bad at receiving it. When I ask extremely specific questions, people will usually assume a more generalized motive to asking it, and try to answer THAT question. I've had conversations with very smart people who kept re-interpreting my questions because they assumed I was trying to make a specific point or disprove some specific de...
Kind of late to get back to this, but
The Trolley scenario is a strong binary decision with perfect information and absolutely no creative thinking or alternate solution possible. Do you really think that comes up frequently in real life? If not, why not use an exercise that accommodates and praises creative solutions instead of rejecting them as being outside the binary scope of the exercise?
Real life trolleylike dilemmas are generally ones where creative thinking has already been done, but has not turned up any solutions without serious downsides. In such cases, deferring the decision for a perfect solution, when enough time has been dedicated to creative thinking that more is unlikely to deliver a new, better solution, is itself a failure condition.
The top 10% of humanity accumulates 30% of the world's wealth. 20% of humanity dies from preventable, premature death (and suffers horribly).
The proposition...
...10% of the top 10% have all their wealth taken from them (lottery selection process). They are forced to work as hard and effectively as they had previously, and are given only enough of the profits they produce to live modestly. They lose everything, work for 5 years, and receive 10% of their original wealth back. The next 10% of the top 10% is then selected. The wealth taken is used to ensure the
It appears that we live in a world where not only will most people refuse complicity in a disaster in order to save more lives, but where many people reject outright the idea that they should have any considered set of moral standards for making hard choices at all.
We live in a world where most people refuse complicity in a disaster in order to "maintain a certain quality of life even though it costs many lives".
Perhaps this is the reason for opting out of answering the question: acting is just too hard. The decision and its consequences are for s...
If you're a consequentialist, trolley problems are entirely irrelevant.
I think there have been posts about this before. Well, this and the "if it's not my responsibility, it's not my problem" mindset, which the trolley problem also touches on.
It dawns on me that there is a much more general tendency among most people to try to bail out of moral dilemmas or other hypotheticals. In my personal experience, sometimes I wish it were socially accepted to shout "Stop making up alternate courses of action in my thought experiments!" but alas we all have to deal with the single inference step.
(Is there a generalization of that "take a third option" tendency on dilemmas and hypothetical situations?)
Excellent post. Seems to me that your points about how people react to moral problems apply to decision problems as well.
The trolley problem is one of the more famous thought experiments in moral philosophy, and studies by psychologists and anthropologists suggest that the response distributions to its major permutations remain roughly the same throughout all human cultures. Most people will permit pulling the lever to redirect the trolley so that it will kill one person rather than five, but will balk at pushing one fat person in front of the trolley to save the five if that is the only available option of stopping it.
However, in informal settings, where the dilemma is posed by a peer rather than a teacher or researcher, it has been my observation that there is another major category which accounts for a significant proportion of respondents' answers. Rather than choosing to flip the switch, push the fat man, or remain passive, many people will reject the question outright. They will attack the improbability of the premise, attempt to invent third options, appeal to their emotional state in the provided scenario ("I would be too panicked to do anything"), or some combination of the above, in order to opt out of answering the question on its own terms.
However, in most cases, these excuses are not their true rejection. Those who tried to find third options or appeal to their emotional state will continue to reject the dilemma even when it is posed in its most inconvenient possible forms, where they have the time to collect themselves and make a reasoned choice, but no possibility of implementing alternative solutions.
Those who appealed to the unlikelihood of the scenario might appear to have the stronger objection; after all, the trolley dilemma is extremely improbable, and more inconvenient permutations of the problem might appear even less probable. However, trolleylike dilemmas are actually quite common in real life, when you take the scenario not as a case where only two options are available, but as a metaphor for any situation where all the available choices have negative repercussions, and attempting to optimize the outcome demands increased complicity in the dilemma. This method of framing the problem also tends not to cause people to reverse their rejections.
Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms. They will insist that there must be superior alternatives, that external circumstances will absolve them from having to make a choice, or simply that they have no responsibility to address an artificial moral dilemma.
When the respondents feel that they can possibly opt out of answering the question, the implications of the trolley problem become even more unnerving than the results from past studies suggest. It appears that we live in a world where not only will most people refuse complicity in a disaster in order to save more lives, but where many people reject outright the idea that they should have any considered set of moral standards for making hard choices at all. They have placed themselves in a reality too accommodating of their preferences to force them to have a system for dealing with situations with no ideal outcomes.