I think this illustrates a major limitation of the near/far concept: it's totally general, and you just revised your beliefs about what constitutes near and what constitutes far because your judgement changed. In other words, it's impossible to say that near and far don't matter, because when you find a difference, you then decide that you're using different faculties. Without a reasonably clear framework for what constitutes near and what constitutes far, there's a huge tendency to use it to describe whatever you happen to feel like, and then arbitrarily revise it if your feelings change, i.e. "I thought shoving him was near, but, since it would have a greater effect on my behaviour, saying I would shove him when he's sitting in the room is actually near."
You are really going to take a concept worked out in dozens of academic papers and declare it meaningless because you have trouble figuring out how to apply it in one particular context?
"A major limitation" != "meaningless"
The near-far view is interesting and useful, but it seems very susceptible to just-so stories, and it doesn't appear to have a great deal of consistent predictive power. I've seen few cases where someone said, "OK, we have X problem. The near view would suggest Y, the far view would suggest Z, let's look at how people actually think." Rather, you get "People think both Y and Z about X. Therefore, people who think Y must use the near view, and people who think Z must use the far view." This is problematic, but it doesn't make the system meaningless.
Also, the issue isn't that I have trouble applying it in one particular context, but, rather, that there appears to be a problem in formalizing how it should be applied. Now, perhaps there is a clear methodology that would make near/far falsifiable, but I've certainly never seen it used in your writings or anyone else's.
I also find it interesting that the existence of academic papers on the subject is the gold standard of evidence you apply, particularly because I am not contesting facts, but rather arguing that a particular explanatory framework has less practical use than its proponents seem to think. There were dozens of academic papers on many now-discredited explanatory frameworks (phrenology? Freud? Pre-20th-century racial theories?). This is, indeed, a major problem with explanatory frameworks: they're general enough that they often stick around long after they should have died, like most of what Freud wrote. Citing academic papers to support an ideological framework is simply nothing like citing them to support empirical facts, particularly in an area as fuzzy as estimating the thought processes of large numbers of people. I find it telling that your defense is not, say, an efficient summary of the idea or its usefulness or evidence supporting its predictive value, but rather a vague appeal to authority.
No, I wasn't declaring it meaningless.
My (perhaps trivial) points were that all hypothetical thought experiments are necessarily conducted in Far mode, even when the thought experiment is about simulating Near modes of thinking. Does that undermine it a little?
And
I was illustrating that with what I hoped was an amusing anecdote -- the bizarre experience I had last week of having the trolley problem discussed with the fat man actually personified and present in the room, sitting next to me, and how that nudged the thought experiment into something just slightly closer to a real experiment.
It's easy to talk about sacrificing one person's life to save five others, but hurting his feelings by appearing to be rude or unkind, in order to get to a logical truth, was harder. This is somewhat relevant to the subject of the talk - decisions may be made emotionally and then rationalised afterwards.
Look, I wasn't hoping to provoke one of Eliezer's 'clicks', just to raise a weekend smile and to discuss a scenario where lesswrong readers had no cached thought to fall back on.
I thought most people chose not to push the fat man because there is no conceivably realistic way that a fat man could stop a train, even one as small as a trolley. Although the thought experiment tells us the fat man will stop the train, our knowledge of trains tells us that nothing stops trains. When I envision this scenario, I can't help but (realistically) imagine the trolley hitting the fat man, then continuing on and running over the five others.
See also: Ends Don't Justify Means (Among Humans)
Yes. People get bogged down with the practical difficulties. Another common one is whether you have the strength to throw the stranger off the bridge (might he resist your assault and even throw you off?).
I think the problem is the phrasing of the question. People ask 'would you push the fat man', but they should ask 'SHOULD you push the fat man'. A thought experiment is like an opinion poll: the phrasing of the question has a large impact on the answers given. Another reason to be suspicious of them.
Okay, this is getting annoying. I've mostly ignored "near vs. far" topics because I don't know what the metaphorical meaning of the two is. Then, when I went to the LW wiki to be enlightened so I can understand these topics, what do I get?
NEAR: All of these bring each other more to mind: here, now, me, us; trend-deviating likely real local events; concrete, context-dependent, unstructured, detailed, goal-irrelevant incidental features; feasible safe acts; secondary local concerns; socially close folks with unstable traits. FAR: Conversely, all these bring each other more to mind: there, then, them; trend-following unlikely hypothetical global events; abstract, schematic, context-freer, core, coarse, goal-related features; desirable risk-taking acts, central global symbolic concerns, confident predictions, polarized evaluations, socially distant people with stable traits.
So ... one of them goes with a bunch of terms that have some vague relationship to each other, and the other one ... um, does the same with different terms. Was that supposed to somehow be helpful?
Anyway, I don't know how to translate this into near and far, but here's my answer to the trolley problem:
Workers on the track consented to the risks associated with being on a trolley track, such as errant trolleys. (This does NOT mean they deserved to die, of course.) Someone standing above the track on a bridge only consented to the risks associated with being on a bridge above a trolley track, NOT to the risk that someone would draft him for sacrificial lamb duty on a moment's notice.
By intervening to push someone onto the track, you suddenly and unpredictably shift around the causal structure associated with danger in the world, on top of saving a few lives. Now, people have to worry about more heroes drafting sacrificial lambs "like that one guy did a few months ago" and have to go to greater lengths to get the same level of risk.
In other words, all the "prediction difficulty" costs associated with randomly changing the "rules of the game" apply. Just as it's costly to make people keep updating their knowledge of what's okay and what isn't, it's costly to make people update their knowledge of what's risky and what isn't (and to less efficient regimes, no less).
That is what differentiates pushing a fat guy off, from diverting one track to another. I don't pretend that that is what most people are thinking when they encounter the problem, but the "unusualness" of pushing someone off a bridge is certainly affecting their intuition, and so concerns about stability probably play a role. And of course, you have to factor in the fact that most people are responding on the fly, while the creator of the dilemma had all the time in the world to trip up people's intuitions.
This is not to say there aren't real moral dilemmas with the intended tradeoff. It's just that, like with the Prisoner's Dilemma, you need a more convoluted scenario to get the payoff matrix to work out as intended, at which point the situation is a lot less intuitive.
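As an illustrative aside (my own sketch, not anything from the thread): the Prisoner's Dilemma comparison above can be made concrete. A 2x2 game only has the PD structure when its payoffs satisfy specific inequalities, and the hard part of constructing a real-world dilemma "as intended" is getting those inequalities to actually hold. The payoff numbers below are the standard textbook ones, not anything from this discussion:

```python
# A minimal check for the Prisoner's Dilemma payoff structure.
# T = temptation (defect against a cooperator), R = reward (mutual
# cooperation), P = punishment (mutual defection), S = sucker's payoff.
def is_prisoners_dilemma(T, R, P, S):
    # Classic conditions: T > R > P > S, plus 2R > T + S so that
    # mutual cooperation beats taking turns exploiting each other.
    return T > R > P > S and 2 * R > T + S

# Textbook payoffs satisfy the structure...
print(is_prisoners_dilemma(T=5, R=3, P=1, S=0))  # True
# ...but nudge one payoff and the dilemma quietly disappears.
print(is_prisoners_dilemma(T=5, R=3, P=1, S=2))  # False
```

The point being: a scenario that merely gestures at the tradeoff can easily fail these conditions once you pin the payoffs down, which is Silas's complaint about the bridge version of the trolley problem.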
Workers on the track consented to the risks associated with being on a trolley track, such as errant trolleys. (This does NOT mean they deserved to die, of course.) Someone standing above the track on a bridge only consented to the risks associated with being on a bridge above a trolley track, NOT to the risk that someone would draft him for sacrificial lamb duty on a moment's notice.
That's missing the point of the dilemma. You can assume that they're not workers and that they didn't consent to any risks. This problem isn't about assumption of risk, it's about how people perceive their actions as directly causing death, or not.
That's missing the point of the dilemma. You can assume that they're not workers and that they didn't consent to any risks.
Like JGW said: workers or not, they assumed the risks inherent in being on top of a trolley track. The dude on the bridge didn't. By choosing to be on top of a track, you are choosing to take the risks. It doesn't mean (as you seem to be reading it) that you consent to dying. It means you chose a scenario with risks like errant trolleys.
This problem isn't about assumption of risk, it's about how people perceive their actions as directly causing death, or not
Why do people talk like this? It's a bright red flag to me that, to put it politely, the discussion won't be productive.
Attention everyone: you don't get to decide what a problem is "about". You have to live with whatever logical implications follow from the problem as stated. If you want the problem to be "about" topic X, then you need to construct it so that the crucial point of dispute hinges on topic X. If you can't come up with such a scenario, you should probably reconsider the point you were trying to make about topic X.
You can certainly argue that people make their judgments about the scenario because of a golly-how-stupid cognitive bias, but you sure as heck don't get to say, "this problem is 'about' how people perceive their actions' causation, all other arguments are automatically invalid".
I presented a reason why intuitions treat the scenarios differently, and why the intuitions are correct in doing so. That reason is consistent with the problem as stated. Assumption of risk most certainly is a factor, and a justifiable one.
What if the problem was reframed such that nobody ever found out about the decision and thereby that their estimates of risk remained unchanged?
I presented a reason why intuitions treat the scenarios differently, and why the intuitions are correct in doing so. That reason is consistent with the problem as stated. Assumption of risk most certainly is a factor, and a justifiable one.
It is certainly possible that there is some underlying utilitarian rationale being used. Reframing the problem like I suggest above might provide something of a test of the reason you provided, if imperfect (can we really ignore intuitions on command?).
What if the problem was reframed such that nobody ever found out about the decision and thereby that their estimates of risk remained unchanged?
Then it's wildly and substantively different from moral decisions people actually make, and are wired to be prepared for making. A world in which you can divert information flows like that differs in many ways that are hard to immediately appreciate.
It is certainly possible that there is some underlying utilitarian rationale being used.
The reasoning I gave wasn't necessarily utilitarian -- it also invokes deontological "you should adhere to existing social norms about pushing people off trolleys". My point was that it still makes utilitarian sense.
Attention everyone: you don't get to decide what a problem is "about". You have to live with whatever logical implications follow from the problem as stated. If you want the problem to be "about" topic X, then you need to construct it so that the crucial point of dispute hinges on topic X. If you can't come up with such a scenario, you should probably reconsider the point you were trying to make about topic X.
No. If you know what point someone was trying to make, and you know how to change the scenario so your reason why it doesn't count no longer applies, then you should assume The Least Convenient Possible world for all the reasons given in that post.
True, and people should certainly try that, but sometimes the proponent of the dilemma is so confused that switching to the LCPW is ill-defined or intractable, since it's extremely difficult to remove one part while preserving "the sense of" the dilemma.
That's what I think was going on here.
I think you missed this part:
This is not to say there aren't real moral dilemmas with the intended tradeoff. It's just that, like with the Prisoner's Dilemma, you need a more convoluted scenario to get the payoff matrix to work out as intended, at which point the situation is a lot less intuitive.
Silas is saying that the Least Convenient World to illustrate this point requires lots of caveats, and is not as simple as the scenario presented.
You can assume that they're not workers and that they didn't consent to any risks.
This is still not inconvenient enough. They are still responsible for being on the track, whether by ignorance or acceptance of the risks.
You can assume that they're not workers and that they didn't consent to any risks.
This is still not inconvenient enough. They are still responsible for being on the track, whether by ignorance or acceptance of the risks.
I usually assume that they were kidnapped by crazed philosophers and tied to the tracks specifically for the purpose of the demonstration.
Okay, but that would be a fundamentally different problem, with different moral intuitions applying. The question becomes, "should five kidnapped people die, or one fat kidnapped person die?"
Silas, you're right: the problem was poorly stated in the lecture referenced by the original post. The trolley car problem is in fact usually written to make it clear that the five people did not assume any risk. The original intent of this kind of problem was to explore intuitions dealing with utilitarianism.
NB: "The trolley problem" does not uniquely describe a problem. While it does refer to Foot's version from 1978, it also refers to any of the class of "trolley problems", hundreds of which have appeared in published papers since then.
Much like "Gettier case" does not uniquely identify one thought experiment.
Okay, that's actually the first time I'd seen the Trolley problem involve a "mad philosopher" (or equivalent concept) having tied them to the track, and that includes my previous visits to the Wikipedia article!
And even the later expositions in the article involving a fat man don't mention people being kidnapped.
Well, I didn't edit the article! I think you're right about the assumption of risk version.
I do prefer the "mad philosopher" versions, because they make the apparently contradictory preferences very clear. That way, you're weighing 5x against x. Most people have an intuition that it would be wrong to push the fat man, yet right to change the course of the trolley, which seems strange.
I would still think that it would be bad for people to have to worry about being drafted as sacrificial lambs because other people could not avoid being kidnapped by crazed philosophers.
One of the implications of the crazed-philosopher setup, though, is that there are well-enforced laws against tying people to railroad tracks, so that should be a rare occurrence, not something that people should have to take into consideration in their day to day lives. (So should 'workers on a section of track that's not protected from trains', actually - OSHA would have something to say about it, I'm sure. I still prefer the crazed philosophers, though. They're funny.) You do have a point, but that's an issue that we as a society have already resolved in many cases.
I have a different interpretation of the LCPW here, though. The LCPW is supposed to be the one that isolates the moral quantity of interest--in this case, the decision to push or not, or to switch tracks--and is specifically designed to exclude answers that consider factors (realistic or not) that sidestep the issue.
I'd say the LCPW is one in which nobody will ever hear about the decision, and thus in which any ancillary effects are neutralized.
I don't pretend that that is what most people are thinking when they encounter the problem, but the "unusualness" of pushing someone off a bridge is certainly affecting their intuition, and so concerns about stability probably play a role.
I don't know; a lot of people talk about how he's "not involved" or "innocent" or how you shouldn't involve people who aren't already part of the problem - it's the same as the one with the guy with healthy organs and the dying transplant patients.
Obviously, replacing the lever with the fat man complicates the analysis beyond a simple payoff matrix.
There is not enough time to reliably convince yourself that your perceived consequences of pushing, and the associated payoff matrix, are accurate.
The train is too literally "near", and it is perfectly rational to default to your extremely useful, and emotionally forceful, "don't push people off bridges" heuristic.
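To make the point above concrete, here is a back-of-envelope calculation (my own, not from the comment): if you are not certain the push will actually stop the trolley, the expected-deaths arithmetic turns against pushing surprisingly quickly. The probabilities and the "everyone dies if it fails" assumption are illustrative:

```python
# Expected deaths from pushing, given probability p_stop that the
# fat man's body actually stops the trolley.
def expected_deaths_if_push(p_stop, on_track=5):
    # If it works, only the pushed man dies; if it fails, he dies
    # AND the trolley continues on to kill everyone on the track.
    return p_stop * 1 + (1 - p_stop) * (1 + on_track)

def expected_deaths_if_refrain(on_track=5):
    return on_track

# Solving 6 - 5p < 5 shows pushing only minimizes expected deaths
# once you believe the stunt works with probability p > 0.2.
for p in (0.1, 0.5, 1.0):
    push = expected_deaths_if_push(p)
    print(p, push, push < expected_deaths_if_refrain())
```

So the "don't push people off bridges" heuristic is not only emotionally forceful; it is also roughly what the arithmetic recommends whenever your split-second confidence in the physics is low.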
The interesting point about this trolley business never gets mentioned; I'll have to do it myself. It comes when you're on the bridge - YOU could be the individual thrown over. Thus you need your philosophical theory well worked out, and quickly persuasive, as the big strong man next to you wonders what he should do. An example of how philosophy is vital to life.
Ethical dilemmas don't have to involve killing: firefighters are also trained to make rational (rather than emotive) life and death decisions: it may be better to leave resuscitating the baby who is seriously injured and concentrate instead on rescuing 2 adults still caught in the wreckage. Here training has an impact on the nature of the decision making process. Indeed, I recently heard the wife of a firefighter say that she had noticed his rational mode of decision making spill over into his personal life as his training became ingrained in his psyche.
Botogol, I enjoyed the piece immensely and found that it made me reconsider my own instinctive "of course you would push the fat man" response, having done the maths. If I truly, actually, honestly imagine myself in the exact situation, with a particular fat man in front of me (not a general fat man), then I am NOT so sure I could do it, bearing in mind, as you pointed out, that, like you, I wouldn't even have the moral courage to be rude to somebody in the ordinary course of events, even when it might serve a logical purpose. It's partly the jump from the generic to the specific but perhaps that is the same as the jump from Far to Near.
BTW, MatthewB, I think the point is that the man is fat because it takes someone OTHER than yourself to stop the train - self sacrifice is ruled out as an option so the soldier has to also decide between the fat man and the 5 railworkers.
Relevant links for those of us who haven't read everything yet:
Are these sorts of ethical dilemmas ever posed to people serving in the Military?
I often wonder about this, because there seems to be no shortage, in times of real crisis, to find a soldier willing to fling himself under the trolley (without needing to hurl a fat man) if need be.
Of course, this is something that is ingrained in the soldier's behavior during training, and it, at times, doesn't take.
Then, there are times when the military becomes filled with people who are willing to throw anyone onto that railroad track, as long as the math turns out correctly (more people saved than killed). It is an officer's job to do just that (and then order a soldier to do the actual throwing).
I guess this just shows that people are capable of having their normal morality either amplified or nullified depending upon the context. In some cases, there may be no morality to speak of in some classes of soldiers.
However, you do bring up an excellent point. Immediacy of the situation/decision. It is relatively easy for people to make dispassionate and rational decisions when they are not in the heat of the moment or more removed from the actual situation. I have found that it is a rare individual who can actually make a rational decision in the heat of the moment without considerable training (and not just mental training. It takes real simulation of the act for most people to learn just how they will react in such situations).
I have found that it is a rare individual who can actually make a rational decision in the heat of the moment without considerable training
It is said in the military that under stress, you will not rise to the occasion, but sink to the level of your training.
That is a pretty well known maxim, which is why they try so hard to raise the level to which soldiers (and especially officers) are trained.
It is a very rare event to have a soldier rise to the occasion, but they do so every now and then.
A person in the audience suggested taking firefighters, who sometimes face dilemmas very like this (do I try to save life-threatened person A or seriously injured baby B?), and hooking them up to scans to see if their brains work differently - the hypothesis being that they would make decisions in dilemmas more 'rationally' and less 'emotionally', as a result of their experience and training. Or the predisposition that led to them becoming firefighters in the first place.
Of course, just like with Military Training, the Firefighters may have biases about what they consider to be rational.
For instance, most would probably save the injured baby at the expense of an uninjured adult or child. Yet, the baby has less immediate worth than the adult or small child, as these latter two are conscious and self-aware in a way that the baby is not.
Yet, almost instinctively, humans tend to go for the baby. Of course, genetics has wired us to be that way.
That's true (that they have biases) although I understand the training is attend to the nature of the injury, and practicalities of the situation - eg danger to the firefighter - rather than the age of the victim.
However, what one might expect to see in firefighters would be ethical dilemmas like the trolley problem triggering the cerebral cortex more, and the amygdala less, than in other people.
Perhaps.
Unless of course the training works by manipulating the emotional response. So firefighters are just as emotional, but their emotions have been changed by their training.
This is the sort of problem Kahane was talking about when he said it is very difficult to interpret brain scans.
It worries me that we do not have more emphasis placed upon mustering out our Armed Forces members to undo some of the training that they receive, simply because their emotional biases have been so changed that it makes it difficult for many of them to re-integrate into society.
I think that we are developing a similar problem with Police, who are used to interacting primarily with the worst parts of society, and then developing a bias that the rest of society has similar behavioral trends as that lowest common denominator they are used to seeing.
I will have to re-read the Kahane comments about interpreting brain scans...
Interesting. I doubt these dilemmas are ever actually posed short of officers studying game theory, but the pre-determined response is certainly trained into soldier-like individuals. We could use a phrase to describe all trained legal killers. Is there a less heated phrase or word than legal killers that encompasses military, police, bodyguards, etc.? Basically anyone trained and legally allowed to use lethal force (legal homicide) against another individual.
There is a good long-form journalism piece about people going through Secret Service training; it is ingrained into them over and over again that their job is to take a bullet for the person they are protecting. They are literally trained to be meat shields -- and the training works extremely well. http://www.washingtonpost.com/wp-dyn/content/article/2009/07/17/AR2009071701785.html
In some cases, there may be no morality to speak of in some classes of soldiers.
Is there a distinction between legal killers that have no morality and those that have evil morality?
Was this intended to be published? It seems unedited. Save in "Drafts" instead of "Less Wrong" to unpublish.
:-( no, not a draft! It was just supposed to be light-hearted - fun even - and to make a small point along the way.... it's a shame if lesswrong articles must be earnest and deep.
I think the thing that made it seem like a draft is the missing "I went" at the beginning of the article. I also noticed illustrate is misspelled, at a quick glance.
The opening was deliberate - it's a common way that newspaper Diarists start their entries.... but perhaps it's a common way that British newspaper diarists start their entries, and sounds wrong to american ears. So I have changed it. Nations divided by a common language etc.
My Conclusions: It seems there is Far Near and Near Near, and if you ever again find yourself with time to meta-think that you are operating in Near mode.... then you're actually in Far mode, and so I will be more suspicious of the hypothetical thought experiments from now on.
When one watches the movie series called "Saw", they will experience the "near mode" of thinking much more than in the examples given in this thread. "Saw" is about people trapped in various situations, enforced by mechanical means only (no psychotic person to beg for mercy, the same way you can't beg the train to stop), where they must choose which things to sacrifice to save a larger number of lives, sometimes including their own life. For example, the first "Saw" movie starts with two dying people trapped in an abandoned basement, with their legs chained to the wall, and the only way the first person can escape is to cut off his foot with the saw. Many times in the movie series, the group of trapped people chose whose turn it was to go into the next dangerous area to get the key to the next room. Similarly, the psychotic person who puts the people in those situations thinks he is doing it for their own good, because he chooses people who have little respect for their own life, and through the process of escaping his horrible traps some of them have a better state of mind afterwards than before. I'm not saying that would really work, but that's the main subject of the movies and is shown in many ways simultaneously. These are good examples of how to avoid "meta thinking" and really think in "near mode": watch the "Saw" movies.
I went to the Royal Institute last week to hear the laconic and dismissive Dr Guy Kahane on whether we are 'Biologically Moral'
[His message: Neurological evidence suggests - somewhat alarmingly - that our moral and ethical decisions may be no more than post-hoc rationalisations of purely emotional, instinctive reactions. However, we should not panic, because these are early days in neuroscience and the correct interpretation of brain-scans is uncertain: scientists find the pattern, and the explanation, they expect to find]
To illustrate his talk, Kahane used one of those moral dilemmas which are rarely encountered in real life but which are fascinating to philosophers: the familiar Trolley Problem.
A picture is worth a thousand words, and rather than spell it out Kahane flashed up two nice cartoons:
- the first shows the anxious philosophical protagonist at the railway junction, runaway trolley approaching, pondering the lever that moves the points
- the second shows our hapless philosopher now on a bridge poised behind an unsuspecting fat stranger who is neither alert enough for his attention to have been caught by the runaway trolley bearing down on the small party of railway workers, nor sufficiently familiar with the philosophical domain to appreciate the mortal danger that he is in himself. Irony, oh Irony: Philosophy, thy name is Drama.
So far so humdrum; but... surprise!: in the front row of the audience that evening, wedged into the seat next to me, and the seat next to that, only a couple of metres from Dr Kahane, and dressed in clothes identical to those he wore when the cartoon was made, was the fat stranger himself.
You had to feel for Dr Kahane, but with no evident embarrassment he declined to acknowledge the unexpected attendee and ploughed gamely on with an earnest discussion of the morals, ethics and practicalities of heaving the poor man off a bridge and under the wheels of an oncoming philosophical trope. I had to smile.
And the Trolley Problem is always good for a lively debate, so, inevitably, in the Q&A we came back to it all over again. I felt more uncomfortable by now as members of the audience discussed at length the pros and cons of the fat man's sorry and undeserved demise, none of them acknowledging his presence amongst us.
The fat man was, you might say, the elephant in the room.
My discovery
The reason the Trolley Problem is so intriguing and enduring is that it appears to neatly demonstrate Far and Near thinking: although the two scenarios are logically identical, shifting a lever to divert a trolley down a track is Far, so most people will do it, but giving a fat man a shove is Near, and this is why most people will decline.
What I discovered that evening is: that the bridge scenario is not, in fact, Near at all.
Having the fat stranger sitting right next to you is Near!
You see, as an avowed rationalist and utilitarian, I have never before found the Trolley Problem to be any kind of dilemma: I am a slayer of the obese onlooker every time. But that particular evening, when it came to the vote, I found that simple embarrassment, and the trivial desire not to appear insensitive, was enough to stay my rational hand, and I sat on it, guiltily despatching five poor, hypothetical railway workers to their deaths, merely to avoid a momentary unkindness.
My Conclusions
- It seems there is Far Near and Near Near, and if you ever again find yourself with time to meta-think that you are operating in Near mode.... then you're actually in Far mode.
- and so I will be more suspicious of the hypothetical thought experiments from now on.
Epilogue
But, you are asking, how did the Fat Man himself vote?
He declined to push, remarking drily that the fattest person on the bridge was very likely to be himself. He would, he said, jump.
He was most definitely thinking in Far mode.