SilasBarta comments on The Unselfish Trolley Problem - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Your modification of the problem to make "push the guy" the obvious answer still doesn't answer my objections to doing so, which are quite robust against modifications of the scenario that preserve "the sense" of the problem:
In short, problems that divorce the scenario from its social context, in trying to "purify" the question, do in fact throw away morally-relevant information, which people (perhaps indirectly) incorporate into their decision-making.
Let me illustrate the problem with such scenarios by posing another "moral" dilemma that makes the same attempt to delete relevant information in a poor attempt to find the pure answer:
"Dilemma: You are driving on a road. Should you drive on the right, drive on the left, or center your car? What is the moral way to drive?"
You can imagine how that might go:
Me: Well, that would depend on what system currently exists for using the road, and, if none, if the area is inhabited ...
Them: NO. Forget about all that. Pick a road side and defend your answer.
Me: But there is no correct side of the road apart from the social context ...
Them: Great, another one of these guys. Okay, pretend there's a terrorist who will kill five people if you don't drive on the left, and your driving otherwise has no effect on anyone's safety.
Me: Well, sure, in that case, you should drive on the left, but now you're talking about a fundamentally different ...
Them: Aha! So you are a lefter!
OTOH, if you have a modification to the trolley case that preserves "the sense" of it while obviating my objection, I'm interested in hearing it.
Everyone has been knocked unconscious by Snidely Whiplash except you, you can reach the switch, and you have to decide which track gets run over. Nobody will know what you did, or even that you did anything, except you. The news stories won't say "Fat man thrown on tracks to save lives!", it'll just say that a trolley ran over some people in an act of cartoon villainy.
That's the case where both groups are on a track, not the case where I could push a safely-positioned bystander onto the track. And in that case I don't generally object to changing which track the trolley is on anyway.
In any case, this aspect would again fundamentally change the problem, while still not changing the logic I gave above:
This (if applied to the fat man case I actually object to) is basically saying that I can rewrite physics to the point where even being on a bridge above a train does not protect you from being hit by it. Thus, everything I said before, about it becoming harder to assess and trade off against risk, would apply, and making the change would be inefficient for the same reasons. (i.e. I would prefer a world in which risks are easier to assess, not one in which you have to be miles from any dangerous thing just to be safe)
In the two-track setup, the people on one of the tracks are going to get killed even if you do nothing. Switching the train to a previously-safe track with someone on it is morally identical to throwing a safe bystander onto a single track, IMO.
That's an interesting opinion to hold. Would you care to go over the reasons I've given to find them different?
For clarity: from this post, I understood your objection to be primarily rooted in second-order effects. Your claim seems to be that you are not simply saving these people and killing those people by your actions, you are also destroying understanding of how the world works, wrecking incentive structures, and so on. If my understanding on this point is incorrect, please clarify.
Assuming the above is correct, my modification seems to deal with those objections cleanly. If you are the only one who knows what happened, then people aren't going to get the information that some crazy bastard threw some dude at a trolley; they're just going to continue assuming that sort of thing only happens in debates between philosophy geeks. It is never known to have happened, therefore the second-order effects from people's reactions to it having happened never come up, and you can look at the problem purely in terms of first-order effects.
Replacing "like that guy did a few months ago" in my comment with something agentless and Silas-free such as "like seems to happen these days" doesn't, AFAICT, change the relevance of my objection: people are still less able to manage risk, and a Pareto disimprovement has happened in that people have to spend more to get the same-utility risk/reward combo. So your change does not obviate my distinction and objection.
But it has to be a real known problem in order for people's actions to change. Given that a pure trolley problem hasn't yet happened in reality, keeping it secret if it did happen should be plenty sufficient to prevent societal harm from the reactions.
But if I say that it's a good idea here, I'm saying it's a good idea in any comparable case, and so it should be a discernible (and Pareto-inefficient) phenomenon.
But if you limit "comparable cases" to situations where you can do it in secret, that's not a problem.