nerzhin comments on The Problem With Trolley Problems - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (112)
An obvious third alternative to the trolley problem: if you yourself are fat enough to stop the trolley and save the 5 people, then you jump onto the tracks yourself; you don't push the other guy.
But if you're not fat enough, then yes, of course you push the fat guy onto the tracks, without hesitation. And you plead guilty to his murder. And you go to jail. One person dying and one person going to jail is preferable to 5 people dying. Or, if you're so afraid of going to jail that you would rather die, then you can also jump onto the tracks after pushing the fat man, to be extra sure of being able to stop the trolley.
Technically, you should jump onto the tracks only if doing so increases the probability of saving the 5 people by more than 20 percent: trading your one life against five means the break-even point is 1/5.
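The break-even arithmetic can be sketched in a few lines. This is a minimal illustration, assuming every life (including your own) is weighted equally at 1.0; the function name and parameters are hypothetical, not from the original comment.

```python
# Hypothetical expected-value check for the "should I also jump?" decision.
# Assumption: all lives are weighted equally at 1.0.

def should_jump(delta_p: float, lives_at_risk: int = 5) -> bool:
    """Return True if jumping is worth it: the expected number of extra
    lives saved must exceed the one life you give up by jumping."""
    expected_extra_lives_saved = delta_p * lives_at_risk
    return expected_extra_lives_saved > 1.0

print(should_jump(0.25))  # 0.25 * 5 = 1.25 expected lives > 1 -> True
print(should_jump(0.10))  # 0.10 * 5 = 0.50 expected lives < 1 -> False
```

With 5 people at risk, the threshold where `delta_p * 5` exceeds 1 is exactly 0.2, matching the 20 percent figure above.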
Here is an interesting blog post on the topic of third alternatives to philosophical puzzles like these: http://tailsteak.com/archive.php?num=497
I still consider myself a "Classical Utilitarian", by the way, even though I am aware of some of the disturbing consequences of this belief system.
And I agree with the main point of your post, and upvoted it. But the real purpose of trolley problems is to explore edge-cases of moral systems, not to advocate or justify real life policy.
This implies that if you are designing an AI that is expected to encounter trolley-like problems, it should precommit to eating lots of ice cream.
Ah, but what about a scenario where the only way to save the 5 people is to sacrifice the life of someone who is thin enough to fit through a small opening? Eating ice cream would be a bad idea in that case.
All this shows is that it's possible to construct two thought experiments which require precommitment to mutually exclusive courses of action in order to succeed. Knowing of only one, you would precommit to the correct course of action, but knowing both, what are your options? Reject the concept of a correct moral answer, reject the concept of thought experiments, reject one of the two thought experiments, or reject one of the premises of either thought experiment?
I think I would reject a premise: that the course of action offered is the one and only way to help. Either that, or bite the bullet and accept that there are actual situations in which a moral system will condemn all options - almost the beginnings of a proof of the incompleteness of moral theory.
Of course, it doesn't show that all possible moral theories are incomplete, just that any theory which founders on a trolley problem is potentially incomplete - but then, something tells me that given a moral theory, it wouldn't be hard to describe a trolley problem that is unsolvable in that theory.