nerzhin comments on The Problem With Trolley Problems - Less Wrong

11 points · Post author: lionhearted · 23 October 2010 05:14AM


Comment author: nerzhin 24 October 2010 09:20:32PM 3 points

"If you yourself are fat enough to save the 5 people in the trolley, then you jump on the tracks yourself."

This implies that if you are designing an AI that is expected to encounter trolley-like problems, it should precommit to eating lots of ice cream.

Comment author: PeerInfinity 25 October 2010 02:41:47AM 2 points

Ah, but what about a scenario where the only way to save the 5 people is to sacrifice the life of someone who is thin enough to fit through a small opening? Eating ice cream would be a bad idea in that case.

Comment author: shokwave 25 October 2010 10:45:26AM 0 points

All this shows is that it's possible to construct two thought experiments that require precommitment to mutually exclusive courses of action in order to succeed. Knowing of only one, you would precommit to the correct course of action; knowing both, what are your options? Reject the concept of a correct moral answer, reject the concept of thought experiments, reject one of the two thought experiments, or reject a premise of one of them?

I think I would reject a premise: that the course of action offered is the one and only way to help. Either that, or bite the bullet and accept that there are actual situations in which a moral system will condemn all options - almost the beginnings of a proof of the incompleteness of moral theory.

Of course, this doesn't show that all possible moral theories are incomplete, just that any theory which founders on a trolley problem is potentially incomplete - but then, something tells me that, given a moral theory, it wouldn't be hard to describe a trolley problem that is unsolvable in that theory.