
MattG comments on The Trolley Problem and Reversibility - Less Wrong Discussion

7 Post author: casebash 30 September 2015 04:06AM




Comment author: [deleted] 01 October 2015 01:34:33AM 0 points

In the least convenient possible world, you happen upon these people and don't know anything about them, their past, or their reasons.

Comment author: Jiro 02 October 2015 04:06:20PM 0 points

If you don't know anything about them, there is still some chance that deciding to pull the switch will change people's incentives, making them feel freer to step in front of trolleys.

Also, consider precommitting. You precommit to pull or not pull the switch based on whether pulling the switch saves more people overall, including the change in people's actions caused by the existence of your precommitment. (You could even model some deontological rules as a form of precommitment.) Whether it is good to precommit inherently depends on the long-term implications of your action, unless you want to have separate precommitments for quantum-fluctuation trolleys and normal trolleys that people choose to walk in front of.

And of course it may turn out that your precommitment makes people worse off in this particular situation (more people die if you don't switch the trolley), but that's how precommitments work: having to follow through on the precommitment can leave things worse off in one instance without making the precommitment a bad idea.
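The precommitment argument above can be sketched as an expected-value comparison between policies rather than individual acts. This is only an illustrative model, not anything from the original discussion: the function name, the incentive multiplier, and all the numbers are hypothetical assumptions.

```python
# Hypothetical sketch: evaluate a publicly known switch policy by its
# long-run expected deaths, including the incentive effect Jiro
# describes (all numbers are invented for illustration).

def expected_deaths(pull_policy: bool,
                    base_incident_rate: float = 1.0,
                    incentive_multiplier: float = 1.5) -> float:
    """Expected deaths per year under a known precommitment.

    If rescuers are known to always pull, more people may feel free to
    step in front of trolleys, so incidents become more frequent.
    """
    rate = base_incident_rate * (incentive_multiplier if pull_policy else 1.0)
    deaths_per_incident = 1 if pull_policy else 5  # pulling kills 1, not 5
    return rate * deaths_per_incident

# The precommitment to adopt is the one with fewer expected deaths overall,
# even though it may do worse in some particular incident.
best_policy = min([True, False], key=expected_deaths)
```

With these made-up numbers, always-pulling still wins (1.5 expected deaths vs. 5) despite the incentive effect; a large enough `incentive_multiplier` would flip the answer, which is exactly why the long-term implications matter to the choice of precommitment.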

Comment author: [deleted] 03 October 2015 03:04:29AM 0 points

I don't know if this is "least convenient world" or "most convenient world" territory, but I think it fits the spirit of the problem:

No one will know that a switch was pulled except you.

Comment author: Jiro 03 October 2015 06:03:23AM 0 points

That doesn't work unless you can make separate precommitments for switches that nobody knows about and switches that people might know about. You are probably unable to do that, for the same reason that you are unable to have separate precommitments for quantum fluctuations and normal trolleys.

Also, that assumption is not enough. Similarly to the reasoning behind superrationality, people can figure out what your reasoning is whether or not you tell them. You'd have to assume that nobody knows what your ethical system is, plus a wider-scale assumption such as that nobody knows utilitarians exist (or deontologists whose rules are modelled as utilitarian precommitments).