Sewing-Machine comments on The Trolley Problem: Dodging moral questions - Less Wrong

13 points · Post author: Desrtopa 05 December 2010 04:58AM




Comment author: taw 05 December 2010 03:30:18PM 2 points [-]

This is still true:

  • Trolley problems make a lot of sense in deontological ethics, as tests of supposedly universal moral rules in extreme situations.
  • Trolley problems do not make much sense in consequentialist ethics, as the optimal action for a consequentialist can differ drastically between the messy, complicated real world and the idealized world of thought experiments.

If you're a consequentialist, trolley problems are entirely irrelevant.

Comment author: [deleted] 06 December 2010 12:08:39AM 1 point [-]

If you're a consequentialist, trolley problems are easy.

Comment author: wedrifid 06 December 2010 02:27:50AM 2 points [-]

If you're a consequentialist, trolley problems are easy.

Only if you know whether or not someone is watching!

That is, getting caught not acting like a deontologist is a consequence that must sometimes be avoided. This becomes relevant when considering, for example, whether to murder AGI developers with a relatively small chance of managing friendliness but a high chance of managing recursive self improvement.

Comment author: Desrtopa 07 December 2010 05:14:25PM 0 points [-]

This becomes relevant when considering, for example, whether to murder AGI developers with a relatively small chance of managing friendliness but a high chance of managing recursive self improvement.

Relevant, perhaps, but if you absolutely can't talk them out of it, the negative expected utility of allowing them to continue could greatly outweigh that of being imprisoned for murder.

Of course, it would take a very atypical person to actually carry through on that choice, but if humans weren't so poorly built for utility calculations we might not even need AGI in the first place.
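The expected-utility comparison the thread gestures at can be sketched with entirely made-up numbers. Every probability and utility below is a hypothetical placeholder for illustration, not a claim from the discussion:

```python
# Hedged sketch of the comment's expected-utility comparison.
# All numbers are hypothetical placeholders, chosen only to illustrate the structure.
p_takeoff = 0.9          # assumed: chance the team manages recursive self-improvement
p_friendly = 0.05        # assumed: chance they also manage friendliness
u_unfriendly_agi = -1e9  # assumed utility of an unfriendly intelligence explosion
u_prison = -1e3          # assumed utility of being imprisoned for murder

# Expected utility of allowing them to continue: dominated by the
# probability-weighted downside of an unfriendly takeoff.
eu_allow = p_takeoff * (1 - p_friendly) * u_unfriendly_agi

# Expected utility of intervening: here, just the personal cost of imprisonment.
eu_intervene = u_prison

# Under these made-up numbers, intervening has higher expected utility,
# which is the comparison Desrtopa's comment is making.
print(eu_allow < eu_intervene)  # True
```

With different assumed probabilities or utilities the inequality can flip, which is precisely taw's point that a consequentialist's answer depends on the messy details of the real situation rather than the idealized thought experiment.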