drc500free comments on The Trolley Problem: Dodging moral questions - Less Wrong

13 Post author: Desrtopa 05 December 2010 04:58AM



Comment author: drc500free 09 December 2010 07:30:04PM 5 points

Morality is in some ways a harder problem than friendly AI. On the plus side, humans who don't control nuclear weapons aren't that powerful. On the minus side, morality has to run as roughly 7 billion independent instances, one per person, each of whom may have bad information.

So it needs heuristics that are robust against incomplete information. There's definitely an evolutionary just-so story about the penalty for publicly committing to a risky action. But even setting aside the evolutionary social risk, there is a moral risk in permitting an interventionist murder when you aren't all-knowing.

This looks just like the Bayes 101 example of a medical test that is 99% accurate for a disease with a 1% occurrence rate. If you tell me I'm in a very rare situation that requires me to commit murder, I have to assume there are going to be many more situations that could be mistaken for this one. The "least convenient universe" story is tantalizing, but I think it leads us astray here.
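A minimal sketch of that Bayes 101 calculation (the specific numbers below are the standard textbook example, not from the comment): with a 1% base rate, even a 99%-accurate test yields a positive result that is only 50% likely to be a true positive, because false positives from the common case swamp true positives from the rare one.

```python
def posterior(prior, sensitivity, specificity):
    """P(condition | positive signal) via Bayes' rule."""
    true_pos = sensitivity * prior            # rare case, correctly flagged
    false_pos = (1 - specificity) * (1 - prior)  # common case, misflagged
    return true_pos / (true_pos + false_pos)

# 99%-accurate test, 1% occurrence rate: a positive result is a coin flip.
p = posterior(prior=0.01, sensitivity=0.99, specificity=0.99)
print(p)  # 0.5
```

The analogy: "this situation genuinely justifies an interventionist murder" is the rare disease, and one's judgment that it applies is the imperfect test. Even if that judgment is rarely wrong, the many ordinary situations that merely resemble the rare one dominate the evidence.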