
fubarobfusco comments on Open thread, Dec. 22 - Dec. 28, 2014 - Less Wrong Discussion

5 Post author: Gondolinian 22 December 2014 02:34AM




Comment author: fubarobfusco 28 December 2014 07:58:59PM

> I don't think designing a car with the idea that it will never get into accidents is a great idea.

I totally agree! You want to know what the limit cases are, even if they will almost never arise. (See my other response on this thread.)

But if you want to make a system that drives more morally — that is, one that causes less harm — almost all the gain is in making it a better predictor so it can avoid crash situations in the first place, not in solving philosophically hard moral problems about how to behave once a crash is unavoidable.

Part of my point above is that humans can't even agree with one another on the right thing to do in certain moral crises. That's why we have things like the Trolley Problem. But we can agree, if we look at the evidence, that what gets people into crash situations is itself avoidable — things like distracted, drunken, aggressive, or sleepy driving. And the gain of moving from human drivers to robot cars is not that robots offer perfectly saintly solutions to crash situations, but that they get into fewer crash situations.