
fubarobfusco comments on Open thread, Dec. 22 - Dec. 28, 2014 - Less Wrong Discussion

5 Post author: Gondolinian 22 December 2014 02:34AM



Comment author: fubarobfusco 28 December 2014 04:54:47AM 1 point

Sure, you want to make sure the behavior in a no-win situation isn't something horrible. It would be bad if the robot realized that it couldn't avoid a crash, had an integer overflow on its danger metric, and started minimizing safety instead of maximizing it. That's a thing to test for.
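That failure mode can be sketched concretely. Below is a minimal, hypothetical illustration (the function names, the 16-bit accumulator, and the scores are all invented for this example, not taken from any real system): a planner picks the maneuver with the lowest danger score, but the score is held in a fixed-width signed integer, so an extreme value wraps negative and the catastrophic option suddenly looks like the safest one.

```python
def to_int16(x):
    # Simulate wraparound in a 16-bit signed accumulator (like a C int16_t).
    x &= 0xFFFF
    return x - 0x10000 if x >= 0x8000 else x

def pick_maneuver(scores):
    # Choose the maneuver with the lowest (wrapped) danger score.
    wrapped = {name: to_int16(s) for name, s in scores.items()}
    return min(wrapped, key=wrapped.get)

# A raw danger of 40000 exceeds the int16 maximum of 32767 and wraps
# to -25536, so "minimize danger" selects the worst maneuver.
scores = {"brake": 500, "swerve": 900, "plow_ahead": 40000}
print(pick_maneuver(scores))  # prints "plow_ahead"
```

This is exactly the kind of property a no-win-scenario test suite should exercise: feed the planner scenarios where every option's danger is extreme, and assert the chosen option is still the least bad one.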

But consider the level of traffic fatalities we have today.

How much could we reduce that level by building drivers that make better moral tradeoffs in an untenable, no-win, gotta-crash-somewhere situation ... and how much could we reduce it by building drivers that are better at avoiding those untenable, no-win, gotta-crash-somewhere situations in the first place?

I suggest that the latter is a much larger win — a much larger reduction in fatalities — and therefore far more morally significant.