Sure, you want to verify that the behavior in a no-win situation isn't something horrible. It would be bad if the robot realized that it couldn't avoid a crash, had an integer overflow on its danger metric, and started minimizing safety instead of maximizing it. That's a thing to test for.
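To make the failure mode concrete, here's a minimal sketch of how an integer overflow could invert a danger metric. Everything here is hypothetical (the names `wrap_int32`, `scores`, and the maneuver labels are illustrative, not from any real planner); it just assumes a planner that accumulates danger penalties in a 32-bit signed counter and picks the minimum-danger maneuver:

```python
# Hypothetical planner sketch: danger scores live in a 32-bit signed int,
# and the planner picks the maneuver with the lowest score.

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def wrap_int32(x: int) -> int:
    """Simulate two's-complement wraparound of a 32-bit signed integer."""
    return (x - INT32_MIN) % 2**32 + INT32_MIN

# The "unavoidable crash" branch keeps accumulating penalties until the
# counter overflows past INT32_MAX and wraps to a large negative number.
scores = {
    "brake_hard": 900,
    "swerve_left": 1200,
    "plow_ahead": wrap_int32(INT32_MAX + 500),  # overflow: wraps negative
}

best = min(scores, key=scores.get)
print(best, scores[best])  # the overflowed option now looks "safest"
```

The bug is exactly the sign flip described above: the most dangerous option overflows into the most negative score, so "minimize danger" selects it. A saturating counter or a wider type would be the obvious fix, and it's the kind of edge case a test suite should probe.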
But consider the level of traffic fatalities we have today.
How much could we reduce that level by building drivers that are better at making moral tradeoffs in an untenable, no-win, gotta-crash-somewhere situation ... and how much could we reduce it by building drivers that are better at avoiding untenable, no-win, gotta-crash-somewhere situations in the first place?
I suggest that the latter is a much larger win — a much larger reduction in fatalities — and therefore far more morally significant.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Previous Open Thread
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, not Main.
4. Open Threads should start on Monday and end on Sunday.