And the answer that Google seems to have adopted is, "It should see, think, and drive well enough that it never gets into that situation."
I don't think designing a car on the assumption that it will never get into accidents is a great idea. Even if the smart car itself makes no mistake, it can still end up in a crash, and it should behave optimally during the crash.
Even outside of smart cars, there are design decisions that can increase the safety of the car's owner at the expense of the passengers of the car you crash into.
I don't think designing a car with the idea that it will never get into accidents is a great idea.
I totally agree! You want to know what the limit cases are, even if they will almost never arise. (See my other response on this thread.)
But if you want to make a system that drives more morally — that is, one that causes less harm — almost all the gain is in making it a better predictor so as to avoid crash situations, not in solving philosophically-hard moral problems about crash situations.
Part of my point above is that humans can't even agree with one another on these moral questions.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Previous Open Thread
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Check immediately before posting; refresh the list-of-threads page first.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday and end on Sunday.