When a collision is unavoidable, should a self-driving car try to maximize the survival chances of its occupants, or of all people involved?

http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/

I think a less contrived question is "Should a self-driving car minimize travel time for you, or for all people on the road?"

A personal or privately operated self-driving car should probably minimize its passengers' travel time, as this best aligns with the customer's and, in a reasonably competitive market, the manufacturer's interests.
The crash case is more complicated because there are ethical and legal liability issues.

I think there is a confusion going on there. "Should" refers to what is ethical, to what would be the best option, and I don't see how the manufacturer's interests really matter for that. Self-driving cars should cooperate with each other in the various prisoner's dilemmas they face, not defect against each other, and more generally they should behave in a way that smooths traffic globally (which, at the end of the year, would mean less travel time for everyone if all cars do so), not behave selfishly and minimize only their own passenger's travel time.

Now, in a competitive market, due to manufacturers' interests, it is indeed unlikely they would do so. But that is different from should. That's a case of a pure market leading to a suboptimal solution (as often happens with Nash equilibria), but there might be ways to fix it, either by manufacturers negotiating with each other outside the market channel to implement more globally efficient algorithms (as many standards bodies do), or by the state imposing it on them (as the EU imposed a common charger for all cell phones).

Of course there are drawbacks and potential pitfalls with all those solutions, but that's a different matter from the should issue.

What if a company makes a large number of cars? Would they make cars that minimize travel time for their occupants, or for all people who buy cars from that company? Would multiple companies band together, and make cars that minimize travel time for everyone who buys cars from any of those companies?

They'd all band together and create a regulatory agency which ensures everyone's doing that. This is what happens in other industries, and this is what happens in car manufacturing.

This is sort of fun to think about, but I don't think the actual software will look anything like a trolley-problem solver. Given a choice between swerving and hitting A, or hitting B, I predict its actual answer will be "never swerve" and all the interesting details about what A and B are will be ignored. And that will be fine, because cars almost never get forced into positions where they can choose what to crash into but can't avoid crashing entirely, especially not when they have superhuman reflexes, and manufacturers can justify it by saying that braking works slightly better when not swerving at the same time.
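A minimal sketch of what that "never swerve" policy could look like, with every name invented for illustration (not taken from any real system):

```python
# Hypothetical sketch of a "never swerve" emergency policy, as predicted above.
# Nothing about what A or B actually is ever enters the decision.

def emergency_response(collision_unavoidable: bool) -> dict:
    """Return steering/brake commands once a collision threat is detected."""
    if collision_unavoidable:
        # Hold the lane and brake as hard as possible; the justification is that
        # braking works slightly better when not swerving at the same time.
        return {"steering": "hold_lane", "brake": 1.0}
    return {"steering": "hold_lane", "brake": 0.0}
```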

Yeah.

There's an actual trolley problem - a very trivial one - hidden in it as well, though. Do you put your engineering resources into resolving swerve vs. not swerve, or do you put them into better avoiding those situations altogether?

Of course, the answer is the latter.

This is also the issue with classical trolley problems. In the trolley problem as stated, the subject's brainfart results in an extra death. Of course a fat man won't stop a trolley! (It's pretty easy to state such problems better, but you won't generate much discussion that way.)

More importantly, if it thinks it has a choice between hitting A and B, it's likely a bug, and it's better off not swerving.


I'm not sure this is going to come up in the way proposed in the article. Given a potential collision, before even calculating whether it is avoidable, the car is likely going to start reducing speed by using the brakes, because generally that's what you need to do in almost all collisions (the very high percentage that are of this type).

But once the car has jammed on the brakes, it has cut off a great deal of its ability to do any swerves. These types of cases may be so rare that giving the car the fractional second to make those calculations may lead to more deaths than just hitting the brakes sooner in all cases would.

From a utilitarian ethics point of view, I suspect the design decision may be something like: "We will save 10X lives per billion vehicle miles if the car precommits to always reducing speed without thinking about it, even if we would save X more lives by deliberating about when to swerve in certain cases... but we can't do that without NOT saving the 10X lives from immediate precommitment."
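To make that toy arithmetic concrete (every figure below is made up, including the assumed cost of deliberating; only the 10X-vs-X framing comes from the comment above):

```python
# Hypothetical accounting per billion vehicle miles; all numbers are invented
# to illustrate the precommitment argument, not measured from real data.
lives_saved_by_instant_braking = 10   # the "10X" from always braking immediately
extra_lives_saved_by_swerving = 1     # the "X" from deliberating in rare cases
lives_lost_to_deliberation_delay = 3  # assumed cost of the fractional second spent deciding

always_brake = lives_saved_by_instant_braking
deliberate_first = (lives_saved_by_instant_braking
                    - lives_lost_to_deliberation_delay
                    + extra_lives_saved_by_swerving)

print(always_brake, deliberate_first)  # 10 vs. 8: precommitting wins under these assumptions
```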

Although, once we actually have more data on self driving car crashes, I would not be surprised if I have to rethink some of the above.

They should minimize damage to their own occupants, but using some kind of superrational decision theory so they won't defect in prisoner's dilemmas against each other. I suspect that in sufficiently symmetric situations the result is the same as minimizing damage to everybody using causal decision theory.

This reminds me of the response to the surgeon's dilemma about trust in hospitals. I want to say occupants, because if fear of being sacrificed in trolley problems causes fewer people to adopt safer, non-distractable, non-fatiguable robot cars, then it seems like a net utilitarian loss. If that were not the case, for example if the safety advantage became overwhelming enough that people bought them anyway, then it should probably just minimize deaths. (I only thought about this for a couple of minutes, though.)

Suspected Nash-equilibrium ethics for the proprietary collision avoidance algorithm:

Utilitarian: minimize negative publicity for the car maker.

Resulting Asimov-like deontology (sketched in code below):
1) Avoid collisions with cars of the same make.
2) Maximize survival of the vehicle's occupants, disregarding the safety of the other vehicle involved, subject to 1).
3) Minimize damage to the vehicle, subject to 1) and 2).
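One way to read that ordering is as a strict lexicographic preference. A rough sketch, with every field name invented for illustration:

```python
# Each candidate maneuver is scored lexicographically: rule 1 dominates rule 2,
# which dominates rule 3. All fields are hypothetical.

def priority_key(option: dict) -> tuple:
    return (
        option["hits_same_make_car"],       # rule 1: avoid same-make collisions first
        -option["occupant_survival_prob"],  # rule 2: then maximize occupant survival
        option["damage_to_own_vehicle"],    # rule 3: then minimize damage to the vehicle
    )

def choose_maneuver(options: list) -> dict:
    # Python compares tuples element by element, which gives the strict ordering.
    return min(options, key=priority_key)
```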

Utilitarian: minimize negative publicity for the car maker.

The US is a litigious society. I suspect that minimizing damage from wrongful-death lawsuits will be more important than minimizing negative publicity.

In fact, I don't think self-driving cars can become widespread until the "in any accident, sue the deep-pocketed manufacturer" problem gets resolved, likely by an act of Congress limiting the liability.

Well, maybe not.

Maybe yes. The expression "litigious society" implies comparison with other societies, presumably less litigious, and the article you quoted is entirely silent on that topic, spending most of its words on rehashing the notorious McDonald's coffee case. And it does conclude by saying that the fear of litigation in the US is pervasive and often reaches ridiculous levels.

... No one would ever design a car with any priority other than "minimize impact velocity", because that is a parameter it can actually try to minimize. In the extremely unlikely case of a car smart enough to parse the question you just posed, impacts would never, ever happen, barring outright malice.

The car doesn't parse the question. The programmer does. You design a car that will avoid impacts when possible. Then you tell it what to do if impact is unavoidable. It might slam on the brakes while following the road. It might look for an option with a low impact velocity. It might prioritize hitting cars over hitting pedestrians. Etc.
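As a rough sketch of that kind of programmer-chosen fallback order (hypothetical names and fields, not any manufacturer's actual logic):

```python
# Hypothetical fallback ordering: avoid impact if at all possible, otherwise
# prefer low impact speed and prefer hitting cars over hitting pedestrians.

def pick_trajectory(options: list) -> dict:
    clear = [o for o in options if not o["hits_something"]]
    if clear:
        # An impact-free trajectory exists: take the gentlest one.
        return min(clear, key=lambda o: o["impact_speed"])

    # Impact is unavoidable: prefer vehicle targets over pedestrians,
    # then minimize impact speed among whatever remains.
    vehicle_hits = [o for o in options if o["target_type"] == "car"]
    candidates = vehicle_hits or options
    return min(candidates, key=lambda o: o["impact_speed"])
```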

I wonder if they're actually using a utility function, as in [probability * utility], or just going with [aim for the safe car > the unsafe car] unilaterally, regardless of the likelihood of crashing into either. E.g., treating a 1% chance of crashing into the safe car and an 80% chance of crashing into the unsafe car as equal to a 99% chance of crashing into the safe car and a .05% chance of crashing into the unsafe car, choosing in both cases to crash into the safe car.
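A toy comparison of the two approaches, using the probabilities from the comment above and an invented relative-harm figure:

```python
# "Safe"/"unsafe" refer to how well the other car protects its own occupants.
# The harm numbers are made up purely for illustration.
harm_to_occupants = {"safe_car": 1.0, "unsafe_car": 5.0}

def expected_harm(target: str, p_crash: float) -> float:
    return p_crash * harm_to_occupants[target]

def pick_target(p_crash_safe: float, p_crash_unsafe: float) -> str:
    # Expected-utility version: weigh harm by how likely each crash actually is.
    if expected_harm("safe_car", p_crash_safe) <= expected_harm("unsafe_car", p_crash_unsafe):
        return "safe_car"
    return "unsafe_car"

# The unilateral rule would pick "safe_car" in both cases below;
# the expected-utility rule does not.
print(pick_target(0.99, 0.0005))  # -> unsafe_car (0.99 > 5 * 0.0005)
print(pick_target(0.01, 0.80))    # -> safe_car   (0.01 < 5 * 0.80)
```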

The article is speculation about the moral (and legal) issues of plausibly near-future technology; current self-driving cars are experimental vehicles not designed to safely operate autonomously in emergency situations.

Conventional morality would dictate that the car minimize global loss of life, followed by permanent brain damage, then permanent bodily damage. I think that in the future other algorithms will be illegal but existent.

However, the lives each car would have the most effect on would be those inside of it. So in most situations all actions would be directed towards said persons.

The issue is that it could create bad incentives. E.g., motorcyclists not wearing helmets, and even acting inappropriately around self-driving cars, knowing the cars will avoid them even if avoiding them causes a crash. Or people stop buying safer cars because they are always chosen as "targets" by self-driving cars to crash into, making them statistically less safe.

I don't think the concerns are large enough to worry about, but hypothetically it's an interesting dilemma.

When I was a dumb kid, my friends and I regularly jaywalked (jayran?) across 3 lanes at a time of high speed traffic, just to get to a nicer place for lunch. Don't underestimate the populations of stupid and selfish people in the world, or the propensity to change behavior in response to changing incentives.

On the other hand, I'm not sure how the incentives here will change. Any self-driving car is going to be speckled with cameras, and "I know it will slam on the brakes or swerve to avoid me" might not be much temptation when followed with "then it will send my picture to the police".

Aaaaand now you've brought the privacy controversy into the mix.

In a completely reasonable way. If your driving strategy involves making problems for other people, that's intrinsically a non-private activity.

acting inappropriately around self-driving cars, knowing the cars will avoid them even if avoiding them causes a crash.

Ah, an interesting possibility. Self-driving cars can be gamed. If I know a car will always swerve to avoid me, I can manipulate it.

I doubt if self-driving cars would have to choose between crashing into two vehicles often enough for these considerations to show up in statistics.

Conventional morality would dictate that the car minimize global loss of life

I don't know about that. "Conventional morality" is not a well-formed or a coherent system and there are a lot of situations where other factors would override minimizing loss of life.

What kind of things override loss of life and can be widely agreed upon?

What kind of things override loss of life and can be widely agreed upon?

Going to war, for example.

Or consider involuntary organ harvesting.

In the self-driving car example, say "getting to your destination". Keep in mind that the mere act of the car getting out on the road increases the expected number of resulting deaths.

The lives each car would have the most effect on would be those inside of it.

I disagree. The driver of a car is much less in danger than a pedestrian.

No one pedestrian is more likely to die as a result of an accident involving a particular car than the owner of that car, though, which I think is what Cube meant.

True, but that doesn't change the fact that if you're at risk of crashing into a pedestrian, your car will act to save the pedestrian, rather than you.

It should act in favor of its passengers of course.

Why 'of course'? This doesn't seem obvious to me.

Probably because almost every other safety decision in a car's design is focused on the occupants.

Take those reinforced bars protecting the passengers: do you think the designers care that they mean any car hitting the side of the car suffers more damage, due to hitting a more solid structure?

They want to sell the cars, thus they likely want the car's priorities to be somewhat in line with the buyer's. The buyer doesn't care all that much about the toddler in the other car, except in a philosophical sense; they care about the toddler in their own car. That person is not the priority of either the seller or the buyer.

In terms of liability, it makes sense to try to make sure that the accident remains legally the fault of the other party, no matter the number of deaths. The law rarely accepts intentionally harming one person who wasn't at fault in order to avoid an accident and spare the lives of two people in a car who were at fault themselves.

In terms of liability, it makes sense to try to make sure that the accident remains legally the fault of the other party, no matter the number of deaths. The law rarely accepts intentionally harming one person who wasn't at fault in order to avoid an accident and spare the lives of two people in a car who were at fault themselves.

Makes sense. Though the design of a motion-control algorithm where the inverse dynamics model interacts with a road-law expert system to make decisions in a fraction of a second would be... interesting.

HungryHobo gave good arguments from tradition and liability; here's an argument from utility:

Google's cars are up over a million autonomously-driven km without an accident. That's not proof that they're safer than the average human-driven car (something like 2 accidents per million km in the US?) but it's mounting evidence. If car AI written to prioritize its passengers turns out to still be an order of magnitude safer for third parties than human drivers, then the direct benefit of optimizing for total safety may be outweighed by the indirect benefit of optimizing for own-passenger safety and thereby enticing more rapid adoption of the technology.
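A toy version of that trade-off (all rates and adoption fractions below are invented; only the roughly 2 accidents per million km figure echoes the comment above):

```python
# Hypothetical comparison: a slightly safer AV that people adopt slowly vs. a
# slightly less safe, passenger-prioritizing AV that people adopt faster.
human_accidents_per_million_km = 2.0

def total_accidents(av_rate: float, adoption_fraction: float, km_millions: float = 100) -> float:
    human_km = km_millions * (1 - adoption_fraction)
    av_km = km_millions * adoption_fraction
    return human_km * human_accidents_per_million_km + av_km * av_rate

print(total_accidents(av_rate=0.15, adoption_fraction=0.2))  # total-safety AV, slower adoption -> 163.0
print(total_accidents(av_rate=0.20, adoption_fraction=0.4))  # own-passenger AV, faster adoption -> 128.0
```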

They'd be better off using a shared algorithm if involved in a situation with cars reasoning in a similar fashion.

This is definitely a case for superrationality. If the antagonists in an accident are both so equipped, they should communicate. Not sure what to do about human participants, though.

The issue brought up here seems to greatly overestimate the probability of crashing into something. IIRC, the main reasons people crash are that 1) they oversteer and 2) they steer toward where they're looking, and they often look in the direction of the nearest or most inevitable obstacle.

These situations would involve human error almost every time, and the crash would most likely be due to the human driver crashing into the autocar, not the other way around. Something that would increase the probability is human error in heavy traffic.

It seems there are a few distinct cases:

  • I am someone who does not wear a helmet in our current society, where this is illegal and people don't exactly discriminate in case of car accidents, so the introduction of smart cars will only confirm my current (bad) decision - no change there.

  • I currently wear a helmet, but would stop wearing one if smart cars were introduced.
    Assuming every car magically became a smart car, that means I am willing to suffer a fine in exchange for a slightly greater likelihood of surviving a nearby car crash.
    Considering that smart cars are better drivers than humans, and that car crashes are already rare, that means that if I considered the fine adequate to incentivize me into wearing a helmet previously, I should consider it adequate now.
    There is an edge case here: smart cars are better drivers, but only by a small fraction that is offset by their tendency to aim away from me.

  • I currently wear a helmet, and will continue to do so.

Only the edge case would create a morally ambiguous situation, but that seems pretty unlikely (you'd hope that a swarm of cars with superhuman reaction speed would be more than marginally better at preventing accidents).

Hello, trolley problem :-)

The car may face a trolley problem, but designing the algorithm isn't one.

Designing the algorithm necessitates providing a (note: a) solution to the trolley problem.

The car, not being an AI, doesn't actually face any problems.