[LINK] Utilitarian self-driving cars?

7 Post author: V_V 14 May 2014 01:00PM

When a collision is unavoidable, should a self-driving car try to maximize the survival chances of its occupants, or of all people involved?

http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/

Comments (44)

Comment author: Coscott 14 May 2014 08:41:57PM 7 points [-]

I think a less contrived question is "Should a self-driving car minimize travel time for you, or for all people on the road?"

Comment author: V_V 14 May 2014 09:12:59PM *  2 points [-]

A personal or privately operated self-driving car should probably minimize the passenger travel time as this probably best aligns with the customer's and, in a reasonably competitive market, the manufacturer's interests.
The crash case is more complicated because there are ethical and legal liability issues.

Comment author: kilobug 15 May 2014 08:17:36AM 7 points [-]

I think there is a confusion going on here. "Should" refers to what is ethical, i.e. what would be the best option, and I don't see how the manufacturer's interests really matter for that. Self-driving cars should cooperate with each other in various prisoner's dilemmas, not defect against each other; more generally, they should behave in a way that smooths traffic globally (which, at the end of the year, would lead to less travel time for everyone if all cars do so), not behave selfishly and minimize only their own passengers' travel time.

Now, in a competitive market, due to manufacturers' interests, it is indeed unlikely they would do so. But that is different from should. That's a case of a pure market leading to a suboptimal solution (as often happens with Nash equilibria), but there might be ways to fix it: either manufacturers negotiating with each other outside the market channel to implement more globally efficient algorithms (as many standards bodies do), or the state imposing it on them (like the EU imposing the same charger for all cell phones).

Of course there are drawbacks and potential pitfalls with all those solutions, but that's a different matter from the should issue.
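The cooperate-vs-defect point can be made concrete with a toy payoff matrix (a minimal sketch; all numbers are hypothetical):

```python
# Toy payoff matrix for two self-driving cars at a merge (all numbers hypothetical).
# Each car chooses to "cooperate" (yield smoothly) or "defect" (cut in); payoffs
# are negative travel delays in seconds, so higher is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (-10, -10),  # smooth merge: small delay for both
    ("cooperate", "defect"):    (-25, -5),   # the defector gains at the other's expense
    ("defect",    "cooperate"): (-5,  -25),
    ("defect",    "defect"):    (-20, -20),  # both jockey for position; traffic snarls
}

def best_response(opponent_action):
    """Return the action maximizing a selfish car's own payoff."""
    return max(["cooperate", "defect"],
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

selfish_choice = best_response("cooperate")
mutual_defect_total = sum(PAYOFFS[("defect", "defect")])
mutual_coop_total = sum(PAYOFFS[("cooperate", "cooperate")])
# Defecting is dominant for a selfish car, yet mutual defection leaves both
# cars worse off than mutual cooperation: the case for an industry-wide protocol.
```

This is the standard prisoner's dilemma structure: each manufacturer's selfish best response produces the globally worse outcome, which is exactly what a standards body or regulation would fix.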

Comment author: DanielLC 15 May 2014 09:38:45PM 0 points [-]

What if a company makes a large number of cars? Would they make cars that minimize travel time for their occupants, or for all people who buy cars from that company? Would multiple companies band together, and make cars that minimize travel time for everyone who buys cars from any of those companies?

Comment author: private_messaging 16 May 2014 07:08:19AM *  0 points [-]

They'd all band together and create a regulatory agency which ensures everyone's doing that. This is what happens in other industries, and this is what happens in car manufacturing.

Comment author: jimrandomh 14 May 2014 08:16:24PM *  6 points [-]

This is sort of fun to think about, but I don't think the actual software will look anything like a trolley-problem solver. Given a choice between swerving and hitting A, or hitting B, I predict its actual answer will be "never swerve" and all the interesting details about what A and B are will be ignored. And that will be fine, because cars almost never get forced into positions where they can choose what to crash into but can't avoid crashing entirely, especially not when they have superhuman reflexes, and manufacturers can justify it by saying that braking works slightly better when not swerving at the same time.

Comment author: private_messaging 16 May 2014 06:59:07AM *  1 point [-]

Yeah.

There's an actual trolley problem - a very trivial one - hidden in it as well, though. Do you put your engineering resources into resolving swerve vs not swerve, or do you put that into better avoiding those situations altogether?

Of course, the answer is the latter.

This is also the issue with classical trolley problems. In the trolley problem as stated, the subject's brainfart results in an extra death: of course a fat man won't stop a trolley! (It's pretty easy to state such problems better, but you won't generate much discussion that way.)

Comment author: DanielLC 15 May 2014 09:37:20PM 0 points [-]

More importantly, if it thinks it has a choice between hitting A and B, it's likely a bug, and it's better off not swerving.

Comment author: [deleted] 15 May 2014 06:10:12AM *  2 points [-]

They should minimize damage to their own occupants, but using some kind of superrational decision theory so they won't defect in prisoner's dilemmas against each other. I suspect that in sufficiently symmetric situations the result is the same as minimizing damage to everybody using causal decision theory.

Comment author: 2ZctE 15 May 2014 04:47:11AM *  2 points [-]

This reminds me of the response to the surgeon's dilemma about trust in hospitals. I want to say occupants, because if fear of being sacrificed in trolley problems causes fewer people to adopt safer, non-distractable, non-fatiguable robot cars, then it seems like a net utilitarian loss. If that were not the case, for example if the safety advantage became overwhelming enough that people bought them anyway, then it should probably just minimize deaths. (I only thought about this for a couple of minutes, though.)

Comment author: shminux 14 May 2014 04:53:12PM *  1 point [-]

Suspected Nash-equilibrium ethics for the proprietary collision avoidance algorithm:

Utilitarian: minimize negative publicity for the car maker.

Resulting Asimov-like deontology:
1) Avoid collisions with the same make cars
2) Maximize survival of the vehicle's occupants, disregarding safety of the other vehicle involved, subject to 1)
3) Minimize damage to the vehicle, subject to 1) and 2)
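This priority list reads as a lexicographic choice, which is easy to sketch in code (all the option data below is hypothetical illustration):

```python
# Sketch of the Asimov-like priority ordering above as a lexicographic choice.
def choose_collision_option(options):
    """Pick the option best under rule 1, tie-break by rule 2, then rule 3.

    Each option is a dict with:
      hits_same_make (bool)  - rule 1: avoid collisions with same-make cars
      occupant_risk  (float) - rule 2: maximize occupant survival (lower is better)
      vehicle_damage (float) - rule 3: minimize damage to the vehicle
    """
    return min(options, key=lambda o: (o["hits_same_make"],
                                       o["occupant_risk"],
                                       o["vehicle_damage"]))

options = [
    {"name": "swerve left",  "hits_same_make": True,  "occupant_risk": 0.05, "vehicle_damage": 0.2},
    {"name": "brake only",   "hits_same_make": False, "occupant_risk": 0.10, "vehicle_damage": 0.5},
    {"name": "swerve right", "hits_same_make": False, "occupant_risk": 0.10, "vehicle_damage": 0.3},
]
best = choose_collision_option(options)
# Rule 1 rules out "swerve left" despite its lower occupant risk;
# rules 2 and 3 then break the tie in favor of "swerve right".
```

The tuple key makes the ordering strictly lexicographic: a lower-priority rule only matters when all higher-priority rules tie.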

Comment author: Lumifer 14 May 2014 04:59:53PM 2 points [-]

Utilitarian: minimize negative publicity for the car maker.

The US is a litigious society. I suspect that minimizing damage from wrongful-death lawsuits will be more important than minimizing negative publicity.

In fact, I don't think self-driving cars can become widespread until the "in any accident, sue the deep-pocketed manufacturer" problem gets resolved, likely by an act of Congress limiting the liability.

Comment author: Baughn 16 May 2014 02:28:11PM -1 points [-]
Comment author: Lumifer 16 May 2014 02:51:16PM *  2 points [-]

Well, maybe not.

Maybe yes. The expression "litigious society" implies a comparison with other societies, presumably less litigious, and the article you quoted is entirely silent on that topic, spending most of its words rehashing the notorious McDonald's coffee case. And it does conclude by saying that the fear of litigation in the US is pervasive and often reaches ridiculous levels.

Comment author: Lumifer 14 May 2014 03:02:12PM 1 point [-]

Hello, trolley problem :-)

Comment author: Oscar_Cunningham 14 May 2014 04:29:12PM 0 points [-]

The car may face a trolley problem, but designing the algorithm isn't one.

Comment author: Lumifer 14 May 2014 04:52:27PM 4 points [-]

Designing the algorithm necessitates providing a (note: a) solution to the trolley problem.

The car, not being an AI, doesn't actually face any problems.

Comment author: [deleted] 16 May 2014 12:53:27PM 1 point [-]

I'm not sure this is going to come up in the way the article proposes. Given a potential collision, before even calculating whether it is unavoidable, the car is likely going to start reducing speed by braking, because that's what you need to do in almost all collisions (all but the small percentage that aren't of this type).

But once the car has jammed on the brakes, it has given up much of its ability to swerve. These cases may be so rare that giving the car a fraction of a second to make those calculations could lead to more deaths than just hitting the brakes sooner in all cases would.

From a utilitarian ethics point of view, I suspect the design decision may be something like: "We will save 10X lives per billion vehicle-miles if the car precommits to reducing speed immediately, without thinking about it, even though we would save X more lives by deliberating about when to swerve in certain cases... but we can't capture that X without giving up the 10X from immediate precommitment."
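The trade-off can be put in back-of-envelope form (all numbers beyond the comment's own 10X-vs-X framing are hypothetical, including the assumed cost of delayed braking):

```python
# Back-of-envelope version of the precommitment trade-off. Deliberating about
# swerving is assumed to delay braking, forfeiting part of the precommitment
# benefit; the 3X deliberation cost below is a hypothetical placeholder.
X = 1.0  # lives saved per billion vehicle-miles by clever swerving in rare cases

lives_saved_precommit = 10 * X  # immediate braking in all cases

deliberation_cost = 3 * X  # hypothetical loss from braking a fraction later
lives_saved_deliberate = (10 * X - deliberation_cost) + X

# Precommitment wins whenever the deliberation cost exceeds the extra X
# gained from occasionally picking a clever swerve.
```

The qualitative conclusion only depends on the deliberation cost exceeding X, not on the particular numbers chosen here.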

Although, once we actually have more data on self driving car crashes, I would not be surprised if I have to rethink some of the above.

Comment author: Izeinwinter 15 May 2014 07:34:09PM 1 point [-]

... No one would ever design a car with any priority other than "minimize impact velocity", because that is a parameter it can actually try to minimize. In the extremely unlikely case of a car smart enough to parse the question you just posed, impacts would never happen at all, barring outright malice.

Comment author: DanielLC 15 May 2014 09:43:23PM 2 points [-]

The car doesn't parse the question. The programmer does. You design a car that will avoid impacts when possible. Then you tell it what to do if impact is unavoidable. It might slam on the brakes while following the road. It might look for an option with a low impact velocity. It might prioritize hitting cars over hitting pedestrians. Etc.
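One way to read the fallback DanielLC describes is a cost function over the remaining options, e.g. impact velocity plus a penalty for hitting pedestrians rather than cars (the data and the penalty weight below are hypothetical):

```python
# Sketch of an "impact is unavoidable" fallback: rank the remaining options by
# impact velocity, with a large hypothetical penalty for hitting a pedestrian.
PEDESTRIAN_PENALTY = 100.0  # hypothetical weight making pedestrian hits a last resort

def pick_unavoidable_impact(options):
    """options: list of (label, impact_velocity_mps, hits_pedestrian)."""
    def cost(option):
        _, velocity, hits_pedestrian = option
        return velocity + (PEDESTRIAN_PENALTY if hits_pedestrian else 0.0)
    return min(options, key=cost)

choice = pick_unavoidable_impact([
    ("brake in lane, hit car ahead", 8.0, False),
    ("swerve right, hit parked car", 5.0, False),
    ("swerve left, clip pedestrian", 2.0, True),
])
# The low-velocity pedestrian impact loses to the 5 m/s parked-car impact,
# encoding "prioritize hitting cars over hitting pedestrians".
```

Which priorities go into the cost function is exactly the design decision the programmer, not the car, has to make.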

Comment author: JQuinton 14 May 2014 09:32:54PM *  1 point [-]

I wonder if they're actually using a utility function as in [probability * utility], or just going with [aim for safe car > unsafe car] unilaterally, regardless of the likelihood of crashing into either. E.g., treating a 1% chance of crashing into the safe car and an 80% chance of crashing into the unsafe car as equivalent to a 99% chance of crashing into the safe car and a 0.05% chance of crashing into the unsafe car, choosing in both cases to crash into the safe car.
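The difference between the two policies can be sketched with the comment's own probabilities and a hypothetical harm scale (say, 1 harm unit for hitting the safe car and 5 for the unsafe one):

```python
# Expected-harm comparison of [probability * utility] vs a unilateral rule,
# using the probabilities from the comment; the harm units are hypothetical.
HARM_SAFE, HARM_UNSAFE = 1.0, 5.0

def expected_harm(p_hit, harm_if_hit):
    return p_hit * harm_if_hit

# Scenario 1: aiming at the safe car connects 1% of the time, the unsafe car 80%.
scenario1 = {"safe": expected_harm(0.01, HARM_SAFE),
             "unsafe": expected_harm(0.80, HARM_UNSAFE)}

# Scenario 2: 99% vs 0.05%.
scenario2 = {"safe": expected_harm(0.99, HARM_SAFE),
             "unsafe": expected_harm(0.0005, HARM_UNSAFE)}

# An expected-utility planner minimizes expected harm, so it aims at the safe
# car in scenario 1 but the unsafe car in scenario 2; the unilateral rule
# ignores the probabilities and picks the safe car in both.
eu_choice_1 = min(scenario1, key=scenario1.get)
eu_choice_2 = min(scenario2, key=scenario2.get)
```

The two policies only diverge when the probabilities are lopsided enough, which is JQuinton's point about treating the two scenarios as equal.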

Comment author: V_V 14 May 2014 10:11:38PM *  1 point [-]

The article is speculation about the moral (and legal) issues of plausibly near-future technology; current self-driving cars are experimental vehicles not designed to operate autonomously and safely in emergency situations.

Comment author: Cube 14 May 2014 03:22:37PM 0 points [-]

Conventional morality would dictate that the car minimize global loss of life, followed by minimizing permanent brain damage, then permanent bodily damage. I think that in the future, other algorithms will be illegal but will still exist.

However, the lives each car would have the most effect on are those inside it. So in most situations, all actions would be directed toward said persons.

Comment author: Houshalter 14 May 2014 05:19:07PM 3 points [-]

The issue is that it could create bad incentives. E.g. motorcyclists not wearing helmets, or even acting recklessly around self-driving cars, knowing the cars will avoid them even if that causes the cars to crash. Or people stop buying safer cars because they are always chosen as "targets" by self-driving cars to crash into, making them statistically less safe.

I don't think the concerns are large enough to worry about, but hypothetically it's an interesting dilemma.

Comment author: roystgnr 15 May 2014 04:03:30PM 6 points [-]

When I was a dumb kid, my friends and I regularly jaywalked (jayran?) across 3 lanes at a time of high speed traffic, just to get to a nicer place for lunch. Don't underestimate the populations of stupid and selfish people in the world, or the propensity to change behavior in response to changing incentives.

On the other hand, I'm not sure how the incentives here will change. Any self-driving car is going to be speckled with cameras, and "I know it will slam on the brakes or swerve to avoid me" might not be much temptation when followed with "then it will send my picture to the police".

Comment author: Transfuturist 15 May 2014 06:48:08PM 0 points [-]

Aaaaand now you brought privacy controversy into the mix.

Comment author: Luke_A_Somers 16 May 2014 01:42:18AM 1 point [-]

In a completely reasonable way. If your driving strategy involves making problems for other people, that's intrinsically a non-private activity.

Comment author: Lumifer 14 May 2014 06:05:52PM *  5 points [-]

acting inappropriately around self-driving cars, knowing it will avoid them, even if it causes it to crash.

Ah, an interesting possibility. Self-driving cars can be gamed. If I know a car will always swerve to avoid me, I can manipulate it.

Comment author: Nornagest 14 May 2014 05:35:44PM *  2 points [-]

I doubt that self-driving cars would have to choose between crashing into two vehicles often enough for these considerations to show up in the statistics.

Comment author: Lumifer 14 May 2014 03:42:23PM 2 points [-]

Conventional morality would dictate that the car minimize global loss of life

I don't know about that. "Conventional morality" is not a well-formed or a coherent system and there are a lot of situations where other factors would override minimizing loss of life.

Comment author: Cube 14 May 2014 03:47:37PM -1 points [-]

What kinds of things override loss of life and can be widely agreed upon?

Comment author: Lumifer 14 May 2014 04:00:00PM *  2 points [-]

What kinds of things override loss of life and can be widely agreed upon?

Going to war, for example.

Or consider involuntary organ harvesting.

Comment author: Eugine_Nier 20 May 2014 03:14:26AM 2 points [-]

In the self-driving car example, say "getting to your destination". Keep in mind that the mere act of the car getting out on the road increases the expected number of resulting deaths.

Comment author: DanielLC 15 May 2014 09:44:55PM 0 points [-]

The lives each car would have the most effect on would be those inside of it.

I disagree. The driver of a car is much less in danger than a pedestrian.

Comment author: RowanE 24 May 2014 03:23:19PM 1 point [-]

No one pedestrian is more likely to die as a result of an accident involving a particular car than the owner of that car, though, which I think is what Cube meant.

Comment author: DanielLC 25 May 2014 12:02:41AM 0 points [-]

True, but that doesn't change the fact that if you're at risk of crashing into a pedestrian, your car will act to save the pedestrian, rather than you.

Comment author: Lalartu 14 May 2014 01:46:00PM 0 points [-]

It should act in favor of its passengers of course.

Comment author: raisin 14 May 2014 01:50:40PM 6 points [-]

Why 'of course'? This doesn't seem obvious to me.

Comment author: HungryHobo 14 May 2014 05:37:32PM *  3 points [-]

Probably because almost every other safety decision in a car's design is focused on the occupants.

Consider the reinforced bars protecting the passengers: do you think the manufacturers care that they mean any car hitting the side suffers more damage from striking a more solid structure?

They want to sell the cars, so they likely want the car's priorities to be somewhat in line with the buyer's. The buyer doesn't care all that much about the toddler in the other car, except in a philosophical sense; they care about the toddler in their own car. That other person is not the priority of the seller or the buyer.

In terms of liability, it makes sense to ensure that the accident remains legally the fault of the other party no matter the number of deaths, and the law rarely accepts intentionally harming one person who wasn't at fault in order to avoid an accident and spare the lives of two people in a car who were themselves at fault.

Comment author: V_V 14 May 2014 09:00:06PM *  0 points [-]

In terms of liability, it makes sense to ensure that the accident remains legally the fault of the other party no matter the number of deaths, and the law rarely accepts intentionally harming one person who wasn't at fault in order to avoid an accident and spare the lives of two people in a car who were themselves at fault.

Makes sense. Though the design of a motion-control algorithm where the inverse dynamics model interacts with a road-law expert system to make decisions in a fraction of a second would be... interesting.

Comment author: roystgnr 15 May 2014 04:11:31PM 2 points [-]

HungryHobo gave good arguments from tradition and liability; here's an argument from utility:

Google's cars are up over a million autonomously-driven km without an accident. That's not proof that they're safer than the average human-driven car (something like 2 accidents per million km in the US?) but it's mounting evidence. If car AI written to prioritize its passengers turns out to still be an order of magnitude safer for third parties than human drivers, then the direct benefit of optimizing for total safety may be outweighed by the indirect benefit of optimizing for own-passenger safety and thereby enticing more rapid adoption of the technology.
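A quick Poisson back-of-envelope check of the "mounting evidence" claim, using the comment's own rough baseline of ~2 accidents per million km:

```python
import math

# If the autonomous fleet were exactly as accident-prone as the rough human
# baseline, how likely is a million accident-free km? (Poisson model; the
# 2-per-million-km rate is the comment's own rough figure, not measured data.)
human_rate_per_million_km = 2.0
autonomous_million_km = 1.0

p_zero_if_human_rate = math.exp(-human_rate_per_million_km * autonomous_million_km)

# ~13.5%: genuinely suggestive of better-than-human safety, but far from
# an order-of-magnitude demonstration on its own.
```

So "not proof, but mounting evidence" is about right: zero accidents at human-level risk would happen roughly one time in seven by chance.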

Comment author: ThrustVectoring 14 May 2014 02:28:19PM 2 points [-]

They'd be better off using a shared algorithm if involved in a situation with cars reasoning in a similar fashion.

Comment author: Transfuturist 15 May 2014 06:51:26PM *  0 points [-]

This is definitely a case for superrationality. If the antagonists in an accident are both equipped, they can communicate. I'm not sure what to do about human participants, though.

The issue brought up here seems to greatly overestimate the probability of crashing into something. IIRC, the main reasons people crash are that 1) they oversteer and 2) they steer toward where they're looking, and they often look in the direction of the nearest or most inevitable obstacle.

These situations would involve human error almost every time, and a crash would most likely be due to the human driver crashing into the autocar, not the other way around. Something that would increase the probability is human error in heavy traffic.

Comment author: Jinoc 15 May 2014 02:27:08PM *  -1 points [-]

It seems there are a few distinct cases:

  • I am someone who does not wear a helmet in our current society, where this is illegal and people don't exactly discriminate in case of car accidents, so the introduction of smart cars would only confirm my current (bad) decision: no change there.

  • I currently wear a helmet, but would stop wearing one if smart cars were introduced.
    Assuming every car magically became a smart car, that means I am willing to suffer a fine in exchange for a slightly greater likelihood of surviving a nearby car crash.
    Considering that smart cars are better drivers than humans, and that car crashes are already rare, if I considered the fine adequate to incentivize me into wearing a helmet previously, I should consider it adequate now.
    There is an edge case here: smart cars are better drivers, but only by a small margin that is offset by their tendency to aim away from me.

  • I currently wear a helmet, and will continue to do so.

Only the edge case would create a morally ambiguous situation, but that seems pretty unlikely (you'd hope that a swarm of cars with superhuman reaction speeds would be more than marginally better at preventing accidents).