
Lalartu comments on [LINK] Utilitarian self-driving cars? - Less Wrong Discussion

7 Post author: V_V 14 May 2014 01:00PM



Comment author: Lalartu 14 May 2014 01:46:00PM 0 points [-]

It should act in favor of its passengers, of course.

Comment author: raisin 14 May 2014 01:50:40PM 6 points [-]

Why 'of course'? This doesn't seem obvious to me.

Comment author: HungryHobo 14 May 2014 05:37:32PM *  3 points [-]

Probably because almost every other safety decision in a car's design is focused on the occupants.

Take the reinforced bars protecting the passengers: do you think the designers care that those bars mean any car hitting the side suffers more damage, because it's striking a more solid structure?

They want to sell the cars, thus they likely want the car's priorities to be somewhat in line with the buyer's. The buyer doesn't care all that much about the toddler in the other car except in a philosophical sense; they care about the toddler in their own car. The person in the other car is not the priority of the seller or the buyer.

In terms of liability, it makes sense to ensure that the accident remains legally the fault of the other party, whatever the number of deaths. The law rarely accepts intentionally harming one person who wasn't at fault in order to avoid an accident and spare the lives of two people in a car who were themselves at fault.

Comment author: V_V 14 May 2014 09:00:06PM *  0 points [-]

In terms of liability, it makes sense to ensure that the accident remains legally the fault of the other party, whatever the number of deaths. The law rarely accepts intentionally harming one person who wasn't at fault in order to avoid an accident and spare the lives of two people in a car who were themselves at fault.

Makes sense. Though designing a motion-control algorithm where the inverse dynamics model interacts with a road-law expert system to make decisions in a fraction of a second would be... interesting.
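One way to picture that interaction: the controller enumerates candidate maneuvers, the dynamics model marks which are physically feasible, and the road-law component marks which keep the car legally in the right; the car then minimizes passenger risk over what's left. This is only a toy sketch, and all the names (`Maneuver`, `choose`, the risk numbers) are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    feasible: bool         # within the dynamics model's limits (grip, actuator rates)
    legal: bool            # the road-law expert system's verdict
    passenger_risk: float  # estimated harm to own passengers, 0..1

def choose(maneuvers):
    """Among feasible maneuvers that keep the car legally in the right,
    pick the one with minimal passenger risk."""
    candidates = [m for m in maneuvers if m.feasible and m.legal]
    if not candidates:
        # No legal option left: fall back to anything physically feasible.
        candidates = [m for m in maneuvers if m.feasible]
    return min(candidates, key=lambda m: m.passenger_risk)

options = [
    Maneuver("brake hard", True, True, 0.30),
    Maneuver("swerve into oncoming lane", True, False, 0.10),
    Maneuver("swerve onto blocked shoulder", False, True, 0.05),
]
print(choose(options).name)  # "brake hard"
```

The hard part in practice is that every one of those booleans and risk estimates has to be computed in milliseconds, which is exactly what makes the design "interesting".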

Comment author: roystgnr 15 May 2014 04:11:31PM 2 points [-]

HungryHobo gave good arguments from tradition and liability; here's an argument from utility:

Google's cars are up over a million autonomously-driven km without an accident. That's not proof that they're safer than the average human-driven car (something like 2 accidents per million km in the US?) but it's mounting evidence. If car AI written to prioritize its passengers turns out to still be an order of magnitude safer for third parties than human drivers, then the direct benefit of optimizing for total safety may be outweighed by the indirect benefit of optimizing for own-passenger safety and thereby enticing more rapid adoption of the technology.
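The trade-off in this argument can be made explicit with some arithmetic. All the numbers below are hypothetical (only the ~2 accidents per million km figure is taken from the comment above); the point is just the structure: if passenger-priority cars drive faster adoption, they can reduce total accidents even with a slightly worse per-km rate than total-safety cars.

```python
# Hypothetical rates, accidents per million km.
human_rate = 2.0     # rough US figure cited above for human drivers
ai_selfish = 0.2     # passenger-optimized AI: an order of magnitude safer
ai_total   = 0.15    # total-safety-optimized AI: slightly safer still per km

km = 100.0           # million km driven by the whole fleet

def fleet_accidents(adoption, ai_rate):
    """Expected accidents when a fraction `adoption` of driving is autonomous."""
    return km * (adoption * ai_rate + (1 - adoption) * human_rate)

# Assume passenger-priority cars sell better, so adoption is higher:
selfish = fleet_accidents(adoption=0.50, ai_rate=ai_selfish)
total   = fleet_accidents(adoption=0.20, ai_rate=ai_total)
print(selfish, total)  # 110.0 vs 163.0: faster adoption dominates
```

Under these made-up adoption fractions, the indirect benefit of enticing buyers outweighs the direct per-km benefit, which is the shape of roystgnr's claim.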

Comment author: ThrustVectoring 14 May 2014 02:28:19PM 2 points [-]

They'd be better off using a shared algorithm when involved in a situation with other cars reasoning in a similar fashion.

Comment author: Transfuturist 15 May 2014 06:51:26PM *  0 points [-]

This is definitely a case for superrationality. If the parties to an accident are both equipped, they can communicate. Not sure what to do about human participants, though.
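The minimal version of a shared algorithm doesn't even need communication: if both cars run the same deterministic tie-breaking rule, their choices are guaranteed to be consistent, so they can't mirror each other into a collision. A toy sketch (the ID-based rule is purely illustrative):

```python
def evasive_action(my_id, other_id):
    """Both cars run this identical, deterministic rule, so their
    decisions are consistent by construction: lower ID yields."""
    return "yield" if my_id < other_id else "proceed"

a = evasive_action("car-A", "car-B")
b = evasive_action("car-B", "car-A")
print(a, b)  # yield proceed -- never both yield, never both proceed
```

With actual communication the cars could do better than a fixed rule, but even this degenerate shared algorithm already rules out the symmetric failure mode. Human participants, as noted, don't run the rule, which is the unsolved part.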

The issue brought up here seems to greatly overestimate the probability of crashing into something. IIRC, the main reasons people crash are that 1) they oversteer and 2) they steer toward where they're looking, and they often look in the direction of the nearest or most unavoidable obstacle.

These situations would involve human error almost every time, and a crash would most likely be due to the human driver crashing into the autocar, not the other way around. One thing that would increase the probability is human error in heavy traffic.