Self-driving cars are already deciding who to kill
(Georgii Shipin / Shutterstock)

Autonomous vehicles are already making profound choices about whose lives matter, according to experts, so we might want to pay attention.

"Every time the car makes a complex maneuver, it is implicitly making trade-off in terms of risks to different parties," Iyad Rahwan, an MIT cognitive scientist, wrote in an email.

The most well-known issues in AV ethics are trolley problems: moral dilemmas, dating back to the era of streetcar trolleys, that ask whose lives should be sacrificed in an unavoidable crash. For instance, if a person falls onto the road in front of a fast-moving AV, and the car can either swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the pedestrian, what should it do?
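To see the structure of the dilemma, here is a minimal, purely hypothetical sketch. Everything in it is invented for illustration: the probabilities, the harm function, the whole framing of the choice as an expected-harm calculation, which is itself a moral stance a programmer would have to choose.

```python
# Hypothetical sketch of the trolley-style trade-off described above.
# The numbers are invented; no real AV stack reduces ethics to a
# two-line expected-harm comparison.

def expected_harm(p_fatality: float, lives_at_risk: int) -> float:
    """Toy measure of an action's expected harm: risk times exposure."""
    return p_fatality * lives_at_risk

# Option A: swerve into the barrier, risking the passenger.
swerve = expected_harm(p_fatality=0.4, lives_at_risk=1)

# Option B: continue straight, risking the pedestrian.
straight = expected_harm(p_fatality=0.7, lives_at_risk=1)

# A pure harm-minimizer swerves here. But whether passenger and
# pedestrian should be weighed equally is exactly the kind of moral
# choice someone has to decide on in advance.
action = "swerve" if swerve < straight else "straight"
print(action)  # swerve
```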

Rahwan and colleagues have studied what humans consider the moral action in no-win scenarios (you can judge your own cases at their crowd-sourced project, Moral Machine).

(What should the self-driving car do? moralmachine.mit.edu)

While human-sacrifice scenarios are only hypothetical for now, Rahwan and others say they will inevitably come up in a world full of AVs.

Then there are the ethical questions that come up every day. For instance, how should AVs behave when passing a biker or pedestrian?

"When you drive down the street, you’re putting everyone around you at risk," Ryan Jenkins, a philosophy professor at Cal Poly, told us. "[W]hen we’re driving driving past a bicyclist, when we’re driving past a jogger, we like to give them an extra bit of space because we think it safer; even if we’re very confident that we’re not about to crash, we also realize that unexpected things can happen and cause us to swerve, or the biker might fall off their bike, or the jogger might slip and fall into the street."

And there’s no easy answer to these questions.

"To truly guarantee a pedestrian’s safety, an AV would have to slow to a crawl any time a pedestrian is walking nearby on a sidewalk, in case the pedestrian decided to throw themselves in front of the vehicle," Noah Goodall, a scientist with the Virginia Transportation Research Council, wrote by email.

Human drivers can answer ethical questions big and small using intuition, but it’s not that simple for artificial intelligence. AV programmers must either define explicit rules for each of these situations or rely on general driving rules and hope things work out.

"On one hand, the algorithms that control the car may have an explicit set of rules to make moral tradeoffs," Rahwan wrote. "On the other hand, the decision made by a car in the case of unavoidable harm may emerge from the interaction of various software components, none of which has explicit programming to handle moral tradeoffs."