How the government plans to make your self-driving car safer

A self-driving car may someday have to decide between your life and the lives of others. But how should the car choose? If you don’t know how to make that decision, that’s okay — Washington doesn’t either.

That’s one big takeaway from a lengthy new Department of Transportation document that lays out options for making autonomous vehicles safer. It’s also the most public sign yet of the attention self-driving cars are getting from politicians, despite their inability to vote.

Over just the past three months, a Tesla driver died when his car’s Autopilot software failed to detect a turning tractor-trailer, Ford (F) began showing off its own autonomous (and exceedingly polite) vehicles, Lyft co-founder John Zimmer predicted that the majority of that ride-hailing service’s trips would involve self-driving cars by 2021, and Uber launched a trial of self-driving cars in Pittsburgh, in which human drivers remain seated up front, just in case.

It’s enough to make Google, once the most public advocate of driverless cars, look like it’s falling behind.

The rapid progress has also left government policymakers and auto-industry lawyers with their own catching up to do.

DOT on the spot

On Tuesday, the Obama administration set out its plan to bring national oversight to self-driving cars that, as President Obama argued in a Pittsburgh Post-Gazette op-ed, bring such benefits as “safer, more accessible driving” and “less congested, less polluted roads.”

Remember, we human drivers aren’t as good as we think. US motor-vehicle crashes killed 35,092 people in 2015, and driver choice or error caused 94% of those accidents.

The Department of Transportation’s proposed framework, as outlined in a 116-page National Highway Traffic Safety Administration document, stresses guidance over regulation.

NHTSA’s recommended “Safety Assessment” covers 15 criteria, from “Data Recording and Sharing” to “Object and Event Detection and Response.” The agency doesn’t stipulate metrics and in some cases tosses the hard choices for “Highly Automated Vehicles” to the industry.

For example, under “Ethical Considerations,” the paper shies away from a bright-line rule like, say, “A self-driving car may not injure a human being or, through inaction, allow a human being to come to harm.” Instead, it admits that when a self-driving car can only protect one person at the cost of another, its programming “will have a significant influence over the outcome for each individual.” Yes, it will.

NHTSA counsels against expecting people to take over after a software malfunction: “human drivers may be inattentive, under the influence of alcohol or other substances, drowsy or physically impaired in some other manner.”