Written by John M. Simpson, director of Consumer Watchdog’s Privacy Project. This is one in a series of periodic guest columns by industry thought leaders.
The new federal policy on self-driving cars and trucks recently released by the Obama administration is headed in the correct direction. The guidance issued last month requires a safety assessment before automated vehicles are deployed and is not the complete sellout to industry that many safety advocates had feared.
The National Highway Traffic Safety Administration also took an important enforcement step: alongside the new policy, it issued a guidance bulletin making clear that the agency can act and demand a recall when an autonomous feature or vehicle poses a safety threat.
But the nation still needs strong, enforceable Federal Motor Vehicle Safety Standards for the autonomous driving features now starting to appear on cars and trucks, and eventually for fully self-driving vehicles.
When manufacturers build cars and trucks, they must meet specific standards for crash protection – for example, providing seat belts, air bags and adequate bumpers. Trucks are required to have rear underride guards. There’s no reason why autonomous driving features on cars, trucks and commercial vehicles should not also have to meet predetermined criteria and measures.
Key to the National Highway Traffic Safety Administration’s new policy is the 15-point safety assessment calling on developers of self-driving technology to explain such things as how the robot deals with ethical issues – would a computer be left to make life-and-death decisions when a collision is imminent? Developers must also detail where their robot cars are supposed to function; how the software perceives and responds to objects; what the vehicle does if the robot technology fails; how it will comply with federal, state and local laws; and how the robots protect the occupants’ privacy.
All 15 safety-assessment points are vital issues that any self-driving technology developer needs to consider and must deal with appropriately. Certainly the public ought to know how developers propose to do it. And we will know, if the manufacturers give the requested information to NHTSA. That's the rub. For now, filing those 15-point safety assessment reports is strictly voluntary, and you can only hope the manufacturers do the right thing. The good news is that NHTSA plans to implement a rule that would make the reports mandatory. It's imperative that this happens quickly.
It’s also important to understand that NHTSA doesn’t even try to deal with one of the most troubling policy issues raised by autonomous vehicles. If robot cars and trucks are successfully deployed, they will be job killers. Truck driving is one of the few remaining good-paying blue-collar jobs. What are 1.7 million truck drivers supposed to do when the robots take over? Uber and Lyft are hurting traditional taxi companies. What happens when the ride-hailing industry transforms into the robotic taxi business?
The next question is whether the next administration will remain committed to the policies just spelled out by NHTSA. The Federal Automated Vehicles Policy commits NHTSA to an open and transparent process and seeks public comment on the proposal by Nov. 22, just after the election. But for now let’s assume NHTSA has kick-started the federal automated vehicle regulatory process and the agency continues on the road it has mapped out toward its professed goal of striking a balance between promoting Highly Automated Vehicles, or HAVs, while protecting our safety.
The policy adopts SAE International's six levels of driving automation, from Level 0 (no automation) to Level 5 (full automation). At Level 3, the robot vehicle can do the driving in certain conditions without the human driver paying close attention. When the car encounters a situation it can't handle, it turns control back to the human driver – supposedly with enough warning to take the steering wheel. Levels 4 and 5 don't envision human intervention. Those vehicles might not even have steering wheels or pedals. The difference is that Level 4 vehicles would be limited to certain conditions or areas. A Level 5 robot car or truck would – if one is ever actually successfully developed – do anything a human driver can do, anywhere a human can drive.
The autonomous vehicles being tested now are operating at Level 3. So-called truck platoons – digitally connected packs of two to five trucks driving in tight formation to reduce wind resistance and increase fuel efficiency – could operate at Level 2 or Level 3.
The main problem with NHTSA's 15-point safety assessment is that it doesn't specify standards detailing how the robot car developers must deal with the 15 safety issues. It merely asks them to explain how they addressed each one. If the assessment report becomes mandatory, they'll be required to file the report but not to meet specific standards.
It’s a start, but not nearly enough. NHTSA needs to launch a formal rulemaking process to develop key Federal Motor Vehicle Safety Standards that apply to automated technologies. A good example is automatic emergency braking. Carmakers have agreed to introduce the life-saving technology voluntarily, but it ought to be required by law. A recent study by the Insurance Institute for Highway Safety found that the technology reduces rear-end crashes by as much as 50 percent.
Consumer Watchdog, the Center for Auto Safety and former NHTSA Administrator Joan Claybrook have filed a joint petition seeking a rulemaking that would set an automatic emergency braking standard for passenger vehicles.
It's not at all clear that self-driving Level 4 or Level 5 robot cars without a steering wheel and pedals can ever be successfully deployed. It is, however, not too early to begin crafting federal safety standards that would govern how such a robot vehicle must operate before it is allowed on the road. The agency needs to move more quickly to issue rules that specifically cover self-driving cars and trucks. For instance, NHTSA envisions data sharing among self-driving car developers so they can learn from one another's mishaps. For now, that data sharing is voluntary, but it should be required.
NHTSA’s Federal Automated Vehicles Policy also attempts to define what regulations are federal responsibilities and which belong to state governments. It says that states should handle licensing, traffic laws and regulations, and safety inspections as well as regulate insurance and liability. “States,” NHTSA says, “may determine that in some circumstances liability for a crash involving a human driver of an HAV should be assigned to the manufacturer of the HAV.”
Many safety advocates feared that federal regulators such as Department of Transportation Secretary Anthony Foxx and NHTSA Administrator Mark R. Rosekind had completely succumbed to the self-driving car developers’ hype. That concern has been fueled by the revolving door between NHTSA and the industry, which has seen at least four former top NHTSA officials take jobs promoting self-driving cars.
It's good that the new policy shows the outlines of where regulation of autonomous vehicles is headed. It's not the green flag allowing the industry to irresponsibly develop robot cars that many safety advocates had feared. The question now is what happens after President Obama leaves office. Will the next administration put policies in place that allow the development of autonomous vehicles while still protecting the public adequately?
Editor’s note: John M. Simpson is director of Consumer Watchdog’s Privacy Project. Before that he was a journalist, holding several positions at USA Today, including deputy editor.