Put Driverless Cars Back in the Slow Lane

The endless loop of driverless car hype continues, running autonomously of objective data and due scrutiny. Following the release of safety data from California, leading news outlets have touted “self-driving improvements” and proclaimed that “self-driving cars need humans less and less.” At a time when GM is pushing to ditch manual controls as early as next year and California plans to allow fully autonomous vehicles on the road within a few short months, these reports give the driverless industry a critical way to showcase safety improvements.

Unfortunately, scrutiny of the data and collection methods shows that improvements to autonomous technology are wildly uneven. Irregularities in the submitted data and associated reporting undermine the idea that autonomous technology is ready to take the wheel from human drivers. Continued reliance on voluntary reporting, rather than the state-monitored testing demanded of human drivers, will impose gargantuan costs on taxpayers through emergency assistance and infrastructure damage, along with a high human toll.

To backers and observers of autonomous technology, California’s annual release of “disengagement” data is an important barometer of progress in driverless vehicle safety and reliability. The Golden State has a reputation for being a regulatory stickler when it comes to autonomous technology. Companies logging driverless miles on public roads are required to report each instance in which a tester had to manually halt autonomous operation, whether because of a technological failure (i.e., a software or hardware issue) or unsafe driving on the part of the vehicle. Analysts have largely focused on Waymo (formerly Google) data, since the company now has three years of logged data and has accumulated by far the most miles on public roads.

The company’s headline figures since 2015 are certainly encouraging, with “all reported disengagements” dropping from .80 per thousand miles (PTM) driven to .18 PTM. Broken down by category, however, this roughly four-fold decrease in disengagements appears very uneven. While the rate of technology failures fell by more than 90 percent (from .64 to .06), the rate of unsafe driving decreased by only 25 percent (from .16 to .12). Based on the company’s accounts, broken wires and incorrect sensor readings were behind many of these disengagements, and engineers appear to have corrected many of those technological problems. But the ability of the cars to analyze situations on the road and respond has barely improved since the beginning of 2016. In key categories, like “incorrect behavior prediction” and “unwanted maneuver of the vehicle,” Waymo vehicles actually did worse in 2017 than in 2016.
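The percentage changes above follow directly from the reported per-thousand-mile rates, as a quick calculation confirms:

```python
# Waymo disengagement rates per thousand miles (PTM), as reported in the article.
overall_2015, overall_2017 = 0.80, 0.18
tech_2015, tech_2017 = 0.64, 0.06
unsafe_2015, unsafe_2017 = 0.16, 0.12

# Overall disengagements: slightly more than a four-fold drop.
print(overall_2015 / overall_2017)                  # ≈ 4.4

# Technology failures: down more than 90 percent.
print((tech_2015 - tech_2017) / tech_2015)          # ≈ 0.91

# Unsafe driving: down only 25 percent.
print((unsafe_2015 - unsafe_2017) / unsafe_2015)    # ≈ 0.25
```

The headline number thus flatters the fleet: nearly all of the improvement comes from the hardware/software category, not from safer driving decisions.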

The company can claim an overall safety improvement from the previous year only by including in its count reductions in disengagements due to “recklessly behaving road users.” It’s unclear, though, how decreased reckless driving by others can be considered proof of the increased safety of driverless vehicles. Waymo’s fleet operates in a fairly contained network of roads, meaning that many local drivers have had repeated interactions with the vehicles. Maybe local drivers have learned that tailgating these slow vehicles does little to increase speed, and have grown less hostile to robot motorists. In any case, it’s bizarre to cite the adaptability of human drivers as evidence that robot drivers are improving. And if those disengagements are excluded, safety-related disengagement rates actually increased slightly from 2016.

Granted, it’s probably not the best idea to give these figures too much credence in the first place. In November, a tester prevented a GM Cruise from running a red light, and the company declined to report the incident as a safety-related disengagement. This and other reported cases show that there’s a fudge factor at play that can dramatically alter reported safety rates.

Even taking Waymo’s numbers at face value, there’s little reason to believe that driverless cars are remotely as safe as human drivers. It’s difficult to translate disengagement statistics into a judgment about the relative safety of autonomous cars: comparing reported disengagements to human crashes is misleading, since not all disengagement situations would have led to a crash if uncorrected. To allow for a more direct comparison, Waymo estimated “simulated contacts,” the number of disengagements that would likely have led to crashes. Based on these data, last reported in 2015, Google/Waymo vehicles would have crashed .02 times per 1,000 miles driven without testers. Even generously assuming that the crash rate has halved since 2015, Waymo vehicles are still far more dangerous than human drivers.

Department of Transportation data show that Americans get into around 6 million crashes a year while driving around 3 trillion miles, a crash rate of roughly .002 per thousand miles, implying that Waymo vehicles are 4–5 times more dangerous than human drivers. GM Cruise vehicles report disengagement rates higher than Waymo’s, making the divide between autonomous and human drivers even more stark.
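The comparison can be reproduced from the article’s figures; the halved Waymo rate is the article’s own generous assumption, not a reported number:

```python
# Rough crash-rate comparison, using the figures cited in the article.
human_crashes_per_year = 6e6     # ~6 million crashes a year (DOT)
human_miles_per_year = 3e12      # ~3 trillion miles driven a year

# Human crash rate, expressed per thousand miles for comparability.
human_rate_ptm = human_crashes_per_year / human_miles_per_year * 1000
print(human_rate_ptm)            # ≈ 0.002 crashes per thousand miles

# Waymo "simulated contacts" per thousand miles (2015), then generously halved.
waymo_rate_2015_ptm = 0.02
waymo_rate_halved = waymo_rate_2015_ptm / 2

print(waymo_rate_halved / human_rate_ptm)   # ≈ 5, i.e., 4-5x the human rate
```

Without the halving assumption, the 2015 figure would put the Waymo rate at roughly ten times the human crash rate.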

In light of these data, policymakers must press for more safety before permitting driverless vehicles on the open road without human testers. This is the common-sense norm for human drivers, who must prove to the state that they are capable of performing reasonably well in everyday driving situations. If a driving test-taker proved unable to make certain turns, go a normal speed, or traverse busier roads, she would be asked to undergo additional instruction and a retest. 

And if companies like Waymo and Uber want to demonstrate to the government that their vehicles can handle a wide variety of conditions and situations, they can do so without using motorists and pedestrians as guinea pigs. These multibillion-dollar companies can increase their investments in elaborate tracks that recreate hazardous real-world conditions. Buying large tracts of land in hilly areas, building test roads, and hiring “extras” to serve as pedestrians would go a long way toward transparently proving efficacy. But insisting on public road access despite flimsy and underwhelming safety data will only lead to a tragic toll of lives and dollars.

Ross Marchand is the director of policy for the Taxpayers Protection Alliance.
