Putting the brakes on our driverless future

As automated vehicles become more widespread, it is becoming increasingly necessary to regulate them and address the legal implications of their use.


No one questions the idea that self-driving cars are an inevitable part of the future. Just about every auto manufacturer in the world is working on their version of an autonomous vehicle (AV), be it Ford, Tesla or Toyota.

But as the technology advances and manufacturers test their platforms on open roads, it becomes crucial to sort through the regulations and legal implications before widespread adoption ensues.

One pressing question is what to do when people are injured or killed in an accident involving an automated driving system. Earlier this month, a development in a Los Angeles courtroom set the stage for what could become a landmark case on that perplexing question.

First we need to rewind to December 29, 2019, when a Tesla on Autopilot exited a freeway in Gardena, ran a red light and crashed into a Honda Civic, killing the Civic’s driver and passenger instantly.

Two years later, on January 19, 2022, prosecutors in LA County filed two counts of vehicular manslaughter against the driver of the Tesla, 27-year-old Kevin George Aziz Riad.

Riad’s felony prosecution is believed to be the first in the US where a driver is accused of being responsible for a fatality while using a partially automated driver-assist system.

Riad’s attorney did not respond to TRT World’s request for comment, nor did Tesla.

Should Riad be found guilty, it could set a precedent for human responsibility in an accident involving the current crop of autonomous driving systems.

Most commercial AVs in operation are classified as “Level 2” vehicle autonomy, meaning the car can handle tasks like steering, acceleration and braking under certain circumstances, but only with a driver ready to take over at any moment.
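To make that division of labour concrete, the logic of a Level 2 system can be sketched in a few lines of code. The snippet below is purely illustrative, not any manufacturer’s implementation, and every name in it is hypothetical: the system automates steering, braking and acceleration only while it can confirm an attentive driver, and otherwise warns and disengages.

```python
from dataclasses import dataclass

# Hypothetical sketch of Level 2 ("hands on, eyes on") logic: the system
# may steer, accelerate and brake, but only while a human driver is
# confirmed ready to take over at any moment.

@dataclass
class VehicleState:
    lane_centered: bool
    driver_hands_on_wheel: bool
    driver_eyes_on_road: bool

def level2_step(state: VehicleState) -> str:
    """One control cycle of a hypothetical Level 2 driver-assist system."""
    driver_ready = state.driver_hands_on_wheel and state.driver_eyes_on_road
    if not driver_ready:
        # Level 2 is supervision-dependent: without an attentive driver,
        # the system must warn and hand control back, not keep driving.
        return "warn driver and disengage assist"
    if not state.lane_centered:
        return "assist: apply corrective steering"
    return "assist: maintain speed and lane"

print(level2_step(VehicleState(lane_centered=True,
                               driver_hands_on_wheel=True,
                               driver_eyes_on_road=False)))
# -> warn driver and disengage assist
```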

When driver-assist systems are in operation, the National Highway Traffic Safety Administration (NHTSA), a federal body under the US Department of Transportation, places the onus on the person in the driver’s seat.

“Every vehicle sold to US consumers still requires the driver to be actively engaged in the driving task, even when advanced driver assistance systems are activated,” the agency said in a statement.


The US National Highway Traffic Safety Administration opened a probe into Tesla’s Autopilot software last year, citing the cars’ repeated collisions with parked emergency vehicles. The NHTSA investigation covers Tesla Models Y, X, S, and 3 vehicles released from 2014 through 2021.

Setting aside the lack of legislation regulating autonomous vehicle companies or the systems themselves, public awareness also lags well behind. Messaging from automakers, tech companies, regulators and legislators on the technicalities of AVs has been muddled at best.

According to Sohan Dsouza, incidents like the Gardena crash might end up forcing authorities to put rules in place and limit where self-driving modes can be engaged, in addition to clarifying legal liability.

“Potential legal hazards like this could result in re-evaluations by automakers of where drivers are permitted to engage self-driving mode, perhaps calculated to balance maximising driver convenience with minimising manufacturer liability,” Dsouza, a former member of the Moral Machine research team at the MIT Media Lab, told TRT World.

He cautions, however, that restricting where self-driving modes can be engaged could backfire as a safety measure and end up resulting in more crashes.

Shu Kong, a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University, believes the best way to think about liability in incidents involving autonomous vehicles is to first ask whether a human driver could have avoided the hazard, say, a motorcyclist making a sharp turn.

“If not, we should treat the autonomous vehicle as a normal human-driving car,” Kong told TRT World.

Might the Tesla autopilot case act as a wakeup call for those asleep at the wheel of their quasi-robotic saloons?

Mary Cummings, a professor at Duke University who studies the interaction between humans and autonomous driving systems, believes there is an unrealistic expectation that humans will stay attentive on the road and react the moment a semi-autonomous system needs intervention.

“We can’t sustain attention, especially in boring environments like highway driving,” Cummings told Business Insider. “Expecting the human to be able to just step in when we know they haven’t been paying attention is a huge problem.”


The thorny issue of liability

While assessing liability at or below Level 2 automation is one thing, what happens when we reach the stage of fully automated driving and there’s an accident? Is it the driver, who never had control of the vehicle in the first place? The developer who created the driving software? Or the manufacturer who assembled and supplied the vehicle?

For Selin Cetin, secretary general of the IT Law Commission at the Istanbul Bar Association, it would have to be evaluated on a case-by-case basis.

“Because an AV must work in harmony with sensors, hardware, software while driving, it is possible that an accident could be caused by software and organisational shortcomings of the AV system,” Cetin told TRT World. In such an instance, it would be hard to hold a human responsible, she says.

Then there are situations where the person behind the wheel is liable, such as when they are required to update the AV’s software; failing to do so could see blame allocated to the driver.

Manufacturer liability comes into play when a system is not developed in accordance with required safety standards, or fails to provide the driver with the necessary information.

Liability also ties in with local authorities, who are responsible for ensuring highway safety.

Given all these variables, “a delicate balance should be considered between these players and situations, both during the regulation processes and a single legal dispute,” Cetin contends.

Then there’s the question of insurance. Once the act of driving is handed over to an automated system, how do we determine what constitutes safe versus risky driving?

Appropriate regulations are crucial here, and some countries have already passed AV laws covering insurance, safety standards and vehicle testing. Germany passed its Autonomous Driving Act last July, while the UK passed its Automated and Electric Vehicles Act back in 2018.

Under the UK’s Automated and Electric Vehicles Act, an insurer is liable for all damage arising from an accident if three conditions are all met: the accident was caused by an AV while driving itself; the AV was insured at the time of the accident; and an insured person or any other person suffered damage as a result of the accident.
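Read as a rule, the Act’s test is conjunctive: the insurer pays only when all three conditions hold at the same time. Here is a minimal sketch of that logic, with hypothetical field names that are not drawn from the statute’s wording.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Act's test: the insurer is liable only
# when all three conditions hold together (field names are invented).

@dataclass
class Accident:
    caused_by_av_driving_itself: bool  # the AV was driving itself at the time
    av_insured_at_time: bool           # a policy covered the AV at the time
    person_suffered_damage: bool       # an insured or any other person was harmed

def insurer_liable(accident: Accident) -> bool:
    return (accident.caused_by_av_driving_itself
            and accident.av_insured_at_time
            and accident.person_suffered_damage)

print(insurer_liable(Accident(True, True, True)))   # True: insurer covers the damage
print(insurer_liable(Accident(True, False, True)))  # False: no policy was in place
```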

While uncertainties remain and attributing liability will not always be clear cut, Cetin stresses the need for regulation “that can meet the working principles of AV systems to avoid unlawful violations for each party.”

Ultimately, Dsouza believes it will require a deliberative approach.

“This is a complex, evolving, feedback-riddled problem that will have to be solved with further research on real-world outcomes for road safety as the tech advances, and in parallel conversations among ethicists, lawmakers, industry, consumers, and the public,” he said.


Baidu's self-driving robotaxis are tested on the street at Yizhuang town on August 30, 2021 in Beijing, China.

Bumps in the road

Safety is one of the main factors fuelling the push for a driverless future.

There are over 1.3 million fatalities worldwide from road crashes every year, and nine out of ten of those crashes are attributed to human error.

The solution, self-driving advocates say, is to take humans out of the equation. A widely cited report from McKinsey predicted that by mid-century, computer-driven cars could reduce road accidents by 90 percent.

Equipped with lidar sensors, high-definition maps, GPS and artificial neural networks, AVs, we are told, will liberate us from bad driving and prove more reliable than their reckless human counterparts. Stephen Zoepf, former executive director of the Center for Automotive Research at Stanford, put it bluntly: “Computers don’t get drunk.”

Meanwhile, the race for driverless supremacy is not shaping up to be a war between tech companies and automotive giants. Rather, the two have been teaming up.

Both Apple and Uber have purchased AV startups, and Uber has worked with Volvo to roll out tens of thousands of self-driving cars. Google’s self-driving car division, Waymo, purchases vehicles from Jaguar Land Rover and Chrysler. Honda is collaborating with Cruise, General Motors’ driverless division, while Huawei is offering in-house AI chipsets for several joint ventures. SoftBank, via its $100 billion Vision Fund, has funnelled billions into AV development at Uber, General Motors and Toyota.

But for all the heady forecasts that we’d all become permanent backseat passengers by now, the industry has had to confront some hard truths.

For one, given the immense engineering challenges and data required, AVs are going to take much longer to reach mass scale than previously assumed.

In 2019, Tesla CEO Elon Musk, who has a long history of over-promising and under-delivering when it comes to his company’s “full self-driving” software, proclaimed there would be over a million fully self-driving cars on the roads by 2020. He was forced to temper his enthusiasm last year, admitting that self-driving was “way harder” than he thought.

While the robotaxi craze has also started to cool down, Waymo, Cruise and Baidu are still banking on the proliferation of autonomous ride-hailing vehicles before the end of the decade.

With full autonomous driving not yet ready to become the dominant form of transportation, investor sentiment has taken a hit too.

Companies like Waymo, which enjoys a commanding lead in the AV world, have seen their valuations take a massive hit as self-driving systems haven’t matured as quickly as anticipated.

For that future to arrive, investors will need to pony up huge sums and be willing to tolerate zero cash flow until the technology is safe enough to launch, and few will have deep enough pockets to take on that challenge.

If – and when – the tech is ready, the transportation industry will face a disruption that could cost millions of driving jobs while adding new AV roles, from teleoperators to remote-assistance drivers, as companies flesh out the commercialisation of self-driving tech.


Modelling the real world

At a more fundamental level, how autonomous can we expect self-driving cars to be?

The technology hinges on an array of environmental, technical and social infrastructure, from satellite signals and sensor feedback to fuel stations and legal frameworks.

Take the case of Elaine Herzberg, who in 2018 became the first pedestrian killed by an AV when one of Uber’s test vehicles struck her in Arizona. According to an investigative report by the NTSB, the car’s sensors detected Herzberg, but because she was walking her bike across the road outside a crosswalk, the software initially “classified her as an unknown object, then as a vehicle, and finally as a pedestrian.”
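A classifier that keeps switching labels gives the rest of the software little chance to settle on a stable prediction of where an object is heading. The toy sketch below, which is not Uber’s actual pipeline, assumes a tracker that throws away an object’s motion history every time its label changes, leaving nothing to extrapolate a path from.

```python
# Toy sketch, not Uber's actual software: a tracker that discards an
# object's motion history whenever its classification changes, so a
# predicted path never stabilises while the label keeps flickering.

def track(detections):
    """detections: (label, position) pairs from successive sensor frames."""
    history = []           # positions observed under the current label
    current_label = None
    for label, position in detections:
        if label != current_label:
            history = []   # reclassified: start tracking from scratch
            current_label = label
        history.append(position)
    if len(history) < 2:
        return current_label, None                 # too little data to extrapolate
    velocity = history[-1] - history[-2]
    return current_label, history[-1] + velocity   # crude next-position estimate

frames = [("unknown", 10.0), ("unknown", 9.5),
          ("vehicle", 9.0), ("pedestrian", 8.5)]
print(track(frames))  # ('pedestrian', None): no usable history left to predict from
```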

Many media reports instead attributed the crash to the human safety driver’s failure to notice Herzberg and override the system.

If the purpose of driverless cars is to overcome human error, such fatal AV incidents underscore how much these systems are reliant on human monitoring and supervision.

Furthermore, evaluating data is critical to understanding how a machine learning system will operate once deployed in the world.

First, large-scale datasets are collected to capture how human drivers behave across different scenarios and around different obstacles. Engineers then build a model that abstracts the driving patterns demonstrated in those datasets.

“However, no matter how hard they collect the datasets, we are almost sure that they do not capture all the possible scenarios encountered in the real world,” Kong said. “As a result, the model built on such datasets will not perform reliably well in novel scenarios.”

So if a self-driving AI’s testing data doesn’t include jaywalking pedestrians, it cannot give an accurate picture of how the system will perform when it encounters one in the real world.
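Kong’s warning is essentially about distribution shift: a model only ever answers in terms of the scenarios it has seen. The toy sketch below makes the point with a hypothetical nearest-neighbour “model” over hand-made scenario features rather than a real driving stack; a jaywalking pedestrian absent from the training set is simply shoehorned into the closest known category.

```python
import math

# Toy sketch of distribution shift: a model trained only on the scenarios
# it has seen will force any novel input into one of its known categories.
# Feature vectors are hypothetical: (vehicle speed, distance to object).
training_data = {
    "car ahead braking":       (25.0, 30.0),
    "cyclist in bike lane":    (15.0, 5.0),
    "pedestrian at crosswalk": (10.0, 8.0),
}

def predict(features):
    """Nearest-neighbour lookup: always answers with a known scenario."""
    return min(training_data,
               key=lambda name: math.dist(features, training_data[name]))

# A jaywalking pedestrian mid-block never appeared in the dataset, yet the
# model confidently maps it to the nearest scenario it does know about.
novel_scenario = (14.0, 5.5)
print(predict(novel_scenario))  # -> "cyclist in bike lane"
```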

With that in mind, Kong maintains that, for now, human drivers who purchase AVs should take full responsibility for their own safety.

“I treat semi-autonomous systems like a module for fun, just like a CD player in cars.”
