By Bill DeVries
A slew of recent announcements suggests that the self-driving cars of the future have arrived. Google sibling Waymo is rolling out paid autonomous taxi rides in San Francisco and Los Angeles. Elon Musk announced plans to unveil Tesla’s robotaxi this October.
This momentum is enough to make city planners hopeful for an imminent transformation. But widespread adoption of autonomous vehicles could be a long time coming. Before self-driving cars can live up to their potential, we need major changes to our physical infrastructure, laws, and the way we think.
Currently, the United States sees about 40,000 traffic fatalities a year, mostly due to human error. Transportation emits more greenhouse gases than any other sector, and traffic jams cost drivers countless hours.
Autonomous vehicles paired with well-designed roads can help solve these problems. But to be fully effective, cities need to incorporate sensor-enabled “smart roads.” These can include dedicated lanes for autonomous vehicles that synchronize their movements and provide information about potential obstacles.
Getting those roads built is just one challenge among many.
For autonomous vehicles to be widely accepted, they need to protect passengers more effectively than human-driven cars. That requires the vehicle’s software to be trained on every possible situation it could encounter.
You can supply self-driving cars with this information through test driving, which is what most autonomous vehicle companies do now — they drive around sensor-equipped cars to map streets and learn how to detect objects. But this takes a long time — by one researcher’s estimate, autonomous vehicles would need to log 8.8 billion miles to acquire all the necessary information.
The alternative is to use visual simulation. Based on real-world data, this technology models a physical streetscape and produces a virtual copy, known as a “digital twin.” Simulations also allow companies to test dangerous scenarios like icy roads and collisions.
Let’s not forget the challenge of manufacturing the cars themselves at scale. Since they use new and complex technological systems, they require a more sophisticated design and production approach. Virtual models and simulations help ensure these technologies work properly before building expensive prototypes.
Solutions to other challenges may be slower to coalesce.
The lack of appropriate regulations could slow adoption of autonomous vehicles. Right now, there are no federal rules governing this market. That might sound like a good thing. But in the absence of federal rules, companies will be left with a patchwork of conflicting state laws. This inconsistency will create challenges for manufacturers and could undermine public trust in self-driving cars.
Perhaps the greatest challenge will be arriving at a shared set of ethics. Autonomous vehicles may not fall asleep at the wheel, but there will still be accidents, forcing regulators to rethink the human-car relationship. Which party bears the blame when an autonomous vehicle is involved in a collision: the car manufacturer, the software maker, or the person in the vehicle?
If that problem weren’t thorny enough, there’s the question of how autonomous vehicles should behave when a collision becomes unavoidable. Customers want a vehicle that defends its passengers. Yet cars that prioritize the lives of their riders result in more total fatalities, studies show. Which set of machine behaviors should manufacturers install?
Autonomous vehicles can deliver safer roads, greater mobility, and a more sustainable and equitable transportation system. But they won’t do any of those things unless we address these challenges head-on.
Bill DeVries is the vice president of North America customer solution experiences at Dassault Systèmes. This piece originally ran at Futurride.com.