How can automakers make AI safe for drivers and pedestrians?
A mix of international cooperation and local solutions could help AI developers overcome unresolved safety issues. By Jacob Moreton
It is widely believed that artificial intelligence (AI) will soon take its place at the center of automotive development, but is that future certain? The road to autonomous and intelligent vehicles remains strewn with obstacles, in particular safety requirements.
How can automakers and software developers ensure that safety is at the heart of future automotive AI innovations?
Building safe AI systems is “clearly a very difficult problem,” said Stan Boland, CEO of Five AI, speaking at a virtual conference for self-driving industry stakeholders. The key problem, he explained, is that systems built on AI will always contain an element of error. “It’s always going to pose the challenge of how we build systems and how we can confidently meet a level of safety criteria that allows us to put them on the road,” he said. One possible solution could be a so-called system “safety contract,” he argued. If developers accept that errors will occur, they can determine what level of error is acceptable and design a system resilient enough to tolerate errors within that threshold.
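The idea of accepting a bounded level of error can be made concrete with a statistical acceptance test. The sketch below, using entirely hypothetical numbers, computes a one-sided Clopper-Pearson upper bound on a system's true failure rate from test observations and checks it against an error budget; it is an illustration of the general approach, not Five AI's method.

```python
# Sketch: deciding whether an observed error rate meets a safety threshold.
# The numbers and the "acceptable level of error" are hypothetical; in
# practice that threshold is a policy and regulatory choice.
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def upper_bound(failures: int, trials: int,
                confidence: float = 0.95, tol: float = 1e-9) -> float:
    """One-sided Clopper-Pearson upper bound on the true failure rate,
    found by bisection on the binomial tail probability."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # raise the bound while observing <= `failures` is still plausible
        if binom_cdf(failures, trials, mid) > 1 - confidence:
            lo = mid
        else:
            hi = mid
    return hi

# e.g. 2 failures seen in 100,000 test scenarios, against a 1-in-10,000 budget
bound = upper_bound(2, 100_000)
print(bound < 1e-4)  # the system passes only if the bound fits the budget
```

With zero failures this reduces to the familiar "rule of three": roughly 3/n at 95% confidence, which is why enormous test volumes are needed to demonstrate very low error rates.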
Jan Becker, CEO of Apex.AI, agreed that an error-free system is essentially impossible. Since AI is a learned model that teaches a system how to react based on certain inputs, there will always be some unknowns that were not available during the training process, he said. But the problem is not only technical: what level of error can the industry and the public accept? “There will never be a perfectly safe solution. But it will still be a lot safer than humans are today,” Becker said.
Learning and development
But AI systems aren’t just trained once and released. Autonomous systems “develop understanding, make decisions, and assess their confidence based on the training data provided to them,” says AI development firm Appen. The better the training data, the better the performance of the model.
For this reason, innovations such as over-the-air (OTA) updates are crucial, said Georges Massing, Vice President of Digital Vehicles and Mobility at Mercedes-Benz. It is a question of scale, he said: the more data a vehicle receives, the better it can understand the environment around it. OTA updates also allow automakers to develop and update safety features on many units at once – a June 2021 update from BMW upgraded the software on over 1.3 million vehicles.
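Updating safety features across a fleet of that size is usually done in stages rather than all at once. The sketch below shows one common pattern for such gating – deterministic bucketing of vehicles so a rollout can expand from a small canary cohort to the whole fleet. It is a generic illustration; the VIN format, fraction, and process are invented, not BMW's or Mercedes-Benz's actual mechanism.

```python
# Sketch: staged OTA rollout gating (hypothetical, not any automaker's real
# process). Each vehicle is hashed to a stable bucket in [0, 1), so the same
# vehicles stay in the canary cohort as the rollout fraction grows.
import hashlib

def rollout_bucket(vin: str) -> float:
    """Map a VIN to a stable value in [0, 1) for cohort assignment."""
    digest = hashlib.sha256(vin.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def update_enabled(vin: str, rollout_fraction: float) -> bool:
    """A vehicle takes the update once the fraction covers its bucket."""
    return rollout_bucket(vin) < rollout_fraction

fleet = [f"WBA-TEST-{i:06d}" for i in range(10_000)]  # toy fleet of 10,000
canary = sum(update_enabled(v, 0.01) for v in fleet)
print(0 < canary < 500)  # roughly 1% of the fleet at the canary stage
```

Because the hash is deterministic, raising the fraction from 0.01 to 1.0 only ever adds vehicles, which keeps early cohorts stable for monitoring before a full-fleet push.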
A continuous flow of data to the car can also help systems adapt to new environments, which is especially important for ensuring high levels of safety at companies operating in different markets with disparate safety expectations. In Germany, for example, a car or bicycle heading towards the vehicle is not considered safe, Massing said, while in China it is considered normal. If an autonomous vehicle (AV) developed in Germany were brought to China, it would perceive behaviors that local drivers consider normal as dangerous.
There are significant variations in safety expectations even within Western Europe. Driving behavior acceptable in Germany may be unacceptable in neighboring Austria, Massing said. “If you drive to the Eiffel Tower in France, refusing to give way is normal. For us in Germany, it’s a mess.” AI must therefore adapt to different cultural points of view.
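One way such regional adaptation shows up in practice is as per-market tuning of what the driving stack treats as "normal." The sketch below illustrates the idea with a lookup of region-specific norms; the region codes are real, but the parameter names and values are purely invented for illustration.

```python
# Sketch: region-specific tuning of what a planner treats as "normal"
# behavior. The parameters and numbers here are illustrative inventions,
# not any automaker's calibration.
from dataclasses import dataclass

@dataclass(frozen=True)
class DrivingNorms:
    min_gap_s: float  # smallest cut-in gap (seconds) locally treated as routine

NORMS = {
    "DE": DrivingNorms(min_gap_s=2.0),  # tighter gaps read as aggressive
    "AT": DrivingNorms(min_gap_s=2.5),
    "FR": DrivingNorms(min_gap_s=1.0),  # dense urban merging is routine
    "CN": DrivingNorms(min_gap_s=0.8),
}

def is_anomalous(region: str, gap_s: float) -> bool:
    """Flag a cut-in as abnormal only relative to local expectations."""
    return gap_s < NORMS[region].min_gap_s

# The same 1.2 s cut-in is an anomaly in Germany but routine in China
print(is_anomalous("DE", 1.2), is_anomalous("CN", 1.2))  # True False
```

The design point is that the anomaly judgment is relative, not absolute: the same observation maps to different risk assessments depending on which market's norms are loaded.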
These differences in local expectations mean that there is no solid definition of what exactly is acceptable or safe. For this reason, Boland said, the industry needs to formally define rules of conduct, whether in terms of safety, comfort or technology. This would then allow automakers and suppliers to measure the results, with precise data, and use that information to refine regulations. A framework could be global, Boland said, but specific local rules would still be needed.
On the other hand, Tatjana Evas, head of legal and policy at the European Commission, said the Commission is prioritizing international discussions – at least within the bloc – with its April 2021 proposal for an Artificial Intelligence Act. The law would regulate AI systems across the EU single market through product safety rules, including for AVs. Evas said the Commission is also working with standards bodies to define what human oversight of AI means.
Integrating safety into AI requires extensive testing. But automakers also need to test vehicles on the road without compromising the safety of pedestrians or other drivers. How should the industry approach the testing process?
We must enrich the simulation, and therefore build systems including intelligent agents capable of exploring the behavioral elements of a system
For Gary Hicok, Senior Vice President at Nvidia, the answer lies in simulation: “Everything must be tested in simulation before proceeding to a test drive. Next, we test on tracks to make sure the vehicle, operating environment, and software are all good. Then finally we put test drivers in the vehicle. During the testing phase, while we are checking the system and making sure it is working fine, there is no reason to take any chances.”
Meanwhile, Boland cited test projects in the United States, such as those by Waymo, Cruise, Aurora and Nvidia, as examples to follow. In fact, Waymo developed a second virtual test system, Simulation City, after discovering gaps in its capabilities. The simulation has to be “event-driven,” Boland said. “We need to enrich it, and therefore build systems that include intelligent agents capable of exploring the behavioral elements of a system.”
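The "intelligent agents" Boland describes can be pictured as adversarial scenario search: an agent varies scenario parameters looking for cases where the driving stack violates a safety rule. The toy sketch below uses random search over a cut-in scenario against a stand-in braking model; every component is a simplified invention, not Waymo's or Five AI's simulator.

```python
# Sketch of "event-driven" simulation with an adversarial agent: the agent
# explores cut-in parameters looking for cases where a toy ego model fails
# to keep a 2 m gap. The dynamics model is a stand-in, not a real stack.
import random

def ego_gap_after_cut_in(cut_in_gap_m: float, speed_delta_mps: float) -> float:
    """Toy model: remaining gap once the ego brakes at a fixed 4 m/s^2
    to shed `speed_delta_mps` of closing speed (distance lost = dv^2 / 2a)."""
    distance_lost = max(speed_delta_mps, 0.0) ** 2 / (2 * 4.0)
    return cut_in_gap_m - distance_lost

def adversarial_agent(trials: int = 1000, seed: int = 0):
    """Randomly explore the scenario space, collecting safety violations."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        gap = rng.uniform(2.0, 30.0)      # initial cut-in gap, metres
        delta_v = rng.uniform(0.0, 10.0)  # closing speed, m/s
        if ego_gap_after_cut_in(gap, delta_v) < 2.0:
            failures.append((gap, delta_v))
    return failures

found = adversarial_agent()
print(len(found) > 0)  # failing scenarios are surfaced for engineering triage
```

Production systems replace the random search with learned or optimization-driven agents that actively steer toward the rare behavioral corner cases, which is what makes the simulation "event-driven" rather than a replay of logged drives.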
Ultimately, neither the auto industry nor the IT industry has a complete answer to the problem of safe AI, Boland argued. While the automotive industry is strong at designing and testing safe systems and making resilient design choices, the IT industry has its own strengths in building cloud-native systems and applying machine learning to problems. The two will have to come together.
But putting this into practice in Europe is still a work in progress, added Boland, as the industry is in conflict over which approach it will ultimately take to software development. Some companies seek to emulate Apple’s example by developing all hardware and software under one roof, while others intend to integrate capabilities from a variety of sources.
Many industry leaders are unsure of the way forward, or do not even feel equipped to make changes: according to McKinsey data, only 40% of research and development leaders who see software as a major disruptor feel ready to make the necessary changes to their operating models. Ultimately, the auto industry must decide which development path it will take towards AI safety, Boland said. “Once it’s decided, it will get easier.”