At the World Knowledge Forum on October 17 in Seoul, South Korea, Professor Amnon Shashua, Mobileye CEO and Intel senior vice president, described a mathematical formula developed by Mobileye that reportedly ensures that a "self-driving vehicle operates in a responsible manner and does not cause accidents for which it can be blamed."
Shashua and colleague Shai Shalev-Shwartz developed the formula, which can bring certainty to the open questions of liability and blame in the event of an accident when a vehicle has no human driver. Their proposed Responsibility Sensitive Safety model provides specific and measurable parameters for the human concepts of responsibility and caution and defines a "Safe State," where the autonomous vehicle cannot be the cause of an accident, no matter what action is taken by other vehicles, according to Intel.
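To make the idea of "measurable parameters" concrete, the RSS paper defines a minimum safe longitudinal following distance: the gap a rear vehicle must keep so that, even if the lead vehicle brakes as hard as possible, the rear vehicle can respond and brake without colliding. The sketch below implements that published rule; the specific parameter values (response time, acceleration and braking limits) are illustrative assumptions, not calibrated figures from Mobileye.

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=0.5,
                                   a_max_accel=3.0, b_min_brake=4.0,
                                   b_max_brake=8.0):
    """Minimum safe following distance (meters) under the RSS
    longitudinal rule.

    v_rear, v_front -- speeds of the rear and front vehicles (m/s)
    rho             -- rear vehicle's response time (s); assumed value
    a_max_accel     -- worst-case rear acceleration during rho (m/s^2)
    b_min_brake     -- minimum braking the rear vehicle will apply (m/s^2)
    b_max_brake     -- maximum braking the front vehicle might apply (m/s^2)
    """
    # Rear vehicle's speed after accelerating throughout the response time.
    v_rear_after = v_rear + rho * a_max_accel

    # Distance the rear vehicle covers (response + braking) minus the
    # distance the front vehicle covers while braking at its hardest.
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_rear_after ** 2 / (2 * b_min_brake)
         - v_front ** 2 / (2 * b_max_brake))

    # The safe distance is never negative.
    return max(d, 0.0)


# Example: both vehicles traveling at 20 m/s (~72 km/h).
print(rss_safe_longitudinal_distance(20.0, 20.0))  # ~43.16 m
```

If the actual gap stays above this bound, the rear vehicle cannot be blamed for a rear-end collision; this is the kind of clear, checkable definition of fault the model proposes to standardize.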
In the academic research paper "On a Formal Model of Safe and Scalable Self-driving Cars," Shashua, Shalev-Shwartz, and Shaked Shammah write that in recent years car makers and tech companies have been racing toward self-driving cars, and that the main parameter in the race is who will have the first car on the road. The goal of the new formula, according to the authors, is to add two additional crucial parameters to the equation.
First is the standardization of safety assurance: the minimal requirements that every self-driving car must satisfy, and how those requirements can be verified. The second parameter is scalability—engineering solutions that lead to unleashed costs will not scale to millions of cars, which would push interest in the field into a niche academic corner and drive it into a "winter of autonomous driving," the authors suggest. The first part of the paper proposes a "white-box, interpretable, mathematical model for safety assurance," while the second part describes the design of a system that adheres to the authors' safety assurance requirements and is scalable to millions of cars.
In his talk at the forum, Shashua called on the industry and policymakers to "collaboratively construct standards that definitively assign accident fault" when human-driven and self-driving vehicles inevitably collide. He explained that all of today's rules and regulations are framed around the idea of a driver in control of the car, and that new parameters are needed for autonomous vehicles.
"The ability to assign fault is the key. Just like the best human drivers in the world, self-driving cars cannot avoid accidents due to actions beyond their control," he said. "But the most responsible, aware and cautious driver is very unlikely to cause an accident of his or her own fault, particularly if they had 360-degree vision and lightning-fast reaction times like autonomous vehicles will."
With this new model, self-driving cars would only operate within the framework defined as "safe" according to clear definitions of fault that are agreed upon across the industry and by regulators.
Sam Abuelsamid, senior research analyst with Navigant Research’s Transportation Efficiencies program, who was briefed on the topic, commented: "As regulators and policymakers around the world struggle with how to manage the deployment of automated driving without stifling innovation, having a common, open method of evaluating the efficacy of the technology seems like a good starting point."
"The Responsibility Sensitive Safety model proposed by Mobileye," he said, "seems like a viable place to start the conversation. At least as an evaluation method, it doesn’t lock anyone into specific technologies while also providing a good framework for the decision-making process within control systems."