Spot the Robot! Automobiles
One of my first cars was a 1970 Chevy Malibu with a gorgeous 307 cubic inch V8 engine. It was anything but a robot. But if you look at the new self-driving cars, or cars with “autopilot”, you are obviously looking at a robot. Or, to use the current SAE International definitions, there are already Level 3 and Level 4 automobiles on the streets.
There are lots of ways to try to make driving safer. Atos, together with Renault, sponsors a university chair at UPMC on connected-car technologies, where connected vehicles can exchange gigabytes of data in seconds to avoid accidents – not to mention more mundane information like lane changes or traffic up ahead.
A safe ecosystem on the road – whether the vehicles are automated taxi services, self-driving long-distance trucks, or your own car – will require more than just artificial intelligence and car-to-car communications. There are many important questions about accidents and liability to be answered before we get there. It was also exactly 38 years ago that the first assembly worker was killed by a robot (one of those old-fashioned ones with no sensors to tell whether a person was nearby), and there have already been fatalities involving the autopilot features of cars.
The automobiles allowed on the streets now are between Levels 3 and 4 (conditional automation and high automation), and it is clear that the driver is responsible. But how do we deal with responsibility for accidents involving Level 5, fully self-driving cars? Do you blame the car manufacturer? The software company that programmed the logic? Or the programmer who wrote a generic optimization algorithm that happened to be used for accident calculations? And if the car has artificial intelligence that learns on the road, how would you punish the car? (There are actually cognitive philosophers asking that question.)
Being able to resolve legal issues of liability or negligence matters not just for automated cars, but for robots in general once they are routinely used outside of closely controlled environments. There is no easy answer, but there are ideas about how to avoid the question.
Several countries support no-fault driver's insurance, where policyholders are paid by their own insurance company without proof of fault for the accident, but with a restricted ability to seek additional payment through the civil-justice system (i.e. they cannot additionally sue the other involved parties). Requiring owners of robots to hold a no-fault insurance policy could avoid litigation, yet ensure that damages are compensated. A networked platform ensuring that a robot can only be used if it carries the proper insurance, with a premium that reflects how safe or dangerous that type of robot is, might be a step in the right direction.
Software can enforce that the robot only operates with a valid insurance policy (think of it as a kind of “in-app” purchase for your robot). Companies will make safer robots, not only because everyone wants a safe robot, but also because the safer the robot, the lower the overall cost to end users.
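As a thought experiment, such a software gate could be very simple: the robot checks its policy before starting, and the premium scales with the risk rating of the robot type. The sketch below is purely illustrative – the names (`InsurancePolicy`, `can_operate`, `annual_premium`) and the flat risk-factor pricing are hypothetical, not a real platform or insurer's API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InsurancePolicy:
    """Hypothetical record a networked insurance platform might issue."""
    policy_id: str
    robot_model: str
    expires: date
    risk_factor: float  # higher for more dangerous robot types

def annual_premium(policy: InsurancePolicy, base_rate: float = 500.0) -> float:
    # Premium reflects how safe or dangerous the robot type is:
    # safer models (lower risk_factor) cost their owners less.
    return base_rate * policy.risk_factor

def can_operate(policy: InsurancePolicy, today: date) -> bool:
    # The robot's software refuses to start without unexpired coverage.
    return today <= policy.expires

policy = InsurancePolicy("POL-001", "delivery-rover", date(2030, 1, 1), risk_factor=1.2)
print(can_operate(policy, date(2025, 6, 1)))  # True while coverage is active
print(annual_premium(policy))                 # 600.0
```

Even in this toy form, the incentive the paragraph above describes is visible: lowering a model's `risk_factor` directly lowers what every end user pays.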