By Sumeet Saini, Head of Research at the Artificial Intelligence Society at King’s College London
Self-driving vehicles are perhaps the best-known area of AI research among the general public. The prospect of vehicles that can travel to destinations without human intervention has captured our collective imagination for years. Asked about the future of self-driving vehicles in 2019, Elon Musk stated: “A year from now, we’ll have over a million cars with full self-driving, software, everything…”. With modern technology and the backing of major companies like Google and Tesla, one question remains: where are the self-driving cars?
Before discussing the obstacles holding back progress in this domain, it is important to understand how self-driving vehicles are classified. In 2018, the Society of Automotive Engineers (SAE) published a standard identifying six levels of driving automation. These are:
- Level 0: Full manual driving (emergency braking permitted)
- Level 1: Limited driver assistance (e.g. cruise control)
- Level 2: Partial automation (steering and acceleration)
- Level 3: Conditional automation (defers to the driver for difficult decisions)
- Level 4: High driving automation (full self-driving within limited areas)
- Level 5: Full automation
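The six levels and the rule about driver attentiveness can be captured as a small data structure. Below is a minimal sketch in Python (the class and function names are my own, paraphrasing the list above, not part of the SAE standard):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving-automation levels, paraphrased from the list above."""
    NO_AUTOMATION = 0           # full manual driving; emergency braking permitted
    DRIVER_ASSISTANCE = 1       # limited assistance, e.g. cruise control
    PARTIAL_AUTOMATION = 2      # steering and acceleration automated
    CONDITIONAL_AUTOMATION = 3  # defers to the driver for difficult decisions
    HIGH_AUTOMATION = 4         # full self-driving within limited areas
    FULL_AUTOMATION = 5         # self-driving anywhere

def driver_must_stay_alert(level: SAELevel) -> bool:
    # Up to Level 3, a human must remain ready to take over at any time.
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(driver_must_stay_alert(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_must_stay_alert(SAELevel.HIGH_AUTOMATION))     # False
```

Using an ordered enum makes the key distinction explicit: everything below Level 4 still depends on a human fallback.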
At Levels 2 and 3, drivers must still stay alert and be ready to take over. Level 3 vehicles can make more nuanced decisions based on their environment than Level 2 vehicles. So far, consumer vehicles are only available with Level 2 capabilities, whether due to technical limitations or legislation, with Tesla’s Autopilot being a well-known example. Because Level 4 vehicles have their self-driving capabilities bound to a set location, public transport vehicles like buses or taxis are a natural fit for the technology. Google’s Waymo has already made strides here, operating automated taxis in Phoenix, Arizona. Level 5 vehicles will be able to drive anywhere with competency equal to or greater than a human driver, but there have been no successful attempts thus far.
Reaching Level 5 will require a large leap in technology. As clever as AI systems can be, they still have limitations. AI systems learn from data, so a large quantity of high-quality data is needed for an effective system. As mentioned, Level 4 vehicles operate in pre-defined spaces; vehicles have been tested extensively in these areas, so there is plenty of data to ensure they can operate safely. The challenge comes when vehicles are expected to perform in environments where data is limited. One approach would be simply to conduct experiments in every location to gather the necessary data, but this would take an excessive amount of resources, and there is no guarantee that some locations would not be missed, leaving the system ineffective in those areas. Specific circumstances will also arise for which the system has no data, forcing it to make a possibly uninformed choice. One solution is to focus on creating systems that can transfer knowledge from one domain to another (so-called transfer learning), but this is a difficult task that still requires research.
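The data problem can be made concrete with a toy sketch. The numbers, conditions, and thresholds below are entirely hypothetical, not a real driving model; the point is only that a purely data-driven system has nothing reliable to say about situations far from anything it has seen:

```python
# Toy illustration (hypothetical data): a 1-nearest-neighbour "driving policy"
# trained only on conditions it has encountered. When the nearest training
# example is far away, the system is extrapolating and its answer is a guess.
training_conditions = {  # (visibility_km, rain_mm_per_hr) -> learned action
    (10.0, 0.0): "normal_speed",
    (8.0, 1.0): "normal_speed",
    (2.0, 10.0): "slow_down",
}

def predict(condition, max_trusted_distance=5.0):
    # Find the closest condition seen during training (Euclidean distance).
    def dist(c):
        return ((c[0] - condition[0]) ** 2 + (c[1] - condition[1]) ** 2) ** 0.5
    nearest = min(training_conditions, key=dist)
    if dist(nearest) > max_trusted_distance:
        return "uninformed_guess"  # no nearby data: outside known territory
    return training_conditions[nearest]

print(predict((9.0, 0.5)))   # close to training data -> "normal_speed"
print(predict((0.1, 50.0)))  # extreme weather never seen -> "uninformed_guess"
```

Gathering data for every such corner case is exactly the resource problem described above, which is why transfer of knowledge between domains is an active research area.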
Another point to consider is security. In terms of physical safety, vehicles are extensively tested and there are decades of research to draw from. Now that vehicles rely on AI systems, there is a new risk of cyberattacks. Researchers have demonstrated a ‘Poltergeist’ attack, in which acoustic waves are used to interfere with a vehicle’s sensors, disrupting the way the vehicle processes its environment. An attacker could use this technique to make objects ‘disappear’ from view, potentially causing fatal accidents. Even something as simple as adding paint to roads to simulate lane markings could have a similar effect. Before these vehicles can be safe on our roads, the issue of security needs to be addressed.
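Why injected interference can make an object ‘disappear’ is easiest to see in a much-simplified sketch. The thresholds and function below are hypothetical (the real Poltergeist attack disturbs camera image-stabilisation hardware acoustically); the sketch only shows the general principle that a detector reporting obstacles above a confidence threshold can be pushed below it:

```python
# Much-simplified sketch of a sensor-interference attack (hypothetical
# thresholds, not the actual Poltergeist mechanism). A detector reports an
# obstacle only when its signal clears a confidence threshold, so injected
# noise that degrades the signal can make a real obstacle vanish.
DETECTION_THRESHOLD = 0.5

def detect_obstacle(signal_strength, injected_noise=0.0):
    effective_signal = signal_strength - injected_noise
    return effective_signal >= DETECTION_THRESHOLD

pedestrian_signal = 0.7
print(detect_obstacle(pedestrian_signal))                      # True: seen
print(detect_obstacle(pedestrian_signal, injected_noise=0.3))  # False: 'disappeared'
```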
Finally, there are tremendous ethical concerns associated with autonomous vehicles. The Trolley Problem is a famous thought experiment in which a runaway trolley can take one of two tracks: the first leads to the death of a single person, the second to the deaths of several people. In this scenario, most people would agree that the greater loss of life should be avoided. Unfortunately, not all scenarios are so clear cut. What if the single person is a child, and the people on the second track are homeless? Would the decision change? In the case of driverless cars, would the system choose to save the life of the driver or a pedestrian? On one hand, pedestrians should not be punished for a decision the car makes. On the other, why would a consumer purchase a car that protects others over themselves? Such questions have no correct answer; it depends on an individual’s ethics. Driverless cars would require an ethical code to be explicitly programmed into the system, which raises a whole host of issues. Different countries have different cultures and ethics, so it is practically impossible to create a system that will please everyone. The developers of self-driving cars represent only a minuscule fraction of the population, so the ethics they encode may not reflect everyone’s views.
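The arbitrariness of explicitly coding ethics becomes obvious the moment one tries to write such a rule down. The following deliberately crude sketch is hypothetical (it is no manufacturer’s actual decision logic, and the risk numbers are invented); it shows how two defensible-sounding policies give opposite answers to the same situation:

```python
# Deliberately crude sketch of an explicitly coded ethical policy
# (hypothetical; not any manufacturer's real logic). Whichever rule is
# chosen, a developer has decided whose safety is weighted more heavily.
def choose_action(occupant_risk, pedestrian_risk, policy="utilitarian"):
    """Return 'swerve' (risk to occupants) or 'brake' (risk to pedestrians)."""
    if policy == "utilitarian":
        # Minimise total expected harm, regardless of who bears it.
        return "swerve" if pedestrian_risk > occupant_risk else "brake"
    if policy == "occupant_first":
        # Protect the buyer of the car whenever occupants face any risk.
        return "brake" if occupant_risk > 0 else "swerve"
    raise ValueError(f"unknown policy: {policy}")

# The same situation yields opposite decisions under different policies:
print(choose_action(occupant_risk=0.2, pedestrian_risk=0.8))  # swerve
print(choose_action(occupant_risk=0.2, pedestrian_risk=0.8,
                    policy="occupant_first"))                 # brake
```

Neither branch is objectively correct, which is precisely the problem: someone has to pick one and ship it to every car.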
Blame should also be considered. If an accident occurs, who should take responsibility? It could fall at the feet of the driver, the engineers, the manufacturing company, or even the victim. In a court of law, the accused in a car accident is usually investigated to one level of depth: they will be asked why they acted as they did, and whether they were under any external influences (anger, drugs, etc.). With software, this chain goes much deeper. Why did the car make this decision, and who coded this response? How much ethical training did the development team receive, and from whom? The questioning can continue through many levels in search of a responsible party. Legislation around these issues is still developing, so we have yet to see how our leaders will resolve these doubts.
Self-driving cars are an exciting technology that could revolutionise trade and travel for the entire population. However, there are still many hurdles that need to be overcome to make them safe and practical. Safety and technical concerns can be tackled with a scientific approach, but what remains to be seen is how the ethics of self-driving cars will be handled. Providing a safe experience for drivers and pedestrians alike is the top priority, but it may be many years before we see a true driverless revolution.