In March, a self-driving Uber car hit and killed Elaine Herzberg in Arizona, marking the first known pedestrian killed by an autonomous vehicle. This, unfortunately, wasn’t the first fatality related to self-driving cars: in 2016, a Tesla Model S in Autopilot mode collided with an 18-wheel lorry, killing the Tesla’s “driver”.
The last few years have seen automakers and tech companies including Ford, General Motors, Tesla and Google’s Waymo accelerate their investment in research as they race to develop commercially viable self-driving cars. In November of last year, UK Chancellor Philip Hammond pledged that driverless cars would be on the UK’s roads by 2021.
Whilst advocates claim self-driving vehicles promise a host of benefits, such as reduced traffic and greater safety than human-driven cars, potentially cutting road accidents by up to 90%, the latest fatality reignites debate around the maturity, ethics and legal culpability of autonomous vehicles.
Despite the prospect of fewer accidents that autonomous cars could bring, difficult and complex ethical decisions still need to be made around the value of human life, fairness and morality. Underpinning the development of self-driving technology is the question of how to programme cars to react in emergency situations through “crash optimisation” algorithms, which essentially determine how to handle an inevitable crash so as to cause the least harm or damage.
In this way, driverless cars bring the trolley problem to life: a series of philosophical thought experiments that inspect, test and expose some of our deepest ethical intuitions about how we value lives and moral responsibility. You can read more about the trolley problem here, but the classic version presents a choice between allowing an oncoming trolley to kill five people or pulling a lever to divert it to another track, killing one person. The problem exposes the tension between a utilitarian desire to save the greatest number of people and the reluctance to actively do harm, thereby taking moral responsibility for killing one person.
Although the specific scenarios discussed in trolley problem debates may be far-fetched, and will hopefully be a rarity in reality, such decisions and judgements about valuing some lives over others need to be programmed into crash optimisation algorithms regardless. For any given potential crash, the risks to all actors (drivers, passengers and pedestrians) need to be evaluated and a course of action pre-programmed, requiring a kind of deliberation that human drivers are not afforded in an emergency.
Taking a utilitarian approach, assigning the same value to every person, we would be inclined to optimise any crash to save as many people as possible. This may sound instinctive to most, until you recognise that such a decision could very well come at the expense of the car’s own driver or passengers (for example, a car swerving off a bridge rather than crashing into another car or a bus full of people).
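To make the logic concrete, here is a deliberately toy sketch of what a purely utilitarian crash optimisation rule might look like. It is not any manufacturer’s actual algorithm; the manoeuvre names and probabilities are invented for illustration, and a real system would weigh far more factors.

```python
def expected_fatalities(manoeuvre):
    """Expected number of deaths for one option: the sum of each
    affected person's probability of being killed."""
    return sum(manoeuvre["fatality_probs"])

def choose_manoeuvre(options):
    """Pure utilitarian rule: pick the option with the fewest
    expected fatalities, valuing every person equally."""
    return min(options, key=expected_fatalities)

# Hypothetical emergency: swerving off a bridge will likely kill the
# car's one passenger, while staying on course endangers five people
# on a bus. All probabilities here are invented.
options = [
    {"name": "swerve_off_bridge", "fatality_probs": [0.9]},
    {"name": "stay_on_course", "fatality_probs": [0.5, 0.5, 0.5, 0.5, 0.5]},
]

print(choose_manoeuvre(options)["name"])  # → swerve_off_bridge
```

Under these invented numbers, the rule sacrifices the car’s own passenger (0.9 expected deaths) to spare the bus occupants (2.5 expected deaths), which is exactly the outcome that makes buyers uneasy.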
According to a 2016 study published in Science, the majority of people who had previously supported a utilitarian approach wanted the car to protect its passengers at all costs once they imagined occupying the car themselves. This makes sense, since the owner of a car may reasonably expect it to protect its own passengers above the occupants of other, unknown vehicles. In addition, people might be reluctant to purchase a car that is programmed to sacrifice its owner. However, this intuition may only apply to privately owned vehicles, not to publicly owned ones such as buses. It would also be interesting to consider how these moral calculations would change as new generations of would-be drivers shift from traditional car ownership to transportation-as-a-service.
It would be reasonable to hold that pedestrians should always be protected in all crash situations. Investigations into Uber’s fatal crash in March report that the self-driving software detected the pedestrian but did not react and stop in time, partly because of a deliberate delay that makes allowances for false positives, such as plastic bags or other roadside debris. Beyond re-examining such allowances, the desire to protect pedestrians at all costs could prompt discussion of greater separation between pedestrians and vehicles in traffic infrastructure.
Ultimately there are no clear-cut right answers to any of these questions, and companies may tackle these fundamental dilemmas differently within the confines of what limited regulation exists. These decisions ought to be transparent to consumers and the general public as the technology develops.
However, whilst I believe algorithm transparency is vital, it could also encourage perverse behaviour en masse. If the crash response systems are well known, they could be abused and misused: other drivers might attempt to cheat them by cutting in front, knowing that the car will slow down or swerve to avoid an accident. More morbidly, someone could intentionally put themselves in the path of an autonomous vehicle to harm its passengers.
Further, consider the scenario where a car has to choose between hitting a motorcyclist wearing a helmet and a motorcyclist without one. If all cars were programmed to optimise crashes on the basis of harm reduction, the car may swerve towards the motorcyclist wearing the helmet, who is statistically more likely to survive the impact. It would seem unfair to penalise helmet-wearing motorcyclists, especially in countries where helmets are legally required, and it could even encourage motorcyclists to deliberately forgo helmets if they knew that the majority of cars were programmed to target riders who wear them.
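The perverse incentive falls straight out of the harm-reduction arithmetic. A minimal sketch, with survival statistics invented purely for illustration:

```python
def expected_fatalities(option):
    """Sum of each affected person's probability of being killed."""
    return sum(option["fatality_probs"])

# Invented numbers: a helmeted rider is assumed far more likely to
# survive a given impact than an unhelmeted one.
options = [
    {"name": "hit_helmeted_rider", "fatality_probs": [0.3]},
    {"name": "hit_unhelmeted_rider", "fatality_probs": [0.8]},
]

# Minimising expected harm steers the car towards the helmeted rider.
print(min(options, key=expected_fatalities)["name"])  # → hit_helmeted_rider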
Beyond these scenarios, there are other concerns that raise questions about the ethics of autonomous cars, from advertising potential (routes that prioritise driving past certain shops over others) to security and privacy concerns, to name but a few.
The latest casualty also raises the question of whether it is ethical to deploy this technology on public streets, unleashing it onto people without their explicit consent or knowledge. Informed participant consent is a hallmark of scientific and medical experiments, yet it seems to be lacking when it comes to testing self-driving cars on public roads, putting people at potential risk.
Ultimately, autonomous cars will have an array of implications for our societies, our cities’ infrastructure and a number of industries, transforming our economic, political and cultural landscapes. This is an inevitability, just as the invention of the car transformed society before it, making provision for new forms of work, suburban living and drive-through restaurants, for instance.
But concerns persist around autonomous vehicles, ranging from security to insurance and legislation, and perhaps the biggest, in the wake of the death in Arizona, is consumer and societal readiness, which may take longer to achieve than the technology takes to develop.