Published on March 19th, 2018 | by Jesper Berggreen
Will Autonomous Vehicles End Road Carnage? Eventually, Yes, But Might Be Delayed By Ethical Dilemmas
The last time I brought up the subject of autonomous vehicles I asked when they would actually arrive. The report I was referring to at the time did a very poor job of predicting this, and so did I.
This time, I will not try to predict anything specific, but I think we have to keep the heat on this subject. For how long? Until deaths and mutilations on the roads are so thoroughly marginalized by autonomous technology that we can consign them to history. (Note: This article was written and scheduled for publishing before the pedestrian death involving a self-driving Uber.)
Roads are not suited for human drivers
Road accidents happen all the time, everywhere. You sit in your heavy metal box and shoot it along the road at lethal speeds without giving it a second thought, until you crash and get hurt, or hurt someone else, or kill someone. Then you think about it — unless you’re dead.
Last month, one such accident was caught on surveillance camera in Iowa, and the reason I choose to focus on this particular accident is that it shows very clearly how humans drive, and how autonomous vehicles presumably would not drive. Let me emphasize that I presume an AV would not drive the way you see in the footage. To my knowledge, large-scale AV tests in this regard have not yet been carried out.
Iowa I-35 February 5th 2018:
Visibility was obviously very low, and my guess is that grip was very low too when these people tried to avoid disaster. Seeing so many too-late reactions in a single event makes you think, doesn’t it?
Captain Barry Thomas of the County Sheriff’s Office said he’s never seen anything like it and urged people to slow down. The crash, involving about 70 vehicles, resulted in one fatality and at least three injuries. Unfortunately, this kind of thing happens all the time — just individually or in small groups rather than in a batch of 70. Working at the forensic department in my region, I know more about these tragedies than I would like to.
The ethical dilemma
The same day I became aware of the Iowa I-35 tragedy, I came across an article in the Danish newspaper Politiken written by philosopher and communications expert Thomas Telving. The article has the headline: “Should the car sacrifice you or the five pedestrians? Artificial intelligence creates dilemmas that need to be resolved before we use it.” Really? When humans demonstrate such incompetence behind the wheel, with carnage as the result, should we wait?
Reminder from previous CleanTechnica article: “Tesla Automatic Enhanced Braking system … can reduce rear end collisions by 40%. If such systems were required throughout the industry, the National Highway Transportation Safety Administration believes 28,000 crashes a year could be avoided leading to 12,000 fewer traffic injuries in the United States.”
OK, so we can probably agree that safety systems are a good idea and should be mandatory in vehicles as soon as possible. But when it comes to full autonomy, the agreement ends. Why? Because we are scared to hand the ethics of controlling tons of steel at lethal speeds over to intelligent machines.
Of course we are scared about that! I mean, if you kill or injure someone in traffic, the incident is investigated by authorities, and whoever is found to have failed to steer clear of harm’s way will be punished accordingly. How do you punish a machine? The chance of a machine hurting anyone may be ridiculously low, but we have to ask this question before we jump into that robotaxi.
So let’s hear it from the philosopher Thomas Telving:
“A technological dilemma that many can relate to is about autonomous vehicles. If an AV has to choose between killing five pedestrians or sacrificing the driver, what should it do?
“A group of philosophers investigating this dilemma with so-called experimental ethics has revealed that a significant majority leans to save as many as possible.
“If the majority were to decide, the driver would have to sacrifice his life. The philosophical term is utilitarianism. It originates with the British philosophers Jeremy Bentham and John Stuart Mill, and it holds that good actions are those creating the greatest possible happiness for as many as possible.
“It is not new that utilitarianism tends to be the choice of people making decisions in larger contexts. The news is that when we give a machine precise ethical guidelines, it reveals an inconsistency in our own ethics. While the majority want utilitarian cars on the roads, many, according to the same investigation, have a very hard time seeing themselves buying a car that is programmed to sacrifice the driver in critical situations. If that skepticism takes hold in the market, it will slow the spread of AVs and thereby adversely affect road safety. That puts us in the peculiar situation that our attempt to protect our own lives in traffic increases our individual risk of dying in traffic.”
Thomas goes on to explain that, from the philosophical point of view, the three researchers Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan have worked with this question in depth. In addition to performing statistical measurements, they have developed the “Moral Machine” platform at MIT, where anyone can test their own ethics against different AV dilemmas. Try it — it is not easy.
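The utilitarian rule Telving describes — pick whatever outcome minimizes total harm — can be captured in a few lines. The sketch below is purely an illustration of the thought experiment, not how any real AV is programmed; the maneuvers and casualty counts are hypothetical, and a real system would never have such clean numbers to compare.

```python
# Toy sketch of the utilitarian rule from the thought experiment:
# among the available maneuvers, choose the one with the fewest
# expected casualties. All scenarios and figures are hypothetical.

def utilitarian_choice(options):
    """Return the maneuver that minimizes expected casualties."""
    return min(options, key=lambda o: o["expected_casualties"])

options = [
    {"maneuver": "stay_course", "expected_casualties": 5},  # hits the pedestrians
    {"maneuver": "swerve", "expected_casualties": 1},       # sacrifices the occupant
]

print(utilitarian_choice(options)["maneuver"])  # swerve
```

The discomfort the surveys reveal is visible even here: the rule is trivial to state in code, yet the "swerve" it selects is exactly the outcome buyers say they would not accept for themselves.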
However, philosophers are still far from solving the basic ethical problems that artificial intelligence leads to in more general terms, Thomas explains. It is suggested that communication on car issues must focus on the major safety benefits of AVs rather than the few cases in which the driver is at risk of dying.
“This approach may lead to more AV sales, but as human beings we are still left with some fundamental challenges with the technology: the more utilitarian, perfectly optimized decisions artificial intelligence can make for us, the more uncomfortably clear the schism between sacrificing the individual and benefiting the majority becomes.
“The question is whether the majority should show solidarity or simply welcome the fact that insurance companies can exclude expensive customers and lower the premiums for the majority.
“We must not resist artificial intelligence with fear and paranoia, but we must remember that it is more than just smart.”
Not surprisingly, Thomas Telving thinks a philosophical approach to these questions is the way to go. Indeed. Thinking deeper about these dilemmas can’t be a bad idea.
Owned hardware vs shared hardware
Thomas Telving’s article did not mention one crucial thing, though: shared AVs such as robotaxis and buses. I mean, would you, as an AV owner renting it out, or even as a fleet owner, apply the same utilitarian ethical standards when providing rides to other people? And will the consumer be concerned about the ethical programming of a robotaxi?
When was the last time you got in a taxi or a bus and discussed the driver’s ethical standpoint, or his general health and sleep pattern for that matter? I’d guess you never did. You simply trust your driver. Wouldn’t you trust a driver with 360°, high-resolution, high-frame-rate vision that never gets distracted and has the collective experience of every other AV? I actually think I would. But that does not guarantee it will not kill me, does it?
I reached out to Thomas and asked him about this scenario, and he replied:
“My article is based on the scenario that we continue to buy and own cars, which we prefer to use. So it is in that light that the utilitarian programming of the cars seems to put a damper on AV sales, even though, by rational logic, it is clear that if everyone buys utilitarian cars, the statistics will favor a massive majority.
“Your scenario — which indeed is more realistic — changes a lot of the statistics, but we must remember that people do not seem to think rationally. At least not according to the measurements I refer to. So the car owner is in the situation of having to buy a car which (in principle — I agree it has an element of thought experiment about it) will choose to kill him if a bunch of drunk people step out in front of the car.
“I believe it is necessary to legislate fixed standards for the programming if everyone is to have the same status in traffic. Otherwise, one can imagine simply paying a premium for a car that protects the driver (a feature that might be turned off when it’s rented to someone else!)
“I totally agree with you that we are likely to run a far greater risk by not knowing the state of various public drivers we are chauffeured by every day. And besides, it’s all likely to be significantly safer for all of us (not to mention parking, traffic jams, etc.) with full AV fleets on the roads.
“I think this discussion is particularly interesting because it shows how difficult it is for humans to weigh the collective good over our individual good, even though it is clear that it is to our own benefit if everyone complies. It’s a bit like people cheating on taxes because they think the rate is too high, knowing that the rate could be lowered for everyone if no one cheated.”
Thomas is referring to the term public good, something humans notoriously have a very hard time understanding, or at least living by.
Big auto is all in
The prototype Audi Aicon has no steering wheel. In Audi Magazine of January this year, Søren Dandanell Nielsen writes about the car and boldly states:
“When everything engaged in traffic is fully autonomous and connected, accidents will be history. Since accidents will no longer be possible, passengers in cars like the Aicon will no longer need classic security measures like airbags and seatbelts. With all vehicles operating at the same autonomous level, traffic will become calm and fluent, with no aggressive braking and accelerating. As a passenger, you are free to engage in anything other than the actual driving.”
And let’s not forget the bold press release from Volvo back in October 2015:
“Volvo Cars will accept full liability for the actions of its autonomous cars when in Autopilot mode, making it one of the first manufacturers to take this vital step forward in the development of self-driving cars.”
In the same press release, Volvo Cars’ commitment to speeding up the introduction of autonomous driving received support from the National Highway Traffic Safety Administration: “If a tech innovation can reduce deaths on American roads we aren’t just for it, we’re for it now,” said NHTSA administrator Mark Rosekind.
So, how will this pan out? Will we have endless debates on ethical standards for AVs and have them implemented through the traditional democratic systems? Or will we not care one bit, and just let the big companies do whatever they do, and be pleased that fewer people are killed in traffic, even if it means you are safer walking drunk in the streets than you are relaxing in your AV?