Latest Tesla Crash Reignites Autopilot/Emergency Braking Controversy


On Friday, May 11, a Tesla Model S crashed into the back of a fire department vehicle stopped at a red light in South Jordan, Utah, a suburb of Salt Lake City. The driver of the Tesla suffered a broken ankle and, according to police, was not impaired. The driver of the truck was not injured. The Model S was destroyed.

Image: the crashed Tesla Model S. Credit: South Jordan PD

At this point, that is pretty much all that is known about this incident. Sergeant Samuel Winkler of the South Jordan Police Department issued a statement to the press on Saturday saying a light rain was falling at the time of the accident and the road was wet. “Witnesses indicated the Tesla Model S did not brake prior to impact,” the statement said. According to a report in the Washington Post, the Tesla was traveling at 60 mph at the time of the crash.

Winkler said no further information would be available until Monday. In the meantime, the big question on everyone’s mind is, was Autopilot engaged at the time of the collision? The police don’t know and a spokesperson for Tesla said on Sunday, “Tesla has not yet received any data from the car and thus does not know the facts of what occurred, including whether Autopilot was engaged,” according to an Associated Press report.

Tesla may be circumspect about releasing information on the crash, after getting into a very public spat with federal investigators following the deadly crash of a Model X in California in March. The NTSB admonished Tesla for saying too much about the investigation while Tesla essentially claimed that it was only trying to show that its Autopilot system was not at fault and driver error was responsible.

This latest collision has ignited a debate here at CleanTechnica between Tesla owners (Zachary and Kyle) and non-Tesla owners (me) about the whole Autopilot/Forward Emergency Braking subject. In essence, the debate breaks down like this:

On one side, Tesla advocates ask, “How can a responsible driver not see a red light and a very large vehicle painted fire engine red?” The other side asks, “How can a car with the most sophisticated electronic safety system ever created by human intelligence fail to see a red light and a very large vehicle painted fire engine red?”

First, we need to distinguish between Autopilot and Forward Emergency Braking. In the world of computer engineers, the two are separate and distinct and one has nothing to do with the other. But are ordinary drivers sophisticated enough to appreciate the distinction? Don’t they just assume a car equipped with something called Autopilot and Forward Emergency Braking knows not to go around bumping into things?

Which leads to this question: If in fact Autopilot and Forward Emergency Braking don’t know that, should they be allowed in a production car at all? Don’t they lull drivers into a false sense of security? Tesla, of course, is not responsible for ads by other car companies, but TV programs today feature lots of commercials that show new cars braking automatically to avoid a car or a child who suddenly appears in the road ahead. Often, the drivers in those commercials are distracted.

Aren’t we getting mixed messages here? On the one hand, we are being told to always drive like it’s our first time behind the wheel and our parents are riding along, observing our every move. On the other hand, we are being told to relax, modern technology will keep anything bad from happening to us.

Tesla specifically informs new owners during the delivery process about the need to be vigilant at all times while driving. There are warnings in the owner’s manual. (How many people actually read those things?) Other warnings pop up on the instrument panel when Autopilot is engaged. If people ignore those warnings, how is that Tesla’s fault?

Still, people tend to err on the side of complacency. And the more familiar we are with electronic systems, the more complacent we get. Elon Musk said as much during the Q1 earnings call earlier this month. “When there is a serious accident, it is almost always, in fact, maybe always the case, that it is an experienced user,” he said. “And the issue is … more one of complacency, like we get too used to it.”

Soon, we will get an update on whether or not Autopilot was engaged when that Model S in Utah rammed into the fire truck. Either it was or it wasn’t, but does it really make any difference? If it was and it failed to detect a red light, what good is it? If it wasn’t, but the Forward Emergency Braking feature was not able to identify a truck stopped in the road ahead, what good is it?

The issue appears to be the “false positive” conundrum. Manufacturers can’t have their cars coming to a screeching halt every time a piece of paper or a plastic bag flutters across the road, so they program their safety systems to simply ignore certain types of data. But there is a significant difference between a stray piece of paper or a plastic bag and a 10-ton truck painted bright red. Surely, with all the hundreds of software engineers Tesla has on staff, they should be able to write algorithms that can distinguish an elephant from a blueberry, shouldn’t they?
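To make that trade-off concrete, here is a minimal, purely illustrative sketch in Python. The `Detection` class, the thresholds, and the `should_brake` function are all invented for this example; this is not Tesla’s code or algorithm, just a picture of how filters meant to suppress false positives can also suppress a response to a real obstacle:

```python
# Toy illustration of the false-positive trade-off in automatic emergency braking.
# All names, fields, and thresholds here are hypothetical, not any manufacturer's implementation.

from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float           # distance to the object ahead, in meters
    relative_speed_mps: float   # closing speed (positive = we are approaching)
    radar_cross_section: float  # crude proxy for object size/solidity
    confidence: float           # sensor-fusion confidence, 0.0 to 1.0

def should_brake(d: Detection,
                 min_confidence: float = 0.7,
                 min_cross_section: float = 1.0,
                 time_to_collision_s: float = 2.0) -> bool:
    """Return True if the system should trigger emergency braking."""
    # Filter 1: ignore low-confidence returns (paper, plastic bags, sensor noise).
    if d.confidence < min_confidence:
        return False
    # Filter 2: ignore returns that look too small to matter.
    if d.radar_cross_section < min_cross_section:
        return False
    # Not closing on the object, so no collision is imminent.
    if d.relative_speed_mps <= 0:
        return False
    # Brake only when the time to collision drops below the threshold.
    return (d.distance_m / d.relative_speed_mps) < time_to_collision_s

# A stopped truck approached at 60 mph (about 27 m/s): if the fusion stack
# assigns it low confidence, the same filter that suppresses false positives
# also suppresses the brake command.
truck = Detection(distance_m=50.0, relative_speed_mps=27.0,
                  radar_cross_section=8.0, confidence=0.5)
print(should_brake(truck))  # False -- filtered out despite being a real threat
```

The point of the sketch is only that every threshold tuned to avoid phantom braking moves some real objects into the “ignore” bucket, which is why the tuning question matters so much more than the marketing names attached to these features.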

The Tesla proponents point out that we only ever hear about the accidents that weren’t avoided. We never hear about those that never happened. That’s a valid point. And now that Tesla has said it will release Autopilot crash data quarterly, perhaps there will be some empirical data that proves Elon’s oft-repeated claim that these systems work as intended to prevent injuries and death.

The Wall Street Journal reports today that Tesla once considered adding a system to monitor the eyes of the driver to detect moments of inattention. For now, that idea has been tabled because of cost considerations and because it was uncertain the system could work properly with drivers of differing heights. Cadillac employs eye tracking with its new Super Cruise system, but Audi says it is delaying the introduction of a similar system until the regulatory environment in Germany and the US becomes clearer.

It is precisely because what happens with Tesla automobiles impacts the regulatory process that the company must find a way to prevent its cars from running into things like fire trucks and highway barriers, lest regulators react by placing too many restrictions on self-driving systems that have the potential to save so many lives. If Tesla can’t do that, perhaps it should disable those features until it can.



Steve Hanley
