Tesla Model X Sped Up Seconds Before Deadly Crash, NTSB Finds


The National Transportation Safety Board has released a preliminary report on the fatal crash of a Tesla Model X in Mountain View, California, earlier this year. The full investigation may take another year or more to complete, and the preliminary report does little more than lay out detailed information gleaned from the car’s onboard computer.

Tesla Model X crash. Image credit: NTSB

The NTSB statement begins with this cautionary language: “Information contained in the report is preliminary and subject to change during the NTSB’s ongoing investigation. Preliminary reports, by their nature, do not contain analysis and do not discuss probable cause and as such, no conclusions about the cause of the crash should be drawn from the preliminary report.” The full report is available here.

Federal investigators have recovered the following information from the damaged car:

  • The Autopilot system was engaged on four separate occasions during the 32-minute trip, including continuous operation for the last 18 minutes and 55 seconds prior to the crash.
  • In the 18 minutes and 55 seconds prior to impact, the Tesla provided two visual alerts and one auditory alert for the driver to place his hands on the steering wheel. The alerts were made more than 15 minutes before the crash.
  • The driver’s hands were detected on the steering wheel for a total of 34 seconds, on three separate occasions, in the 60 seconds before impact. The vehicle did not detect the driver’s hands on the steering wheel in the six seconds before the crash.
  • The Tesla was following a lead vehicle and traveling about 65 mph eight seconds before the crash.
  • While following a lead vehicle, the Tesla began a left steering movement seven seconds before the crash.
  • The Tesla was no longer following a lead vehicle four seconds before the crash.
  • The Tesla’s speed increased from 62 to 70.8 mph, starting three seconds before impact and continuing until the crash. There was no braking or evasive steering detected prior to impact.
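
For scale only (this back-of-the-envelope conversion is mine, not the NTSB’s): an increase from 62 to 70.8 mph is 8.8 mph, or roughly 3.9 m/s. Spread over about three seconds, and assuming a steady ramp in speed, that works out to an average acceleration of roughly 1.3 m/s², a bit more than a tenth of a g.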

That’s what the NTSB knows at this moment. Any conclusions drawn from that information are speculation only. The debate about the safety of Tesla’s Autopilot system has ardent voices on both sides. Some safety advocates argue the name “Autopilot” amounts to deceptive marketing. Part of the problem has to be the feeling of invincibility the system confers on some drivers. Recently, GQ said Autopilot makes the driver feel like a Space God.


Without referencing the deadly Model X crash specifically, here are a few questions I have for CleanTechnica readers:

1. Should Autopilot be allowed to exceed the posted speed limit? I know most will think that is a stupid question, but hear me out. Tesla continuously and vociferously insists drivers should be attentive to the road ahead at all times, but then it gives them a system that makes them feel like space gods. Isn’t there some cognitive dissonance there? If people want to exceed the posted speed limit, that’s fine. Most of us do so on a regular basis. But if that is your choice, shouldn’t you be driving the car yourself while doing so?

2. Should Autopilot deactivate itself more aggressively if it detects the driver repeatedly taking his or her hands off the wheel? This is a simple programming tweak. Doesn’t the current programming, which allows the system to keep operating even if it detects no hands on the wheel for several seconds or even minutes in some instances, actually encourage people to do what Tesla tells them specifically not to do? Once again, isn’t there some cognitive dissonance between what Tesla says and what a Tesla does?
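
To make that question concrete, here is a minimal sketch of what a more aggressive hands-off policy could look like. It is purely hypothetical: the thresholds, the strike count, and the escalation steps are illustrative assumptions of mine and do not reflect how Tesla’s actual software behaves.

```python
# Hypothetical escalation policy for a hands-off-the-wheel monitor.
# All thresholds and behaviors are illustrative assumptions, not Tesla's.

from dataclasses import dataclass


@dataclass
class HandsOffPolicy:
    visual_alert_after_s: float = 15.0   # show a visual nag (assumed threshold)
    audible_alert_after_s: float = 30.0  # add an audible alert (assumed threshold)
    disengage_after_s: float = 45.0      # hand control back to the driver (assumed)
    lockout_after_strikes: int = 3       # disable the feature for the rest of the trip

    hands_off_s: float = 0.0
    strikes: int = 0
    locked_out: bool = False

    def update(self, hands_on_wheel: bool, dt_s: float) -> str:
        """Advance the policy by dt_s seconds and return the action to take."""
        if self.locked_out:
            return "autopilot_unavailable_this_trip"
        if hands_on_wheel:
            self.hands_off_s = 0.0
            return "normal"
        self.hands_off_s += dt_s
        if self.hands_off_s >= self.disengage_after_s:
            self.strikes += 1
            self.hands_off_s = 0.0
            if self.strikes >= self.lockout_after_strikes:
                self.locked_out = True
                return "disengage_and_lock_out"
            return "disengage_with_handover_warning"
        if self.hands_off_s >= self.audible_alert_after_s:
            return "audible_alert"
        if self.hands_off_s >= self.visual_alert_after_s:
            return "visual_alert"
        return "normal"


# Example: a driver who keeps their hands off the wheel sees the warnings
# escalate and, after repeated strikes, loses the feature for the trip.
policy = HandsOffPolicy()
for second in range(150):
    action = policy.update(hands_on_wheel=False, dt_s=1.0)
    if action != "normal":
        print(second, action)
```

In a scheme like this, repeated violations do exactly what the question suggests: the system stops letting the driver treat it as hands-free, rather than nagging indefinitely while continuing to steer.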

There are no right or wrong answers here. My argument is this: Elon Musk wants to push the envelope. That’s what he does and he is very good at it. But is that the best possible outcome for society in general? Should the driver of the Audi that crashed into the Model X have his life turned upside down because a Tesla driver wasn’t being attentive enough with Autopilot on? Should everyone be forced into being unwitting beta testers for a technology that is pretty darn good but not nearly good enough? Looking forward to your thoughts.

Editor’s note: This is clearly a sticky subject with strong opinions on both sides. Elon’s argument has long been that Autopilot improves safety on net, meaning fewer deaths and fewer injuries, and that this is true today, not just at some point in the future. The thing is, the data proving this aren’t public and haven’t been rigorously analyzed by independent sources, so we either have to take Elon’s word for it or not. As noted previously, Tesla is going to start releasing Autopilot safety data on a quarterly basis. Let’s hope it is enough to genuinely analyze the issue and come up with a more informed opinion on it all. —Zach






Steve Hanley

Steve writes about the interface between technology and sustainability from his home in Florida or anywhere else The Force may lead him. He is proud to be "woke" and doesn't really give a damn why the glass broke. He believes passionately in what Socrates said some 2,400 years ago: "The secret to change is to focus all of your energy not on fighting the old but on building the new." You can follow him on Substack and LinkedIn but not on Fakebook or any social media platforms controlled by narcissistic yahoos.
