These Are The 3 Huge Disagreements Over Whether Tesla’s Autonomy Path Is Good Or Bad


I just spent about two hours watching YouTuber Warren Redlich and Jason Torchinsky talk about (mostly argue about) Tesla and the path to fully autonomous vehicles. I’m quite familiar with the fundamentals of the technological debate over how to achieve fully autonomous driving, and they’ve changed very little over the years. However, it occurred to me while watching this video that few people are aware of where some of the core theoretical disagreements and information gaps lie.

Yes, there’s the whole lidar versus no lidar debate (which seems to have gone on since the beginning of time and will surely go on until the end). Yes, there’s Elon Musk’s new claim that vision is all that is needed (not lidar or radar). But I’m talking about some differences that are both broader and more nuanced regarding the current state of technology and Tesla’s plans. Let’s roll through them.

Is it safe for a driver-assist system to get better and better without being L4?

Photo by Kyle Field/CleanTechnica

The fundamental argument, which Jason Torchinsky explained and provided some references for, is two-fold: 1) as driver-assist technology gets better, the drivers monitoring it have less to do; 2) as they have less to do, it becomes harder and harder for them to actually watch for the edge cases where they have to take over. In theory, this could lead to a phase in development when the technology is “too good, but not good enough,” leading to a lot of accidents and deaths.

I’ve brought this issue up from time to time since learning about it a few years ago. NASA ran into the same challenge (on a different task) decades ago: when it tested engineers on monitoring duty, they simply could not pay attention consistently if the thing they were monitoring almost never needed intervention.

Some people think Tesla’s Autopilot system is at that stage now. (I don’t see evidence to support that theory.) Some think it will reach that stage with the Tesla FSD Beta system that just a few thousand owners have in their cars right now, and that the rest of us who purchased FSD and live in the US are expected to get soon. At the moment, I feel like the Autopilot suite I have in my Tesla Model 3 is a big safety enhancer, and I don’t find that it leads to complacency. What will I think of a system so good that it can take me from parking spot to parking spot 99% of the time but the other 1% of the time requires me to do something to avoid an accident? I can’t say, since I don’t have such a system (yet).

It seems to me there’s still a question of how companies should proceed at these higher stages, but unless regulators step in and require something specific, companies will just keep rolling out their own strategies at the pace and in the way they see fit. We saw what Waymo did in this regard. The company noticed that its safety drivers simply were not able to stay focused on monitoring the vehicles — even though they were paid and trained to do so — and decided to skip past the L2/L3 safety-driver stage to full L4 autonomy (the vehicles drive themselves without safety drivers within the limited geography where Waymo can operate).

Is accelerated technology improvement & rollout thanks to customer beta use a net-safer approach or a net-cray-cray approach?

One other controversy regarding Tesla FSD that has been getting hot lately is whether it’s logical, safe, or even legal for Tesla customers to be getting a “beta” version of Tesla FSD in normal Tesla cars and acting as Tesla’s testers and trainers.

First of all, there’s some simple confusion about what “beta” means in this context. Elon Musk tweeted about it in July 2016, which led me to write an article about it back then. His tweet at the time read:

“Misunderstanding of what ‘beta’ means to Tesla for Autopilot: any system w less than 1B miles of real world driving.”

He also tweeted: “Use of word ‘beta’ is explicitly so that drivers don’t get comfortable. It is not beta software in the standard sense.”

Just yesterday, Maarten Vinkhuyzen explained what beta testing is and why it’s critical for a new technology like FSD to advance. I think it’s a thorough explanation that gets to the root of the issue, so just read that article.

On the flip side, critics see “beta tester” customers as dangerous — whether they are useful or not for refining the system.

A big part of this, though, isn’t just about the interim. It’s about how quickly we get to L4 and L5 autonomous systems that are enormously safer than humans. If we get there in 2 years rather than 5 years, how many lives will be saved? Thousands? Tens of thousands?
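To make those stakes concrete, here’s a minimal back-of-envelope sketch in Python. Every input is an assumption for illustration (the US road-death baseline is roughly the recent annual figure; the fleet share and risk reduction are purely hypothetical), but it shows why pulling the timeline in by even a few years could plausibly mean thousands of lives.

```python
# Back-of-envelope estimate of lives saved by earlier L4/L5 arrival.
# All inputs are illustrative assumptions, not Tesla or NHTSA projections.

US_ROAD_DEATHS_PER_YEAR = 36_000  # roughly the recent US annual toll
FLEET_SHARE = 0.10                # assumed share of miles driven autonomously
RISK_REDUCTION = 0.50             # assumed: autonomous miles are 2x safer

def lives_saved(years_earlier: float) -> float:
    """Deaths avoided by reaching L4/L5 `years_earlier` years sooner."""
    return US_ROAD_DEATHS_PER_YEAR * FLEET_SHARE * RISK_REDUCTION * years_earlier

# Arriving in 2 years instead of 5 buys 3 extra years of safer driving:
print(lives_saved(3))  # 5400.0 under these assumptions
```

Tweak the assumptions however you like; almost any plausible combination lands in the thousands-of-lives range, which is why the timeline question carries so much moral weight in this debate.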

This is one of the big topics Warren and Jason were debating, sometimes not so succinctly. But the theory Elon and Tesla are pursuing is clear: the quicker you increase access to the system (safely), the quicker you can improve it, and the quicker you can make the roads safer overall.

Is Tesla Autopilot a safety boon even if it isn’t L4 capable (robotaxi ready)?

This is a two-parter. First of all, there’s the question of whether Autopilot is saving lives today. Tesla puts out a quarterly “safety report” showing that fewer accidents occur per mile driven with Tesla drivers versus non-Tesla drivers, and fewer still when Autopilot is engaged. Below is a graph visualizing the Tesla-provided stats.

Graph by Zach Shahan/CleanTechnica

There are several problems with the stats that make them essentially useless, though, as Jason Torchinsky pointed out in his debate with Warren Redlich. First of all, certain classes of cars (like the more expensive classes Tesla plays in) have far fewer accidents per mile than cheaper cars. The same goes for newer cars versus older cars, and most Tesla vehicles on the road are quite new.

On the plus side, the Tesla cars with Autopilot engaged were much less likely to get into accidents than the Tesla cars without it engaged. However, that doesn’t tell us much either, because Autopilot works on better roads and is used more on simpler, easier drives. Also, if the driver or Autopilot finds that conditions make autonomous driving too difficult, that doesn’t mean the route doesn’t get driven; it just means you have to drive yourself in the more dangerous conditions.
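To see why that road-mix confound matters, here’s a minimal toy calculation in Python. Every number is invented for illustration; the point is simply that if Autopilot miles skew toward low-risk highway driving, the raw per-mile comparison can flatter Autopilot even when, as in this toy example, it adds no safety benefit at all.

```python
# Toy illustration of the road-mix confound in raw Autopilot stats.
# All numbers are invented. Crash rates are per million miles, and are
# deliberately identical with or without Autopilot on a given road type.

HIGHWAY_RATE = 0.2  # assumed crashes per million highway miles
CITY_RATE = 2.0     # assumed crashes per million city miles

def crash_rate(hwy_miles: float, city_miles: float) -> float:
    """Raw crashes per million miles for a given mix of driving."""
    crashes = (hwy_miles * HIGHWAY_RATE + city_miles * CITY_RATE) / 1e6
    return crashes / ((hwy_miles + city_miles) / 1e6)

# Autopilot miles skew heavily toward highways; manual miles toward cities.
ap = crash_rate(hwy_miles=90e6, city_miles=10e6)
manual = crash_rate(hwy_miles=30e6, city_miles=70e6)

print(f"Autopilot engaged: {ap:.2f} crashes per million miles")    # 0.38
print(f"Autopilot off:     {manual:.2f} crashes per million miles")  # 1.46
# The raw comparison flatters Autopilot ~4x despite zero real benefit here.
```

Controlling for road type (along with car class, car age, and driver demographics) is exactly the kind of aggregation Torchinsky says no one has done yet.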

As Torchinsky pointed out, we just don’t have the data to compare cars using Autopilot to comparable cars not using Autopilot — or, more precisely, it seems no one has aggregated the data to make such comparisons.

And this may be the most fundamental disagreement right now between Tesla fans and Tesla haters. The former think Autopilot improves net safety (I’m in that camp), while the latter think it reduces safety, mostly because people assume it is more capable than it is. As Warren kept pointing out in the debate with Jason, if you are in the latter camp, where’s the data showing that? From my user experience and that of many other Tesla drivers, Autopilot seems to be a big safety boost. Tesla publishes quarterly data that certainly imply it is. Perhaps a PhD student somewhere will do an appropriate analysis to determine whether that’s legitimately the case. Nonetheless, with each Tesla improvement (new FSD feature), the debate will start over. 😛



Zachary Shahan

Zach is tryin' to help society help itself one word at a time. He spends most of his time here on CleanTechnica as its director, chief editor, and CEO. Zach is recognized globally as an electric vehicle, solar energy, and energy storage expert. He has presented about cleantech at conferences in India, the UAE, Ukraine, Poland, Germany, the Netherlands, the USA, Canada, and Curaçao. Zach has long-term investments in Tesla [TSLA], NIO [NIO], Xpeng [XPEV], Ford [F], ChargePoint [CHPT], Amazon [AMZN], Piedmont Lithium [PLL], Lithium Americas [LAC], Albemarle Corporation [ALB], Nouveau Monde Graphite [NMGRF], Talon Metals [TLOFF], Arclight Clean Transition Corp [ACTC], and Starbucks [SBUX]. But he does not offer (explicitly or implicitly) investment advice of any sort.
