Tesla Has Strong Advantages In Race To Self-Driving Cars

In 2015, Tesla turned on its Autopilot suite of capabilities. At the time, it was the most capable set of semi-autonomous features that were commercially available in cars.

Two years later, the field is more crowded, with many more manufacturers, OEMs, and startups touting their offerings. Is Tesla still ahead in the race to self-driving cars, or has it fallen back into the pack?

I think it still has a much more complete and robust solution set than anyone else, built on several factors which support one another. Other companies are a bit ahead in one component or another, but no one else has the complete set.

Subsumption Robotics Starting Point

Based on my reading of PhD and Master’s theses from (English-language) universities around the world while I worked on robotics projects, the academic robotics world in 2000 was divided almost purely into two camps.

One was the subsumption approach, in which the robot was as stupid as an ant but highly survivable in the area it operated in, with sensors that nudged it toward goals and away from hazards. Balancing those inputs allowed emergent behaviours to be startlingly effective.

Think about our bodies. Think about our reflexes. Think about our autonomic nervous system and purely instinctive behaviour. That’s subsumption. Our brain guides us, but our body takes care of enormous numbers of things without any attention on our part.

The second was a full-worldview approach, in which the robot had a complete understanding of the entire terrain and had enough brainpower to plot a complete course to its destination. Each action and manipulation was algorithmic in nature.

Even then, it was clear that both had merits and that a system which incorporated both would be much more robust and effective than one that depended solely on one of them.

However, there is a clear winner in terms of starting point: subsumption. Starting with a robust, high-survival-quotient machine with dumb systems that prevent most accidents and perform a lot of functions and then adding what worldview is necessary on top of it is a better path.
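To make the contrast concrete, here is a minimal sketch of a subsumption-style controller: prioritized behaviour layers, no world model required in the lower layers, and survival reflexes that override goal-seeking whenever they fire. Every name, threshold, and sensor field below is an illustrative invention, not anything from Tesla’s or Google’s software.

```python
# Minimal, illustrative subsumption-style arbiter (hypothetical names, not Tesla/Google code).
# Lower layers are survival reflexes; the only hint of a "worldview" lives in the top layer.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    steering: float   # -1.0 (full left) .. +1.0 (full right)
    throttle: float   # 0.0 .. 1.0
    brake: float      # 0.0 .. 1.0

def avoid_collision(sensors: dict) -> Optional[Command]:
    """Pure reflex: brake hard if anything is too close ahead."""
    if sensors["range_ahead_m"] < 5.0:
        return Command(steering=0.0, throttle=0.0, brake=1.0)
    return None  # nothing to say; defer to other layers

def stay_in_lane(sensors: dict) -> Optional[Command]:
    """Nudge back toward the lane centre using lateral offset."""
    offset = sensors["lane_offset_m"]          # positive means drifting right
    if abs(offset) > 0.2:
        return Command(steering=-0.3 * offset, throttle=0.3, brake=0.0)
    return None

def follow_route(sensors: dict) -> Optional[Command]:
    """The only place a route hint (worldview) appears."""
    return Command(steering=sensors["route_heading_error"] * 0.1,
                   throttle=0.5, brake=0.0)

# Highest-priority layer first; the first layer that returns a command wins.
LAYERS = [avoid_collision, stay_in_lane, follow_route]

def arbitrate(sensors: dict) -> Command:
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command
    return Command(0.0, 0.0, 0.0)  # safe default: coast

if __name__ == "__main__":
    print(arbitrate({"range_ahead_m": 3.0, "lane_offset_m": 0.0, "route_heading_error": 0.0}))
```

The point of the sketch is that the reflex layers keep the machine alive on their own; whatever worldview exists is layered on top rather than being a prerequisite for moving at all.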

And that’s what Tesla did, while Google and some others failed to follow that approach.

Teslas are absurdly competent cars with lots of sensors and the equivalent of reflexes. They have very low centers of gravity, allowing good cornering with no flipping concerns, and they accelerate like cheetahs and brake very well. They can drive on roads they have never been on and truck on down the road just fine.

This was proven rather conclusively when Autopilot and Autosteer were first released. About a month later, a group of three cannonball-run types took a Tesla Model S across the USA in just under 58 hours, with the car in control 96% of the time.

While Tesla has a quite effective worldview (more on that in a later section), for much of the route it didn’t have more than a Google Maps view of the road. The drivers ran the car at 95 miles per hour for a lot of the drive under its control. It didn’t slow enough going into corners, but because of its low center of gravity and good tires, it stayed on the road.

Google, on the other hand, started solely in the virtual-worldview space. It ran sensor-studded cars over every inch of terrain its test cars would be expected to drive on. Google built a centimeter-scale map of everything on the route. The car ran big processors to map out every turn.

Google’s non-subsumption approach is shown clearly by its little bubble car — limited to 25 miles per hour, tiny tires, etc. It was clearly not designed as a surviving object for roads but as a test bed for brainy software control systems.

The Right Sensor Set

Tesla is nearly unique among self-driving car companies in forgoing LIDAR entirely. It made this decision early and has stuck to it. It’s shifted its balance between passive camera systems and radar systems, mostly triggered by the fatal crash in May of 2016. Its set of sensors provides a complete set of capabilities at a much lower cost.

My assessment of the space is summarized in the visual: radar in gray, passive cameras in yellow, and sonar in orange. Those three sensor types sit at the engineering compromise point for autonomous car sensing, and they are the set Tesla chose long before I looked at it from the outside. Together, they provide all the sensor data about surrounding conditions required for autonomous driving.

LIDAR has some serious challenges in addition to its strengths. Like radar, it sees in the dark, and it does good 3D mapping. But it’s still very expensive: the non-solid-state LIDAR units that Google and some others use cost around $80,000 per unit. Solid-state LIDAR units of lower capability are emerging, but even they are targeted at about $2,500 per unit and are still much bigger than camera, radar, and sonar sensors. And LIDAR degrades in fog, snow, and rain, whereas radar doesn’t.

Almost every other approach to autonomy on the road has all of Tesla’s sensors and LIDAR as well. The key thing is that Tesla has figured out how to get a smaller set of much cheaper sensors to achieve the same results.
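One way to frame the sensor argument is as a coverage check: taken together, does a suite handle darkness, bad weather, ranging, and object classification? The matrix below is my own coarse summary of the strengths and weaknesses discussed in this section, not an engineering specification.

```python
# Simplified capability matrix for the sensor types discussed above.
# The rows are a coarse summary of the article's points, not a formal spec.

CAPABILITIES = {
    "radar":          {"dark": True,  "bad_weather": True,  "depth": True,  "classify": False},
    "passive camera": {"dark": False, "bad_weather": False, "depth": False, "classify": True},
    "sonar":          {"dark": True,  "bad_weather": True,  "depth": True,  "classify": False},
    "lidar":          {"dark": True,  "bad_weather": False, "depth": True,  "classify": False},
}

def suite_covers(suite, requirements=("dark", "bad_weather", "depth", "classify")):
    """True if at least one sensor in the suite covers each requirement."""
    return all(any(CAPABILITIES[s][req] for s in suite) for req in requirements)

print(suite_covers(["radar", "passive camera", "sonar"]))  # True: the camera + radar + sonar suite covers everything
print(suite_covers(["lidar", "passive camera"]))           # False: degraded in bad weather, which is why LIDAR cars carry radar too
print(suite_covers(["passive camera"]))                    # False: cameras alone miss darkness and bad weather
```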

Miles Under The Tires

Tesla put complete sensor sets on all of its cars built from September 2014 onward, along with wireless communications beaming the information into its cloud. For a year, Tesla recorded the behaviour of human drivers of those cars on real roads and used that data to train its autonomous driving brain.

When it turned on Autopilot and Autosteer in September of 2015, it already had more miles of sensor data than any of its competitors. A lot more miles. Orders of magnitude more miles. Traditional car companies have nothing like this because they don’t have the sensor sets on their cars, they often don’t have wireless connectivity to their cars, and they don’t have the cloud environments. For most manufacturers, the sensor suite is an expensive hardware add-on, not a built-in feature of every car.
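A crude way to picture that fleet-learning loop: every connected car uploads snapshots pairing what its sensors saw with what the human driver did, and those pairs become supervised training data for the driving model. The class and field names below are hypothetical and purely for illustration, not Tesla’s actual telemetry format.

```python
# Purely illustrative sketch of fleet data collection for training a driving model.
# The record fields and the "cloud" class are hypothetical, not Tesla's actual pipeline.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DrivingSnapshot:
    sensor_frame: dict        # e.g. camera/radar/sonar readings at one instant
    human_steering: float     # what the human driver actually did
    human_speed: float

@dataclass
class FleetCloud:
    snapshots: List[DrivingSnapshot] = field(default_factory=list)

    def ingest(self, snapshot: DrivingSnapshot) -> None:
        """Each connected car uploads (sensor frame, human action) pairs."""
        self.snapshots.append(snapshot)

    def training_pairs(self):
        """Supervised learning data: predict the human action from the sensor frame."""
        return [(s.sensor_frame, (s.human_steering, s.human_speed)) for s in self.snapshots]

cloud = FleetCloud()
cloud.ingest(DrivingSnapshot({"range_ahead_m": 42.0, "lane_offset_m": 0.1},
                             human_steering=-0.02, human_speed=29.0))
print(len(cloud.training_pairs()), "labelled examples available for training")
```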

And once Tesla turned it on in beta mode, the Silicon Valley company instantly acquired a much larger test fleet than any of its competitors (with volunteers improving performance). Google/Waymo has had a couple of dozen cars that have racked up a couple of million miles over seven years. Google is paying people $20/hour to drive its cars. Tesla has tens of thousands of volunteers.

As of April 2016, Tesla exceeded two billion miles with its sensor-laden cars driven by humans or driven by Tesla’s Autopilot features.


These cars have driven hundreds of millions of miles under their own control. As far as I can tell, Tesla’s Autopilot and Autosteer have driven more miles than the autonomous features of every other manufacturer and player combined.

Better Implementation

In part, Tesla has more Autopilot miles because its system works better than any other manufacturer’s. Car and Driver and Motor Trend did significant back-to-back testing of offerings from Tesla and other manufacturers in 2016 and found that the other manufacturers’ offerings mostly sucked compared to Tesla’s.

The Car and Driver test results show that Tesla is at least as good as every other manufacturer in every category, and far above the others on the key metric: the number of lane-control interruptions.

Motor Trend compared a different set of cars and the difference was even starker, with other offerings requiring an order of magnitude more driver intervention.

And the other offerings were much less confidence inspiring. Even test drivers didn’t want to use many of them.

Tesla has a lot more cars on the road providing data, and those cars are used under Autopilot a lot more than the competitors’ cars are used under their respective semi-autonomous driving features.

Musk has stated that Autopilot will come out of beta when the company has 10 billion miles under its wheels under car control. No other automobile manufacturer on the planet is on track to achieve that number in anything under decades. Tesla, especially once Model 3 production ramps up, is on track to hit it within a handful of years.
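A back-of-the-envelope calculation shows why. The fleet sizes and per-car mileages below are my own round-number assumptions rather than Tesla or Waymo figures, but they illustrate how a large customer fleet reaches 10 billion miles in years while a small dedicated test fleet would need millennia.

```python
# Back-of-the-envelope: how quickly different fleets accumulate autonomous-mode miles.
# All fleet sizes and per-car mileage figures are round-number assumptions for illustration.

TARGET_MILES = 10_000_000_000  # the 10 billion mile threshold Musk has mentioned

def years_to_target(fleet_size: int, autonomous_miles_per_car_per_year: float) -> float:
    miles_per_year = fleet_size * autonomous_miles_per_car_per_year
    return TARGET_MILES / miles_per_year

# A dedicated test fleet of a few dozen cars driven full time by paid drivers.
print(f"Small test fleet: {years_to_target(50, 50_000):,.0f} years")         # ~4,000 years

# A customer fleet in the hundreds of thousands, each using Autopilot a few thousand miles a year.
print(f"Large customer fleet: {years_to_target(500_000, 4_000):,.1f} years")  # ~5 years
```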

Learning Organization

Tesla is still in startup mode after 13 years, and one thing startups need to know how to do is pivot. After the fatality in Florida in May of 2016, Tesla pivoted based on what it learned from the experience.

The company identified two major failings. The first was a known limitation of the Mobileye product being used for passive-camera visual identification: a pale-coloured flat surface against a bright daytime sky. The second was that the radar was aimed low to identify barriers on the road and other cars, and to avoid false positives from overhead signs.

That combination led to the Tesla not identifying a truck which crossed illegally in front of it. The white truck wasn’t recognized as a truck by Mobileye’s components, and the radar looked underneath the truck.

To be clear, the fault was layered. The primary fault was with the truck driver, who turned across traffic and has been cited for not yielding. The secondary fault was with the Tesla driver, who ignored the road for an extended period despite multiple warnings from the car to pay attention, with his hands on the wheel for only 25 seconds of the 37-minute portion of the drive leading up to the collision. Only after those two failings did the Autopilot failure come into play.


But Tesla made two major changes. First, it ditched Mobileye. Mobileye wasn’t improving its product fast enough for Tesla, so Tesla built its own image-recognition software, which as far as I can tell has significantly exceeded Mobileye’s capabilities. Second, Tesla went deep on radar imaging, making radar its primary sensor system instead of a secondary one. Tesla now has advanced radar-processing algorithms that overcome many of the challenges which led others to LIDAR. It also made the cars’ reactions to driver inattention more significant: the car slows and pulls off the road if the driver doesn’t respond to prompts to return hands to the wheel, and Autosteer turns off for the remainder of a trip if the driver doesn’t take the steering wheel quickly enough after receiving the “Hold Steering Wheel” notification.
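The driver-inattention handling described above amounts to a simple escalation ladder. The sketch below captures that shape; the time thresholds, warning counts, and response strings are my own illustrative guesses, not Tesla’s actual parameters.

```python
# Illustrative escalation ladder for driver inattention, mirroring the behaviour described above.
# Timing thresholds and response strings are assumptions for the sketch, not Tesla's actual parameters.

def autopilot_attention_response(seconds_hands_off_wheel: float, trip_warnings: int) -> str:
    if seconds_hands_off_wheel < 30:
        return "normal operation"
    if seconds_hands_off_wheel < 60:
        return "visual 'Hold Steering Wheel' notification"
    if seconds_hands_off_wheel < 90:
        return "audible alert"
    if trip_warnings >= 3:
        # Repeated ignored warnings: Autosteer stays off for the remainder of the trip.
        return "Autosteer locked out until the car is parked"
    # Still no response: slow the car and pull off the road.
    return "slow the car and pull over"

for t, w in [(10, 0), (45, 0), (75, 1), (120, 1), (120, 3)]:
    print(f"{t:>3}s hands-off, {w} prior warnings -> {autopilot_attention_response(t, w)}")
```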

Continuous Improvement for Drivers

And, of course, improvements in Tesla features famously come to drivers overnight, on average monthly. Within a week of release, corners that Tesla vehicles had initially taken too fast were being slowed for, and offramps Tesla vehicles had tried to drive into (mistaking them for the road) were being identified correctly. These were immediate improvements that drivers recognized and appreciated. And let’s be clear: the drivers themselves were key to training the autonomous features to operate more effectively in tens of thousands of real-world circumstances. Tesla has built a crowdsourced, continuously improving system on top of its basic technology.

Other manufacturers mostly don’t have over-the-air updates, or have very limited ones. Only Tesla has made them a deep part of its product strategy.

Sometimes that means regression of capabilities. When Tesla shifted to its new sensor and automation suite, which it says will suffice for full autonomy in the future, the new vehicles didn’t include some features and had reduced capability compared to older Tesla vehicles. After a while, though, the new system was baked in and the newer vehicles gained those capabilities. Autopilot’s control was a bit harsher for a while as well, but improved over time.

Now, Tesla vehicles with the new sensor and computing suite are feature-equivalent and even smoother than before.
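In the abstract, that over-the-air improvement loop is just a staged software rollout: the fleet reports installed versions, the cloud offers a newer validated build to a slice of cars at a time, and each car installs it while parked. Everything in the sketch below (version numbers, rollout fraction, response strings) is invented for illustration and is not Tesla’s actual update mechanism.

```python
# Abstract sketch of an over-the-air update check with a staged rollout.
# Version numbers, rollout fraction, and response strings are invented for illustration.

import random

FLEET_CHANNEL = {"current_version": (8, 1, 2), "rollout_fraction": 0.25}  # 25% of cars at first

def check_for_update(installed_version: tuple, car_id: int) -> str:
    latest = FLEET_CHANNEL["current_version"]
    if installed_version >= latest:
        return "up to date"
    # Staged rollout: only a deterministic slice of the fleet gets the new build at first.
    if (car_id % 100) / 100 < FLEET_CHANNEL["rollout_fraction"]:
        return f"download {'.'.join(map(str, latest))} and install while parked"
    return "hold: not yet in rollout wave"

random.seed(0)
for car in random.sample(range(10_000), 3):
    print(car, "->", check_for_update((8, 0, 0), car))
```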


Are other firms using machine learning? Sure. Are other firms using the sensors Tesla uses? Yes. Are other firms enlisting their drivers as systems trainers? Yup. Are other firms starting with a subsumption approach? Less so, but yes. Are other firms wiring their cars with sensors and sending data back to the cloud? Yes. Are some of the startups as able to pivot as Tesla? Yes. Do other firms do over-the-air updates? Sure. Are some firms’ individual features as good as or better than Tesla’s implementations? Of course.

But none of Tesla’s competitors has the full solution at anywhere near Tesla’s scale or its volume of data and drivers. As a result, Tesla is doing in months what other firms are taking years to achieve.





Michael Barnard is a climate futurist, strategist, and author. He spends his time projecting scenarios for decarbonization 40-80 years into the future. He assists multi-billion dollar investment funds and firms, executives, boards, and startups to pick wisely today. He is founder and Chief Strategist of TFIE Strategy Inc and a member of the Advisory Board of electric aviation startup FLIMAX. He hosts the Redefining Energy - Tech podcast (https://shorturl.at/tuEF5), a part of the award-winning Redefining Energy team.
