If GM/Cruise Is Way Behind Waymo, How Does It Compare To Tesla?


First part by Zach Shahan. Second part by Mike Barnard.

A report recently came out, via leaked documents, that supposedly put GM/Cruise way behind Waymo in autonomous driving.

“In particular, the report states that the forecast is that Cruise will, by the end of 2019, have a vehicle that performs at between 5% and 11% of the safety level of average human driving, when it comes to frequency of crashes. As such, Cruise will miss its 2019 goal of deploying a commercial service, though it might deploy one with safety drivers.”

Further, “more than 3 years prior, Google/Waymo was far surpassing what the leaks say about Cruise, and the difficulty of SF streets seems not nearly enough to account for that.

“Waymo stopped reporting this way, sadly. One can only presume [the] record has gotten better in the last 3.5 years, and that it reached a particularly high level when management — and I presume this means the board of Alphabet itself — approved even limited operation without safety drivers in Arizona.”

We’ve seen reports, including one from Navigant Research, that basically put Waymo and GM tied at the top of this industry. It looks like any such evaluation is actually far off the mark. Interestingly, that Navigant Research report also puts Tesla near the bottom. I’ve previously discussed this on a two-episode podcast with ARK Invest autonomy expert Tasha Keeney. What it comes down to, seemingly, is that Tesla has a very different approach from companies like Waymo and Cruise, and Navigant (and others) have their own particular measurement systems that don’t compare the vastly different approaches very well.

As discussed on that podcast, Mike Barnard once wrote an article for us (in 2015) explaining the vastly different approaches Tesla and Waymo (then Google) take to self-driving. It still comes to mind frequently, so in light of the Cruise news, I thought I’d share it again.

Mike, take it over from here …


Tesla recently released its Autopilot mode for its cars. Autopilot embodies a fundamentally different intellectual approach to autonomy than Google’s, and it’s the superior one.

One of my backgrounds is robotics. I spent a year digging my way through PhD theses from robotics programs around the world as I worked on a startup idea for specific applications of swarm-based robots. We got as far as software architecture, simple simulations, 3D modelling of physical robots, and specific applications which had fiscal value. I have some depth here without pretending to be a roboticist, and I’ve continued to pay attention to the field from the outside.

So I feel comfortable in saying that, in general, there are two approaches for robots getting from Point A to Point B.

→ The first is the world map paradigm, in which the robot or a connected system has a complete and detailed map of the world, and a route is planned along it in advance, accounting for obstacles. Basically, the robot has to think its way past or over every obstacle, which makes for a lot of programming.

→ The second is the subsumption architecture paradigm, in which a robot is first made so that it can survive environments it will find itself in, then equipped with mechanisms to seek goals. The robot then, without any idea of the map of the world, navigates toward Point B. The robot is robust and can stumble its way through obstacles without any thinking at all. The original Roomba vacuum cleaner was a pure subsumption beast.
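
To make the subsumption idea concrete, here is a minimal sketch of a layered controller in Python. The sensor fields, behaviour names, and actions are illustrative assumptions of mine rather than anything from a real vehicle stack; the point is only that higher-priority survival behaviours override lower-priority goal-seeking, and no map of the world appears anywhere in the loop.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Sensors:
    obstacle_ahead: bool        # something is in the way
    drifting_from_lane: bool    # we are leaving our lane
    goal_bearing_deg: float     # rough direction toward Point B

Action = str  # e.g. "brake", "steer_back_to_lane", "steer_bearing_42"

def avoid_collision(s: Sensors) -> Optional[Action]:
    # Highest-priority survival layer: never hit what is in front of you.
    return "brake" if s.obstacle_ahead else None

def keep_lane(s: Sensors) -> Optional[Action]:
    # Next survival layer: stay between the lines.
    return "steer_back_to_lane" if s.drifting_from_lane else None

def seek_goal(s: Sensors) -> Optional[Action]:
    # Lowest layer: stumble toward Point B with no world map at all.
    return f"steer_bearing_{s.goal_bearing_deg:.0f}"

# Higher layers subsume (override) lower ones: survive first, make progress second.
LAYERS: List[Callable[[Sensors], Optional[Action]]] = [avoid_collision, keep_lane, seek_goal]

def subsumption_step(sensors: Sensors) -> Action:
    for behaviour in LAYERS:
        action = behaviour(sensors)
        if action is not None:
            return action
    return "coast"

print(subsumption_step(Sensors(obstacle_ahead=False, drifting_from_lane=True, goal_bearing_deg=42.0)))
# -> steer_back_to_lane
```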

Obviously, both have strengths and limitations, and obviously, at least to me, a combination is the best choice. But it’s worth assessing Tesla’s vs Google’s choices based on this split.

Google is starting from the full world map paradigm. For one of its cars to work, it needs an up-to-date, centimetre-scale 3D model of the entirety of the route it will take. Google’s cars are ridiculously non-robust — by design — and when confronted with something unusual will stop completely. Basically, all intelligence has to be provided by people in the lab writing better software.
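
For contrast, here is a minimal sketch of the world map paradigm in the same spirit, with a toy grid standing in for the centimetre-scale model (the grid, the cell scheme, and the function names are illustrative assumptions, not Google's code): every route is computed in advance from a complete map, and when the map and reality disagree, the only behaviour left is to stop.

```python
from collections import deque
from typing import Dict, List, Optional, Tuple

Cell = Tuple[int, int]

def plan_route(grid: List[str], start: Cell, goal: Cell) -> Optional[List[Cell]]:
    """Breadth-first planner over a complete, pre-built map ('#' marks a known obstacle)."""
    rows, cols = len(grid), len(grid[0])
    came_from: Dict[Cell, Cell] = {start: start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path = [(r, c)]
            while path[-1] != start:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # the map says there is no way through

# A tiny pre-built "world map"; the real car needs the equivalent at centimetre scale.
world_map = [
    "....#",
    ".##.#",
    "....#",
    "###..",
    ".....",
]
planned = plan_route(world_map, start=(0, 0), goal=(4, 4))
print(planned)  # a cell-by-cell route, computed entirely from the map

def drive(path: Optional[List[Cell]], reality_matches_map: bool) -> str:
    # If reality has drifted from the pre-built map, or anything unexpected
    # appears, this paradigm has no fallback behaviour: it stops.
    if path is None or not reality_matches_map:
        return "stop completely and wait for a human (or a map update)"
    return "drive the planned route"

print(drive(planned, reality_matches_map=False))
```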

Why would Google start with this enormous requirement? Well, in my opinion, without having spoken to any of the principals in the decision, it’s likely because it fits their biases and blind spots. Google builds massive data sets and solves problems based on that data with intelligent algorithms. They don’t build real-world objects. And the split I highlighted above between the world map and subsumption paradigms is a very real dividing line in robotics academia and research. It was very easy for Google and world-map-minded robotics researchers to find one another and confirm each other’s biases. Others assert that Google is taking a risk-averse approach by leaping straight to Level Four autonomy, and while I’m sure that’s a component of the decision-making process, I suspect it’s a bit of a rationalization for their biases. It’s also being proved wrong by the lack of Tesla crashes to date, but it is early days.

To be clear, Google cars can do things Teslas currently can’t, at least in the controlled prototype conditions in which they are being tested. They can drive from Point A to Point B in towns and regions that Google has mapped to centimetre scale, which is basically the areas south of San Francisco plus a few demo areas. You can’t get in a Tesla, give it an address, and sit back. These are clear performance advantages over current Tesla capabilities, and while not trivial, they are enabled entirely by the world map model.


Tesla, on the other hand, is starting with the subsumption model. First, the car is immensely capable of surviving on roads: great acceleration, great deceleration, great lateral turning speed and precision, great collision survivability. Then it’s made capable of surviving on its own. All the car needs to drive on the freeway is knowledge of the lane lines and the cars around it, plus cameras to give it a hint about the appropriate speed. It has only a handful of survivability goals: don’t hit the car in front of you, don’t let other cars hit you, stay in your lane, and change lanes when requested. Meet those and it’s safe. Because of its great maneuverability, which is to say survivability, it can get away with suboptimal software, since it is better able to get out of the way of bad situations. And it has human backup.

And if that’s where Tesla was stopping, everyone who is pooh-poohing its autonomy would be basically correct. But Tesla isn’t stopping there.

Tesla is leveraging intelligent real-world research assistants to put focused, experienced instincts into its cars. They are called the drivers of the Teslas. Every action the Autopilot makes and every intervention a driver makes is uploaded to the Tesla Cloud, where it’s combined with all of the other decisions cars and drivers are making. And every driver passing along a piece of road is automatically granted the knowledge of what the cars and drivers before them have done. In real time.

So, for example, within a couple of days of the rollout, Teslas were already automatically slowing for corners they had previously taken at speed. And not trying to take confusingly marked offramps. And not exceeding the speed limit in places where the signs are obscured.
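
Here is a hypothetical sketch of how that feedback loop could work. Every segment ID, threshold, and function name below is an assumption for illustration rather than Tesla's actual mechanism, but it captures the idea: drivers' interventions are pooled per stretch of road and fed back to the whole fleet as, for example, a lower target speed for a tricky corner.

```python
from collections import defaultdict
from typing import DefaultDict, List

# Speeds (km/h) at which drivers took over from the car, keyed by road segment.
fleet_interventions: DefaultDict[str, List[float]] = defaultdict(list)

def report_intervention(segment_id: str, speed_at_takeover_kmh: float) -> None:
    """Each car uploads the speed at which its driver intervened on a segment."""
    fleet_interventions[segment_id].append(speed_at_takeover_kmh)

def speed_hint(segment_id: str, default_kmh: float, min_samples: int = 5) -> float:
    """Cars approaching the segment query the fleet consensus in real time."""
    samples = fleet_interventions[segment_id]
    if len(samples) < min_samples:
        return default_kmh                        # not enough evidence yet
    mean_takeover = sum(samples) / len(samples)
    return min(default_kmh, 0.9 * mean_takeover)  # slow to just under the typical takeover speed

# Several drivers slowed for the same badly marked corner ...
for speed in (62, 58, 65, 60, 59):
    report_intervention("corner_17", speed)

# ... so the next car through gets a lower target speed, automatically.
print(round(speed_hint("corner_17", default_kmh=90), 2))   # -> 54.72
```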

Within a couple of days of Autopilot being available, the first people “cannonballed” across the USA in under 59 hours, with 96% or so of the driving done by the car. Given Google’s requirements, they would have had to send at least two cars out: one or more with hyper-accurate mapping functionality, and then, a day or a week later once the data was integrated, the actual autonomous car. And there would have been no chance of side trips or detours for the Google car. It literally couldn’t drive on a route that wasn’t pre-mapped at centimetre scale. But the Tesla drivers could just go for it.

People are driving Teslas on back roads and city streets with Autopilot, definitely not the optimal-locations-only situations that others claim Tesla is limited to. And Teslas haven’t hit anything; in fact, they have been recorded avoiding accidents that the driver was unaware of. Survivability remains very high.

Tesla cars are driving themselves autonomously in a whole bunch of places where Google cars can’t and won’t be able to for years or possibly decades. That’s because Teslas don’t depend on perfect, up-to-date, centimetre-scale maps in order to do anything. Subsumption wins over world maps in an enormous number of real-world situations.

Finally, Teslas do have a world map. It’s called Google Maps. And Tesla is using its cars’ sensors to build more accurate driving maps. But Teslas don’t require centimetre-scale accuracy in their world map to get around. They are just fine with much coarser-grained maps, which are much easier to build, store, manipulate, and layer with intelligence as needed. These simpler maps, combined with subsumption, will enable Teslas to drive from Point A to Point B easily. They can already drive to the parkade and return by themselves in controlled environments; the rest is just liability and regulations.
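
As a rough sketch of that division of labour (the coordinates, radius, and function names are again illustrative assumptions, not Tesla's implementation): a coarse route supplies the next waypoint, and all the local controller ever needs from the map is a bearing toward it, never a centimetre-scale model of the road.

```python
from math import atan2, degrees, hypot
from typing import List, Optional, Tuple

Point = Tuple[float, float]

# Coarse route at roughly the Google Maps level of detail:
# a handful of waypoints, kilometres apart (illustrative coordinates).
route: List[Point] = [(0.0, 0.0), (3.0, 0.5), (6.0, 4.0)]

def next_waypoint(position: Point, waypoints: List[Point], reached_radius_km: float = 0.2) -> Optional[Point]:
    """Drop waypoints as they are reached; everything in between is local driving."""
    while waypoints and hypot(waypoints[0][0] - position[0], waypoints[0][1] - position[1]) < reached_radius_km:
        waypoints.pop(0)
    return waypoints[0] if waypoints else None

def goal_bearing_deg(position: Point, waypoint: Point) -> float:
    """The only thing the coarse map hands to the local (subsumption) controller."""
    return degrees(atan2(waypoint[1] - position[1], waypoint[0] - position[0]))

car_position: Point = (0.05, 0.1)              # just past the starting waypoint
target = next_waypoint(car_position, route)     # first waypoint dropped -> (3.0, 0.5)
if target is not None:
    print(round(goal_bearing_deg(car_position, target)))  # -> 8 (degrees, roughly due east in this toy frame)
```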

The rapid leaps in capability of the Autopilot in just a few days after release should be giving Google serious pause. By the time its software geniuses get the Google car ready for prime time on a large subset of roads, Teslas will be able to literally drive circles around them.



Michael Barnard is a climate futurist, strategist, and author. He spends his time projecting scenarios for decarbonization 40-80 years into the future. He assists multi-billion-dollar investment funds and firms, executives, boards, and startups to pick wisely today. He is the founder and Chief Strategist of TFIE Strategy Inc and a member of the Advisory Board of electric aviation startup FLIMAX. He hosts the Redefining Energy - Tech podcast (https://shorturl.at/tuEF5), part of the award-winning Redefining Energy team.
