Editor’s note: Tesla just tweeted that Tesla owners have now driven 1 billion miles on Autopilot.
In honor of the milestone (no pun intended), I’m reposting one of my favorite Autopilot articles of all time, a 2015 article by Mike Barnard. Enjoy.
Tesla recently released Autopilot mode for its cars. It takes a fundamentally different intellectual approach to autonomy than Google does, and it's the superior one.
One of my backgrounds is robotics. I spent a year digging my way through PhD theses from robotics programs around the world as I worked on a startup idea for specific applications of swarm-based robots. We got as far as software architecture, simple simulations, 3D modelling of physical robots, and specific applications which had fiscal value. I have some depth here without pretending to be a roboticist, and I’ve continued to pay attention to the field from the outside.
So I feel comfortable in saying that, in general, there are two approaches for robots getting from Point A to Point B.
→ The first is the world map paradigm, in which the robot or a connected system has a complete and detailed map of the world and a route is planned along that in advance accounting for obstacles. Basically, the robot has to think its way past or over every obstacle, which makes for a lot of programming.
→ The second is the subsumption architecture paradigm, in which a robot is first made so that it can survive environments it will find itself in, then equipped with mechanisms to seek goals. The robot then, without any idea of the map of the world, navigates toward Point B. The robot is robust and can stumble its way through obstacles without any thinking at all. The original Roomba vacuum cleaner was a pure subsumption beast.
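The contrast between the two paradigms can be made concrete with a toy sketch. This is purely illustrative (the grid, the sensor inputs, and the action names are all invented): the world-map robot plans a complete route over a known map before moving, while the subsumption robot just reacts, tick by tick, with survival layers overriding goal-seeking layers.

```python
from collections import deque

def plan_route(grid, start, goal):
    """World-map paradigm: plan the entire path in advance over a known map.
    grid is a 2D list; 0 = free cell, 1 = obstacle. Returns a list of cells,
    or None if the map offers no route (the robot is stuck until remapped)."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0 and nxt not in came_from):
                came_from[nxt] = cur
                frontier.append(nxt)
    return None  # no pre-mapped route means no movement at all

def subsumption_step(bumped, goal_bearing):
    """Subsumption paradigm: no map, just one control tick where the
    survival layer subsumes (overrides) the goal-seeking layer below it."""
    if bumped:                    # layer 0: survive first, always
        return "back_up_and_turn"
    if goal_bearing is not None:  # layer 1: seek the goal if one is sensed
        return f"steer_toward({goal_bearing})"
    return "wander"               # layer 2: default Roomba-style exploration
```

Note the asymmetry: the planner fails outright when the map is wrong or missing, while the subsumption loop always produces *some* safe action.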
Obviously, both have strengths and limitations, and obviously (at least to me) a combination is the best choice, but it's worth assessing Tesla's vs. Google's choices against this framework.
Google is starting from the full world map paradigm. For one of its cars to work, it needs an up-to-date centimetre-scale, 3D model of the entirety of the route it will take. Google’s cars are ridiculously non-robust — by design — and when confronted with something unusual will stop completely. Basically, all intelligence has to be provided by people in the lab writing better software.
Why would Google start with this enormous requirement? Well, in my opinion (without having spoken to any of the principals in the decision), it's likely because it fits their biases and blind spots. Google builds massive data sets and solves problems based on that data with intelligent algorithms. They don't build real-world objects. And the split I highlighted above in world map vs subsumption paradigms is a very real dividing line in academics and research around robotics. It was very easy for Google and world-map robotics researchers to find one another and confirm each other's biases. Others assert that Google is taking a risk-averse approach by leaping straight to Level Four autonomy, and while I'm sure that's a component of the decision-making process, I suspect it's a bit of a rationalization for their biases. It's also being proved wrong by the lack of Tesla crashes to date, but it is early days.
To be clear, Google cars can do things Teslas currently can’t, at least in the controlled prototype conditions that they are testing. They can drive from Point A to Point B in towns and regions that Google has mapped to centimetre scale, which is basically areas south of San Francisco plus a few demo areas. You can’t get in a Tesla, give it an address, and sit back. These are clear performance advantages of the Google model over current Tesla capabilities, and while not trivial, are enabled by the world map model.
Tesla, on the other hand, is starting with the subsumption model. First, the car is immensely capable of surviving on roads: great acceleration, great deceleration, great lateral turning speed and precision, great collision survivability. Then it’s made more capable of surviving. All the car needs to drive on the freeway is knowledge of the lines and the cars around it. Then it adds cameras to give it a hint about appropriate speed. It has only a handful of survivability goals: don’t hit cars in front of you, don’t let other cars hit you, stay in your lane, change lanes when requested, and it’s safe. Because of its great maneuverability — survivability — it can have suboptimal software because it is more able to get out of the way of bad situations. And it has human backup.
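That handful of survival goals maps naturally onto a prioritized arbiter, the classic subsumption shape. A minimal sketch of one control tick, with entirely hypothetical inputs and thresholds (the real Autopilot's internals are not public):

```python
def autopilot_tick(gap_ahead_m, threat_behind, in_lane, lane_change_requested):
    """One control tick arbitrating the article's survival goals in priority
    order. Every name and number here is illustrative, not Tesla's actual
    logic: highest-priority goal that applies wins the tick."""
    if gap_ahead_m < 30:          # don't hit the car in front of you
        return "brake"
    if threat_behind:             # don't let other cars hit you
        return "accelerate_or_shift"
    if not in_lane:               # stay in your lane
        return "correct_steering"
    if lane_change_requested:     # change lanes when requested
        return "change_lane"
    return "cruise"               # nothing to survive: carry on
```

The point of the structure is exactly the one made above: even with suboptimal logic in the lower layers, the top survival checks fire first, so the car stays out of bad situations.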
And if that’s where Tesla was stopping, everyone who is pooh-poohing its autonomy would be basically correct. But Tesla isn’t stopping there.
Tesla is leveraging intelligent real-world research assistants to put focused, experienced instincts into its cars. They are called the drivers of the Teslas. Every action the Autopilot makes and every intervention a driver makes is uploaded to the Tesla Cloud, where it’s combined with all of the other decisions cars and drivers are making. And every driver passing along a piece of road is automatically granted the knowledge of what the cars and drivers before them have done. In real time.
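The fleet-learning loop described here can be sketched in a few lines. The schema is invented for illustration (Tesla has not published how its cloud aggregates driving data): cars report what worked on a road segment, and later cars on the same segment inherit a conservative hint.

```python
from collections import defaultdict

class FleetKnowledge:
    """Toy sketch of fleet learning as the article describes it: every car's
    action (or driver intervention) on a road segment is uploaded, pooled,
    and shared back to the whole fleet."""

    def __init__(self):
        self.observations = defaultdict(list)

    def report(self, segment_id, safe_speed_kph):
        # a car uploads the speed that proved safe on this segment
        self.observations[segment_id].append(safe_speed_kph)

    def speed_hint(self, segment_id, default_kph):
        # later cars inherit the fleet's experience; be conservative and
        # take the slowest observed speed, else fall back to the default
        obs = self.observations.get(segment_id)
        return min(obs) if obs else default_kph
```

With this shape, one driver braking for a sharp corner effectively slows every Tesla that follows, which is the "in real time" claim above.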
So, for example, within a couple of days of downloading, Teslas were already automatically slowing for corners that they took at speed before. And not trying to take confusingly marked offramps. And not exceeding the speed limits in places where the signs are obscured.
Within a couple of days of Autopilot being available, the first people to Cannonball across the USA did so in under 59 hours, with roughly 96% of the driving done by the car. Given Google's requirements, they would have had to send at least two cars out, one or more with hyper-accurate mapping functionality, then a day or a week later, when the data was integrated, the actual autonomous car. And there would have been no chance of side trips or detours for the Google car. It literally couldn't drive on a route that wasn't pre-mapped at centimetre scale. But the Tesla drivers could just go for it.
People are driving Teslas on back roads and city streets with Autopilot, definitely not the narrow, pre-approved situations that others claim Tesla is limited to. And Teslas haven't hit anything; in fact, they have been recorded avoiding accidents the driver was unaware of. Survivability remains very high.
Tesla cars are driving themselves autonomously in a whole bunch of places where Google cars can’t and won’t be able to for years or possibly decades. That’s because Teslas don’t depend on perfect centimetre scale maps that are up-to-date in order to do anything. Subsumption wins over world maps in an enormous number of real-world situations.
Finally, Teslas have a world map. It's called Google Maps. And Tesla is doing more accurate mapping with its sensors for more accurate driving maps. But Teslas don't require centimetre-scale accuracy in their world map to get around. They are just fine with much coarser-grained maps which are much easier to build, store, manipulate, and layer with intelligence as needed. These simpler maps combined with subsumption will enable Teslas to drive from Point A to Point B easily. They can already drive to the parkade and return by themselves in controlled environments; the rest is just liability and regulations.
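The hybrid described here, coarse map for routing plus subsumption for the actual driving, is simple to sketch. Everything below is hypothetical: the coarse map contributes only a sequence of waypoints (think turn-by-turn directions), and a reactive controller handles all the centimetre-scale work in between.

```python
def drive(coarse_waypoints, local_controller):
    """Hybrid sketch: a coarse map supplies waypoints, and a subsumption-style
    local controller (passed in as a function) handles everything between
    them. The map never needs to know *how* to drive, only *where next*."""
    actions = []
    for waypoint in coarse_waypoints:
        # the coarse map's entire contribution is the next waypoint;
        # reactive control does the rest without centimetre-scale data
        actions.append(local_controller(waypoint))
    return actions
```

For example, `drive(["onramp", "exit 12"], lambda wp: f"head_for({wp})")` shows the division of labour: routing knowledge stays coarse while driving skill stays local.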
The rapid leaps in capability of the Autopilot in just a few days after release should be giving Google serious pause. By the time its software geniuses get the Google car ready for prime time on a large subset of roads, Teslas will be able to literally drive circles around them.