Editor’s note: Paul Fosse is the kind gentleman who eagerly drove his brand new Tesla Model 3 from Tampa to Sarasota, Florida, to let me test drive it. I later found out he was a software engineer with three decades of experience, and today I realized he could provide great context and insight into software and hardware matters related to Tesla, Waymo, and others. His debut article is below, covering the insane Tesla Autopilot news coming out of Tesla’s quarterly conference call. Enjoy!
To kick things off, Tesla CEO Elon Musk gave an overview of Tesla’s recent and not so recent advancements in Autopilot. The soft-spoken, technical, and brief statements could potentially be glossed over as a simple update to Tesla’s Autopilot suite — some perennial Tesla critics may even want to call it spin. However, those of us who follow this kind of stuff had our eyes nearly popping out of our skulls.
Here’s a bullet-point overview of Elon’s core statements:
¤ New hardware is a plug-and-play option that can be put into existing cars.
¤ It has 10 times the performance of the hardware used before.
¤ Tesla has been working for 3 years on this.
¤ Tesla has been using this in stealth operation up until today.
“I think we’re making pretty radical advances in the core software technology and the division beyond that,” he summarized.
Stuart Bowers, a fresh new face at Tesla who is now VP of Engineering, got a warm welcome from Elon and provided an update on Autopilot software. He noted that their current focus is finalizing version 9 (v9), which will roll out in approximately a month or so to early users/testers and then probably reach the broader fleet in September. Key features of this version of Autopilot include:
¤ It will start to work with navigation.
¤ It will start to handle exits, using navigation to decide whether to take them.
¤ It will hand control to and from the driver.
¤ It will figure out what lane you’re in and what lane you should be in.
¤ It will handle on-ramps and off-ramps.
¤ It will change lanes for you.
¤ It will include some new (undisclosed) safety features, enabled by better image processing.
“We’re also kind of digging in on some new safety features,” he stated. “I think probably the thing which is most exciting for me [that is] coming from the team is just seeing the foundation that’s been built out over the last two years. I think Andrej will talk a lot about some of the perception and vision work we’ve done there with the data engine. That has sort of allowed us to build on top of that very, very quickly and I think we’re all starting to see a new set of safety features that really only make sense in this world — we have extremely high understanding of what’s happening around the vehicle.”
Andrej Karpathy, Tesla’s fairly new Director of AI, joined the call as well. Andrej gave a short overview of his background and chimed in with a list of highlights from the vision team he leads.
¤ He has worked with neural networks for about 10 years, first as a PhD student at Stanford and later as a research scientist at OpenAI.
¤ The vision team is responsible for processing the video streams from the cameras in the vehicle into an understanding of what is around us.
¤ He’s particularly excited about “building out this infrastructure for computer vision that underlies all the neural network training, trying to get those networks to work extremely well, and make that a really good foundation on top of which we build out all the features of the Autopilot like the features associated with the v9 release that’s going to come up and that Stuart has mentioned.”
Pete Bannon, who currently heads development of Autopilot hardware version 3, was the third Autopilot leader to join the call. He was hired from Apple 2½ years ago. Notes from his segment are quite long, with comments coming sometimes from him and sometimes from Elon. Here are the notes:
¤ V3 hardware chips are up and working and Tesla has “drop-in replacements” for Model S, Model X, and Model 3 — all of which have been driven in the field.
¤ It supports neural nets at full frame rates with lots of idle cycles to spare.
¤ Pete’s very excited about what Andrej’s software team will be able to do with all that extra power.
¤ He gave a presentation to Andrej’s team last month explaining how it worked and what it was capable of. A very excited team member said top AI developers will want to come to work at Tesla just to get access to this hardware.
A little more on Pete’s background: He was at Digital Equipment in 1984, then was an Intel Fellow working on Itanium and led the design of the first ARM32 processor for the iPhone 5. He also led the design team for the world’s first ARM64 processor, which went into the iPhone 5s. He had been working on performance modeling and improvements for 8 years at Apple before coming to Tesla. In two years at Tesla he architected the version 3 hardware for Tesla’s cars. It will be coming to cars next year.
Here’s the stunner: It’s 10 times faster than anything else in the world at running neural nets! It has made a jump from 200 frames a second to 2,000 frames a second — “and with full redundancy and fail-over.”
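To put those frame rates in perspective, here’s a quick back-of-the-envelope calculation. The 200 and 2,000 fps figures are from the call; the per-frame time budgets are just my own illustrative arithmetic, not something Tesla stated:

```python
# Back-of-the-envelope: how much time the computer has to process
# each frame at a given frame rate. The 200 and 2,000 fps numbers
# are from Tesla's call; the rest is illustrative arithmetic.

def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to process one frame at the given rate."""
    return 1000.0 / fps

v2_budget = frame_budget_ms(200)     # current (v2) hardware
v3_budget = frame_budget_ms(2000)    # announced v3 hardware

print(f"v2: {v2_budget:.2f} ms per frame")        # 5.00 ms
print(f"v3: {v3_budget:.2f} ms per frame")        # 0.50 ms
print(f"speedup: {v2_budget / v3_budget:.0f}x")   # 10x
```

In other words, the new hardware leaves the neural nets only half a millisecond per frame, and still has, as Pete put it, idle cycles to spare.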
Not good enough for you? Here’s more: “And it costs the same as our current hardware and we anticipate that this would have to be replaced, this replacement, which is why I made it easy to switch out the computer, and that’s all that needs to be done. If we take out one computer, plug in the next. That’s it. All the connectors are compatible and you get an order of magnitude more processing and you can run all the cameras at primary full resolution with the complex neural net. So it’s super kickass.”
They did a survey of what everyone else was doing and whether they had a CPU or a GPU to speed up neural networks, but nobody was doing a bottom-up design from scratch to run neural nets, which is what they decided they should do. [Editor’s side note: Think any other automakers are working on such improvements?]
Here’s more from Pete summarizing the improvements: “it’s a huge number of very simple computations with the memory needed to store the results of those computations right next to the circuits that are doing the matrix calculations. And the net effect is an order of magnitude improvement in the frames per second. Our current hardware, which — I’m a big fan of NVIDIA, they do great stuff, but using a GPU … fundamentally, it’s an emulation mode, and then you also get choked on the bus. So, the transfer between the GPU and the CPU ends up being one that constrains the system. …
“We’ve been in, like, semi-stealth mode basically for the last two to three years on this, but I think it’s probably time to let the cat out of the bag because the cat’s going to come out of the bag anyway.”
Tesla had the benefit of seeing what its own neural networks looked like 2–3 years ago and what its projections were for the future. It leveraged that knowledge and the ability to totally commit to that style of computing — without any concern for other types of computing (which handicaps the ability of other designers to make radical changes).
Notably, this hardware runs the net on the bare chip. It is reportedly the world’s most advanced computer designed for autonomous driving.
My Own Analysis, Opinion, & Speculation
The biggest surprise of the Tesla conference call was when Elon “let the cat out of the bag” on his stealth custom chip, which, I’ll repeat, was designed to process neural networks 10 times faster than the old hardware (which was considered state of the art until today).
Computer hardware has advanced according to Moore’s Law for about 55 years. Transistor count and performance grow by about 100% every 2 years, so when someone announces a 10-fold increase in performance in a single generation, it is a real surprise!
Tesla didn’t do this with better semiconductor technology, but by dedicating its transistors to more efficiently processing the type of instructions Tesla needs to run to process the video from its 8 high-definition cameras. The car needs to understand what is around it in order to best decide how to drive. Other companies are betting on lidar to do this, but Elon strongly believes that computer vision is the way to solve this problem. Elon’s credibility on this topic presumably got a big boost today with the announcement that Tesla can now process 2,000 frames of video per second with the new hardware, which dwarfs the previously “state of the art” 200 frames per second possible with the V2 hardware that ships in all of Tesla’s cars today.
What they didn’t seem to realize — or at least didn’t acknowledge — is that this hardware that processes neural nets 10 times faster and more efficiently than anything else in the world has applications far beyond autonomous driving. This could be similar to how Amazon started selling books and other goods from a website but became so good at maintaining an infrastructure for its site that it decided to sell cloud services to others. Now, Amazon’s cloud infrastructure business may be bigger than its website retail business (I’m not sure). Similarly, Tesla could sell this chip into other industries that have nothing to do with transportation or energy, but that need to run neural networks more quickly to solve their business problems.
It seems Tesla’s valuation should go up significantly from this news, but we’ll see.