Published on April 23rd, 2019 | by Frugal Moogal
Tesla Autonomy Day: What We Learned
I’m writing this as I watch the stream, and as usual, the start is late. Having covered a number of these events, that seems like a standard Tesla thing. And since I’m watching on YouTube right now and the viewer count has only gone up since the stream was supposed to start, maybe they’re on to something.
The pause gives me a bit of time to share, before we get started, that I’m a skeptic of this stuff. Don’t get me wrong, it’s not the electric car side of things that I question — not only have we gone fully electric in the past two years, with a Leaf and a Model 3, but I know a bunch of other people who did the same after seeing our cars.
Note that this intro section is before watching the Tesla Autonomy Day presentation. For my reactions afterward, read the Summary section.
No, it’s the autonomy side that I’ve gotten more skeptical about. Tesla shipped a huge Autopilot update right after I got my Model 3 last year, and as an ex-programmer, I felt — and still feel — that I have a pretty good grip on what Tesla is doing with autonomous driving. From what I can see, Tesla is essentially breaking autonomy into separate sub-problems and solving each one, and when they find edge cases, they break those out further and solve those too.
By having the system running in cars with people actively watching it, Tesla can look at the disengagements and try to determine what caused each one. As the system improves, disengagements become rarer, and once they are almost non-existent (disengagements due to user error would still remain), the system will theoretically be ready for prime time — once regulators agree.
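As a sketch of how I picture that loop — and to be clear, all of the names and structure below are my own invention for illustration, not anything Tesla has published — the core idea is that edge-case disengagements get folded back into the training data, while user-error disengagements carry no lesson for the system:

```python
# Purely illustrative sketch of a disengagement-driven improvement loop.
# All names here are my own invention, not Tesla's actual pipeline.

from dataclasses import dataclass

@dataclass
class Disengagement:
    cause: str      # e.g. "user_error" or "edge_case"
    snapshot: str   # stand-in for the logged sensor data around the event

def fold_in_disengagements(training_set, disengagements):
    """Feed edge-case disengagements back into the training data;
    user-error disengagements are set aside rather than trained on."""
    set_aside = []
    for event in disengagements:
        if event.cause == "user_error":
            set_aside.append(event)
        else:
            training_set.append(event.snapshot)
    return training_set, set_aside

data, leftover = fold_in_disengagements(
    ["highway_merge_clip"],
    [Disengagement("edge_case", "faded_lane_markings_clip"),
     Disengagement("user_error", "driver_grabbed_wheel")],
)
# data now includes the new edge case; the user-error event is set aside
```

Each pass through a loop like this shrinks the pool of unexplained disengagements, which is exactly why the rate falling toward zero is the signal to watch.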
So, anyway, what I’m looking for with today’s presentation is how Tesla believes it will be able to speed up both the regular deployments of new features, and how the system will move so quickly to learn. The Autopilot updates we’ve had before this point have been impressive, but they aren’t perfect.
Anyway, theoretically, the presentation will be starting soon, so here’s hoping I get some answers to my questions. As I type this, we’re 39 minutes late. That’s longer than usual for Tesla. …
… One minute later, it starts!
Why Does Autonomy Day Exist?
As the presentation starts, I’m first struck by how plain the room is. This looks less like a standard Tesla presentation and more like an investor conference, which is interesting. We get a quick explanation of why Tesla decided to do an Autonomy Day — simply, that Tesla feels that everything has been so focused on the Model 3 that the rest of the company’s story has been lost.
This is a fair point to make about the past year, as I will be the first to say that if the company is valued only on the back of the Model 3, as great as I think my car is, it’s not great enough to justify the entire company at current market valuations. Tesla is working on a bunch of other things.
Pete Bannon, the designer of the Full Self Driving chip, came up to describe the system, its timeline, and the design requirements. He went on to describe the redundancy of the system, which is important in any system like this. Bannon then dove into exactly how the chip works, in great detail. As someone who has worked with things like this, it’s all good information, but not anything the average person can easily follow.
My biggest takeaway is that they started from the problem and designed a chip to serve it. Often, chips are made first and programmers then write software to make the hardware do what they want. Tesla instead worked backward from the specific problem it needed to solve, which you can only do by designing a new chip — something most companies don’t have the luxury of doing, as it is extremely expensive.
Tesla made the investment, and the new hardware gives the cars 21 times the processing power of the previous chip. That’s a stunning improvement, and one you only commit that kind of money to if you are certain of your solution.
For better or worse, Tesla is extremely certain of this being its solution.
Musk actually clarified this near the end: the chip was designed specifically to run Tesla’s own self-driving software. It is not 21 times faster than any other chip in general; it is 21 times faster than any other chip at running Tesla’s self-driving solution.
Finally, we had the answer many people were really curious about — this chip is now shipping in every Tesla being made.
The Neural Network
Andrej Karpathy joins us to cover the neural network. He starts by describing how a neural network works in a nice, simple way, using iguanas as an example. He then explains how the data was annotated, before moving into the things Tesla specifically looks at.
He goes on to explain the limitations of the self-driving simulations, before diving into a huge description of what sorts of “inaccuracies” they can find, and how they can use those inaccuracies to better train the system. He goes on to explain that they can then start having the system identify inaccuracies itself.
Tesla Vision uses path prediction to accurately predict how a road will extend, even when it can’t see around the corner pic.twitter.com/09qPkpqwSC
— Tesla (@Tesla) April 23, 2019
Once the system can identify inaccuracies itself, they run it in shadow mode, which means the network makes predictions and learns before it has any control, and once they feel confident in it, they turn it on. They mentioned that they first used this approach to have Teslas predict when another car would move into your lane, and started implementing it in the cars about three months ago, which is why the car now better senses when other cars are going to move into your lane.
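The shadow mode idea can be sketched simply: a candidate predictor runs alongside the active system and is scored against what actually happened, without ever touching the controls. The code below is my own toy illustration of that concept — the blinker-based predictor and all names are assumptions for the example, not Tesla’s actual implementation:

```python
# Hedged sketch of "shadow mode": a candidate predictor is compared
# against real outcomes but never controls the car. Names are mine.

def shadow_accuracy(predict, frames, actual_outcomes):
    """Fraction of frames where the shadowed predictor matched reality."""
    agree = sum(predict(f) == o for f, o in zip(frames, actual_outcomes))
    return agree / len(frames)

# Toy example: predicting a cut-in from whether the neighbor's blinker is on
frames = [{"blinker_on": True}, {"blinker_on": True}, {"blinker_on": False}]
actual = [True, False, False]   # what the neighboring car actually did
score = shadow_accuracy(lambda f: f["blinker_on"], frames, actual)
```

Only when a score like this is high enough across huge amounts of real driving would you flip the predictor from shadow mode to active, which matches the gated rollout described above.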
At the end of this section, I don’t know that we really learned much new about the neural network, but Karpathy did a good job walking through how the neural network is working to solve autonomy. It was mentioned at the start that Karpathy had been a teacher, and that came through with the clarity of his presentation.
This 3D reconstruction shows the immense amount of depth information a Tesla can collect from just a few seconds of video from the vehicle's 8 cameras pic.twitter.com/w2x6pkM2Eb
— Tesla (@Tesla) April 23, 2019
Whoops — right after I wrote this, Musk says they expect to be feature complete this year, and feature complete plus not needing people to pay attention at all around Q2 of next year. What’s interesting about this statement is that Musk is probably using “feature complete” the way a software engineer does. I expect we’ll see headlines tomorrow stating that Musk says we’ll be self-driving in a year, but that probably wasn’t his intent: to a software engineer, “feature complete” means everything planned has been built into the system, not that the system is finished. I wish Musk wouldn’t use that language here, but given his background, I get why he does. Moving on…
Stuart Bowers is up next to discuss how the software in the cars works. This was a really short presentation compared to the others; he briefly touched on how the software uses the hardware and the neural net to learn how to drive.
Redundancy, the Master Plan, & Robotaxis
Musk goes on to explain the redundancies that are built into the system to ensure it can keep going even if a single component fails.
Musk then goes on to explain the Master Plan and where Tesla stands. He anticipates that next year will see the launch of the first robotaxis. He does mention that he is sometimes criticized for overly ambitious timelines, but that what he promises does come to life. He also mentions that Tesla intends to produce both the Model Y and the Semi in volume next year.
He then starts explaining Tesla’s plan for robotaxis, and how vertical integration gives the company an advantage he feels no one can catch up with. The description of the Tesla Network at this point largely repeats what Tesla has said before, followed by a series of questions that are all over the board.
One question stood out: Musk was asked whether Tesla would improve its communication about all this, and he said that after today, they would be greatly increasing their messaging. It seemed like Musk wanted to share more, and I’m curious whether this is a signal that Tesla is going to start advertising.
This presentation was long and dense, but mostly told us things about the Tesla Network that we had already heard. We now know more about the hardware that will drive it, the neural network that is learning to do it, and the software that ties it all together. We also learned that Musk is extremely optimistic about both the rollout and the capabilities of the network.
I was disappointed not to see more of the system at work. While I’m glad the investors got to see it, I would have loved a quick demonstration of what the new hardware and early software can do versus what we see today.
In the end, I’m a bit more optimistic than I was three hours ago that Full Self Driving is on the horizon, and I’ll be very curious to see what more information will come to light in the next day and week.