Rather than reading this article, I highly recommend watching the video version instead. However, for those who do not like that medium, we have an article version of the video below.
Not Autonomy Day 2.0
So, with regard to Elon Musk, I have often asked myself, “Elon, you have the machine that builds the machine, so where is the robot that builds the robot?” In this article, I will detail what I think Tesla is most likely going to announce on AI Day (today), and it all has to do with Elon’s numerous hints about solving real-world AI.
And just to clarify, real-world AI means an AI capable of capturing, processing, and interacting with the real world, which could be as confined as a warehouse or as unlimited as all the roads & other transportation infrastructure around the world. All of the billions of unique homes with different furniture and layouts are also a tough one.
Elon has a lot of goals in life, and many of those goals are actually roads that lead to Mars. Yet there is something that has always been missing, and that is robots. None of his companies make robots. I don’t mean assembly line machines; I mean actual robots: the stuff you will need for asteroid mining, orbital construction, and creating a civilization on Mars.
The closest thing they have is Autopilot. The reason there are no robots is not the physical limitations of the robots themselves. Sure, Elon has said on occasion that robots are not very handy with things like cables, or stuffing foam & rubber into small spaces, but the real limitation is that there is no real-world AI training program out there. That is where Tesla AI Day comes into play, and it’s not going to be Autonomy Day 2.0.
To answer the most obvious question first: if AI Day is not about Autopilot, then what is it about? Tesla AI Day is most likely going to be about a software package, an AI for developing real-world AI, if you will. Now, to start us off, let me show you a short clip from the Google I/O conference in 2017:
Problem 1/2 — Collecting Data for an AI
I left that joke in at the end there on purpose, because what Tesla does goes a lot deeper. Here’s how. Half the trouble with AI is getting data to feed it, lots of data. We are talking thousands, if not billions, of, let’s just say, examples. Cat pictures? Easy, the internet is full of them. MRI scans? Harder. 360-degree video (rather than images) of a car driving in every situation physically possible on Earth? Insanely hard.
Then, even if you do have access to the data, it also needs to be labeled. Tesla has for the most part been able to automate the process of data collection and labeling. Here is the first part of a clip from a presentation given by Tesla’s Andrej Karpathy:
I am sure that Tesla has become the best in the world at automatically labeling 4D footage, and this is something it could actually sell. But for new things, for edge cases, or when the AI’s confidence is low, you still need a lot of people labeling data. At Tesla, as of last fall, the number of labelers exceeded 500, and the company actually wanted to double that number to 1,000. That is something Elon told us directly in a conversation he had with our CEO, Zachary Shahan. Don’t let the number of labelers confuse you here: even within the scope of real-world AI, navigating jam-packed infrastructure with all its variations around the world is probably the hardest problem you could solve. The more edge cases, the more labelers. Simpler problems that other companies need solved, potentially using Tesla’s AI-building AI, will need far fewer labelers.
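To make the division of labor concrete, here is a minimal sketch of the kind of confidence-based routing described above: confident predictions become auto-labels, and low-confidence cases go to a human queue. This is my own illustration, not Tesla's code; the threshold, names, and data are all hypothetical.

```python
# Hypothetical sketch of routing between auto-labeling and human labelers.
# The threshold and all sample names are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for trusting the model's label

def route_samples(predictions):
    """Split predictions into auto-labeled data and a human review queue.

    `predictions` is a list of (sample_id, label, confidence) tuples.
    """
    auto_labeled, human_queue = [], []
    for sample_id, label, confidence in predictions:
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_labeled.append((sample_id, label))  # trust the model
        else:
            human_queue.append(sample_id)            # send to a labeler
    return auto_labeled, human_queue

predictions = [
    ("clip_001", "stop_sign", 0.98),  # routine case, auto-labeled
    ("clip_002", "stop_sign", 0.97),
    ("clip_003", "unknown",   0.35),  # edge case: low confidence
]
auto, queue = route_samples(predictions)
```

The point of a loop like this is exactly the scaling behavior mentioned above: the more edge cases a problem has, the more often samples land in the human queue, and the more labelers you need.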
Problem 2/2 — Training the AI
The other half of the problem with AI is the actual training. The AI needs to be improved, and doing that manually is not going to work. The AI needs to be taught how to learn and improve itself, and to do so automatically. To show you that this is indeed something Tesla has been able to accomplish, here is part 2 of the clip from Andrej Karpathy’s presentation mentioned above:
If you paid close attention, you saw that Tesla is trying to expand the job of labelers to enable them to actually train the AI as well, so that, as the CEO of Google put it, the machine learning PhDs can focus on other tasks.
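The idea that correcting labels *is* training can be shown with a toy loop. This is not Tesla's pipeline; the "model" here just memorizes corrections, standing in for a real gradient update over the corrected dataset. All names and data are made up for illustration.

```python
# Toy illustration (not Tesla's actual system) of the human-in-the-loop
# cycle: labeler corrections feed straight back into training, so the
# labeler is effectively training the network without touching ML code.

def toy_model(memory, sample):
    """Stand-in classifier: return what was learned, else guess 'unknown'."""
    return memory.get(sample, "unknown")

def labeler_correct(sample, prediction, ground_truth):
    """A labeler confirms or fixes the model's output."""
    return prediction if prediction == ground_truth[sample] else ground_truth[sample]

def retrain(memory, corrections):
    """'Training' here is memorizing corrections; a real system would run
    a training job over the corrected dataset instead."""
    memory.update(corrections)
    return memory

ground_truth = {"clip_a": "cyclist", "clip_b": "traffic_cone"}
memory = {}  # untrained model

# Round 1: the model knows nothing, so labelers correct everything.
corrections = {s: labeler_correct(s, toy_model(memory, s), ground_truth)
               for s in ground_truth}
memory = retrain(memory, corrections)

# Round 2: the model now handles both clips without labeler help.
round2 = {s: toy_model(memory, s) for s in ground_truth}
```

The design point is that the labeler only ever sees "confirm or fix this label"; the retraining step is fully automated behind that interface.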
What The Core Autopilot Team Can Focus On
You might wonder, then: if labelers do all this extra work (which is phenomenal), what do the machine learning PhD experts work on? The most obvious thing is improving the learning system so that it becomes better and faster at learning, basically using fewer computing resources to cut down on training time and being able to learn from a smaller data set. Getting 10,000 images of a stop sign is relatively easy. Getting 10,000 examples of an “edge case” would disqualify it from being an edge case. Learning from a smaller sample group is crucial.
However, Tesla can work with even a single edge case example. Some of these experts will be working on turning those single examples into simulations that the car can learn from. Elon has said, I think more than once, that the benefit of a fleet vs. extensive simulation is that you find those weird things you would never have even thought to simulate.
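One way to turn a single recording into many training examples is to sweep the conditions around it. The sketch below is a hypothetical illustration of that idea: real systems would re-render full sensor data in simulation, while here we just vary simple scene parameters. The event name and parameters are invented for the example.

```python
# Hypothetical sketch: multiply one recorded edge case into many training
# variants by sweeping scene conditions. Parameters are illustrative only.
import itertools

def expand_edge_case(base_scene):
    """Generate variants of one recorded edge case by varying conditions."""
    weather = ["clear", "rain", "fog"]
    time_of_day = ["day", "dusk", "night"]
    variants = []
    for w, t in itertools.product(weather, time_of_day):
        variant = dict(base_scene, weather=w, time_of_day=t)
        variants.append(variant)
    return variants

base = {"event": "mattress_on_highway", "speed_kph": 100}
scenes = expand_edge_case(base)  # 9 simulated variants from 1 real example
```

This is also why the fleet matters: simulation can multiply an edge case once you have it, but only real-world driving surfaces the edge cases you would never have thought to simulate.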
Other experts will be working on improving the image/object recognition that labelers rely on to get data, which at the same time results in better auto-labeling and lowers the labelers’ workload.
Then there is a big effort still underway at Tesla to change the code so that all of the camera inputs are stitched into 360-degree video, and all the neural nets get the information they need from that 360-degree video rather than from the separate camera feeds they used to rely on. The older per-camera system still runs concurrently with the 360-degree video system, but once all the neural nets have been migrated and Tesla can eliminate the old system, that will free up a lot of space on the Hardware 3 chip that can be used for more, larger, and more complex neural nets.
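The architectural shift can be sketched in a few lines: instead of each network reading one camera, the feeds are first fused into a single surround view that every downstream network consumes. This is a naive illustration under my own assumptions; real fusion happens in learned feature space, and the camera names here are just an illustrative layout, not Tesla's actual configuration.

```python
# Simplified sketch (not Tesla's code) of fusing per-camera feeds into one
# surround input. Camera names and the naive concatenation are assumptions.

CAMERA_ORDER = ["front_main", "front_wide", "front_narrow",
                "pillar_left", "pillar_right",
                "repeater_left", "repeater_right", "rear"]  # illustrative

def fuse_cameras(frames):
    """Concatenate per-camera rows of pixels into one surround 'frame'.

    `frames` maps camera name -> list of pixel rows. A real system would
    fuse learned features, not raw pixels.
    """
    surround = []
    for cam in CAMERA_ORDER:
        surround.extend(frames[cam])
    return surround

def detector(surround_frame):
    """A downstream net sees one fused input instead of 8 separate feeds."""
    return {"rows_seen": len(surround_frame)}

# One row of pixels per camera, just to exercise the plumbing.
frames = {cam: [["px"] * 4] for cam in CAMERA_ORDER}
surround = fuse_cameras(frames)
```

The payoff described above follows from this shape: once every net reads the fused view, the duplicate per-camera plumbing can be deleted, freeing compute on the chip.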
Side note: I think that once Tesla finishes this migration, it will announce Hardware 4, because Hardware 3 has components like a larger GPU and CPU that are expensive and not really necessary once the old system has been eliminated; part of HW3’s job was to help the transition from HW2.
How This Beats The Competition
So, if Tesla decides to sell this AI-training AI to other companies for purposes other than driving, how is it better than anything currently on the market?
To explain what this means, let’s take a look at what many consider the most advanced robotics company on Earth: Boston Dynamics.
Boston Dynamics has become a bit of an internet sensation thanks to its Spot and Atlas robots, which show absolutely amazing choreography and the ability to walk like actual animals. In fact, as I am writing this, the company just released a video showing the Atlas robot doing parkour. What most people don’t realize is that Boston Dynamics is only solving one single problem, a very hard problem: walking. Those cool dances were pre-programmed and are not as interesting from an AI standpoint. What is more interesting is that all of Boston Dynamics’ robots can be moved with a simple joystick. You tell it where to go and the AI figures out how to get there by walking. It’s not like a little remote-controlled car, where you are actually in control of every movement. Here, you just tell it to move in a certain direction at a certain speed, or, like with a Mars rover, you pick a destination. The parkour is more impressive, since the robot is learning how to traverse complex terrain as a humanoid would.
However, if you want to put Atlas to work in a warehouse, or worse, a factory, it won’t be able to do it. It does not have intelligence, and its situational awareness is limited to not bumping into a wall and opening a door, which is basically a feature like Autopark (which, as we know, doesn’t always work). In any case, the robot’s situational awareness is not programmed to evade a forklift that is coming right at it, or to walk in the safe zone to begin with. Nor does it have the intelligence to understand how A leads to B leads to C and to draw upon previous experience to predict things. Tesla’s AI will make it a lot easier to add the intelligence needed for it to not mindlessly pick up a box, but to reason about how to pick it up.
So, yes, a lot of individual aspects can be programmed without Tesla’s AI package, but it would be a lot easier with that package, which beats out any other offering on the market, especially because the training requires fewer experts and a lot less time. Tesla could even start selling its power-efficient HW3 chip to power these new robots. The AI that makes the AI builds it in a way that will work well on that chip. Technically, if successful, thanks to economies of scale, this could further reduce the price of Tesla’s Hardware 3 chip. Furthermore, Elon has already said that Dojo will be offered as a service but did not go into details, perhaps because Dojo access is part of an AI-building software package.
So, to recap:
- Tesla has automated the labeling of data (well, as much as possible);
- Tesla has automated the tools needed to create and improve AI, so that a much larger number of people with less training, like labelers, can do work previously only machine learning experts could do;
- Tesla’s AI can create AI capable of navigating the real world with intelligence and situational awareness, cracking real-world AI, as Elon has said a bunch of times;
- Tesla can offer a unique & low-power chip that can run complex AI created by Tesla’s AI.
And I just need to say this, just in case, because some people have asked me before: this system as is can never lead to general AI on its own; we are nowhere near anything like that. If anything, this is just huge progress in automating the process of creating narrow AI for specific tasks related to interpreting and interacting with the real world, cracking real-world AI.
So, what kind of robotic products could potentially be made with this that were too difficult or expensive to make before?
You could create a robotic kitchen, automate traditional warehouses, automate parts of a factory that were impossible to automate before, or make the automation more robust by adding intelligence. You could automate some construction tasks, create a robotic maid for the house, and, most importantly, build robots for space. This includes robots for things like asteroid mining, orbital construction, and a clanking self-replicator factory, which is basically a factory that can make construction robots and all the parts needed for another clanking self-replicator factory that those robots can then construct. Elon is already a huge fan of in-situ resource utilization, or ISRU, meaning using local raw materials to create a desired product. SpaceX’s first attempt at this will be sucking in atmospheric gases on Mars and turning them into rocket propellant, and it is very unlikely that he will stop there. Tesla AI Day could be the beginning of a new direction for all of Elon’s companies, one that will make the fog that is the near future clear up quite a bit.
We hope that you enjoyed this article. We will also be making an article/video after AI Day to analyze everything that was said. Elon himself complimented our analysis article on the HW3 chip that was detailed on Autonomy Day, so I hope this one will hit the mark again.
Appreciate CleanTechnica’s originality? Consider becoming a CleanTechnica Member, Supporter, Technician, or Ambassador — or a patron on Patreon.