Tesla is well known for its plans to create a fleet of robotaxis, and it is actively working toward that goal with the development of its Full Self-Driving (FSD) suite. The company and its fans are probably the most vocal about this potentially amazing, life-saving technology. Other companies are developing similar technologies with a focus on fully autonomous driving as well, though, including tech giant NVIDIA.
NVIDIA just announced its own combination of computer architecture, sensor set, and self-driving software (for some scenarios): DRIVE Hyperion 8. The company says that DRIVE Hyperion 8 is designed for the highest levels of functional safety and cybersecurity and is available to purchase today for 2024 vehicle models.
NVIDIA noted that the production platform was designed to be modular, enabling customers to use only what they need. The new platform was announced at the GPU Technology Conference (GTC), where NVIDIA CEO Jensen Huang showed a DRIVE Hyperion 8 vehicle driving autonomously from the NVIDIA headquarters in Santa Clara, CA, to Rivermark Plaza in San Jose, CA.
NVIDIA added that DRIVE Hyperion is scalable and that its artificial intelligence provides a secure base for autonomous vehicle development.
NVIDIA’s DRIVE Orin Platform
In April 2021, NVIDIA announced its new NVIDIA DRIVE Orin high-performance AI platform to power autonomous vehicles. At the time, the company noted that the platform processes over 250 trillion operations per second (250 TOPS) while meeting systematic safety standards such as ISO 26262 ASIL-D.
The company noted that along with the Orin platform, DRIVE Hyperion will provide a secure base for the development of autonomous vehicles. It will include two NVIDIA DRIVE Orin systems-on-a-chip, providing redundancy and fail-over safety along with the computing power needed for Level 4 self-driving. The company added that the “DRIVE Hyperion 8 developer kit also includes NVIDIA Ampere architecture GPUs. This high-performance compute delivers ample headroom for developers to test and validate new software capabilities.”
The Sensor Suite
The sensor suite is made up of 12 cameras, 9 radars, 12 ultrasonic sensors, and 1 front-facing lidar sensor. It is coupled with sensor abstraction tools so that autonomous vehicle manufacturers can customize the platform to their individual needs.
Included in this suite is the long-range Luminar Iris sensor, which will provide front-facing lidar capabilities “using a custom architecture to meet the most stringent performance, safety and automotive-grade requirements.” Luminar’s founder and CEO, Austin Russell, touched upon the common thread between Luminar and NVIDIA.
“NVIDIA has led the modern compute revolution, and the industry sees them as doing the same with autonomous driving,” he said.
“The common thread between our two companies is that our technologies are becoming the de facto solution for major automakers to enable next-generation safety and autonomy. By taking advantage of our respective strengths, automakers have access to the most advanced autonomous vehicle development platform.”
The radar array combines Hella short-range radars with Continental long-range and imaging radars. Sony and Valeo cameras provide the visual sensing, and Valeo ultrasonic sensors help the system measure object distance. Sony’s head of automotive marketing, Marius Evensen, said: “NVIDIA DRIVE Hyperion is a complete platform for developing autonomous vehicles, which is why Sony has integrated its sensors to give our customers the most effective transition from prototype to production vehicles.”
Comparison With Tesla
A big difference between NVIDIA’s sensor suite and Tesla’s FSD hardware is that Tesla no longer uses radar and has never used lidar, focusing 100% on the use of vision (cameras) instead. At Tesla’s AI Day back in August, Tesla showed how it is teaching neural nets to drive based on camera inputs alone. Using only vision simplifies the system beyond just the hardware side.
Sensors are a bitstream and cameras have several orders of magnitude more bits/sec than radar (or lidar).
Radar must meaningfully increase signal/noise of bitstream to be worth complexity of integrating it.
As vision processing gets better, it just leaves radar far behind.
— Elon Musk (@elonmusk) April 10, 2021
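Musk’s “orders of magnitude” claim is easy to sanity-check with back-of-the-envelope arithmetic. The figures below are illustrative assumptions (a generic 1080p camera and a generic automotive radar), not published specs for any particular sensor:

```python
# Rough data-rate comparison: camera vs. radar (assumed, illustrative figures).

# Camera: 1080p RGB, 8 bits per channel, 30 frames per second
camera_bits_per_sec = 1920 * 1080 * 3 * 8 * 30

# Radar: ~1,000 detections per scan, ~64 bits per detection, 20 scans per second
radar_bits_per_sec = 1000 * 64 * 20

ratio = camera_bits_per_sec / radar_bits_per_sec
print(f"camera: {camera_bits_per_sec:.2e} bit/s")
print(f"radar:  {radar_bits_per_sec:.2e} bit/s")
print(f"ratio:  ~{ratio:.0f}x")
```

Even with these generous radar assumptions, a single camera produces roughly three orders of magnitude more raw bits per second than a radar, which is the gap the tweet is pointing at.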
While Tesla is working on its own AI and FSD software, and making the electric vehicles that use them, the automotive competition wants to catch up, and many of them plan to do so with the support of NVIDIA. Will its solution work as well? Will it work better? Or will it be too limited by the kinds of challenges that led Tesla to go 100% vision/camera-based for its self-driving tech?
It’s an exciting future to think about with regard to autonomous driving. The National Highway Traffic Safety Administration (NHTSA) stated that an estimated 20,160 people died in motor vehicle crashes in the first half of 2021 in the USA, and rightfully called this a crisis. We need self-driving technology yesterday.