Published on July 29th, 2016 | by Michael Barnard
Tesla & Google Disagree About LIDAR — Which Is Right?
Tesla rather famously has chosen not to use LIDAR as one of the sensors on its cars to assist with its autonomous features, at least so far. Google uses LIDAR as one of its dominant sensors and insists it’s necessary. With the recent fatality in a Tesla that was operating under Autopilot, Tesla’s choice is under attack. Assessing the competing claims requires understanding the strengths, weaknesses, and compromises inherent in the different sensor types.
There are four types of sensors that provide immediate information about the external environment to autonomous and semi-autonomous vehicles:
- LIDAR — a surveying technology that measures distance by illuminating a target with laser light and timing the reflection. LIDAR is an acronym of Light Detection And Ranging (sometimes Light Imaging, Detection, And Ranging) and was originally created as a portmanteau of “light” and “radar.”
- Radar — an object-detection system that uses radio waves to determine the range, angle, or velocity of objects.
- Ultrasonic — an object-detection system which emits ultrasonic sound waves and detects their return to determine distance.
- Passive Visual — the use of passive cameras and sophisticated object detection algorithms to understand what is visible from the cameras.
Each technology has different strengths and weaknesses.
LIDAR systems are currently large and expensive, and must be mounted outside the vehicle. The system Google uses, for example, is in the range of 80 kg and $70,000, and must be mounted on top of the vehicle with unobstructed sight lines. Because of that mounting position and the sensor’s minimum range, current systems are not useful for detecting anything near the car. Range has improved substantially, from early 30-meter systems up to 150 to 200 meters, with increases in resolution as well, but production systems with higher range and resolution remain expensive. LIDAR works well in all light conditions, but because it uses light-spectrum wavelengths, it starts failing as snow, fog, rain, and dust particles in the air increase. LIDAR cannot detect colour or contrast, and cannot provide optical character recognition capabilities. Narrow-beam LIDAR has been used for 20 years, but current-generation LIDAR used on autonomous cars is less effective for real-time speed monitoring.
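The ranging principle itself is simple time-of-flight: emit a pulse, time the round trip, divide by two. A minimal sketch (the numbers are illustrative, not drawn from any particular sensor):

```python
# Illustrative sketch of the time-of-flight principle behind LIDAR ranging.
C = 299_792_458.0  # speed of light in m/s

def lidar_range_m(round_trip_s: float) -> float:
    """Distance to a target, given the laser pulse's round-trip time."""
    return C * round_trip_s / 2.0

# A pulse returning after ~1 microsecond implies a target ~150 m away,
# right at the top of the ranges quoted above.
print(round(lidar_range_m(1e-6), 1))  # 149.9
```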
Representative manufacturers include Continental AG, LeddarTech, Quanergy, and Velodyne.
Google’s self-driving car solution uses LIDAR as the primary sensor, but uses the rest of the sensor types as well. Tesla’s current solution does not incorporate LIDAR (although its sister firm SpaceX does), and past and current statements indicate that Elon Musk and his team do not believe it is necessary for autonomous cars.
Quanergy is demonstrating a near-production solid-state LIDAR system which is expected to have 150 meters of range, a $250 cost, and adequate resolution. The unit is still larger and more expensive than the other sensor types, but its price and size relative to expected performance would make it a very competitive sensor if it reaches production. This price/performance point might make its inclusion on Teslas more likely, and a Tesla Model S with a conventional LIDAR mounted on top has been spotted.
Solid-state radar-on-a-chip systems are common, small, and inexpensive. They have good range, but poorer resolution than other sensors. They work equally well in light and dark conditions, and 77 GHz systems are better able to sense through fog, rain, and snow, conditions that challenge LIDAR and passive visual systems. Like LIDAR, radar provides no colour, contrast, or optical character recognition. Radar is very effective at determining the relative speed of traffic in current implementations. Their small size makes them well suited to near-proximity detection, though they are less effective than ultrasonic sensors at very short distances.
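Radar’s edge in speed measurement comes from the Doppler effect: the frequency shift of the returned signal maps directly to relative speed. A rough sketch at the 77 GHz band mentioned above (numbers illustrative):

```python
# Illustrative sketch of how radar infers relative speed from Doppler shift.
C = 299_792_458.0   # speed of light, m/s
F_CARRIER = 77e9    # 77 GHz automotive radar band, as mentioned above

def relative_speed_mps(doppler_shift_hz: float) -> float:
    """Closing speed implied by a measured Doppler frequency shift."""
    return doppler_shift_hz * C / (2.0 * F_CARRIER)

# A ~10.3 kHz shift at 77 GHz corresponds to roughly 20 m/s (~72 km/h),
# measured directly rather than inferred from position changes.
```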
Representative manufacturers include Delphi, Kyocera, Valeo, and Visteon.
Ultrasonic sensors actively emit high-frequency sound above the range of human hearing. They have very poor range, but are excellent for very-near-range three-dimensional mapping: because sound waves travel comparatively slowly, differences of a centimetre or less are detectable. They work regardless of light levels and, due to the short distances involved, work equally well in snow, fog, and rain. Like LIDAR and radar, they provide no colour, contrast, or optical character recognition capabilities. Due to their short range, they aren’t useful for gauging speed. They are small and cheap.
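The centimetre-level claim follows from the same time-of-flight arithmetic as LIDAR, only with sound instead of light. A sketch, assuming sound at roughly 343 m/s in air:

```python
# Illustrative sketch of ultrasonic ranging: the same time-of-flight idea
# as LIDAR, but the much slower speed of sound makes short-range timing easy.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed)

def echo_time_s(distance_m: float) -> float:
    """Round-trip echo time for an obstacle at the given distance."""
    return 2.0 * distance_m / SPEED_OF_SOUND

# A 1 cm change in distance shifts the echo by ~58 microseconds, which is
# trivial to time; the same change would shift a light echo by only ~67 ps.
delta = echo_time_s(0.51) - echo_time_s(0.50)
```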
Representative manufacturers include Bosch, Valeo, Murata, and SensorTec.
Camera image recognition systems have become very cheap, small, and high-resolution in recent years. They are less useful for very-close-proximity assessment than for further distances. Their colour, contrast, and optical character recognition capabilities give them an entire capability set missing from all other sensors. They have the best range of any sensor, but only in good light conditions. Their range and performance degrade as light levels dim, coming to depend, as human eyes do, on the car’s headlights. In very bright conditions, it is apparently possible for some implementations to fail to identify light objects against bright skies, which was reportedly a factor in the May 2016 Tesla Autopilot-related fatality in Florida. Digital signal processing makes it possible to determine speed, but not at the level of accuracy of radar or LIDAR systems.
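One way to see why camera-derived speed is less accurate than radar’s Doppler measurement: the camera has to difference two noisy per-frame range estimates over a short frame interval, which amplifies the noise. A toy sketch (frame rate and numbers are assumptions, not from any real system):

```python
# Illustrative sketch of why camera-derived speed is noisier than radar:
# speed comes from differencing two per-frame range estimates, so any
# error in each estimate is amplified by the short frame interval.
FRAME_DT = 1.0 / 30.0  # assume a 30 fps camera

def closing_speed_mps(range_prev_m: float, range_now_m: float) -> float:
    """Closing speed estimated from two successive range estimates."""
    return (range_prev_m - range_now_m) / FRAME_DT

# True closing speed 15 m/s with perfect ranges; but a mere ±0.25 m of
# range noise per frame can swing a single estimate by ±15 m/s, so heavy
# smoothing over many frames is required.
print(closing_speed_mps(50.0, 49.5))  # 15.0
```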
Representative manufacturers include Mobileye, Delphi, Honeywell, and Toshiba.
Most recently, Mobileye has announced that it will no longer supply Tesla with its solution due to disagreements about use, and will focus on fully autonomous solutions instead. Mobileye’s current implementation has acknowledged limitations in resolution and side-collision detection; the latter is expected to be addressed in an upcoming product release.
Performance Variance in Different Conditions
It’s worth looking at the significant variance in sensor performance under different conditions. The charts below (which I created) provide rough approximations of each sensor type’s range and acuity, averaged across various technical implementations.
Range is in meters. Acuity is an asserted value based on a combination of resolution, contrast detection, and colour detection.
Obviously, passive visual has the longest range and best acuity when conditions allow it to be used, and equally obviously, it degrades rapidly in terms of the quality of information it can provide under adverse conditions.
Range in the dark is based on modern headlights illuminating the path forward, but headlights illuminate a much narrower field, and both contrast and colour suffer. The values chosen assume the car is not on a well-lit road, but on a road without significant streetlights and with no lunar illumination. Under a full moon without cloud cover, and/or with lots of roadside lights, visual sensors can gather more information, but shadowing can make identification challenging as well.
What becomes apparent from this assessment is that radar, while not the best sensor under all conditions, degrades the least at ranges necessary to detect vehicles and other objects at higher speeds. LIDAR is better until significant atmospheric murkiness occurs with fog, snow, or heavy rain, but degrades under those conditions.
This assessment raises the question of how much reliance should be placed on what quality of information for autonomous cars. Is the lower acuity of radar sufficient to identify the majority of objects and vehicles, allowing the car to proceed faster, with safety, than a human driver could? Or is the acuity too low, so that speed would have to be dropped to let the other sensors gather enough information to respond in time?
The related question is whether autonomy should be designed around the most reliable information available under all conditions (radar), or around sensor sets which degrade so substantially under more adverse conditions.
LIDAR, radar, camera, and headlight technologies are all proceeding, so this assessment is at a point in time in mid-2016 and may change substantially in the coming years.
A Less-Expensive, Full-Featured Compromise
Tesla assessed these factors, and presumably more, and decided that LIDAR was not required for an effective full sensor set. Depending on the implementation of its sensors, the overlapping chart above shows this to be sensible. The current set of sensors on a Tesla likely has a hardware cost in the same range as a single next-generation, not-yet-production, solid-state LIDAR sensor, yet provides excellent capabilities in most conditions.
What it would appear to lack is good resolution imaging in the dark, where LIDAR has an edge over lower-resolution radar.
Regardless of whether LIDAR is required, no solution based on a single-sensor set, or even a dual-sensor set, is likely to be viable. Each sensor type has strengths and weaknesses, and fusing the readings from multiple sensors into a single representation of reality is required to avoid both false positives and false negatives.
Some early statements from Tesla seemed to indicate that one of its sensors recognized the truck side in Florida, but another didn’t or thought it was a suspended sign. The system resolved the conflict in favour of avoiding a false positive, and Tesla is expected to release a software update shortly which better uses the radar system to avoid this edge condition.
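The conflict-resolution trade-off described above can be pictured with a deliberately simplistic toy sketch; the sensor labels, confidence values, and threshold are all invented, and real fusion systems are vastly more sophisticated:

```python
# Toy illustration of the fusion dilemma: two sensors report on the same
# region, and the system must decide whether an obstacle is real.
# All names, values, and the threshold are invented for illustration.

def fuse(detections: dict[str, float], threshold: float = 0.6) -> bool:
    """Treat an obstacle as real only if average sensor confidence clears
    a threshold. Setting the threshold high avoids false positives
    (phantom braking) at the cost of occasional false negatives."""
    return sum(detections.values()) / len(detections) >= threshold

# Camera is confident it sees open sky; radar weakly flags a large return.
# The averaged confidence (0.4) falls short, so no braking is triggered.
print(fuse({"camera": 0.1, "radar": 0.7}))  # False
```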
No single solution is perfect. Every combination solution has compromises, even if those compromises are of size or differences in degree of awareness in different directions. The sets of sensor technologies will be combined in different ways at different price points of vehicles for different solutions to arrive at effective solutions.
For now, it appears as if Tesla is correct. With the advent of solid-state LIDAR at steadily decreasing price points and with better performance characteristics, that may change soon. [Editor’s Note: I assume developers of any breakthrough LIDAR system would be extremely eager to have Tesla as a client and would be pushing for trial use of their tech, which I imagine Tesla would be happy to implement. In other words, as soon as such potentially breakthrough LIDAR is ready for production, I assume Tesla will be one of the quickest (if not the quickest) to be offered it and to actually implement it in production vehicles.]