Tesla & Google Disagree About LIDAR — Which Is Right?


Tesla rather famously has chosen not to use LIDAR as one of the sensors on its cars to assist with its autonomous features, at least so far. Google uses LIDAR as one of its dominant sensors and insists it’s necessary. With the recent fatality in a Tesla that was operating under Autopilot, Tesla’s choice is under attack. Assessing the competing claims requires understanding the strengths, weaknesses, and compromises inherent in the different sensor types.

There are four types of sensors that provide external and immediate information to autonomous and semi-autonomous vehicles:

  • LIDAR — a surveying technology that measures distance by illuminating a target with laser light. LIDAR is an acronym of Light Detection And Ranging (sometimes Light Imaging, Detection, And Ranging) and was originally created as a portmanteau of “light” and “radar.”
  • Radar — an object-detection system that uses radio waves to determine the range, angle, or velocity of objects.
  • Ultrasonic — an object-detection system that emits ultrasonic sound waves and detects their return to determine distance.
  • Passive Visual — the use of passive cameras and sophisticated object-detection algorithms to understand what is visible from the cameras.

Each technology has different strengths and weaknesses.


LIDAR Attributes

LIDAR systems are currently large and expensive, and must be mounted outside the vehicle. The system Google uses, for example, is in the range of 80 kg and $70,000, and must be mounted on top of the vehicle with unobstructed sight lines. Due to these limitations, current systems are not useful for detecting anything near the car. Range has improved substantially, from early 30 meter systems up to 150 to 200 meters, with increases in resolution as well; at present, production systems with higher range and resolution remain expensive. LIDAR works well in all light conditions, but starts failing as snow, fog, rain, or dust in the air increases, because its light-spectrum wavelengths scatter off airborne particles. LIDAR cannot detect colour or contrast, and cannot provide optical character recognition capabilities. Narrow-beam LIDAR has been used for 20 years, but the current-generation LIDAR used on autonomous cars is less effective for real-time speed monitoring.
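To make the ranging principle concrete, here is a minimal sketch of the time-of-flight calculation LIDAR relies on. The numbers are illustrative, not taken from any particular production unit:

```python
# LIDAR ranging via time of flight: a laser pulse travels to the target
# and back, so distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def lidar_distance(round_trip_seconds: float) -> float:
    """Target distance in meters for a measured laser round trip."""
    return C * round_trip_seconds / 2.0

# A target at the ~200 m edge of current range returns the pulse
# in roughly 1.33 microseconds:
print(lidar_distance(1.334e-6))  # ≈ 200.0
```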

Representative manufacturers include Continental AG, LeddarTech, Quanergy, and Velodyne.

Google’s self-driving car solution uses LIDAR as the primary sensor, but uses the other sensors as well. Tesla’s current solution does not incorporate LIDAR (although its sister firm SpaceX does), and past and current statements indicate that Elon Musk and his team do not believe it is necessary for autonomous cars.

Quanergy is demonstrating a near-production solid-state LIDAR system which is expected to have 150 meters of range, a $250 cost, and adequate resolution. The unit is still larger and more expensive than the other sensor types, but its price and size relative to expected performance would make it a very competitive sensor if it reaches production. This price/performance point might make its inclusion on Teslas more likely, and a Tesla Model S with a conventional LIDAR unit mounted on top has been spotted.


Radar Attributes

Solid-state radar-on-a-chip systems are common, small, and inexpensive. They have good range but poorer resolution than other sensors. They work equally well in light and dark conditions, and 77 GHz systems can sense through fog, rain, and snow better than LIDAR and passive visual systems, which those conditions challenge. Like LIDAR, radar provides no colour, contrast, or optical character recognition. Radar is very effective at determining the relative speed of traffic in current implementations. While their small size suits them to near-proximity mounting, they are less effective than ultrasonic sensors at very short distances.
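For a sense of how radar reads relative speed directly, here is a minimal sketch of the Doppler relationship an automotive radar exploits. The values are illustrative, not any vendor’s implementation:

```python
C = 299_792_458.0  # speed of light in m/s

def relative_speed(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative speed in m/s of a target from the Doppler shift measured
    by a 77 GHz automotive radar: v = f_d * c / (2 * f_carrier).
    Positive means the target is closing."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A shift of ~5.1 kHz corresponds to a closing speed of ~10 m/s (36 km/h):
print(relative_speed(5138.0))  # ≈ 10.0
```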

Representative manufacturers include Delphi, Kyocera, Valeo, and Visteon.


Ultrasonic Attributes

Ultrasonic sensors actively emit high-frequency sound above the level of human hearing. They have very poor range, but are excellent for very-near-range three-dimensional mapping: because sound travels comparatively slowly, differences of a centimetre or less are detectable. They work regardless of light levels and, due to the short distances involved, work equally well in snow, fog, and rain. Like LIDAR and radar, they provide no colour, contrast, or optical character recognition capabilities. Due to their short range, they aren’t useful for gauging speed. They are small and cheap.
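The same time-of-flight arithmetic as LIDAR applies, only with sound instead of light, which is why centimetre-scale precision is easy at short range. A minimal sketch, with assumed illustrative values:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C; varies with temperature

def ultrasonic_distance(echo_seconds: float) -> float:
    """Distance in meters from the round-trip time of an ultrasonic pulse."""
    return SPEED_OF_SOUND * echo_seconds / 2.0

# Sound is roughly a million times slower than light, so each centimetre
# of range adds about 58 microseconds of round-trip time, which is easy
# for inexpensive electronics to resolve.
print(ultrasonic_distance(0.00292))  # ≈ 0.5 m, a typical parking distance
```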

Representative manufacturers include Bosch, Valeo, Murata, and SensorTec.


Passive Visual Attributes

Camera image recognition systems have become very cheap, small, and high-resolution in recent years. They are less useful for very close proximity assessment than for greater distances. Their colour, contrast, and optical character recognition capabilities give them an entire capability set missing from all other sensors. They have the best range of any sensor, but only in good light conditions. Their range and performance degrade as light levels dim, coming to depend, as human eyes do, on the car’s headlights. In very bright conditions, it is apparently possible for some implementations to fail to identify light objects against bright skies, which was reportedly a factor in the May 2016 Tesla Autopilot-related fatality in Florida. Digital signal processing makes it possible to determine speed, but not at the level of accuracy of radar or LIDAR systems.
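Unlike the active sensors, a single camera has no direct range measurement; distance must be inferred. One common approach, sketched below with hypothetical numbers, uses the pinhole-camera model and an assumed real-world object size:

```python
def monocular_distance(real_height_m: float, pixel_height: float,
                       focal_length_px: float) -> float:
    """Rough distance estimate from a single camera via the pinhole model:
    distance = focal_length_px * real_height / pixel_height.
    Depends on knowing (or guessing) the object's true size, which is one
    reason camera-only ranging is less certain than radar or LIDAR."""
    return focal_length_px * real_height_m / pixel_height

# A car ~1.5 m tall spanning 30 pixels, seen through a lens with a
# 1,000-pixel focal length, is about 50 m away:
print(monocular_distance(1.5, 30.0, 1000.0))  # 50.0
```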

Representative manufacturers include Mobileye, Delphi, Honeywell, and Toshiba.

Most recently, Mobileye has announced that it will no longer supply Tesla with its solution due to disagreements about its use, and will instead focus on fully autonomous solutions. Mobileye’s current implementation has acknowledged limitations in resolution and side-collision detection; the latter is expected to be included in an upcoming product release.


Performance Variance in Different Conditions

It’s worth looking at the significant variance in sensor performance under different conditions. The charts below (which I created) provide approximations of each sensor type’s range and acuity, based on rough averages across various technical implementations.

Range is in meters. Acuity is an asserted value based on a combination of resolution, contrast detection, and colour detection.

[Charts: approximate range and acuity for each sensor type under varying light and weather conditions]

Obviously, passive visual has the longest range and best acuity when conditions allow it to be used; equally obviously, the quality of information it can provide degrades rapidly under adverse conditions.

Range in the dark is based on modern headlights illuminating the path forward, but headlights illuminate a much narrower field, and both contrast and colour suffer. The values chosen assume the car is not on a well-lit road, but on a road without significant streetlights and without lunar illumination. Under a full moon without cloud cover and/or with lots of roadside lights, visual sensors can gather more information, but shadowing can make identification challenging as well.

What becomes apparent from this assessment is that radar, while not the best sensor under all conditions, degrades the least at ranges necessary to detect vehicles and other objects at higher speeds. LIDAR is better until significant atmospheric murkiness occurs with fog, snow, or heavy rain, but degrades under those conditions.

This assessment raises the question of how much reliance should be placed on what quality of information in autonomous cars. Is the lower acuity of radar sufficient to identify the majority of objects and vehicles, allowing the car to proceed safely at higher speeds than a human driver could? Or is the acuity too low, forcing speed to drop so that the other sensors can gather enough information to respond in time?
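One way to frame the trade-off is to ask what detection range a sensor must retain at a given speed. A minimal sketch, using assumed reaction-time and braking figures rather than measured ones:

```python
def required_detection_range(speed_ms: float, reaction_s: float = 0.5,
                             decel_ms2: float = 6.0) -> float:
    """Distance in meters needed to react and brake to a stop:
    range = v * t_react + v^2 / (2 * a).
    The 0.5 s reaction time and 6 m/s^2 deceleration are assumptions."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2.0 * decel_ms2)

# At 30 m/s (~110 km/h) a system needs roughly 90 m of reliable range,
# which is why degraded sensor range forces lower speeds:
print(required_detection_range(30.0))  # 90.0
```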

The other question is whether autonomy should be designed around the information that remains reliable under all conditions — radar — or around sensor sets which degrade so substantially under more adverse conditions.

LIDAR, radar, camera, and headlight technologies are all advancing, so this assessment reflects a point in time in mid-2016 and may change substantially in the coming years.


A Less-Expensive, Full-Featured Compromise

[Chart: overlapping coverage of radar, passive visual, and ultrasonic sensors]

Tesla assessed these factors, and presumably more, and arrived at the decision that LIDAR was not required for an effective full sensor set. Depending on how its sensors are implemented, the overlapping coverage chart above shows that this is sensible. The current set of sensors on a Tesla likely has a hardware cost in the same range as a single next-generation, not-yet-production, solid-state LIDAR sensor, but can provide excellent capabilities in most conditions.

What it would appear to lack is good resolution imaging in the dark, where LIDAR has an edge over lower-resolution radar.


Regardless of whether LIDAR is required or not, no solution based on a single-sensor or even dual-sensor set is likely to be viable. Each sensor type has strengths and weaknesses, and fusing the sensors into a single representation of reality is required to avoid both false positives and false negatives.
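As an illustration of why fusion matters, here is a minimal, hypothetical sketch of cross-sensor confirmation. It is not Tesla’s or Google’s actual pipeline; the thresholds and structure are assumptions chosen to show how agreement between sensors suppresses false positives while a strong single-sensor detection still prompts caution:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g. "radar", "camera", "ultrasonic"
    confidence: float  # 0.0 to 1.0

def fuse(detections: list[Detection]) -> str:
    """Accept an obstacle only when two independent sensors agree;
    treat a single very confident detection as grounds for caution."""
    confident_sensors = {d.sensor for d in detections if d.confidence >= 0.5}
    if len(confident_sensors) >= 2:
        return "obstacle confirmed: brake"
    if any(d.confidence >= 0.9 for d in detections):
        return "possible obstacle: slow and re-measure"
    return "clear: continue"

# Radar sees something clearly; the camera reads it as sky or a sign:
print(fuse([Detection("radar", 0.95), Detection("camera", 0.2)]))
# -> possible obstacle: slow and re-measure
```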

Some early statements from Tesla seemed to indicate that one of its sensors recognized the side of the truck in Florida, but another didn’t, or interpreted it as a suspended sign. The system resolved the conflict in favour of avoiding a false positive, and Tesla is expected to release a software update shortly which makes better use of the radar system to avoid this edge condition.

No single solution is perfect. Every combination has compromises, even if only in size or in degree of awareness in different directions. Sensor technologies will be combined in different ways at different vehicle price points to arrive at effective solutions.

For now, it appears as if Tesla is correct. With the advent of solid-state LIDAR at steadily decreasing price points, and with better performance characteristics, that may change soon. [Editor’s Note: I assume developers of any breakthrough LIDAR system would be extremely eager to have Tesla as a client and would be pushing for trial use of their tech, which I imagine Tesla would be happy to implement. In other words, as soon as such potentially breakthrough LIDAR is ready for production, I assume Tesla will be one of the quickest (if not the quickest) to be offered it and to actually implement it in production vehicles.]

→ Related: Tesla Has The Right Approach To Self-Driving Cars


