The Institute of Electrical and Electronics Engineers (IEEE) just released a report titled Safety Implications of Variability in Autonomous Driving Assist Alerting. There is something odd about one of the authors that many in the Tesla community are trying to understand. This author serves as the Independent Director of Veoneer, a role that isn’t listed on her Duke Pratt School of Engineering profile page but does appear on her Wikipedia page. Veoneer offers a range of products, including radar, lidar, thermal night vision cameras, vision systems, advanced driver assistance systems, and autonomous driving software.
The issue that many are having with this particular author is what seems to be her bias against Tesla. This isn’t a call to troll or harass her! Looking at her bio, I do think she is highly intelligent and cares about the safety of autonomous driving. But Veoneer competes with Tesla in various ways. “Veoneer counts all major automakers as its customers,” the Wikipedia page states. All major automakers? Not Tesla, of course.
The report looked at the reliability of driver-alerting systems when linked to autonomy. Three Tesla Model 3 vehicles were included in the assessment. The tests were performed on a highway and on a closed test track to evaluate road departure and construction zone detection capabilities.
According to the report, there were significant variations in a number of metrics related to driver monitoring, alerting, and safe operation of the underlying autonomy. The report added that the results suggest a post-deployment regulatory process is ill-equipped to flag significant issues in vehicles with embedded artificial intelligence.
The report concluded that the results from its testing of the three Tesla Model 3 vehicles showed that the performance of the underlying AI and computer vision systems was extremely variable. The report also noted that in some cases, the cars seemed to perform best during the most challenging driving scenarios while performing worse on seemingly simpler ones, such as detecting a road departure. This could be because engineers spend more effort on the more difficult problems and less time on seemingly easy ones.
The report stated that more testing is needed before such technology is allowed to operate without humans in direct control. Do we really need an IEEE report to state the obvious, though? Who couldn’t have told you this?
The report stated at the end, “Waiting until after new autonomous software has been deployed to find flaws can be deadly and can be avoided by adaptable regulatory processes. The recent series of fatal Tesla crashes underscores this issue. It may be that any transportation system (or any safety-critical system) with embedded artificial intelligence should undergo a much more stringent certification process across numerous platforms and software versions before it should be released for widespread deployment. To this end, our current derivative efforts are focused on developing risk models based on such results.”
It’s unclear which “recent series of fatal Tesla crashes” the authors were referring to. Perhaps they are referencing the Houston area crash, but it was later clarified that Autopilot wasn’t on during that crash. There was one other crash earlier this year in which a Tesla Model 3 drove into an overturned semi truck on the 210 Freeway in California and the driver died. It has not been confirmed, but it seems likely the driver had Autopilot on. Otherwise, though, it’s unclear what the authors are referencing. Also worth keeping in mind is that there’s a potential cost/death toll that comes with not rolling out ADAS as soon as the automaker thinks it’s ready.
You can find the full report here.