One of the topics that comes up continually in discussions about autonomous driving technologies is potential vulnerability to hacking and malicious interference. How susceptible will autonomous vehicles be to “autonomous hijacking,” or to snoopers and stalkers for that matter? Can the sensors of such vehicles be easily spoofed to cause accidents?
With those and related questions in mind, researchers at the Chinese security firm Qihoo 360, the University of South Carolina, and Zhejiang University in China recently went to work on Tesla’s Autopilot system to see what they could do.
The findings? That there are potential faults that would allow “highly motivated people to cause personal damage or property damage.”
Exploiting those faults wouldn’t be particularly cheap or practical, though, with the attacks relying in some cases on a $90,000 signal generator, and in others on a full suite of equipment (a function generator, an ultrasonic transducer, and an Arduino computer).
For some background here: current Autopilot-enabled Tesla vehicles rely on cameras, radar, and ultrasonic sensors to provide the inputs used by the driving software.
Autoblog provides some details, stating that, “to start, (USC Professor Wenyuan) Xu’s team stuck a $90,000 signal generator on a cart in front of a stationary Model S, to simulate following another vehicle. After switching the rig on, the ‘car’ vanished from the Tesla’s sensors without warning. Obviously, $90,000 jammers are neither easy to acquire or very portable.” Indeed.
Continuing: “But where jamming the radar is a pricey proposition, the ultra-sonic sensors are a far easier target, albeit at lower speeds. The ultra-sonic sensors manage the Model S’ self-parking and summon features, but using equipment that costs as little as $40 — a function generator, an Arduino computer, and an ultra-sonic transducer — Xu’s team manipulated the sensors to either see an object that wasn’t there or cloak one that was.
“The Tesla’s camera systems were the most resilient to attacks — the researchers shined lasers and LEDs at the cameras to try and blind them. Xu’s team managed to kill a few pixels on the camera sensors, but AutoPilot simply shut down and warned its driver to take the wheel when they tried to jam the camera.”
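For a sense of why the ultrasonic spoofing described above works: parking sensors of this type generally measure distance by time-of-flight, emitting an ultrasonic ping and timing the first echo that comes back. The minimal sketch below (plain C++, with a made-up echo timeline; this is not Tesla’s or the researchers’ actual code) shows how an attacker’s pulse arriving before the genuine echo produces a phantom “closer” obstacle, while swamping the receiver so no echo registers makes a real obstacle vanish:

```cpp
#include <iostream>
#include <optional>
#include <string>
#include <vector>

// Speed of sound in air, m/s (approximate, at room temperature).
constexpr double SPEED_OF_SOUND = 343.0;

// A time-of-flight sensor of this general type reports the distance implied
// by the FIRST echo it hears above its detection threshold.
std::optional<double> distanceFromEchoes(const std::vector<double>& echo_times_s) {
    if (echo_times_s.empty()) {
        return std::nullopt;  // no echo heard: "nothing there"
    }
    double first = echo_times_s.front();  // assume sorted, earliest first
    // Round trip: the ping travels out and back, so halve the path length.
    return first * SPEED_OF_SOUND / 2.0;
}

int main() {
    // Genuine obstacle 2 m away: its echo returns after ~11.7 ms.
    double genuine_echo = 2.0 * 2 / SPEED_OF_SOUND;

    // Normal operation: only the genuine echo is heard; reading is ~2 m.
    std::cout << "Normal reading:  "
              << *distanceFromEchoes({genuine_echo}) << " m\n";

    // Spoofing: an attacker's transducer fires a pulse that arrives at 3 ms,
    // before the genuine echo. The sensor reports a phantom obstacle ~0.5 m away.
    std::cout << "Spoofed reading: "
              << *distanceFromEchoes({0.003, genuine_echo}) << " m\n";

    // Cloaking/jamming: continuous noise swamps the receiver so no discrete
    // echo is detected; the real 2 m obstacle effectively disappears.
    auto jammed = distanceFromEchoes({});
    std::cout << "Jammed reading:  "
              << (jammed ? std::to_string(*jammed) + " m" : "no obstacle detected")
              << "\n";
}
```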
Commenting on the research and cautioning against overreactions to the findings, Xu stated (to Wired): “I don’t want to send out a signal that the sky is falling, or that you shouldn’t use Autopilot. These attacks actually require some skills. (Tesla) need to think about adding detection mechanisms as well. If the noise is extremely high, or there’s something abnormal, the radar should warn the central data processing system and say ‘I’m not sure I’m working properly.’”
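The detection mechanism Xu describes could take the form of a simple self-check in the sensor firmware. The sketch below is illustrative only, not anything from Tesla’s stack: the SensorFrame structure and both thresholds are invented for the example. It flags a frame as suspect when the noise floor is abnormally high or the reported distance changes faster than physics plausibly allows:

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical per-measurement data; real sensor firmware would define its own.
struct SensorFrame {
    double distance_m;    // reported obstacle distance
    double noise_floor;   // received background energy, arbitrary units
    double dt_s;          // time since the previous frame
};

// Invented thresholds, for illustration only.
constexpr double MAX_NOISE = 5.0;        // "extremely high" noise level
constexpr double MAX_CLOSING_MPS = 15.0; // fastest plausible range change

// Returns true if the sensor should tell the central data processing
// system "I'm not sure I'm working properly," per Xu's suggestion.
bool frameLooksSuspect(const SensorFrame& prev, const SensorFrame& cur) {
    if (cur.noise_floor > MAX_NOISE) {
        return true;  // receiver is being swamped: possible jamming
    }
    double rate = std::abs(cur.distance_m - prev.distance_m) / cur.dt_s;
    if (rate > MAX_CLOSING_MPS) {
        return true;  // obstacle "teleported": possible spoofed echo
    }
    return false;
}

int main() {
    SensorFrame prev{2.0, 0.4, 0.05};
    SensorFrame spoofed{0.5, 0.6, 0.05};  // phantom obstacle appears instantly
    std::printf("suspect: %s\n", frameLooksSuspect(prev, spoofed) ? "yes" : "no");
}
```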
Tesla issued a reply when asked about the work, stating: “We have reviewed these results with Wenyuan’s team and have thus far not been able to reproduce any real-world cases that pose risk to Tesla drivers.”
The researchers behind the work will be presenting it at the upcoming DEFCON hacker conference.
Image: Screenshot of Tesla Model X & Autopilot test drive review.