10 Bonus Autonomous Driving News Stories
June 29th, 2020 by Zachary Shahan
Continuing our series of end-of-month news roundups for stories we couldn’t dedicate full pieces to but still seemed noteworthy, here are 10 autonomous driving stories worth a gander.
1. Venti Technologies, a Chinese startup which claims to be “the leader in safe-speed autonomous vehicles,” announced this month that it has put two autonomous SAIC-GM-Wuling Automobile SUVs on the road at a school in Nanning City, the capital of Guangxi Province, China.
“The SUVs, which operate at a maximum speed of 15 km per hour, provide shuttle transportation services to students and visitors, and are easily booked via a hailing app for destinations along a 9-station loop. Venti Technologies’ flexible, algorithmic-based autonomous vehicle technology has been installed in the SUVs which run on a 3K loop in opposite directions. The Company’s sensor configuration eliminates blind spots and is able to operate with mixed traffic and other road users including, for example, cars, scooters and pedestrians. The Venti-enabled SUVs are also able to overtake lower speed vehicles while navigating incoming vehicles from the other direction.”
2. Carnegie Mellon University researchers have developed improved methods for training autonomous cars. “Generally speaking, the more road and traffic data available for training tracking systems, the better the results. And the CMU researchers have found a way to unlock a mountain of autonomous driving data for this purpose. …
“Most autonomous vehicles navigate primarily based on a sensor called a lidar, a laser device that generates 3D information about the world surrounding the car. This 3D information isn’t images, but a cloud of points. One way the vehicle makes sense of this data is by using a technique known as scene flow. This involves calculating the speed and trajectory of each 3D point. Groups of points moving together are interpreted via scene flow as vehicles, pedestrians or other moving objects.
“In the past, state-of-the-art methods for training such a system have required the use of labeled datasets — sensor data that has been annotated to track each 3D point over time. Manually labeling these datasets is laborious and expensive, so, not surprisingly, little labeled data exists. As a result, scene flow training is instead often performed with simulated data, which is less effective, and then fine-tuned with the small amount of labeled real-world data that exists.
“Mittal, Held and robotics Ph.D. student Brian Okorn took a different approach, using unlabeled data to perform scene flow training. Because unlabeled data is relatively easy to generate by mounting a lidar on a car and driving around, there’s no shortage of it.
“The key to their approach was to develop a way for the system to detect its own errors in scene flow. At each instant, the system tries to predict where each 3D point is going and how fast it’s moving. In the next instant, it measures the distance between the point’s predicted location and the actual location of the point nearest that predicted location. This distance forms one type of error to be minimized.
“The system then reverses the process, starting with the predicted point location and working backward to map back to where the point originated. At this point, it measures the distance between the predicted position and the actual origination point, and the resulting distance forms the second type of error.
“The system then works to correct those errors.
“‘It turns out that to eliminate both of those errors, the system actually needs to learn to do the right thing, without ever being told what the right thing is,’ Held said.”
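The two self-supervised error terms described above can be sketched in a few lines of numpy. This is a toy illustration of the idea only, not the CMU code; the function names and the simple translating point cloud are invented for the demo.

```python
import numpy as np

def nn_error(pred, target):
    """First error term: mean distance from each predicted 3D point
    to the nearest actual point in the next frame's cloud."""
    # Pairwise distances between predicted and actual points: (N_pred, N_target)
    d = np.linalg.norm(pred[:, None, :] - target[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def cycle_error(points_t, flow_fwd, flow_bwd):
    """Second error term: warp points forward by the predicted flow,
    then backward by the reverse flow; measure the distance back to
    the original positions."""
    cycled = points_t + flow_fwd + flow_bwd
    return np.linalg.norm(cycled - points_t, axis=-1).mean()

# Toy example: a 100-point cloud translating by (1, 0, 0) between frames.
rng = np.random.default_rng(0)
points_t = rng.normal(size=(100, 3))
points_t1 = points_t + np.array([1.0, 0.0, 0.0])

perfect_flow = np.tile([1.0, 0.0, 0.0], (100, 1))
print(nn_error(points_t + perfect_flow, points_t1))       # → 0.0
print(cycle_error(points_t, perfect_flow, -perfect_flow))  # → 0.0
```

In a real training loop, both errors would be computed from a neural network's flow predictions and minimized jointly; because neither term needs human-labeled point correspondences, the raw lidar logs themselves become training data.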
3. UC Riverside engineers were awarded a $1.2 million National Science Foundation grant “to develop a new generation of energy-efficient, energy-elastic, and real-time-aware GPUs suitable for use in resource-constrained environments such as emerging embedded and autonomous systems, including aerial drones and autonomous vehicles. …
“Effective GPUs for autonomous systems need to be energy efficient and able to execute workloads in real time. For example, for an autonomous vehicle to safely navigate on the road, it has to be able to process various sensor information, such as camera and Lidar, and make a decision within milliseconds to prevent the vehicle from crashing.
“However, modern embedded GPUs have several limitations when used in autonomous system settings. GPUs tend to be energy inefficient, leading to insufficient computational power and limited autonomous system capability. GPU hardware and software need to be timing aware to successfully perform real-time operations and meet the workload deadlines required to provide correct and safe operations.
“To solve these issues, the UC Riverside project will provide solutions that span both software and hardware in order to enable real-time embedded GPUs for autonomous systems.”
4. IIHS has a downer piece out postulating that self-driving vehicles may not reduce road accidents as much as expected or hoped. “Driver mistakes play a role in virtually all crashes. That’s why automation has been held up as a potential game changer for safety. But autonomous vehicles might prevent only around a third of all crashes if automated systems drive too much like people, according to a new study from the Insurance Institute for Highway Safety.” And, after all, we’re training them to drive like humans, right?
“It’s likely that fully self-driving cars will eventually identify hazards better than people, but we found that this alone would not prevent the bulk of crashes,” says Jessica Cicchino, IIHS vice president for research and a coauthor of the study.
“Conventional thinking has it that self-driving vehicles could one day make crashes a thing of the past. The reality is not that simple. According to a national survey of police-reported crashes, driver error is the final failure in the chain of events leading to more than 9 out of 10 crashes.”
5. Guidehouse Insights (formerly Navigant Research, and before that Pike Research) has a report out with the forecast that “significant deployments for more than a few hundred thousand light-duty, highly automated vehicles are not expected until after 2025, with global volumes expected to reach approximately 13.1 million in 2030.
“Although 2020 was long projected as the turning point for when automated vehicles would start being widely deployed and adopted, the reality is that AD technology has proven far more challenging to develop and validate than anticipated, the market researcher said.”
6. Carnegie Mellon University researchers (again) made another step forward in the field (perhaps) by showing “that they can significantly improve detection accuracy by helping the vehicle also recognize what it doesn’t see.” It seems like that’s, well, sort of obvious. And you’d think it would be incorporated into self-driving systems. But apparently that’s not necessarily the case. “The very fact that objects in your sight may obscure your view of things that lie further ahead is blindingly obvious to people. But Peiyun Hu, a Ph.D. student in CMU’s Robotics Institute, said that’s not how self-driving cars typically reason about objects around them.”
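Visibility of this kind is commonly computed by ray casting: every lidar return implies that the space between the sensor and the return was observed to be empty, while space beyond the return remains unknown. Here is a toy 2D grid sketch of that reasoning — illustrative only, not the CMU implementation, with invented function names:

```python
import numpy as np

def visibility_grid(sensor, returns, size=10):
    """Classify cells of a 2D grid as 'free' (the ray passed through),
    'occupied' (a lidar return landed there), or 'unknown' (occluded,
    i.e. behind a return and never observed)."""
    grid = np.full((size, size), "unknown", dtype=object)
    for hit in returns:
        hit = np.asarray(hit, dtype=float)
        direction = hit - sensor
        # Step densely along the ray from the sensor to the return.
        steps = int(np.linalg.norm(direction) * 4) + 1
        for t in np.linspace(0.0, 1.0, steps):
            cell = tuple(np.floor(sensor + t * direction).astype(int))
            grid[cell] = "free"  # space before the return was seen through
        grid[tuple(hit.astype(int))] = "occupied"
    return grid

sensor = np.array([0.0, 0.0])
grid = visibility_grid(sensor, returns=[(5, 0)])
print(grid[2, 0], grid[5, 0], grid[8, 0])  # → free occupied unknown
```

The cell at (8, 0) is the interesting one: a detector that treats it the same as genuinely empty space is ignoring the occlusion, which is the gap the CMU work aims to close.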
7. South China Morning Post has an article out highlighting that “5G is considered an essential element in China’s autonomous driving road map.” This is not a new claim, but the article piles up some industry context to explain it. “Industry studies have concluded that 5G can reduce the high cost of on-board equipment by shifting some computing power off vehicle,” is one key component of the argument.
8. Audi has opened up an Automated Driving Development R&D office in Silicon Valley. The office is “focused on creating Advanced Driver Assistance Systems (ADAS) technologies specifically for the North American market.” The office plans to hire 60 people.
“To start, A2D2 has outfitted several Audi Q7 development vehicles with roof-mounted sensor kits to collect data and help software engineers develop tools that power tomorrow’s vehicles and enhance the driving experience in an Audi. The A2D2 development vehicles are wrapped in a QR code that links to a webpage with the latest Audi automated driving breakthroughs and developments. Designers working in the Audi Design Loft in Malibu, California, created the unique graphics and logo specifically for A2D2.
“The fleet of testing vehicles will be used for data acquisition to develop various cloud-based automated driver-assistance functions planned for introduction by 2023.”
9. The US Department of Transportation announced the initial participants in a new DOT initiative aimed at improving the safety and testing transparency of automated driving systems. This is called the Automated Vehicle Transparency and Engagement for Safe Testing (AV TEST) Initiative.
The participating companies are Beep, Cruise, Fiat Chrysler Automobiles, Local Motors, Navya, Nuro, Toyota, Uber, and Waymo. The participating states are California, Florida, Maryland, Michigan, Ohio, Pennsylvania, Texas, and Utah.
The AV TEST Initiative will include a series of public events across the country to improve transparency and safety in the development and testing of automated driving systems.
10. Auburn University is building a whole new autonomous driving research facility. “The facility is expected to provide a garage with multiple bays and lifts for commercial trucks and passenger vehicles, office space for researchers, a conference room and an observation area overlooking NCAT’s 1.7-mile oval test track.
“The building, estimated to cost approximately $800,000, will be one of the few autonomous research facilities in the nation attached to a test track.”