Driver Training For Driverless Cars


Researchers at MIT have devised a system for teaching autonomous driving systems how to drive. The technology is called VISTA, which stands for Virtual Image Synthesis and Transformation for Autonomy. It creates a digital laboratory where self-driving algorithms can be subjected to simulated real-world driving situations without a vehicle ever leaving the lab.

MIT autonomous driving program
Image credit: MIT

According to an MIT press release, VISTA starts from only a small dataset recorded while humans drive real vehicles. From that data, it synthesizes a nearly infinite number of new viewpoints from trajectories the vehicle could take in the real world. The self-driving algorithm is rewarded for the distance it travels without crashing, which requires it to learn by itself how to reach a destination safely. In doing so, the vehicle learns to safely navigate any situation it encounters, including regaining control after swerving between lanes or recovering from near crashes.

Driving On Unfamiliar Roads

In tests, a self-driving computer program trained in the VISTA simulator was able to navigate unfamiliar streets. When placed in situations that mimicked various near-crash scenarios, it was able to steer the car back onto a safe driving trajectory within a few seconds. A paper describing the system has been published in IEEE Robotics and Automation Letters.

“It’s tough to collect data in these edge cases that humans don’t experience on the road,” says lead author Alexander Amini, a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory. “In our simulation, however, control systems can experience those situations, learn for themselves to recover from them, and remain robust when deployed onto vehicles in the real world.”

First, the researchers collect video data from a human driver and feed that into a computer simulation that projects every pixel onto a 3D point cloud. Then, they place a virtual vehicle inside that world. When the vehicle makes a steering command, the program synthesizes a new trajectory through the point cloud based on the steering curve and the vehicle’s orientation and velocity.
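The first step, projecting every camera pixel into a 3D point cloud, can be illustrated with the standard pinhole-camera back-projection. This is a minimal sketch: the depth values and camera intrinsics (`fx`, `fy`, `cx`, `cy`) below are invented for illustration, and the actual VISTA pipeline is considerably more involved.

```python
import numpy as np

def pixels_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into a 3D point cloud
    using the pinhole camera model: x = (u - cx) * z / fx, etc."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)

# Toy 4x4 depth map: every pixel is 2 meters from the camera
depth = np.full((4, 4), 2.0)
cloud = pixels_to_point_cloud(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
```

The pixel at the principal point (`cx`, `cy`) lands directly on the camera's optical axis, so its 3D coordinates are simply (0, 0, depth).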

Then, the program uses that new trajectory to render a photo-realistic scene. To do so, it uses a convolutional neural network — commonly used for image-processing tasks — to estimate a depth map, which contains information relating to the distance of objects from the controller’s viewpoint. It then combines the depth map with a technique that estimates the camera’s orientation within a 3D scene. That all helps pinpoint the vehicle’s location and relative distance from everything within the virtual simulator.

Based on that information, it re-orients the original pixels to recreate a 3D representation of the world from the vehicle's new viewpoint. It also tracks the motion of the pixels to capture the movement of the cars, people, and other moving objects in the scene. "This is equivalent to providing the vehicle with an infinite number of possible trajectories," says Daniela Rus, director of the computer science lab at MIT. "Because when we collect physical data, we get data from the specific trajectory the car will follow. But we can modify that trajectory to cover all possible ways and environments of driving. That's really powerful."
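The geometric core of this re-orientation step can be sketched as a reprojection: take the 3D points, apply a new camera pose (here a yaw rotation and a translation, both hypothetical parameters chosen for the example), and project them back to pixel coordinates. The real system goes further and renders photorealistic images with a neural network; this only shows the underlying geometry.

```python
import numpy as np

def reproject(points, yaw, t, fx, fy, cx, cy):
    """Project 3D points (N, 3) into a camera rotated by `yaw`
    radians about the vertical axis and translated by `t`."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[ c,  0.0, s ],
                  [0.0, 1.0, 0.0],
                  [-s,  0.0, c ]])
    p = (points - t) @ R.T            # world -> new camera frame
    u = fx * p[:, 0] / p[:, 2] + cx   # pinhole projection
    v = fy * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=-1)

pts = np.array([[0.0, 0.0, 5.0]])     # one point 5 m straight ahead
uv = reproject(pts, yaw=0.0, t=np.zeros(3),
               fx=100.0, fy=100.0, cx=64.0, cy=48.0)
```

With zero rotation and translation the point projects onto the principal point (64, 48); shifting the virtual camera sideways or turning it moves every pixel accordingly, which is how new trajectories yield new views.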

Thinking Like A Computer

Instead of creating a computer program that mimics what a human driver would do, the researchers make their algorithm learn entirely from scratch under an “end-to-end” framework, meaning it begins by taking input from raw sensor data — such as visual observations of the road — and predicts steering commands as outputs. “We basically say, ‘Here’s an environment. You can do whatever you want. Just don’t crash into vehicles, and stay inside the lanes,’” Amini says.

This requires “reinforcement learning,” a trial-and-error machine learning technique that provides feedback signals whenever the car makes an error. The computer program starts off knowing nothing about how to drive, what a lane marker is, or even what other vehicles look like, so it starts executing random steering angles. It gets a feedback signal only when it crashes. At that point, it gets teleported to a new simulated location and has to execute a better set of steering angles to avoid crashing again. Over 10 to 15 hours of training, it uses these sparse feedback signals to learn to travel greater and greater distances without crashing.
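Learning from a sparse crash-only signal can be illustrated with a toy example. Everything below is invented for illustration: a one-dimensional "lane" with random drift stands in for the road, and a tabular Q-learning agent stands in for VISTA's neural controller, which actually learns from raw camera images. The one thing the sketch shares with the article's description is the reward structure: zero feedback until a crash, then a penalty and a teleport back to a safe state.

```python
import random

ACTIONS = [-1, 0, 1]  # steer left, go straight, steer right

def step(offset, action):
    """Advance the toy car's lateral offset with random drift.
    Crashing (|offset| > 2) is the ONLY feedback: penalty -1,
    then the car is 'teleported' back to the lane center."""
    offset = offset + action + random.choice([-1, 0, 1])
    if abs(offset) > 2:
        return 0, -1.0, True
    return offset, 0.0, False

def train(steps=2000, eps=0.1, alpha=0.5, gamma=0.9):
    """Tabular Q-learning driven purely by sparse crash penalties."""
    random.seed(0)
    q = {(o, a): 0.0 for o in range(-2, 3) for a in ACTIONS}
    offset = 0
    for _ in range(steps):
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: q[(offset, x)]))
        nxt, r, crashed = step(offset, a)
        best_next = max(q[(nxt, x)] for x in ACTIONS)
        q[(offset, a)] += alpha * (r + gamma * best_next - q[(offset, a)])
        offset = nxt
    return q

q = train()
# At the right edge of the lane, steering left should look at
# least as good as steering further right (toward a crash).
best_at_right_edge = max(ACTIONS, key=lambda a: q[(2, a)])
```

Because the only reward is a crash penalty, every Q-value stays at or below zero, and the agent's "knowledge" consists entirely of which actions it has learned to avoid near the lane edges.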

Applying Lessons Learned To Real World Driving

After the program successfully drove 10,000 kilometers in simulation, the researchers applied it to an actual autonomous vehicle and let it operate in the real world. They claim this is the first time a program trained using end-to-end reinforcement learning in simulation has been deployed successfully in a real-world driving experiment. "That was surprising to us. Not only has the program never been on a real car before, but it's also never even seen the roads before and has no prior knowledge on how humans drive," Amini says.

Forcing the program to run through all types of driving scenarios enables it to regain control from disorienting positions — such as being half off the road or in another lane — and steer back into the correct lane within several seconds. "And other state-of-the-art controllers all tragically failed at that, because they never saw any data like this in training," Amini says.

Taking It To The Next Level

Next, the researchers hope to simulate all types of road conditions from a single driving trajectory, such as night and day, and sunny and rainy weather. They also hope to simulate more complex interactions with other vehicles on the road. “What if other cars start moving and jump in front of the vehicle?” Rus says. “Those are complex, real-world interactions we want to start testing.”

Astute readers will recognize an important difference between what the MIT researchers are doing and what companies like Waymo and Tesla are doing to create autonomous driving systems. Both companies are trying to train computers to drive like humans. The MIT researchers are training a computer to think like a computer.

The MIT computer program is far less sophisticated than the autonomous driving programs Waymo and Tesla have created, but by changing the focus of how a self-driving computer works, the MIT approach could get the world of transportation to a Level 4 or Level 5 autonomous driving future before either company gets there.


Steve Hanley

Steve writes about the interface between technology and sustainability from his home in Florida or anywhere else The Force may lead him.