I know that headline alone is going to give a few people on here an aneurysm, but before you get mad enough to fall over dead (or faint like a goat), let’s look at what I’m not saying first. I’m not going to argue in this article that we don’t want self-driving cars, nor am I saying that we don’t want what Tesla is aiming for when it says “Full Self-Driving.” What I do want to argue here is that “Level 5 autonomy,” as defined by SAE, isn’t realistic or even desirable.
Humans Aren’t Level 5
This article was inspired, in part, by an article in June by Alex Roy. If you’re not familiar with him, he’s the guy who set the first Cannonball Run record in decades, going from New York to Los Angeles in 31 hours, 4 minutes. Today, he works for Argo AI, a company that works with Ford and Lyft to develop autonomous vehicles. Some people think he’s a Tesla hater, but he is constantly reminding people that he’s a happy Tesla Model 3 owner (despite the imperfections and things he doesn’t like about the company).
In his article, he uses some absurdity and comedy to explain the limits most people would place on safe driving. He points out that there are places and situations that very few vehicles would be able to go. We can’t all drive tanks or Unimogs, and even if we could drive those everywhere, we wouldn’t want to. Vehicles are optimized for the things people want to do with them, and give up a lot of capability in one area to be the best damned thing they can be for other situations that are important to the owner.
Not all drivers (or computers) are the same. Hardware (the vehicle’s capabilities, in terms of what terrain it does best on, how many seats it has, cargo space, etc.) is important, but who (or what) is driving the machine matters as well. A situation one driver would do well in, another driver might do poorly in, even in identical vehicles.
Last month, I went to Ford’s Bronco Off-Roadeo in Austin, Texas for a few hours. I’ve always been the kind of person who takes off-roading technical challenges slowly and methodically. For me, it’s all about getting out there and seeing the sights. I had a lot of fun, and Ford’s instructor pushed me to my limits, and then a little further. We then changed seats, and all hell broke loose. He was the kind of driver who has a lot of experience driving off-road at much higher speeds than I’m accustomed to. It was a very fun ride, like a roller coaster, but driving that way isn’t something I’d personally consider doing, because I’d just screw it up and get people hurt.
Same exact vehicle, but with two different drivers whose skillsets and experience differed wildly. Thus, we had different limits in terms of speed, where we’d go, etc. To stay safe, I had to tap out a lot sooner than he did. Is that a little embarrassing? Sure. Ford’s driver was better than me, and I’d be lying if I tried to say that didn’t sting the egotistical part of me a little. I had my wife and one of my kids in the vehicle, though. I’m not going to take chances when their lives are at stake, no matter the cost to my ego.
It’s kind of like when Alex Roy’s article said, “Before I let my daughter nap in the back of one of these things, I need to know that it knows when to say NO WAY. I want to know that it will stick to roads where it can and will drive more safely than I will.”
We all have our limits, and we need to not only know what they are, but respect them.
Limits Are The Key To Safe Human Driving
Really, knowing limits is something all good drivers do. An immature or idiotic driver would do something stupid like street race on a North American “stroad.” Stupid people barrel headlong at high speeds into risks they don’t understand, and then think they can get away with it because it worked out a few times. A good driver, on the other hand, knows their limits and stays well within them because they know they’re not invincible.
The same thing happens with weather conditions. Drivers in Canada know how to deal with driving when there are feet of snow piling up. Drivers in El Paso and Phoenix get in epic wrecks and there are road closures with as little as 1–3″ of snow. A good driver in Canada and a good driver in Phoenix will make very different decisions about driving in winter conditions because their experiences differ.
Risk management expert Gordon Graham explains how this works for humans:
Humans rely on Recognition-Primed Decision Making (RPDM). We look at things we’ve done in the past, and when the past had a good result, we do the thing that brought a good result again. Things we do all the time, we are good at. Things we only rarely or never do, we just aren’t good at dealing with. When the risks are low, it doesn’t matter much if we mess up, but when we encounter a situation that we rarely see with big possible consequences, that’s where the real danger lies.
The thing that makes driving an even more critical task is the lack of time to think it through (discretionary time). If something unusual comes up while you’re going 60–80 MPH, there’s just not going to be a lot of time to think it through and get it right. You’ll get one chance to get it right, and one chance only, and then you’re stuck with the consequences. When 5000 pounds of steel, aluminum, and batteries are involved, those consequences are a pretty big deal.
A smart driver knows when they’re encountering a high-risk, low-frequency, limited-decision-time event. When it rains hard, we slow down to get more thinking time. If it’s bad enough, we pull over entirely and wait the high risks out.
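Graham’s framework can be boiled down to three questions: how bad are the consequences, how often do we see this situation, and do we have time to think? A minimal sketch of that triage, with all names and categories hypothetical and purely illustrative:

```python
# Illustrative sketch of Gordon Graham's risk/frequency/discretionary-time
# framing. All class names and labels here are hypothetical, not from any
# real driving system.
from dataclasses import dataclass

@dataclass
class DrivingEvent:
    risk: str                  # "high" or "low" potential consequences
    frequency: str             # "high" (routine) or "low" (rare)
    discretionary_time: bool   # is there time to stop and think?

def danger_level(event: DrivingEvent) -> str:
    """High-risk, low-frequency, no-time events are where the real danger lies."""
    if event.risk == "low":
        return "manageable"            # mistakes here are cheap
    if event.frequency == "high":
        return "practiced"             # RPDM has seen this many times before
    if event.discretionary_time:
        return "slow down and think"   # buy time: brake, pull over, wait it out
    return "critical"                  # rare, high stakes, one chance to get it right

print(danger_level(DrivingEvent("high", "low", False)))  # → critical
```

Note that slowing down in heavy rain is exactly the third branch: it doesn’t remove the risk, it converts a no-time event into one with discretionary time.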
Artificial Neural Networks Are Similar
“But, Jennifer, Tesla’s FSD computer is going to be better than humans!”
—The comment section, about 10 minutes after this article goes live.
Sure, machines don’t get drunk, have emotionally-wrenching fights with their spouses over Bluetooth, or get distracted watching pr0n on a smartphone. A computer that can take over the driving task and perform better than a human does at their best would definitely be a good thing.
They use RPDM, too, though. Ask any big Tesla fan: Tesla’s advantage comes from the massive amount of real-world driving data it can feed into training to produce a superior neural net. Like humans, these neural networks fit themselves to past experience, though unlike us, they can absorb that experience secondhand, through data. For the most common situations, a neural net is already performing very, very well. It’s something they’ve seen millions or billions of times, and they “know” just what to do.
The low-frequency “edge cases” befuddle computers the same way they challenge humans’ limits. When they encounter something where the training data is limited or non-existent, mistakes start to creep in. Mistaking the moon for a yellow traffic light may seem silly from a human perspective, but the underlying failure is the same one Graham describes: a rare situation the system has almost no past experience to draw on.
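The frequency effect can be illustrated with a toy model (this is not any real FSD architecture, and all the situation names and counts below are made up): behavior tends to be reliable in proportion to how much training data covers a situation.

```python
# Toy illustration only: a "model" whose reliability tracks how often a
# situation appeared in training. Situations and counts are invented.
from collections import Counter

training_log = Counter({
    "green light ahead": 2_000_000,
    "pedestrian in crosswalk": 500_000,
    "yellow traffic light": 300_000,
    "low moon on the horizon": 3,   # the edge case: almost no training data
})

def training_support(situation: str) -> float:
    """Crude proxy: fraction of training examples covering this situation."""
    return training_log[situation] / sum(training_log.values())

for s in ("yellow traffic light", "low moon on the horizon"):
    print(f"{s}: {training_support(s):.8f}")
```

The point of the toy: the gap between the common case and the edge case isn’t a rounding error, it’s five orders of magnitude, which is why mistakes creep in exactly where training data runs out.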
An autonomous vehicle with no limits might be impossible. There will always be limits to what the vehicle can deal with, even if they’re rare. Being able to identify that the limit has been reached (or soon will be) and either re-route or safely come to a stop is important for safety.
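That decision, notice the limit is near, then re-route or come to a safe stop, could be sketched like this. The condition names and the idea of checking against a set of known limits (roughly, an “operational design domain”) are illustrative assumptions, not any vendor’s actual API:

```python
# Hypothetical sketch of limit handling in an autonomous stack.
# Condition names and the "known limits" set are illustrative only.
def plan_action(upcoming_conditions: set, known_limits: set,
                alternate_route_available: bool) -> str:
    """If upcoming conditions exceed the vehicle's limits, re-route or stop."""
    exceeded = upcoming_conditions & known_limits
    if not exceeded:
        return "continue"
    if alternate_route_available:
        return "re-route"
    return "minimal-risk stop"   # pull over safely and wait, like a good human driver

LIMITS = {"whiteout snow", "flooded road", "wildfire smoke"}
print(plan_action({"light rain"}, LIMITS, True))      # → continue
print(plan_action({"whiteout snow"}, LIMITS, True))   # → re-route
print(plan_action({"flooded road"}, LIMITS, False))   # → minimal-risk stop
```

The important design choice is that “I can’t handle this” is a first-class answer, not a failure state, which is exactly what separates a safe Level 4 system from a hypothetical no-limits Level 5.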
A driving computer with limits for extreme weather, natural disasters, and other things that are beyond its capabilities isn’t going to be true Level 5, but that’s OK. A really good Level 4 with safe limits is what we all really want.