Autonomous Vehicles

Published on February 16th, 2016 | by Kyle Field


Google Self-Driving Cars Now Considered Drivers By NHTSA


[Image: Google self-driving car lineup]

In a landmark ruling, the National Highway Traffic Safety Administration (NHTSA) recently determined that Google’s self-driving car artificial intelligence can be considered the “driver” of some of the first autonomous vehicles on the roads, The Guardian reports.

The official interpretation of the definition of a “driver” came in the form of a letter from the NHTSA in response to an inquiry Google made as part of its Self-Driving Vehicle project. It contains quite a few interesting nuggets, starting with an admission that the current standards are no longer sufficient:

“As self-driving technology moves beyond what was envisioned at the time when standards were issued, NHTSA may not be able to use the same kinds of test procedures for determining compliance.”

In seeking that interpretation, Google pushed to understand how the current Federal Motor Vehicle Safety Standards (FMVSS), which govern the nuances of driving to ensure safety, would be applied to self-driving vehicles (SDVs). Google proposed the following options:

1) NHTSA could interpret the term “driver” as meaningless for purposes of Google’s SDV, since there is no human driver, and consider FMVSS provisions that refer to a driver as simply inapplicable to Google’s vehicle design;

2) NHTSA could interpret “driver” and “operator” as referring to the SDS (Self-Driving System); or

3) NHTSA could interpret “driver’s position” or “driver’s designated seating position” as referring to the left front outboard seating position, regardless of whether the occupant of that position is able to control the vehicle’s operation or movements.

These definitions are especially relevant for Google, as its approach to self-driving vehicles removes the typical user inputs that would allow an occupant to drive. You read that right — Google’s SDVs have no steering wheel, brake pedal, or accelerator, leaving all of that to the car. This hard cutover to future tech makes sense as a bet on full autonomy, but it allows no transition period in which humans share the driving; it simply embraces the technology and leans optimistically into the future.

[Image: Google car]

Before you run off looking for a Google Car dealer or car service near you, it is important to realize that there is a key sticking point in the document: this definition is contingent upon achieving “Level 4” autonomous driving capability, which the NHTSA defines as follows (link to source PDF):

Full Self-Driving Automation (Level 4): “The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles. By design, safe operation rests solely on the automated vehicle system.”

Ironically, this definition conflicts with the updated interpretation of “driver,” as it states that the driver will provide the destination. 🙂 An additional caveat in the interpretation was that the NHTSA would evaluate the software to ensure that it provides sufficient safety to the occupants, to other drivers, and to the public. Considering how long it took regulators to find the software-enabled emissions cheats Volkswagen was just exposed for, this offers little comfort, even if the SDVs are being programmed by some of the best and brightest programmers in the world over at Google.

The interpretation also touches on many of the subtler points related to autonomous driving:

  • Does the car need rear-view mirrors if the “driver” has no eyes? Interpretation: no need for rear-view mirrors, as long as the SDV has “visibility” behind the vehicle equivalent to or greater than what a rear-view mirror would have provided to a human driver.
  • Does the car need standard indicators, lights, and other “telltales” that are visible to the “driver”? Interpretation: because the driver is not human, there is no need for all of the visible indicators required in non-autonomous vehicles. Presumably, a few new standard indicators or signals might even be added in the future to tell the passenger(s) how the “driver” is doing: “rebooting,” “stopping to recharge,” “please clean front sensor #1,” etc.
  • Does the brake pedal have to be depressed (as a safety interlock) before the vehicle can switch into drive mode? Because, y’know, Google SDVs won’t have a brake pedal. Interpretation: no need for this interlock, as it can all be handled in software logic (see the sketch after this list).
  • Current regulation dictates that electronic stability control performance be measured based on steering wheel position, so how will it be measured in vehicles without a steering wheel? Interpretation: the NHTSA agrees that it will not be possible to measure performance based on steering wheel position if there is no steering wheel, but a suitable alternative will have to be identified. This is an encouraging interpretation, as it shows that the NHTSA is open-minded about adapting to new tech, leaving room for the technology to grow, evolve, and integrate into current regulations.
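For illustration, here is a minimal sketch of what a shift interlock “handled through logic” might look like. Every name, type, and condition below is an assumption made for this example; none of it comes from the NHTSA letter or from Google’s actual software.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_mps: float        # current speed in meters per second
    brakes_engaged: bool    # service brakes currently applied by the driving system
    systems_healthy: bool   # sensor/actuator self-test passed

def may_shift_to_drive(state: VehicleState) -> bool:
    """Permit shifting into drive only when the conditions a physical
    brake-pedal interlock would enforce are verified in software."""
    return state.speed_mps == 0.0 and state.brakes_engaged and state.systems_healthy

# A stationary, healthy vehicle with brakes applied may shift into drive...
assert may_shift_to_drive(VehicleState(0.0, True, True))
# ...but a rolling vehicle with brakes released may not.
assert not may_shift_to_drive(VehicleState(1.2, False, True))
```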

This ruling is especially interesting, as it shifts the burden of liability from the vehicle owner (or perhaps the occupant being carted around in a fleet vehicle) to the vehicle manufacturer. A 2014 study on the impacts of autonomous driving by the RAND Corporation cited (on page 111) a study by Gary Marchant and Rachel Lindor, The Coming Collision Between Autonomous Vehicles and the Liability System (2012, p. 1334), which outlined one such scenario:

“The technology is potentially doomed if there are a significant number of . . . cases, because the liability burden on the manufacturer may be prohibitive of further development. Thus, even though an autonomous vehicle may be safer overall than a conventional vehicle, it will shift the responsibility for accidents, and hence liability, from drivers to manufacturers. The shift will push the manufacturer away from the socially optimal outcome—to develop the autonomous vehicle.”

It is obvious that we are at, or fast approaching, an intersection where human drivers relinquish control of personal transportation, not to a centralized human driver or conductor but, for the first time in automotive history, to artificial intelligence. These recent interpretations not only loosen restrictions on manufacturers, but genuinely open up current regulations to let manufacturers start exercising autonomous driving technology in the real world.

Let’s all just hope that these manufacturers put enough miles on their tech to ensure there aren’t any failures out in the real world, as it’s anyone’s guess how one accident (or worse, a fatality) caused by autonomous driving technology would impact the progress that has been made so far.

[Image: Google self-driving SUV]

 
 


About the Author

I'm a tech geek passionately in search of actionable ways to reduce the negative impact my life has on the planet, save money and reduce stress. Live intentionally, make conscious decisions, love more, act responsibly, play. The more you know, the less you need.



  • Panim Bilvad

    I don’t mind the teen pranks of jumping in front of a car as mentioned in the comments, but I do fear the thieves that will dress as the Lone Ranger or Batman and stand in front of the car until you either “donate” for a cleaned windshield or just give up the car.

    Will the car be able to get a carry permit for a firearm?

  • Panim Bilvad

    It is my belief that self-drive cars DO have a place on the road. A closed or well-marked and posted community can have self-drive cars. But a 4-way stop needs to have a way for all the drivers to know which car arrived first, even by 1/100th of a second, so the human and computer know who goes first.

    An inter-city road would need to be inspected weekly and be well posted that it has passed inspection for a self-drive car to use the road.

    I have seen too many poorly marked roads, roads with lane markings going off into bridges, and sand-covered roads, to imagine a self-drive car being able to navigate them.

    I have read that self drive cars can navigate fog, but we have sand storms.

    Self-drive cars should keep enough distance between cars; the car in front may not brake but simply hit something, and your car should be able to stop before joining the pile.

    • Bob_Wallace

      Google self-driving cars have now registered over a million miles on the road with only this single fender-bender accident on their record.

      It will take some time to iron out the very low frequency problems such as this encounter with a bus.

      Tesla has what is probably a better system. Every Tesla with Autopilot capability feeds information about the roads into the system. Tesla will quickly get many millions of miles of experience as they work the bugs out of their software.

      There may be some situations in which autonomous cars can’t perform. There are situations in which humans can’t perform. Perhaps it’s foolish to drive in sandstorms. It is foolish for humans to drive in dense fog.

      • Panim Bilvad

        Bob, pardon my distrust of statistics. Please reply with the miles on “limited access roads” and “local roads,” and detail the number and types of intersections. I have driven 20k kilometers for over 50 years, mostly on limited access roads, and NEVER had an accident on a limited access road.

        • Bob_Wallace

          Self-driving cars will be designed, and are being designed, for the highest-use roads first. Off-road stuff will come later.

          • Panim Bilvad

            What off-road? I ask how many urban, how many non-urban, and how many intersections? Intersections come with or without traffic lights; traffic lights with turn lanes, with lights burned out, etc. Intersections without traffic lights include merges, stops, 4-way stops, yields, and more. I would especially appreciate knowing: if the stop sign was knocked down, will the car see it?

  • Panim Bilvad

    I doubt there will be a problem with “responsibility” when crashes occur. I saw the damage to the bus in the “Google – bus” accident. If there were a human driver, he would have been blamed for crashing into the side of a bus.

    Google claims their car was “partially” responsible. Since anyone suing the “driver” would be up against a well-funded manufacturer, who can scan and analyze millions of court documents to prepare a case, the Google car will never be at fault.

    • Bob_Wallace

      The Google car was at fault according to Google.

      The software “expected” the bus to change course but it didn’t. The Google software has been modified.

  • Matt

    Oh my god, the analysis of the NHTSA letter is so wrong that it is painful. There was not a landmark “ruling” on the meaning of driver. There was no “ruling” at all. For the love of god, talk to an attorney before you write this garbage.

    http://www.lawandai.com/2016/02/12/no-the-nhtsa-did-not-declare-that-ais-are-legal-drivers/

  • O[b]ama

    Manhattan’s Ground Zero is the World Transportation Center (WTC), the epicenter of America’s new autonomous car infrastructure.

  • neroden

    This has implications for liability. If a Google car runs someone over, *Google* and their programmers are liable.

    • Bob_Wallace

      They will need to buy good auto insurance. Since they won’t be driving drunk, texting while driving, ogling a cute boy/girl, or continuing to drive down the mountain after their brakes have started to fade, their rates are likely going to be very low. In fact, low enough for them to self-insure….

      • Frank

        You know what is going to be a real problem with this tech? Teenage pranks like jumping out in front of an autonomous car to get the passenger thrown into their seat belt. ’Cause you know the damn thing won’t run you over.

        • Bob_Wallace

          Car has camera. Teen gets busted.

          Probably not a game that will catch on….

          • Frank

            I don’t know if you have guessed at this yet, but I appreciate the thought you put into all of your posts.

  • Riely Rumfort

    I feel like automated vehicles should be highway-only and have hubs near exits where a human driver gets in for city driving. That way they can transport goods, with the machine making the long drive while a human driver still does the deliveries.
    Some may say, “What about cargo drivers?” I think the 4-8 hour highway drives are bad for their backs and overall health anyway; this whole thing cuts out the largest drawback and safety hazard for others (drivers falling asleep, mainly).
    I don’t want life in the city left up to robots, though; I do not trust them for complex judgements.

    • tibi stibi

      I have more trust in a car that can monitor 360 degrees non-stop than a human with a navigation system, a telephone, and nagging kids.

      • Riely Rumfort

        Well yeah, distracted humans are poor as well. But that robot won’t see a kid’s legs running between cars, it won’t see a missing manhole cover, it won’t choose to swerve into something flammable. Etc., etc.

        • Joe Viocoe

          It can, and will.

          And all those scenarios are improbable compared to the scenarios which HAVE proved fatal for 30,000 people every year.

          • Riely Rumfort

            Improbable? You know how many times a week I see kids’ legs or bikes zooming between cars?

          • Bryan Hilderbrand

            It doesn’t have to be perfect either; it just has to kill fewer than 40,000 people a year. I’m sure you’re an excellent driver, but the illusory superiority cognitive bias is huge for drivers (i.e., the vast majority of drivers think they are better than average drivers).

            The only issue I’m worried about is preventing these things from being hacked.

          • Riely Rumfort

            >10,000 of the 33,000 were alcohol-caused; most of the others were either distraction-related or driving-condition-related. Much of the time a highly aware driver can take extra care and be conscious of this, by watching for swerving on approach, noticing a phone or a head tilted to look at a screen, or noticing the onset of weather. I myself have driven >500,000 miles without causing an accident.
            Instead of investing a billion in AI for driving, we could have free cabs for those who blow over a certain limit, cell phones which turn off anything beyond hands-free when going over 25 for people younger than, say, 24 (the highest death-stat group), and invest in better GPS-driven plow/salt coverage distribution. Spitballing, but there are other ways to save lives.

          • Brett

            Is the fact that alcohol caused 10,000 roadway fatalities somehow supposed to discount those deaths? Self-driving cars don’t get drunk. Problem solved.

          • Otis11

            Well, they would already kill less than 40k people per year… they need to be MUCH better than that. More than an order of magnitude better…

          • Kyle Field

            Welcome 🙂

          • Jonathan Laurin

            Leave the hacking to the FBI; after all, they wanted Apple to hand over top-secret codes, when all they needed to do was take the bloody phone to Metro PCS for a small fee. Instead they bother the news with needless wasted air time!

          • Bob_Wallace

            And how many kids zoom between cars and are hit because the driver was looking to their left to see if they could make a lane change?

            Few of us can look in two directions at the same time.

          • Riely Rumfort

            It’s mainly left to peripheral vision and quick scans in each direction. Humans may miss legs/bikes between cars if not vigilant, but a smart car misses these legs 100% of the time… one of many recognition issues currently. It still has a long way to go.

          • Otis11

            “but a smart car misses these legs 100% of time.”

            … I really don’t understand why you’re making this assumption. Seeing under cars is actually much easier for sensors than humans (given their mounting position).

            Combined with the fact that living things (people, squirrels, dogs) are exceptionally easy to identify, track, and predict (seriously, you should see an autonomous car’s algorithm track a kid chasing a dog chasing a squirrel – it’s incredible), I really don’t see your concern.

            It’s actually harder to tell a can of Coke from a dirty car… seriously.

            I agree there’s still a long way to go – but you are definitely on the wrong track here. None of your concerns are actual issues – take comfort in that.

            The real issue, however, is reliability. This is actually a huge issue with human-driven cars as well – we just accept it. How many accidents per year are caused by a brake/transmission/airbag/whatever-else failure? (12% to 13%, according to the U.S. Department of Transportation Bureau of Transportation Statistics – whose name is a bit redundant, but that’s beside the point…)

            If automated vehicles cause even 1% of the accidents/fatalities per year (10 times more reliable than current cars – even without accounting for our accident-prone drivers), that’s 27 thousand injuries and 430 deaths. Society will have an uproar.

            A fallible person – they understand; machine failure – they dislike, but understand; computer failures – someone’s head will end up on a spike and ‘you no can haz self-driving car!’
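            (Spelling out the arithmetic behind that 1% scenario: the implied baselines, which are assumptions and not sourced in this thread, are roughly 2.7 million roadway injuries and 43,000 deaths per year in the US.)

            ```python
            # Assumed baselines implied by the figures above (not sourced in this thread).
            injuries_per_year = 2_700_000
            deaths_per_year = 43_000
            autonomous_share = 0.01   # "even 1% of the accidents/fatalities per year"

            print(injuries_per_year * autonomous_share)  # 27000.0 -> "27 thousand injuries"
            print(deaths_per_year * autonomous_share)    # 430.0   -> "430 deaths"
            ```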

          • neroden

            Good luck with having the automated car spot deer at the side of the road in the dark… *and* figure out that this means you have to slow down *preemptively* before the deer jumps into the road.

            It’s details like that where I do not trust the programmers to program them properly. Of course, 99% of humans don’t do this properly either, but…

            You are the first person I’ve run into who advocates automated cars to actually understand the acceptance problem — if they cause even 1% of the crashes of regular cars, people won’t accept them. It’s a cognitive bias; people would rather be killed by humans than killed by robots. I congratulate you on actually *getting it*.

            People still won’t accept automated trains on lines with grade crossings even though they are much safer than manually driven trains and have been proven in use since the 1970s!

          • Bob_Wallace

            The autonomous car will be able to “see” the deer in the dark and the deer standing in the bushes along the road because it will be able to see heat.

            Simple algorithm –

            1) Hot object detected, treat as live organism.

            2) In the event that the hot object moves in a direction that would increase the chance of striking the vehicle, engage in avoidance action.

            Simple human –

            “Oh shit, where did that deer come from? I didn’t see it until it came over the hood and through my windshield.”
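            (A toy rendering of that two-step heuristic, assuming a thermal sensor that reports an object’s temperature and a motion estimate; the threshold and names are invented for illustration, and a real perception stack is far more involved.)

            ```python
            def classify_and_react(temp_c: float, moving_toward_vehicle: bool) -> str:
                """Toy two-step heuristic: hot object -> live organism;
                closing on the vehicle's path -> avoidance action."""
                LIVE_ORGANISM_TEMP_C = 30.0  # assumed threshold for warm-blooded animals

                if temp_c >= LIVE_ORGANISM_TEMP_C:   # 1) hot object: treat as live organism
                    if moving_toward_vehicle:        # 2) it may cross our path: avoid
                        return "engage_avoidance"
                    return "monitor"
                return "ignore"

            print(classify_and_react(37.0, True))   # deer stepping toward the road -> engage_avoidance
            print(classify_and_react(37.0, False))  # deer standing in the bushes   -> monitor
            ```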

            Yes, there will be an acceptance problem. But as each feature is installed and used by drivers, their acceptance will increase. The last phase will be drivers continuing to hold the wheel for as many days or years as it takes for that particular driver to trust the system.

            No one is going to be forced to ride in a Google type driverless car. Some people may be reluctant to ride in a driverless car until they have personal experience of watching people they know being killed by drivers and not killed by autonomous cars.

            I assume autonomous cars will not be allowed to operate on our roads without a standby driver until they are close to 100% perfect.

          • Frank

            I like Elon’s assertion: the car needs to be 10 times better than humans before it will be accepted. I think if they hit that mark, and they are allowed on the roads, insurance companies are going to be very quick to adjust.

          • Otis11

            Unfortunately, he was simplifying there. 10 times is not nearly sufficient – and he’s admitted as much. The social backlash would be enormous!

            I believe he corrected himself to say “6 9’s after the decimal. As in it functions correctly 99.999999% of the time” but I could be mis-remembering…

          • Kyle Field

            It does not behoove insurance companies to quickly adjust. I would bet that they will increase insurance for autonomous drivers because “we don’t have enough data on them yet” until forced by regulators to lower fees 🙂

          • Frank

            I don’t think it will require anything from regulators. I think it will only require competition. Now, if your car is the first self-driving one your insurance agent has ever heard of, I would set my expectations low. If, on the other hand, the numbers are big enough that there is real money in it, then I would suggest somebody is going to run the numbers, and competition will do what it always does.

          • Otis11

            Yeah, I don’t get this. We can’t see the deer in the dark – but the infrared AND sonar both can… easily. And every self-driving car algorithm I’m aware of does react preemptively.

            I’d be more concerned if the human sees the thing in time…

            And yes – honestly I think the hardest thing about rolling out self-driving cars will be societal acceptance (as increasing reliability enough to make accidents almost non-existent is a ways off – and totally non-existent is impossible).

            As for why I advocate for self-driving cars (*despite* understanding it – as you say) is because I have a solid understanding of mathematics. I understand that it is possible that a self driving car may one day deprive me of my life, or the life of someone I care about. But I also understand that human driven cars have, do, and will continue to, deprive us of human lives. While it may be harder to accept the reason for the autonomous accidents, the simple truth is there will be fewer of them. I would rather have fewer incidents that are hard to explain – and save many other lives – than know that every day we needlessly wait will claim even more lives. (To be clear, I’m not claiming we’re waiting needlessly right now – we do need to do sufficient development, testing and develop regulations to keep underdeveloped cars off the roads, but I do think we will delay them after they have reached that point – which would be more than necessary.)

          • Joe Viocoe

            You do not trust, because you do not understand. Fear of the unknown. You may anecdotally notice a bunch of scenarios you think could confound a computer. But that is like a new car owner suddenly noticing the same model on the road. Look at reality. It doesn’t support your claim that most accidents, fatal or not, involve these weird “came out of nowhere” scenarios that sensors would not pick up.
            The vast majority involve objects that are as plain as day… and it is human eyes and attention spans that have the numerous blind spots.

          • Joe Viocoe

            I don’t trust YOU (or me). As hard as we try, as responsible as we may seem, we are flawed human beings and impossible to upgrade. Our eyes will fail, our attention will wander, and our reaction time will falter.

          • Jonathan Laurin

            No disrespect, but you do not know what you’re talking about.

          • Brett

            That’s just the 30,000 in the US alone every year. The tally is a staggering 1.3 million annually around the world.

        • tibi stibi

          The biggest difference between computer cars and humans is that when a computer car makes a mistake, it can be corrected, and then no computer car will make that mistake again. With humans, every single human has to learn it for themselves.

          Google is now driving thousands of miles to test their cars; they have driven more than an average human does.

          • Riely Rumfort

            I understand what computers are capable of. I was programming chat bots/moderators when I was 8, have built a handful of computers since, and am fully aware of the pattern recognition, acknowledgement, and action computations.
            It can and will improve to a point, and may become more statistically accurate than humans, but as of yet there is too much room for error.

          • Otis11

            Mmm… this point is outdated too. Computer AI has passed the accuracy of trained humans in just about every routine task we perform… In 2015 it even passed us in accuracy of classifying images – aka, machine vision is better than a ‘trained’ human at classifying objects in pictures… much less the average joe.

            Add in that now the computer can ‘see’ in different wavelengths and has super-human senses…

            Now there are still issues (I’m reasonably familiar with self-driving technology…), but the ones you’re naming really aren’t a problem.

          • neroden

            No, it didn’t. It’s still worse at classifying objects in pictures in “weird” situations. The trouble is they measure these things from averages…

          • Bob_Wallace

            How many million miles do you think autonomous cars would have to drive without causing a single accident before you would consider them better than human drivers?

            It looks like human drivers have about 2 accidents per million miles driven.

            https://www.dot.ny.gov/divisions/operating/osss/highway-repository/Table2_2014.pdf

          • Brett

            That’s 2 accidents per million miles, and not necessarily 2 accidents caused by the driver, either. That’s why Riely very carefully stated, “I myself have driven >500,000 miles without causing an accident.” They didn’t state they were never in an accident.

            I’ve probably driven 150,000 miles in my life, and I was rear-ended while at a dead stop and had my car totalled. I also was riding a bus where the driver failed to stop in time for an oncoming vehicle and 6 people were killed. The cause, after much investigation, was driver error.

            I guess anecdotal evidence goes both ways.

          • Bob_Wallace

            Count up all the miles driven. Count the number of accidents. Divide. For every 1,000,000 miles driven there are about 2 accidents.

            I suppose some happened because of what the car did – a blowout, a popped ball joint, brake failure.

            So let me rephrase my question. How many millions of miles would autonomous cars need to drive without causing an accident before you consider them better than human drivers?

            Last I heard Google was over 1 million miles with zero accidents attributable to the car. The four accidents of which I’m aware were human caused.
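            (The same calculation in miniature, with illustrative totals chosen only to match that roughly 2-per-million figure.)

            ```python
            # Illustrative totals; the real counts come from tables like the NY DOT one above.
            total_miles_driven = 1_000_000_000
            total_accidents = 2_000

            rate = total_accidents / (total_miles_driven / 1_000_000)
            print(rate)  # 2.0 accidents per million miles driven
            ```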

          • Brett

            Me personally? They’re already over my threshold. I’d buy one tomorrow if they were available. Seriously.

            I was more trying to demonstrate the flip side of the whole ‘I’ve never been in an accident, so there is no way a machine could drive better than me’ argument.

            In my opinion, self-driving cars are only a question of when. I believe that they are already safer than the average person, but regulatory authorities are holding self-driving cars to a much higher safety threshold than other automakers face in terms of vehicle safety. The playing field isn’t level; the biggest liability in any car is the person behind the wheel.

          • Otis11

            Mmm… Jeff Dean of the Google TensorFlow team would care to argue with you. I just talked to him at the end of January about this.

            A (Trained) human looking at a difficult to classify image has an error rate (as classified as a top-5 error) of about 5.1%. A Deep Learning Algorithm achieved an error rate of 3.57% on the same dataset in 2015.

            For reference if you don’t want to believe my numbers: http://arxiv.org/abs/1512.03385

        • Jonathan Laurin

          What is wrong with you people? Go test drive a TESLA and get some knowledge about these things before you begin making comments on things you have no idea how they work!

          • Riely Rumfort

            You people!
            You racist. ;P
            Driving a nice electric car doesn’t speak for the ability of AI, or even speak to a lack of glitches.

    • Bob_Wallace

      Automated vehicles will come online one feature at a time. Adaptive cruise control, collision avoidance and lane holding first. Well, self-parking got there earlier.

      Fully automated cars won’t be turned loose until all the features have proven themselves to be as good as or better than humans.

      • tibi stibi

        Yes, it has to be proven, but I’m not sure which way it will go: will it be an evolution like Tesla, or a revolution like Google?

        • Riely Rumfort

          I think qubits will end up crucial, myself…

        • Bob_Wallace

          I’m just guessing. My guess is that Tesla will have the first autonomous car driving the roads at highway speeds. Google will be running driverless cars in more restrictive (lower speed) conditions.

          I’m basing that on Tesla already being in the full-sized car manufacturing business. Google doesn’t yet have a car factory.

          That said, another company could be first. Someone might get there first with an ICEV or PHEV.

      • Brett

        Won’t be a difficult feat, they just have to be better than 75% of drivers to provide life-saving benefits.

        • neroden

          They have to be better than way more than 90% of drivers to be accepted. 🙁 Better than 99%, probably.
