The biggest change worldwide in the last decade was probably the smartphone revolution, but overall, cities themselves still look much the same. In the decade ahead, cities will change a lot more. Most of our regular readers probably assume I am referring to how autonomous vehicle networks will start taking over and how owning a car will begin to feel more like owning a horse. The real answer, however, isn't just autonomous vehicles on the roads: they will likely also compete with autonomous eVTOL aircraft carrying people between hubs.
Today, the European Union is moving one step closer to making this second part a reality. Together with Daedalean, an autonomous flight company we have covered in the past, EASA published a new joint report covering “The Learning Assurance for Neural Networks.”
EU’s existing AI Guidelines
Before we get into what that means, there are a few things you might want to know. First of all, when it comes to AI, the European Union has decided to focus on ensuring all AI is “trustworthy” and follows four ethical principles: respect for human autonomy, prevention of harm, fairness, and explicability.
This sounds suspiciously similar to Isaac Asimov's "Three Laws of Robotics." The EU also has seven key requirements for AI, which we won't list here since they can be found in the graphic above. If you have seen news stories about AI accidentally turning out to be biased against a specific race, gender, or color of skittles, the blame usually lies with the training process and an insufficiently diverse dataset. That is one example of the kind of problem the EU hopes to avoid.
Another AI conundrum, and one far more relevant to today's news, is AI safety, transparency, and accountability. The biggest problem with AI is that while it is programmed to do the very best job it can of performing a task, understanding why it did what it did is another matter entirely. Explaining how it came to a decision is not something most AIs are programmed to do, and sifting through all the programming would take a lifetime while rarely adding enough value to justify the effort. Imagine an AI designed to make decisions in court: it is important that the AI is trustworthy and that it is easy to understand how it came to a verdict.
Regulators see AI as a magic black box that spits out answers, and rightly so. In order to certify AI for use in various applications, they want a step-by-step method to assure that A always leads to B. That is no easy task, but it is far from impossible.
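To make the "A always leads to B" idea concrete, here is a toy sketch (our illustration, not anything from the report): for a small rule-based system, a regulator-style safety property can be checked exhaustively over the whole operating envelope. The controller logic and numbers are entirely hypothetical; the point is that this kind of exhaustive check is exactly what becomes infeasible for a large neural network, which is why a structured assurance process is needed instead.

```python
def descent_rate_command(altitude_m: int, obstacle_below: bool) -> float:
    """Toy controller: commanded descent rate in m/s (hypothetical logic)."""
    if obstacle_below:
        return 0.0              # safety rule: never descend onto an obstacle
    if altitude_m < 10:
        return 0.5              # flare: slow descent near the ground
    return 2.0                  # normal descent

# Exhaustive verification over a discretized operating envelope:
# "A always leads to B" -- an obstacle below always yields zero descent.
violations = [
    (alt, obs)
    for alt in range(0, 1001)          # 0..1000 m in 1 m steps
    for obs in (False, True)
    if obs and descent_rate_command(alt, obs) != 0.0
]
print(f"checked {1001 * 2} cases, violations: {len(violations)}")
```

A neural network with image inputs has an input space far too large to enumerate this way, so certification has to lean on evidence about how the model was trained and tested rather than on exhaustive case checking.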
Concepts of Design Assurance for Neural Networks
Autonomous flight expert Daedalean has been working with EASA for over a year and has now published the culmination of that work in the form of an extensive 104-page report (PDF) that builds upon the EU's general AI guidelines and EASA's AI Roadmap.
For certification in safety-critical applications, there need to be assurances that an AI will function the way it is supposed to. In order to get these assurances, the AI needs to be taught correctly with standardized methods. The new report, in EASA's words, "constituted a first major step in the definition of the 'Learning Assurance' process, which is a key building-block of the 'AI trustworthiness framework' introduced in the EASA AI Roadmap 1.0."
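One ingredient of teaching an AI "correctly with standardized methods" is verifying, before training even starts, that the dataset covers the intended operating conditions. The sketch below is purely illustrative: the condition labels and thresholds are hypothetical, not drawn from the report.

```python
from collections import Counter

# Hypothetical operating conditions a vision model must be trained on,
# and a hypothetical minimum sample count per condition.
REQUIRED_CONDITIONS = {"clear_day", "overcast", "low_sun", "haze"}
MIN_SAMPLES_PER_CONDITION = 2

def coverage_report(samples):
    """samples: list of condition labels, one per training image.

    Returns per-condition counts and the set of under-covered conditions.
    """
    counts = Counter(samples)
    gaps = {
        cond for cond in REQUIRED_CONDITIONS
        if counts.get(cond, 0) < MIN_SAMPLES_PER_CONDITION
    }
    return counts, gaps

counts, gaps = coverage_report(
    ["clear_day", "clear_day", "overcast", "overcast", "low_sun"]
)
print(sorted(gaps))   # conditions with too little data to train on safely
```

A check like this is also one answer to the bias problem mentioned earlier: an insufficiently diverse dataset gets flagged before it can shape the model.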
Unfortunately, this report does not dive deeper into transparency, which it refers to as 'explainability'; that will instead be covered in future work.
AI development worldwide is moving forward quickly, and the aviation industry specifically is moving at a brisk pace, with over 200 startups working on eVTOL aircraft. The pace of AI adoption in aviation, however, is less clear. Above, you can see EASA's timeline for AI regulation, which estimates the first commercial product of an autonomous eVTOL aircraft arriving no sooner than 2035. Daedalean, on the other hand, is more ambitious than that.
In response to our inquiry on the matter, they said:
“Daedalean has the goal to have the technology ready before the regulatory approvals are made. We have the commitment and expertise to shorten these timelines. The company is already working on implementing the guidelines for their products. As a next step, we want to generalize, abstract, and complement these initial guidelines, so as to outline a first set of applicable guidance for safety-critical machine learning applications by 2021.”
We saw Daedalean's developments in person and are convinced they indeed have the expertise needed to shorten these timelines. Back in September, the company demonstrated how the AI on which the guidelines are based can detect and select emergency landing spots when necessary. The AI can then avoid any person or object that enters a potential landing spot and even auto-abort a landing when necessary. In many ways, these guidelines will be of more use to companies trying to follow in Daedalean's footsteps, and they are relevant for any form of safety-critical machine learning, from autonomous driving to medical devices, not just aviation.
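The select-avoid-abort behavior described above can be sketched in a few lines. To be clear, this is a hypothetical toy, not Daedalean's actual system: a selector ranks candidate landing spots, rejects any spot where a person or object was detected, and aborts when no safe spot remains.

```python
def choose_landing_spot(candidates, obstacles):
    """Return the best obstacle-free spot, or None to auto-abort.

    candidates: list of (spot_id, suitability_score) tuples
    obstacles:  set of spot_ids where a person/object was detected
    """
    safe = [c for c in candidates if c[0] not in obstacles]
    if not safe:
        return None                          # auto-abort: no safe spot left
    return max(safe, key=lambda c: c[1])[0]  # pick the most suitable spot

# Hypothetical candidate spots with suitability scores.
spots = [("field_a", 0.9), ("road_b", 0.7), ("lot_c", 0.8)]

print(choose_landing_spot(spots, set()))        # nothing detected: best spot
print(choose_landing_spot(spots, {"field_a"}))  # intrusion: re-selects
print(choose_landing_spot(spots, {"field_a", "road_b", "lot_c"}))  # abort
```

The safety property regulators would want assured is exactly the one this toy makes explicit: a spot containing a detected obstacle is never selected, no matter how suitable it otherwise looks.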