Open Letter From AI Leaders: Let’s Take A Break For Safety


The danger of artificial intelligence is a common theme in science fiction because it allows authors and filmmakers to explore ethical and societal questions that arise when humans develop entities that can rival or surpass their own intelligence. There are various reasons why this keeps popping up.

The biggest one is probably loss of control. Human beings fear losing control over the AI they create, and that fear has often manifested in movies like “The Terminator,” where an AI becomes so intelligent that it sees humanity as a threat to its survival and wages war against humans.

Another common fear is that AI might surpass humans’ emotional and cognitive abilities while remaining machine-like, devoid of empathy and compassion. A good example of this is the HAL 9000 computer in “2001: A Space Odyssey,” which becomes advanced enough to develop an emotional conflict and goes on a killing spree. A related concern is that AI might become intelligent enough to replicate itself and spread throughout the world, a concept explored in “The Matrix.” In that movie, humans created AI that became self-aware and took over the world, subjecting humanity to a cybernetic nightmare it could not escape.

Finally, as AI becomes more advanced, there may also be societal conflicts over its role in human society. This is explored in movies like “Ex Machina,” where a genius programmer creates a humanoid AI that behaves much like a human, raising ethical questions about the AI’s place in society. There’s also a fear of the general chaos that could follow if AI takes people’s jobs.

Until fairly recently, these storylines were often rhetorical devices meant to get people thinking about some other problem or issue in human society. The threat of artificial intelligence seemed very far away, but it could stir up thinking about things like slavery, capitalism, or even love and romance. Complex human issues became easier and more comfortable to explore once the emotional and political baggage of the writers’ own times was temporarily stripped away.

But the time of using AI only as allegory is coming to a close. Artificial intelligence has seen incredible advances over the past decade. Through techniques like machine learning, AI has made exponential progress on cognitive tasks associated with human intelligence, including facial and voice recognition. AI has also been used to assist humans in fields such as online sales and health care. As the technology has matured, artificial neural networks have reached the point where they can drive cars (with limitations) and, in many circumstances, write much as a human would.

With AI moving from science fiction to reality, thinking is changing. The fears are more immediate, and the promise of good things is also more tangible.

So, it shouldn’t be shocking that the Future of Life Institute has released an open letter asking for caution in the development of AI technologies. But, unlike many warnings from scientists and futurists, this letter asks for something specific: a six-month moratorium on the development of the most advanced AI technologies while the risks are assessed.

What makes this letter even more notable is that it has the signatures of people like Elon Musk, Steve Wozniak, and Andrew Yang at the bottom, among many other experts and well-known thinkers.

The letter starts by explaining the threat a bit. The authors say that advanced artificial intelligence could bring about a profound transformation in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, they say, that level of planning and management is not happening. Instead, recent months have seen AI labs locked in an escalating race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

For this reason, they call for a six-month pause on all AI systems more powerful than GPT-4, an advanced chatbot by OpenAI. They even call for governments to step in and force a pause on any company that doesn’t comply voluntarily. They later clarify that this does not mean a halt on AI progress in general, but rather a step back from the dangerous race toward ever-larger, unpredictable black-box models with emergent capabilities.

Readers are probably wondering why they’re asking specifically for a six-month pause, and the letter answers that question. It says AI labs and independent experts should use the pause to jointly develop and implement a shared set of safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.

They also call for developers to work with governments to develop rules and laws for AI development. According to the letter’s authors, these should include: new regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computing power; provenance and watermarking systems to help distinguish real content from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.


Is This A Good Idea?

Personally, I don’t think this letter’s demands really address the risks associated with artificial intelligence. The authors call on governments to be a big part of the solution, but governments have a terrible track record with the misuse of advanced technologies, particularly in the 20th century. The number of people killed by their own governments in that century – not counting deaths in war or from criminal activity – exceeds every other non-natural cause of death.

To call for government regulation is not only a call to put the fox in charge of the hen house, but to put the fox in charge of the hen house’s owners. You can bet that nefarious government labs working to weaponize AI technology would not honor any such moratorium, instead using the pause to gain advantage over any government entity that participates.

Further, it’s foolish to expect every AI lab and individual experimenter to participate in the halt. If anything, this call would only stop the ethical developers, while doing nothing to stop bad actors from using the technology unethically.



Jennifer Sensiba

Jennifer Sensiba is a longtime efficient vehicle enthusiast, writer, and photographer. She grew up around a transmission shop, and has been experimenting with vehicle efficiency since she was 16 and drove a Pontiac Fiero. She likes to get off the beaten path in her "Bolt EAV" and any other EVs she can get behind the wheel or handlebars of with her wife and kids.
