If you are the National Rifle Association, you are all in favor of weapons like drones, tanks, machine guns, and submarines controlled by artificial intelligence. After all, the Founding Fathers clearly anticipated such things when they were making provisions for a “well regulated militia.” Death-dealing devices that tear flesh from bone in the most gruesome manner possible without showing any emotion are just the thing to give NRA members wet dreams at night.
Elon & 116 Of His Closest Friends
Elon Musk and 116 of his closest friends in the tech community from 26 nations around the world see things differently, however. They have issued a call to the United Nations to ban killer robots, The Guardian reports. Calling AI-controlled munitions the “third revolution in warfare” after gunpowder and nuclear arms, the group’s letter to the UN says, “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
The signatories, among them Mustafa Suleyman, co-founder of Alphabet’s DeepMind, issued their manifesto in advance of the International Joint Conference on Artificial Intelligence taking place this week in Melbourne, Australia. They call for “morally wrong” lethal autonomous weapons systems to be added to the list of weapons banned under the UN convention that came into force in 1983, which includes chemical and intentionally blinding laser weapons.
Dictators & Lunatics
The group worries such “super weapons” would embolden dictators and lunatics like Kim Jong-Un, Bashar al-Assad, Vladimir Putin, and Donald Trump. Similar UN bans have worked reasonably well in the past. Syria only has several hundred tons of chemical weapons stockpiled, and there are only about 110 million landmines deployed in 108 countries, with another 250 million in reserve in case the first 110 million or so fail to deter intruders. Nuclear non-proliferation treaties have been equally effective.
Toby Walsh, professor of artificial intelligence at the University of New South Wales in Sydney, says: “Nearly every technology can be used for good and bad, and artificial intelligence is no different. It can help tackle many of the pressing problems facing society today — inequality and poverty, the challenges posed by climate change, and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialize war. We need to make decisions today choosing which of these futures we want.”
Ryan Gariepy, the founder of Clearpath Robotics, adds, “Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”
AI Weapons Are Already A Thing
A similar letter two years ago goaded the UN to start talking about the issue, but no firm progress toward a ban has taken place. The first known artificial intelligence weapon system is Samsung’s SGR-A1 sentry gun, which is currently deployed along the South Korean side of the 2.5-mile-wide Korean Demilitarized Zone. South Korea is cryptic about whether that system is fully autonomous or under human control. Suffice it to say that staring into the DMZ for hours on end, day after day, could lead to lapses in surveillance. Computers never blink, stir, or take a cigarette break.
Other weapons systems coupled with artificial intelligence are under development. The UK is working on its Taranis drone. Developed in cooperation with BAE Systems, it will be capable of carrying air-to-air and air-to-ground missiles across international boundaries in full autonomous mode. It is expected to be operational in 2030, when it will replace human-piloted warplanes.
Russia, the United States, and other nations are currently developing robotic tanks that can either be remote controlled or operate autonomously. The US launched its first autonomous warship, the Sea Hunter, in 2016. Boeing is working on an autonomous submarine system built on the Echo Voyager platform that could also be used for long-range sea battles. The search for new and improved ways for human beings to slaughter each other never stops.
The Greatest Existential Threat To Humanity
Elon Musk has been a prominent voice of caution in the artificial intelligence field, calling AI the “greatest existential threat” to mankind ever known. He wants to keep the genie in the bottle while taking advantage of the wonders that AI can bring to humanity. Considering the chaos a few bored teenagers in Macedonia caused in the last US election, the likelihood of preventing the misuse of AI technology by writing letters or signing treaties is somewhere between slim and none.
In the final analysis, the urge to kill each other is one of the defining characteristics of humanity. Perhaps a million or so years from now, after the next great extinction, Charles Darwin and biology will conspire to breed that trait out of human DNA.
Source: The Guardian