Why Would AI Be Such A Threat?
While this shouldn’t be surprising to CleanTechnica readers, Elon Musk has been warning people about the threat of AI for years. “I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane,” Musk said at SXSW in 2018, and it wasn’t the first time he had discussed the dangers of AI. He had previously said AI was a bigger danger than North Korea and that it was a risk to the future of human civilization.
The problem with AI, according to Elon Musk, is that it could quickly become vastly smarter than humans. If it were given the goal of making us happy, it might pursue that goal in horrifying ways, such as capturing us all and injecting us with pleasure-inducing hormones. Even if it knew better than to do something like that, it would still be something immensely powerful that we would struggle to exert any control over, while it could exert control over us easily. It would be playing chess while we struggle to play checkers, and it could outsmart us at every move if it chose to.
Many people in the field think that Musk is being an alarmist, and we haven’t heard that any world governments take AI as seriously as he does. “I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic,” said Facebook CEO Mark Zuckerberg. “And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.” Other experts have said similar things.
Despite the criticism, Musk was still saying things like, “Mark my words. AI is far more dangerous than nukes. Far.” in 2019. He pointed out that many experts dismissed his concerns as if they applied only to limited-focus “Narrow AI” technologies, like the ones that power autonomous vehicles, while he was talking about “General AI,” an artificial intelligence that can do almost anything.
Elon Musk proposes that governments carefully regulate AI to help mitigate the dangers associated with it, but he also wants to help solve the problem with Neuralink. By creating a high-bandwidth interface between our brains and computers, Neuralink might give us a better chance of competing with, or achieving a symbiosis with, superintelligent AIs.
The UK Government Is Taking This Deadly Seriously
For good reasons, we don’t get much of a peek at what governments are doing and planning when it comes to major threats. Just like a card player doesn’t want other players to see their hand, governments (especially militaries) tend to play things pretty close to the vest (for better or worse). So, when we get a peek at what militaries take seriously, we can assume that there’s likely a lot more going on behind closed doors.
That’s what makes the UK government’s recent policy document, “Global Britain in a competitive age,” so interesting.
While the document doesn’t directly define “emerging technologies,” it’s pretty clear that AI is an important part of that. In the document’s overview, the British government says:
“Science and technology: we will take a more active approach to building and sustaining strategic advantage through S&T, using it in support of our national goals. We will create the enabling environment for a thriving S&T ecosystem in the UK and extend our international collaboration, ensuring that the UK’s successful research base translates into influence over the critical and emerging technologies that are central to geopolitical competition and our future prosperity. We will adopt an own-collaborate-access framework to guide government activity in priority areas of S&T, such as AI, quantum technologies and engineering biology.” (emphasis added)
With regard to nuclear weapons, the policy document says:
“The UK will not use, or threaten to use, nuclear weapons against any non-nuclear weapon state party to the Treaty on the Non-Proliferation of Nuclear Weapons 1968 (NPT). This assurance does not apply to any state in material breach of those non-proliferation obligations. However, we reserve the right to review this assurance if the future threat of weapons of mass destruction, such as chemical and biological capabilities, or emerging technologies that could have a comparable impact, makes it necessary.”
In other words, the UK pledges not to attack, or threaten to attack, anyone with nuclear weapons who didn’t threaten it with comparable weapons first. However, the document acknowledges that future technologies (like AI, see above) could rise to become as big a threat as nuclear weapons. If that happens, the UK reserves the right to do what it takes to defend itself, even if that requires using nuclear weapons.
More importantly, though, the fact that they acknowledge that AI (and other emerging tech) could become as dangerous and impactful as nuclear weapons says a lot about the British military’s thinking.
It also appears that the UK is taking other cues from Musk. In the document’s discussion of diplomacy, the government says:
“Regulatory diplomacy: bringing together governments, standards bodies and industry to influence rules, norms and standards – particularly in rapidly evolving areas such as space, cyberspace, emerging technologies and data. We will work with a wide range of partners – including technology companies, independent standards bodies, civil society and academia – across an increasing number of specialised international institutions.”
The UK intends to help shape global regulation of “emerging technologies,” and wants to collaborate with international entities of all kinds (including technology companies like Neuralink) to develop and influence the rules that govern them.
While the British government doesn’t have the influence it once did as a global empire, it still isn’t some crazy, dysfunctional, uninfluential government* that people don’t take seriously. It is still a real global player, and it is quite serious about the things it says.

If it says that AI could prove to be a risk on par with nuclear weapons, and that companies and governments should work together to mitigate that threat, we should take that very seriously, too.
Featured image: A nuclear weapons test, US DOD (Public Domain)
*(Just to be clear, the UK government can be crazy and dysfunctional, but there are far crazier and more dysfunctional governments in the world.)