How “Move 37” Revealed The Perils Of AI — Yuval Noah Harari
Yuval Noah Harari is the author of Sapiens: A Brief History of Humankind. Harari says, “Homo sapiens rules the world because it is the only animal that can believe in things that exist purely in its own imagination, such as gods, states, money, and human rights.” In an article for The Guardian on August 24, 2024, he delved deeply into the brave new world of artificial intelligence (AI) and explained why this new technology, suddenly the main topic of conversation around the world, may be more destructive than nuclear weapons. It’s a lesson all of us need to learn.
Harari says the perils of AI were first revealed in 2016, when AlphaGo, an AI program created by DeepMind to play the ancient game of Go, did something unexpected. Go is a strategy board game in which two players try to defeat each other by surrounding and capturing territory. Invented in ancient China, the game is far more complex than chess. Consequently, even after computers defeated human world chess champions, experts still believed that computers would never defeat humans at Go. But on Move 37 of the second game against South Korean Go champion Lee Sedol, AlphaGo made a move no expert saw coming.
“It made no sense,” Mustafa Suleyman, one of the creators of AlphaGo, wrote later. “AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a ‘very strange move’ and thought it was ‘a mistake.’ Yet as the endgame approached, that ‘mistaken’ move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years.”
Move 37 & The Future Of AI
Move 37 is important to the AI revolution for two reasons, Harari says. First, it demonstrated the alien nature of AI. In East Asia, Go is considered much more than a game. It is a treasured cultural tradition that has existed for more than 2,500 years. Yet AI, being free from the limitations of human minds, discovered and explored previously hidden areas of the game that millions of humans had never considered. Second, Move 37 demonstrated the unfathomability of AI. Even after AlphaGo played it to achieve victory, Suleyman and his team couldn’t explain how AlphaGo decided to play it. Suleyman wrote, “In AI, the neural networks moving toward autonomy are, at present, not explainable. GPT‑4, AlphaGo and the rest are black boxes, their outputs and decisions based on opaque and impossibly intricate chains of minute signals.”
Traditionally, the term “AI” has been used as an acronym for artificial intelligence. But it is perhaps better to think of it as an acronym for alien intelligence, Harari writes. As AI evolves, it becomes less artificial, in the sense of depending on human designs, and more alien, in that it can operate separate and apart from human input and control. Many people try to measure and even define AI using the metric of “human-level intelligence,” and there is a lively debate about when we can expect AI to reach it. This metric is deeply misleading, Harari says, because AI isn’t progressing towards human-level intelligence; it is evolving an alien type of intelligence. In the next few decades, AI will probably gain the ability to create new life forms, either by writing genetic code or by inventing an inorganic code that animates inorganic entities. AI could alter the course not just of our species’ history but of the evolution of all life forms.
AI & Democracy
The rise of unfathomable alien intelligence poses a threat to all humans, Harari says, and a particular threat to democracy. If more and more decisions about people’s lives are made in a black box, so that voters cannot understand and challenge them, democracy ceases to function. Human voters may keep choosing a human president, but wouldn’t this be just an empty ceremony?
Computers are not yet powerful enough to completely escape our control or destroy human civilization by themselves. As long as humanity stands united, we can build institutions that will regulate AI, whether in the field of finance or war. Unfortunately, humanity has never been united. We have always been plagued by bad actors, as well as by disagreements between good actors. The rise of AI poses an existential danger to humankind, not because of the malevolence of computers but because of our own shortcomings, according to Harari.
A paranoid dictator might hand unlimited power to a fallible AI, including even the power to launch nuclear strikes. Terrorists might use AI to instigate a global pandemic. What if AI synthesizes a virus that is as deadly as Ebola, as contagious as Covid-19, and as slow-acting as HIV? In Harari’s scenario, by the time the first victims begin to die and the world becomes aware of the danger, most people could already have been infected.
Weapons Of Social Mass Destruction
Human civilization could also be devastated by weapons of social mass destruction, such as stories that undermine our social bonds. An AI developed in one country could be used to unleash a deluge of fake news, fake money, and fake humans, so that people in numerous other countries lose the ability to trust anything or anyone. Many societies may act responsibly to regulate such uses of AI, but if even a few societies fail to do so, that could be enough to endanger all of humankind. Climate change can devastate even countries that adopt excellent environmental regulations, because it is a global rather than a national problem. We need to consider how AI might change relations between societies on a global level.
Imagine a situation in the not too distant future when somebody in Beijing or San Francisco possesses the entire personal history of every politician, journalist, colonel, and CEO in your country. Would you still be living in an independent country, or would you now be living in a data colony? What happens when your country finds itself utterly dependent on digital infrastructures and AI-powered systems over which it has no effective control?
It is becoming difficult to access information across what Harari calls the “silicon curtain” that isolates China from the US, or Russia from the EU. The two sides of the silicon curtain increasingly run on different digital networks, using different computer code. In China, you cannot use Google or Facebook, and you cannot access Wikipedia. In the US, few people use leading Chinese apps like WeChat. More importantly, the two digital spheres aren’t mirror images of each other. Baidu isn’t the Chinese Google. Alibaba isn’t the Chinese Amazon. They have different goals, different digital architectures, and different impacts on people’s lives. Denying China access to the latest AI technology hampers China in the short term, but in the long term it pushes China to develop a completely separate digital sphere that will be distinct from the American digital sphere even in its smallest details.
For centuries, new information technologies fueled the process of globalization and brought people all over the world into closer contact. Paradoxically, information technology today is so powerful it can potentially split humanity by enclosing different people in separate information cocoons, ending the idea of a single shared human reality. For decades, the world’s master metaphor was the web. The master metaphor of the coming decades might be the cocoon, Harari suggests.
Mutually Assured Destruction
The cold war between the US and the USSR never escalated into a direct military confrontation, largely thanks to the doctrine of mutually assured destruction. But the danger of escalation in the age of AI is bigger because cyber warfare is inherently different from nuclear warfare. Cyber weapons can bring down a country’s electric grid, inflame a political scandal, or manipulate elections, and do it all stealthily. They don’t announce their presence with a mushroom cloud and a storm of fire, nor do they leave a visible trail from launchpad to target. That makes it hard to know if an attack has even occurred or who launched it. The temptation to start a limited cyberwar is therefore big, and so is the temptation to escalate it.
The cold war was like a hyper-rational chess game, and the certainty of destruction in the event of nuclear conflict was so great that the desire to start a war was correspondingly small. Cyber warfare lacks this certainty. Nobody knows for sure where each side has planted its logic bombs, Trojan horses, and malware. Nobody can be certain whether their own weapons would actually work when called upon. Such uncertainty undermines the doctrine of mutually assured destruction. One side might convince itself – rightly or wrongly – that it can launch a successful first strike and avoid massive retaliation. Even worse, if one side thinks it has such an opportunity, the temptation to launch a first strike could become irresistible because one never knows how long the window of opportunity will remain open. Game theory posits that the most dangerous situation in an arms race is when one side feels it has an advantage that is in imminent danger of slipping away.
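To make that game-theory point concrete, here is a minimal sketch of the “closing window” logic. It is my own illustration, not anything from Harari’s article, and every payoff and probability in it is hypothetical: it simply compares the expected value of striking now against waiting while a perceived advantage may evaporate.

```python
# A toy expected-value model of a closing "window of opportunity."
# All numbers are hypothetical, purely for illustration.

def expected_payoff_of_waiting(p_window_open: float,
                               payoff_strike_later: float,
                               payoff_parity: float) -> float:
    """Expected payoff of waiting one more round: with probability
    p_window_open the advantage survives and a strike is still possible;
    otherwise the advantage is gone and the sides are back at parity."""
    return p_window_open * payoff_strike_later + (1 - p_window_open) * payoff_parity

payoff_strike_now = 10.0   # perceived gain from a successful first strike
payoff_parity = 0.0        # no gain once the advantage has slipped away

for p in (0.9, 0.5, 0.1):
    wait = expected_payoff_of_waiting(p, payoff_strike_now, payoff_parity)
    print(f"P(window stays open) = {p:.1f}: "
          f"strike now = {payoff_strike_now:.1f}, wait = {wait:.1f}")
```

As the perceived probability that the window stays open falls, the expected value of waiting collapses toward parity while the payoff of striking now stays fixed, which is exactly the instability Harari describes: the side that believes its advantage is slipping away feels the strongest pressure to act immediately.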
Even if humanity avoids the worst case scenario of global war, the rise of new digital empires could still endanger the freedom and prosperity of billions of people. The industrial empires of the 19th and 20th centuries exploited and repressed their colonies, and it would be foolhardy to expect new digital empires to behave much better. If the world is divided into rival empires, humanity is unlikely to cooperate to overcome the ecological crisis or to regulate AI and other disruptive technologies such as bioengineering and geoengineering.
The division of the world into rival digital empires dovetails with the political vision of many leaders who believe that the world is a jungle, that the relative peace of recent decades has been an illusion, and that the only real choice is whether to play the part of predator or prey. Given such a choice, most leaders would prefer to go down in history as predators and add their names to the grim list of conquerors that unfortunate pupils are condemned to memorize for their history exams. These leaders should be reminded, however, that there is a new alpha predator in the jungle.
The Takeaway
“If humanity doesn’t find a way to cooperate and protect our shared interests, we will all be easy prey to AI,” Harari concludes. The results are unpredictable today, when AI is in its infancy, but Harari’s suggestion that we have created alien intelligence, not artificial intelligence, is significant. Humanity already has many examples of new technologies that altered the course of history. Nuclear weapons are a clear example, but so are things like the Boeing 737 Max, whose automated flight control software overrode its pilots and contributed to two crashes that killed hundreds of passengers.
Today, walled silos of information already exist. Fox News declined to broadcast the speeches made at the Democratic National Convention, so its viewers have no idea some Republicans openly oppose the candidacy of Donald Trump. Facebook, X, and YouTube use algorithms to steer people toward certain ideological content. Every day we move further away from the real world and toward an alternate reality that exists only in a digital cloud.
The digital technologies that were supposed to move us forward toward a collective human consciousness have instead fractured us into smaller and smaller subgroups. As AI improves, establishing communication between those subgroups may become an impossibility, with dire consequences for humanity — and all because of the consequences of Move 37 in a game of Go in 2016. If the gates of history truly do turn on tiny hinges, Move 37 may well have presaged the fate of the human species.