Some stories just beg to be written at a particular time. At midnight on Friday, the US Congress went into full-fledged meltdown mode. Despite having complete control of the House, the Senate, and the presidency, Republicans were unable to agree on how to keep the government functioning one minute longer, and so they shut it down. The recriminations and finger pointing have already begun, but the interesting bit is that new research indicates machines imbued with AI (artificial intelligence) may be better than humans at reaching compromises that benefit all stakeholders.
The research was conducted by Brigham Young University professors Jacob Crandall and Michael Goodrich, with input from academics at other international universities, including MIT. The results, published in Nature Communications on January 16, show that machines can at times learn how to compromise and cooperate more effectively than humans. In the introduction to their report, the authors write:
The emergence of driverless cars, autonomous trading algorithms, and autonomous drone technologies highlight a larger trend in which artificial intelligence is enabling machines to autonomously carry out complex tasks on behalf of their human stakeholders. To effectively represent their stakeholders in many tasks, these autonomous machines must interact with other people and machines that do not fully share the same goals and preferences.
While the majority of AI milestones have focused on developing human-level wherewithal to compete with people or to interact with people as teammates that share a common goal, many scenarios in which AI must interact with people and other machines are neither zero-sum nor common-interest interactions. As such, AI must also have the ability to cooperate even in the midst of conflicting interests and threats of being exploited. Our goal in this paper is to better understand how to build AI algorithms that cooperate with people and other machines at levels that rival human cooperation in arbitrary long-term relationships modeled as repeated stochastic games.
Professor Crandall says: “The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills. AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”
The researchers developed an algorithm they call S#. They then used machines programmed with that algorithm in a series of two-player games to see how good they would be at cooperating. The games involved machine-to-machine, machine-to-human, and human-to-human interactions. More often than not, the machines did a better job of finding compromises that benefited both parties.
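The paper itself does not reduce S# to a few lines of code, but the basic setting — repeated two-player games where cooperation can emerge or collapse — is easy to sketch. The snippet below is an illustrative toy only: it plays a repeated prisoner's dilemma using the classic tit-for-tat strategy (a much simpler stand-in, not the S# algorithm), with payoff values chosen here for illustration.

```python
# Illustrative sketch: a repeated prisoner's dilemma with simple strategies.
# This is NOT the S# algorithm from the paper, just a minimal example of
# how cooperation can emerge (or fail) in repeated two-player games.

PAYOFFS = {  # (my_move, their_move) -> my payoff; "C" cooperate, "D" defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Betray on every round, regardless of history."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy sees only the opponent's past moves."""
    hist_a, hist_b = [], []  # opponent-move histories for players A and B
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Two reciprocating players settle into mutual cooperation...
print(play(tit_for_tat, tit_for_tat))   # (30, 30)
# ...while a persistent defector drags both scores down.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The contrast in the two outcomes mirrors the article's point: sustained cooperation pays better for both sides than exploitation, which is what the researchers found their algorithm learned to maintain.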
“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” Crandall says. “As it is, about half of the humans lied at some point. So, essentially, this particular algorithm is learning that moral characteristics are good. It’s programmed to not lie, and it also learns to maintain cooperation once it emerges.”
Could machines teach us how to be better humans? Crandall thinks so. “In society, relationships break down all the time,” he says. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.” Does this raise the possibility that machines could do a better job of governing? That is certainly an intriguing question. The most recent actions of the US Congress suggest that machines could not possibly do worse.
An interesting twist to this research is what some might call “the Trump factor.” The AI bots did not have Twitter accounts they could use to send out tweets designed to humiliate opponents, but they were given the ability to employ trash talk in their deliberations. During the tests, they could reward their human counterparts with phrases like “Sweet. We are getting rich!” or “I accept your last proposal,” if the humans cooperated.
But if the human actors attempted a betrayal or backed out of a deal, they could respond with things like “Curse you!”, “You will pay for that!”, or “In your face!” Interestingly, whether the machines were playing against each other or with a human, employing trash talk doubled the amount of cooperation in the games. So, apparently, insulting your opponent by telling them they come from a “shithole country” may actually be an effective negotiating strategy. Who knew?
Professor Crandall says he hopes the research will have long-term implications for interpersonal relationships between humans. He’s probably right, but what those implications are remains to be seen. Certainly, the existential challenges created as climate change alters the habitability of certain regions of the world will require an unprecedented level of cooperation among those affected. Perhaps saving humanity from itself is too important a task to be left to humans?
A world government composed of machines programmed with the S# algorithm could be just what the earth needs to survive. Replacing Congress with bots might provide useful insights into how such a system might work. In the interests of making America great again — or at least not acting like a prepubescent teenager — that experiment should begin as soon as possible.