Published on August 14th, 2017 | by Steve Hanley
Artificial Intelligence More Dangerous Than North Korea, Elon Musk Tweets
We would expect Elon Musk to be a champion of artificial intelligence. After all, it is the cornerstone of the autonomous driving system known as Autopilot that is featured in Tesla automobiles. But he has been warning about the potential dangers of AI since 2014, when he called it the “biggest existential threat” to humanity ever known. How can someone be a champion of new technology he finds so potentially dangerous? Easy — Musk is not constrained by conventional thinking. His ability to see not only both sides of a coin but also the edge and what’s inside is legendary.
If you're not concerned about AI safety, you should be. Vastly more risk than North Korea. pic.twitter.com/2z0tiid0lc
— Elon Musk (@elonmusk) August 12, 2017
Musk Calls For AI Regulation
In 2015, Musk co-founded OpenAI, whose mission is to develop artificial intelligence “in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” His involvement in AI research is partly to “keep an eye on what’s going on” in the field.
Last month, during a wide-ranging presentation to the National Governors Association, Musk called for more government regulation of artificial intelligence “before it’s too late.” Last week, a bot built by OpenAI defeated top professional players in 1v1 matches of the multiplayer online battle arena game Dota 2 at The International tournament. The OpenAI program was able to predict where its human opponents would deploy forces and improvise on the spot, in a game where sheer speed of operation alone does not determine victory. In other words, the OpenAI entrant was simply better, not just faster, than the best human players. After the victory, Musk tweeted:
Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too.
— Elon Musk (@elonmusk) August 12, 2017
Asked how one regulates something that can be done by a single hacker operating at midnight from a corner of his mother’s basement, Musk replied, “It is far too complex for that. Requires a team of the world’s best AI researchers with massive computing resources.”
Is AI A Threat?
The public was exposed to the potential for abuse of artificial intelligence in the 2002 movie Minority Report, based on a short story by Philip K. Dick. That film was set in the year 2054 (the same year in which all cars will supposedly be electric), but the predictive power demonstrated by OpenAI’s game-playing software is already a part of American culture.
In 2004, tech guru Peter Thiel, who worked with Elon Musk at PayPal, founded Palantir together with Nathan Gettings, Joe Lonsdale, Stephen Cohen, and Alex Karp. Lord of the Rings fans may remember the palantír as a “seeing stone” that allowed Saruman to observe events unfolding far away.
The name literally means “one that sees from afar.” Some may associate a palantír with the all-seeing telescreens that featured prominently in George Orwell’s 1984, while others may see a link to Jeremy Bentham’s Panopticon, the surveillance prison design he promoted near the end of the 18th century.
According to The Guardian, “Palantir watches everything you do and predicts what you will do next in order to stop it. As of 2013, its client list included the CIA, the FBI, the NSA, the Centers for Disease Control, the Marine Corps, the Air Force, Special Operations Command, West Point, and the IRS. Up to 50% of its business is with the public sector. In-Q-Tel, the CIA’s venture arm, was an early investor.” The other half of its business is with Wall Street hedge funds and investment banks.
Predicting Human Behavior
Palantir is working closely with police in Los Angeles and Chicago. Some may applaud its ability to predict which people represent a danger to society, but critics contend that such data-mining techniques merely reinforce stereotypes some segments of law enforcement already hold, especially when it comes to black males. An officer who comes into contact with someone Palantir has labeled a threat is likely to behave differently than if that person had not been pre-targeted by an algorithm.
Even more troubling is a firm known as Cambridge Analytica, formed in 2013 specifically to use data mining to influence elections. Its principal backer, Robert Mercer, and Peter Thiel are both staunch supporters of Donald Trump. Both organizations operate in extreme secrecy bordering on paranoia. It is no stretch of the imagination to suggest that Cambridge Analytica may have had as much influence on the results of the last presidential election as the alleged Russian interference in the campaign.
Certainly, organizations that exert so much control over federal, state, and local governments cry out for oversight. Musk’s call for more regulation of AI will be vehemently opposed by Palantir, Cambridge Analytica, and any other companies looking to make a buck from compiling data and selling their conclusions to those in power.
Musk’s concerns were foreshadowed in the movie I, Robot, starring Will Smith, in which an artificial intelligence seeks to take control of society. It is only a movie, of course, but it raised disturbing questions about what the future may hold as machines become ever more sophisticated.
Recently, Musk got into a public spat with Facebook founder Mark Zuckerberg over the dangers of AI. Musk characterized Zuckerberg as someone with “limited” understanding of the subject after Zuckerberg accused Musk of scaremongering about the dangers of artificial intelligence. But when it comes down to the nitty gritty and society needs authoritative, practical advice about AI, who you gonna call — Musk or Zuckerberg? Exactly.
Follow CleanTechnica on Google News.