What Do Steve Bannon & Bernie Sanders Have In Common? Opposition To Artificial Intelligence
Steve Bannon and Bernie Sanders are at opposite ends of the political spectrum. It seems there is only one thing they agree on — artificial intelligence is dangerous. A majority of Americans agree. Quinnipiac University, well known for its polling prowess, summarized the results of a recent poll about AI this way: “Americans’ AI Use Increases While Views On It Sour. 7 In 10 Think AI Will Cut Jobs, With Gen Z The Most Pessimistic.” The opening paragraphs of its report put the poll results front and center:
“As artificial intelligence continues to leap from concept to reality in just about everything we do, an increasing number of Americans see more harm than good when it comes to AI’s impact on their daily lives and education and they are divided about its impact on health care.
“Trust in AI remains low. A slight majority say the pace of AI’s development is faster than they expected and there is more concern than excitement about AI. Those concerns are apparent in views related to AI’s use in the workforce, politics, the military, and AI data centers.”
In an interview with the New York Times, Sanders, the fiercely independent senator from Vermont, said: “Given AI and robotics are going to impact every man, woman and child in this country, one might think that there’d be a massive debate in the United States Congress. What does it mean? Where do we go? How do we deal with it? There has been minimal, minimal discussion.” Sharp-eyed readers will note that Congress has yet to perform its constitutionally mandated role with regard to the illegal and thoroughly disastrous war on Iran, so its failure to take on the challenge of AI technology should come as no surprise.
In January, Bannon said on a podcast: “There’s not clarity, there’s not transparency, and there’s certainly not accountability. That’s why you’ve seen not just interest but building anger of working class people.” Davis Ingle, a White House spokesperson, said in a statement: “It is the policy of the Trump administration to sustain American AI dominance to protect our national security and ensure we remain the world’s leading economy.”
That is in line with the leader of the MAGA movement being in love with cage fighting. The coward who shirked his duty to his country during the Vietnam War continues to adopt the posture of a hyper-masculine fighter — a sure sign that he has serious doubts about his own manhood. The prior occupant of the Oval Office actually did something to protect Americans, not just issue some word salad.
The New York Times claims people of every political persuasion are joining forces to oppose artificial intelligence and the data centers that support the technology. Their worry is that tech companies are more focused on cashing in on AI than on how it will affect ordinary people. They also share a sense that all that money will flow into the hands of Silicon Valley’s ultra-wealthy tech bros while the middle and working classes shoulder the financial burdens.
John Oliver On Artificial Intelligence
This week, John Oliver echoed that theme on his TV show Last Week Tonight. “The explosion of chatbots is no accident. Developing the large language models that power them was a massive investment and companies needed to start showing a return on it.” Artificial intelligence companies command big investments, he said, and they “are anxious for them to start bringing in revenue. And one of the key ways they can do that is to make people keep coming back to chat to the bots, and for longer.”
Oliver referred to one researcher from Meta’s so-called “responsible AI” division who said, “The best way to sustain usage over time, whether number of minutes per session or sessions over time, is to prey on our deepest desires to be seen, to be validated, to be affirmed.”
To which Oliver replied, “If that is already making you feel a bit uneasy, you are not wrong. Because the more you look at chatbots, the more you realize that they were rushed to market with very little consideration for the consequences.” He quoted Character.ai CEO Noam Shazeer, who said that AI “friends” were able to be introduced “really fast” because “it’s just entertainment, it makes things up. That’s a feature. It’s ready for an explosion like, right now, not like in five years when we solve all the problems.”
“It’s already not a great sign that he’s describing untested AI with what sounds like a failed slogan for the Hindenburg,” Oliver quipped. “Because the thing about not waiting until you’ve solved all the problems with your product is you’re then launching a product with a shit-ton of problems.
“In general, it is good to remember that however much an app may sound like a friend, what it is, is a machine,” he said. “And behind that machine is a corporation trying to extract a monthly fee from you. And that kind of sums up for me what is so dystopian about all this, because while that guy you saw earlier said that selling AI friends is low risk because they’re just entertainment, that’s not actually how friends work. Friends can be the most important figures in your life.
“True friends know when to listen, when to gently push back and when to worry about you,” he concluded. “And in hindsight, maybe it was a mistake to let some of the most flamboyantly friendless men on Earth be in charge of designing friends for the rest of us.” Tell it like it is, John!
The Government Opposes State Regulations
Many critics of artificial intelligence say they are far from being Luddites just having a bad reaction to new, scary technology. They believe that people in Washington, especially the so-called president, are protecting Silicon Valley rather than reining it in. They want regulation, or at least a debate, before AI becomes entrenched in American life.
But the administration is suing states that have dared to pass legislation to protect their citizens. Talk about government overreach! This week the administration intervened in a lawsuit filed by Elon Musk’s xAI that is challenging a Colorado law designed to protect citizens of that state from exploitation by AI.
The Justice Department said the law violated the 14th Amendment’s equal protection guarantee by requiring companies to guard against unintended discriminatory effects while allowing some discrimination aimed at promoting diversity. “Laws that require AI companies to infect their products with woke DEI ideology are illegal,” Harmeet Dhillon, the assistant attorney general for civil rights, said in a statement reported by The Guardian.
The White House’s policy framework for artificial intelligence, issued last month, calls on AI providers to protect children. This year, the putative president issued a proclamation that said tech companies “must pay for the full cost of the energy and infrastructure needed to build and operate data centers.”
Known Dangers
Even in the early days of the artificial intelligence phenomenon, industry leaders like Elon Musk, OpenAI’s Sam Altman, and Anthropic’s Dario Amodei frequently warned that AI was a risk to jobs and could have unforeseen, even dangerous consequences. “If this technology goes wrong, it can go quite wrong,” Altman told lawmakers in 2023.
AI’s reputation with the public hasn’t been helped by the social media era that preceded it. Social media, despite its wild popularity, has been criticized for heightening political polarization and worsening mental health. In March, a jury in Los Angeles found Meta and YouTube responsible for creating an addictive product that harmed a young user. The two companies, which together make more than $50 billion in profit each quarter, were fined $6 million. A jury in a separate trial in New Mexico ordered Meta to pay $375 million in damages for failing to protect young users from sexual predators.
The news that tech companies are slashing their workforces makes AI critics nervous. Despite all the chest thumping about how AI will create jobs, last week Meta said it was cutting 10 percent of its workers, while Microsoft targeted up to 7 percent of its veteran employees in the United States with buyout offers. Nationwide, tech jobs declined by about 150,000 from 2022 through 2025, according to data from the Census Bureau. 2022 to 2025 just happens to parallel the timeline for AI development. Coincidence? We don’t think so.
Bribing Politicians Is the AI Industry Model
So far, the industry’s most notable response has been to pour hundreds of millions of dollars into super PACs that target lawmakers who dare to question the value of AI. The industry has also downplayed the backlash as a product of paranoia peddled by so-called “AI doomers,” who worry the technology could destroy humanity, and NIMBYs, or not-in-my-backyard activists.
The antipathy to new ideas is a staple of modern life. There are groups opposed to wind turbines, solar panels, grid-scale battery storage, and electric cars. The term sabotage refers back to factory workers who threw their wooden shoes — known as sabots — into the gears of machinery to protest the mechanization of production. The tobacco companies perfected the art of delaying policies that were a disadvantage to the industry, and others have learned those lessons well. Today, NIMBY has given way to a new acronym — BANANA — which stands for Build Absolutely Nothing Anywhere Near Anyone.
The upshot of all this is that a significant number of people are opposed to AI and they are organizing in new ways to push back against the spread of this new technology. As a result, people of different political beliefs are finding they have a common interest. At its core, the antipathy to AI is as much about people feeling they are losing control of their personal lives to computers as it is about the fear that they will be forced to pay for the disruption to their lives. Anti-AI animus is poised to become a political force, one that may impact US elections this coming November.