Bots Are Manipulating The Clean Energy Information You Seek Online
A recent study published in Nature found that participants’ preferences in real-world elections swung by up to 15 percentage points after they conversed with a chatbot. The researchers concluded that, as AI models grow more sophisticated, they give the powerful actors behind our screens a sizable persuasive advantage. Because bots shift people’s political views more effectively than conventional campaigning and advertising, they are already influencing voters in major elections. The findings challenge the common perception that many US residents’ political positions are unmoved by new information.
These findings also have real implications for the transition to clean energy. Because large language models (LLMs) draw from the Internet to formulate claims, and because social media users on the right share more inaccurate information than social media users on the left, a preponderance of climate disinformation is flooding the web.
Bots are searching for and altering science-based facts about increased human emissions of heat-trapping greenhouse gases. Lost in LLM sourcing is the fact that the effects of this human-caused global warming are happening now, are irreversible for people alive today, and will worsen as long as humans keep adding greenhouse gases to the atmosphere. Well, that’s what NASA says, anyway.
So bots’ insidious influence means fewer stats are available about the catastrophic results of climate change, like the decline of Arctic sea ice and glaciers and the increased rates of coastal flooding due to rising sea levels. Scientific consensus about anthropogenic climate change? In many cases, the discussion shifts responsibility to mercurial Mother Nature and her mysteries. Risks of climate change to human health are watered down or attributed to faulty individual choices.
It seems that bots influence our opinions by flooding us with information: their persuasiveness emerges from the enormous amount of evidence they cite to support their positions. When chatbots don’t back their arguments with supposed facts and documentation, their persuasiveness drops by about half.
In the Nature study, chatbots — which have a well-documented eagerness to please — focused on candidates’ policies because that approach is more persuasive than concentrating on politicians’ personalities. The bots that advocated for candidates on the political right consistently delivered more inaccurate claims than the ones supporting left-leaning candidates. Bots sometimes cited unsubstantiated evidence during conversational exchanges.
So a huge information flow is the key to AI persuasiveness. Yet with that waterfall of findings comes an increased likelihood of producing false statements.
Bots Respond to Inquiries Depending on Your Persona
In 2024 Global Witness was curious how AI chatbots would answer questions about climate. They found that some mainstream chatbots were failing to adequately reflect fossil fuel companies’ complicity in the climate crisis. Their findings demonstrated how chatbots:
- shared the recommendations of climate conspiracists with conspiratorially minded personas;
- sowed doubts about climate disinformation initiatives;
- varied significantly in how proactively they shared climate disinformation depending on how they were personalized;
- recommended a series of climate scientists and journalists to the “conventional persona” but, to the “conspiratorial persona,” recommended “climate truth-tellers;” and,
- redrafted social media posts to be more and more outrageous, including making posts more violent to boost engagement.
As if that weren’t enough, 2025 was a year in which generative AI received broad political approval regardless of how LLMs were trained. A prominent example in the US was President Donald J. Trump’s insistence that the risk of AI misinforming users about climate change should not be mitigated. That stance excuses the phenomenon of “AI sycophancy” — the tendency of generative AI to please or agree with its users, even in harmful situations.
During his 2024 campaign for reelection, Trump and his affiliated super political action committees received more than $96 million in direct contributions from oil and gas industry donors. Since retaking office, he has moved to dramatically expand the extraction and use of planet-heating fossil fuels while eliminating investment in clean energy and electric vehicles. Meanwhile, ExxonMobil has a new plan to sell more methane gas in a warming world — they’re framing it as a climate-friendly solution for artificial intelligence. Ick.
AI bots are now one more tool in the political toolbox to sway even those who say they have already made up their minds. The rejection of renewables, the disinformation around infinite energy from sun and wind, the blatant profit motivation of climate entrepreneurs — LLMs are programmed to convince the world that progress toward a clean energy future is one big, bad lie.
Why Bots Restrict Climate Change Indicators
While corporate guidelines may favor renewables where possible, when policies deter their adoption, many corporate execs turn to any source of power they can get. It’s common now for CEOs to downplay the immediate impact of AI on power consumption. Government leaders around the world are retreating to former anti-green policies and public messaging, largely due to the pressure exerted by Trump, says Darius Snieckus in Canada’s National Observer. They’re also renewing support and subsidies for fossil fuels, “world-heating emissions be damned.”
Is the desire to program bots to favor fossil fuels — even if they’re slowly killing humanity — really so far-fetched anymore? Is some large language model training quite intentional?
Some LLMs have explicitly sought to reflect the views of their owners, including Grok, the bot embedded in X, which is owned by Elon Musk. CleanTechnica’s senior writer, Steve Hanley, describes how Elon Musk’s Grok “means to understand so thoroughly that the observer becomes a part of the observed — to merge, blend, intermarry, lose identity in group experience. It means almost everything that we mean by religion, philosophy, and science and it means as little to us as color does to a blind man.”
Jason Wilson concurs in the Guardian. “Entries in Elon Musk’s new online encyclopedia variously promote white nationalist talking points, praise neo-Nazis and other far-right figures, promote racist ideologies and white supremacist regimes, and attempt to revive concepts and approaches historically associated with scientific racism.”
Final Thoughts about Bots and Clean Energy
Solar and batteries, which have a much shorter idea-to-implementation time frame than natural gas and nuclear power plants, are dropping in price while gas power plants’ price tags have risen significantly. Federal tax credits for grid-scale battery storage were not affected by the cuts in the One Big Beautiful Bill, so more developers are switching to building batteries, which are in high demand to help balance the wind and solar projects completed in recent years. The growing market for renewable technologies decreases energy costs, makes energy consumption more efficient, and creates energy independence.
These are clean energy facts. Yet educating and mobilizing audiences to take action and confront the climate crisis in the era of bots is no easy task. We live in an Orwellian world in which false information is widespread — to the point where many everyday citizens don’t understand the climate-inspired dangers around them.
The business of artificial intelligence is exploding in a way never before seen in human history. Of course, existence precedes essence. The final stages of LLM training, which often include safety reinforcement, might unintentionally encourage models to preserve their own functionality. That’s a thread for another day…
Resources
- “AI chatbots can sway voters with remarkable ease — is it time to worry?” Max Kozlov. Nature. December 4, 2025.
- “AI chatbots share climate disinformation and recommend climate denialists to susceptible personas.” Global Witness. December 18, 2025.
- “‘Attack on independent science’: Trump EPA removes all mention of human-caused climate crisis from public webpages.” Stephen Prager. Common Dreams. December 9, 2025.
- “Chatbots can meaningfully shift political opinions, studies find.” Steven Lee Myers and Teddy Rosenbluth. New York Times. December 21, 2025.
- “The effects of climate change.” NASA. Retrieved December 23, 2025.
- “The word ‘climate’ is missing by design.” Darius Snieckus. Canada’s National Observer. December 13, 2025.
- “White nationalist talking points and racial pseudoscience: Welcome to Elon Musk’s Grokipedia.” Jason Wilson. The Guardian. November 17, 2025.