A Better Way To Look At AI Safety




Whether we knew it or not, CleanTechnica writers have been writing about AI safety for years. Concerns over robotaxi testing, a woman dying after being hit by an Uber test vehicle, and questions about Tesla’s Autopilot and FSD Beta (now FSD Unsupervised) were the big drivers of news and analysis here.

But as artificial neural networks continued to improve, many other uses for them emerged. Chatbots are the big one, but technologies like cheaper, more widespread license plate readers, which can track your every movement and sell the data to government agencies like ICE, are stoking controversy too.

Just as with the concerns raised by robotaxi testing, the conversation has made its way to Capitol Hill and to state legislatures. Some laws, like those banning the use of naked deepfake photos of someone for harassment purposes, have gone through. Other efforts to regulate the AI companies themselves have been a lot less successful.

The most recent analysis I’ve read on all of this comes from this podcast over at The Lever, so feel free to check it out for a deeper dive into the legislative wrangling.

The Most Common Approach To AI Regulation Today, & Why It Isn’t Up To The Task

When people talk about regulating AI, most commentators and legislators talk about regulating AI companies. Specifically, most want to mandate the building of safer AI models, hopefully with the ability to see what the AI is “thinking,” even if that means slowing AI development down.

This approach does have merit. If we have to wait a few more years for this technology, but massively lower the risk of it turning us all into paperclips, that’s a sane and good tradeoff.

This approach of having government watch AI companies to make sure they don’t do anything stupid does have limits, though.

At present, it’s just about impossible to hide a serious AI development effort from governments. Enormous data centers full of computer clusters are a bit too big to hide, and their enormous thirst for electrical power makes them easy to spot, since authorities already look for illegal drug-growing operations with similar power footprints.

But we have to keep in mind that calculators used to take up that kind of space and power. Since the earliest electronic computers came out, their size and cost have dropped almost unimaginably. Something that once took up a city block is now an app on your phone that takes up no extra physical space at all. And power requirements? Your phone can do more than every computer in the world combined could do decades ago, and it can run almost all day on a tiny battery.

As computer technology improves, the physical and power footprint of potentially dangerous AI experiments will continue to drop. There will come a point where guys living in their mom’s basement will be able to build something capable of mass destruction. And unlike with nuclear weapons, we won’t be able to count on the difficulty of obtaining fissile materials like uranium and plutonium to keep that capability out of reach.

I’m not arguing that we shouldn’t try to regulate the big players right now, but I do think that we should make this regulatory approach part of a much broader ecosystem of AI safety efforts.

Emergency Management Is A Mature Field We Should Be Consulting More

While the possible AI apocalypse is a novel threat, the deeper idea of a technological disaster is practically ancient. Both researchers and people working in the fields of emergency management and homeland security have been improving approaches to this problem since before the Bhopal Gas Disaster.

When we think of agencies like FEMA, we often focus on what happens after a disaster. We know that (until some recent budget cuts, at least) FEMA teams will sometimes even be on the ground, ready to help, before anybody needs rescuing.

Graphic by the Federal Emergency Management Agency (Public Domain).

What you may not know is that response and recovery efforts are just the tip of the iceberg when it comes to what FEMA, state agencies, and local emergency managers are doing. Working to prevent or mitigate the disaster before it even happens takes a lot of effort, and then preparing for the disasters that can’t reasonably be prevented takes more.

Obviously, we’d rather prevent or mitigate the effect of an AI disaster than deal with the aftermath, so let’s take a quick look at some concepts behind that.

For a disaster to happen, a hazard (something that can hurt people and property) has to come in contact with a vulnerability (people and property that could be harmed by the hazard). For example, a massive landslide that happens where no people live isn’t a disaster at all. But put a town full of humans and their homes in front of the landslide, and it’s going to make the evening news.
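For readers who like to see the idea written down, emergency managers often summarize this as risk being present only when a hazard, exposed people or property, and vulnerability all coincide. Here is a minimal, purely illustrative sketch in Python; the function name and the numbers are invented for the example and aren’t drawn from any official FEMA methodology:

```python
# Illustrative sketch of the hazard-vulnerability framing of risk.
# All names and values are hypothetical examples, not an official model.

def disaster_risk(hazard_likelihood: float, exposed_population: int,
                  vulnerability: float) -> float:
    """Risk only exists when a hazard, exposed people/property, and
    vulnerability all coincide; zero out any factor and risk goes to zero."""
    return hazard_likelihood * exposed_population * vulnerability

# A landslide in an uninhabited canyon: the hazard exists, but nobody is exposed.
print(disaster_risk(hazard_likelihood=0.8, exposed_population=0, vulnerability=0.9))      # 0.0

# The same landslide above a town full of homes makes the evening news.
print(disaster_risk(hazard_likelihood=0.8, exposed_population=5_000, vulnerability=0.9))  # 3600.0
```

Zero out any one factor, which is exactly what mitigation tries to do, and the risk drops to zero. That is the logic behind keeping hazards and vulnerabilities apart.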

Applying This To The Threat of AI Disaster

In addition to pushing the AI industry to build safer AI, we should be looking for other ways to keep the hazard (a rogue, unaligned AI program) from ever reaching human vulnerabilities. That way, even if a rogue AI emerges, it can’t cause a technological disaster.

One solid approach could be to ban the use of AI for critical infrastructure. Power generation, gas lines, water treatment plants, and many other important facilities shouldn’t be controlled by AI programs. Because most of these facilities are regularly inspected, it shouldn’t be especially difficult to detect unlawful use of AI in those safety-critical systems.

Another approach might be to limit the amount or types of non-critical resources that it’s lawful to put under the control of an AI program. If an AI program goes rogue, it shouldn’t have enough resources at its disposal to cause a serious problem. We might also be smart to deny AI access to things like dangerous chemicals, explosives, accelerants, and weapons.
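One way to picture both of these prevention ideas, keeping AI out of critical infrastructure and capping what any single AI program can control, is as a capability allowlist enforced outside the model itself. The sketch below is hypothetical; the resource names, categories, and limits are invented for illustration and don’t describe any real product or regulation:

```python
# Hypothetical sketch: an external "gatekeeper" decides what an AI agent may
# control, independent of anything the model itself wants to do.
# Resource names, categories, and limits are invented for illustration.

CRITICAL_INFRASTRUCTURE = {"power_grid", "gas_pipeline", "water_treatment"}
DANGEROUS_MATERIALS = {"explosives", "accelerants", "weapons", "toxic_chemicals"}

MAX_COMPUTE_NODES = 10   # arbitrary caps on non-critical resources
MAX_ROBOTS = 2

def may_control(agent_requests: dict[str, int]) -> bool:
    """Deny any request touching critical infrastructure or dangerous
    materials, and cap the quantity of everything else."""
    caps = {"compute_nodes": MAX_COMPUTE_NODES, "robots": MAX_ROBOTS}
    for resource, amount in agent_requests.items():
        if resource in CRITICAL_INFRASTRUCTURE or resource in DANGEROUS_MATERIALS:
            return False
        if amount > caps.get(resource, 0):
            return False
    return True

print(may_control({"compute_nodes": 4}))                # True
print(may_control({"power_grid": 1}))                   # False: critical infrastructure
print(may_control({"compute_nodes": 4, "robots": 50}))  # False: over the cap
```

The point of the sketch is that the limits live outside the AI program, so a rogue model can’t simply decide to ignore them.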

I’m sure readers will come up with more ways to mitigate the AI threat even if we can’t guarantee no rogue AI program ever emerges.

With these and other mitigation efforts in place, we can move on to preparedness. This alone could be ten articles, so instead of fully exploring ways to prepare for rogue AI programs, I’ll just leave some scenarios to serve as food for thought and discussion.

  • Do state regulators have the guts to pull a robotaxi or dealer license if the systems are unsafe or not working as advertised?
  • Are schools and large businesses prepared for a future mass shooter who might send in a DIY armored killer robot instead of doing it himself?
  • Are critical infrastructure sites prepared to safely shut down the equipment if an AI-equipped computer virus gets planted in the control systems?
  • Have AI companies made plans for how they’d deal with the situation if artificial general intelligence is unexpectedly achieved?
  • Are cities prepared for a group of hacked robotaxis to speed into a mass gathering event?

These scenarios might seem outlandish today, but the proper time to think them through is now, not when we’re in the middle of dealing with one.

Featured image: a terminator robot chasing humans. AI-generated for maximum irony and slop.


