AI "Dangers" Are Often Just Bad Design
Over the past month, I've been diving into these "AI Danger" stories, and I'm noticing a trend: the people telling them often lack industry experience, and the stories often lack context.
My short take: If AI has a bad impact, it's due to bad design, not bad AI.
Video that pushed me to post this:
https://www.youtube.com/watch?v=jITKnb0tYaM
Example from the Video: A Water Treatment Plant
In the video, Johnny talks about a sensor being bumped, leading to bad water being sent to a town.
The article he cites as his source actually explains that AI is not about replacing humans but about supporting them, which contradicts his cautionary tale:
The transition to AI-driven utilities is about augmenting human expertise, not replacing it.
I'm not saying the problem couldn't happen, but that's not bad AI—that's bad design. Even without AI, sensors and systems running on software make decisions based on set points, and if those fail, redundancy measures should be in place.
Here's some prior art:
- Sensor failure: https://youtu.be/zRM2AnwNY20?feature=shared&t=463
- Boeing 737 MAX redundancy failure: https://www.seattletimes.com/business/boeing-aerospace/a-lack-of-redundancies-on-737-max-system-has-baffled-even-those-who-worked-on-the-jet/
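To make the redundancy point concrete, here's a minimal sketch of how a control system can cross-check redundant sensors before acting, with or without AI in the picture. The sensor count, pH range, and function names are all hypothetical; this is illustrative, not a real water-treatment implementation.

```python
# Minimal redundancy sketch: hypothetical sensors and set points,
# not a real water-treatment implementation.
from statistics import median

SAFE_PH_RANGE = (6.5, 8.5)  # assumed acceptable set points

def cross_checked_ph(readings: list[float]) -> float:
    """Take the median of redundant sensors so one bumped sensor can't dominate."""
    if len(readings) < 3:
        raise RuntimeError("Not enough redundant sensors online; hold and alert an operator")
    return median(readings)

def safe_to_release(readings: list[float]) -> bool:
    """Only release water if the cross-checked reading is inside the set points."""
    low, high = SAFE_PH_RANGE
    return low <= cross_checked_ph(readings) <= high

# One bumped sensor reporting 2.0 is outvoted by the other two.
print(safe_to_release([7.1, 7.2, 2.0]))  # True
# If the cross-checked value really is out of range, the release is blocked.
print(safe_to_release([2.0, 2.1, 7.2]))  # False -> hold and alert an operator
```

This is the kind of safeguard that has to exist whether the set points come from a lookup table or a model.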
I work in AI and develop AI implementations for companies and governments as a consultant. Not one project I've been involved with has had AI making decisions autonomously. AI is used to augment human decision-making, and we always rely on humans to validate the AI's output (we call this "human in the loop").
I'm not an expert in water treatment, but I am an expert in integrating AI into systems. If a water plant were making AI-driven decisions without redundant checks or human confirmation, that wouldn't be an "AI danger"; it would be bad design.
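To show what "human in the loop" looks like in practice, here's a minimal sketch; every name in it (propose_adjustment, apply_adjustment, the chlorine example) is hypothetical. The structural point is that the AI only proposes, and nothing executes without a person's sign-off.

```python
# Minimal human-in-the-loop sketch: the model proposes, a person approves.
# All names here are hypothetical, not a real plant-control API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    action: str
    rationale: str

def propose_adjustment(sensor_summary: str) -> Proposal:
    # Stand-in for a model call that drafts a recommendation.
    return Proposal(action="reduce chlorine dosing by 5%",
                    rationale=f"Based on: {sensor_summary}")

def apply_adjustment(proposal: Proposal, approved_by: Optional[str]) -> str:
    # The gate: no human sign-off, no action.
    if not approved_by:
        return "Held for review: " + proposal.action
    return f"Applied '{proposal.action}' (approved by {approved_by})"

p = propose_adjustment("chlorine slightly above set point for 30 minutes")
print(apply_adjustment(p, approved_by=None))        # Held for review
print(apply_adjustment(p, approved_by="operator"))  # Applied (approved by operator)
```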
Also, I think we're conflating "computers doing a thing" with "AI" a bit here, but that's another discussion entirely.
What Can Happen with AI...
One real concern with AI—especially LLMs—is that as their output quality improves, we risk developing a false sense of security.
This is similar to using driver-assist tools on the freeway: once they work well enough, people trust them too much.
But that doesn't eliminate the need for human oversight.
I've already seen code quality issues from LLM-generated code:
https://www.techrepublic.com/article/ai-generated-code-outages/
But as the article points out, the issue is in the review process.
I've also had interns write bad code that we caught and corrected—yet I don’t see many "dangers of interns" articles or videos.
Developers still need to be held accountable for the code they submit. When AI tools are used irresponsibly and review processes fail, that’s when bad code makes it into production.
"Autonomous Decision-Making"—A Scary Phrase Without Context
I came across a phrase today that I suspect might be fueling some of these fear-driven articles:
"Autonomous decision-making"
Without context, this sounds scary.
Typically, when we talk about AI making decisions, we mean giving it a set of tools and allowing it to determine which tool to use based on context.
For example, an AI system might:
- Search YouTube when prompted: "Find me videos about cheese curds."
- Query a database when asked: "What was the company's revenue last quarter?"
These are called "tools," and in AI architectures, multiple AI components may decide which tool to use—hence the term "autonomous decision-making."
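As a rough sketch of what that tool selection looks like in code (the tool names and the routing logic below are simplified assumptions, not any particular framework's API):

```python
# Simplified tool-routing sketch: the "autonomous decision" is just choosing
# which registered function to call. Names and routing logic are assumptions.
from typing import Callable

def search_youtube(query: str) -> str:
    return f"[video results for '{query}']"

def query_revenue(question: str) -> str:
    return "[revenue figures from the finance database]"

# Only tools that engineers explicitly register are available to the AI.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_youtube": search_youtube,
    "query_revenue": query_revenue,
}

def route(prompt: str) -> str:
    # Stand-in for the model deciding which registered tool fits the prompt.
    tool_name = "query_revenue" if "revenue" in prompt.lower() else "search_youtube"
    return TOOLS[tool_name](prompt)

print(route("Find me videos about cheese curds."))
print(route("What was the company's revenue last quarter?"))
```

Note the registry: the model picks from it, but only engineers can add to it.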
Now, this could extend into allowing an AI to perform real-world actions.
These "tools" could include controlling chemical mixtures in a water treatment plant—but that would have to be explicitly engineered that way.
It's technically possible, but it's highly unlikely that a system would be built that way without safeguards and review.
All systems are engineered, and the higher the risk, the more stringent the review process should be.
The Boeing 737 MAX redundancy failure is a prime example of what happens when this process breaks down.
It's not entirely out of the question that AI could cause issues, but if it does, the root cause will likely be bad design, not bad AI.