As LLMs get embedded deeper into developer workflows- writing code, generating configs, summarizing incidents, teams are moving fast to adopt AI without fully understanding what it breaks. The tools feel great at first, but behind the productivity boost lies a growing set of blind spots. These are being overlooked by most development teams. Their effectiveness is becoming more apparent.
The security surface is changing. Old attack patterns are resurfacing in new form. And most teams are shipping AI features without the right architecture, oversight, or even a basic threat model.
This episode of the AI x DevOps podcast tackles these issues head-on. Rohit, Co-founder of Facets, spoke with Nathan Hamiel, Director of Research at Kudelski Security and the AI/ML track lead at Black Hat. With decades in offensive security and recent focus on AI threats in production systems, Nathan shared a grounded, technical view of how AI is actually performing and where itâs quietly introducing risk.
What youâll learn:
-
Where AI-generated code adds overhead instead of saving time
-
How prompt injection works and why it's difficult to fix at the model level
-
How to incorporate LLMs into security tooling without breaking trust boundaries
-
âWhy architectural decisions matter more than model accuracy in production setups
Key Takeaways:
- AI shifts development effort downstream; saved time in coding often leads to more time in integration and debugging.
- Prompt injection remains a core vulnerability.
- Most vulnerabilities in AI tools stem from poor containment, missing guardrails, or overexposed data.
- Using LLMs to augment deterministic tools like fuzzers or static analysis works well, when scoped carefully.
- âRBAC, execution isolation, and sandboxing are critical for AI agents that trigger actions or access internal systems.
- Risk must be evaluated per use case.
Building with AI? Start with a Secure Mindset
Weâre in a phase of AI euphoria. LLMs are being woven into products at record speedâwriting code, handling data pipelines, powering agents. The results are flashy. But behind the scenes, weâre shipping old problems at new velocity.
âWeâre seeing a monumental number of old school vulnerabilities coming back into production because of AI development.â â Nathan Hamiel, Director of Research, Kudelski Security"
This post isnât about future AI safety or speculation about AGI. Itâs about whatâs happening now- how security is quietly regressing in the rush to integrate AI.
1. Same Flaws, Shipped Faster
The most common issues weâre encountering arenât new. Theyâre not even unique to AI. But the speed and volume of AI-assisted development are resurfacing conventional vulnerabilities in bulk.
Security is about discipline. LLMs donât replace that. If anything, they require more of it because the illusion of productivity can mask whatâs going wrong underneath.
2. AI-Native Risks Still Matter
Prompt Injection
âSource: https://www.promptfoo.dev/blog/prompt-injection/
The flagship example: users embedding instructions in input that override system behaviour. Itâs hard to detect, harder to defend, and totally alien to traditional application threat models.
Hallucinated Dependencies
LLMs can invent package names. Malicious actors can register them. Itâs not widespread, but itâs a new class of exploit that didnât exist before.
These risks donât replace traditional ones but compound them.
3. Architect for Failure, Not the Demo
Many of the worst vulnerabilities Nathan's team found werenât clever exploits. They were architectural oversights:
-
Secrets stored alongside agents
-
Unrestricted internet access
-
No validation of what code gets executed
The antidote is architectural discipline. At Facets, weâve leaned into a deterministic pipeline model where the AI can assist in generating outputsâbut the actual execution path remains reviewable, predictable, and safe.
âWe donât let AI make production decisions. It generates outputs that pass through a deterministic pipeline with validations and context-aware safeguards.â â Rohit Raveendran"
This approach doesnât remove all risk; but it contains it. And thatâs the point.
4. Hallucinated Data: A Silent Threat
One overlooked risk isnât in the code. Itâs in the data. AI-generated output is already leaking into public datasets. Mistakes, inaccuracies, half-truthsâall of it gets scraped and recycled.
âThe hallucinated data of today may end up being the facts of tomorrow.â â Nathan Hamiel"
Once itâs embedded in training data, it becomes harder to correct. Garbage in, model out. âRead Nathan's blogâ
5. No, Weâre Not Seeing Exponential Progress
Thereâs a pervasive belief that AI progress is exponential. The tooling feels impressiveâuntil you scratch beneath the surface.
Each model iteration feels magical at first. But deeper use reveals familiar problems: fragility, unreliability, bloated output, shallow reasoning.
The takeaway: be practical. Ground your decisions in what the model can do, not what the market says it might someday do.
Beyond the Hype Cycle: A Pragmatic View of AI's Future
This blog only scratches the surface of the conversation. In the full episode, Nathan Hamiel dives deeper into:
-
Why model hallucinations are a long-term reliability risk
-
The growing security blind spots in MCPs and AI agent stacks
-
How teams can use existing DevSecOps workflows to safely scale LLM adoption
-
Why âsafe enoughâ is not a valid approach when AI output affects production systems
âśď¸ Listen now on [Spotify], [Apple Podcasts], [YouTube] or wherever you get your podcasts.
And if you found the episode useful, share it with your team, especially your security and platform engineers.â