
Closing the Security Gap in AI

This article originally appeared on DEVOPSdigest.com.

To examine the growing gap between how software is built and how secure it is, Security Journey brought together a panel of seasoned developers, security leaders, and AI experts for a roundtable discussion on Closing the Security Gap in AI. The panel included:

  • Chris Lindsey, Field CTO at OX Security
  • Gavin Klondike, Principal Security Consultant, AI Researcher, and Educator at Netsec Explained
  • Dustin Lehr, Director of AppSec Advocacy at Security Journey
  • Mike Burch, Director of Application Security at Security Journey
  • Moderator: Pete Goldin, Editor and Publisher of DEVOPSdigest.

Together, they explored the real-world challenges organizations are grappling with in AI-assisted software development, from fragile governance frameworks and inconsistent policy enforcement to a growing over-reliance on AI-generated code. Their conversation also highlighted developers' responsibility in closing the security gap, the critical role of a strong security culture in shaping security outcomes, and the practical strategies organizations can employ to secure themselves and their code in an increasingly complex digital landscape.

"Take a step back and consider: what are we doing right now?" – Michael B.

In a knee-jerk reaction to AI use, organizations often hastily pull policies together and implement them without a clear understanding of how AI is being used internally by developers and the wider business, and which elements must be governed. Without pausing to ask the fundamental questions (who is using AI, for what purpose, and with what data), governance becomes disconnected from reality.

When these policies are overly restrictive, they are frequently bypassed: shadow practices emerge, with developers using AI tools on personal devices or in other unofficial ways, and these practices undermine the very policies meant to protect the organization.

Successful AI governance requires a strategic approach — one that identifies where AI can add real value, integrates it thoughtfully into development and business workflows, and involves both developers and security teams from the outset. When governance is built with awareness and grounded in real-world use cases, it lays the foundation for secure and responsible AI adoption.

"The talent valley is coming." – Dustin L.

Developers are on the front lines of application security, and when code is insecure, the responsibility ultimately falls on them. Despite this, many developers, especially juniors, lack the training and support needed to assess the risks of AI-generated code.

Over-reliance on AI can prevent developers at the start of their careers from nurturing the foundational skills that come from trial and error, stalling their growth from junior to senior roles. In the rush to streamline the present, technical expertise is being automated away and the technical resilience of the future is being weakened.

The solution? Supporting developers in using AI wisely. LLMs must be treated like any other untrusted component during threat modeling, AI-generated code must be tested and documented, and AI-specific risks must be embedded into developer training.
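To make the "untrusted component" mindset concrete, here is a minimal sketch, built around a hypothetical vet_generated_module helper of our own devising rather than anything the panel prescribed. It treats an AI-generated suggestion like any other untrusted input: the code is never imported into the running process, and it is accepted only if its accompanying tests pass in a separate interpreter with a hard timeout.

    import subprocess
    import sys
    import tempfile
    from pathlib import Path

    def vet_generated_module(source: str, test_source: str) -> bool:
        """Accept AI-generated code only if its tests pass in isolation."""
        with tempfile.TemporaryDirectory() as workdir:
            # Write the untrusted suggestion and its tests to a scratch
            # directory instead of importing them into the current process.
            Path(workdir, "generated.py").write_text(source)
            Path(workdir, "test_generated.py").write_text(test_source)
            try:
                # Run the tests in a child interpreter with a hard timeout so a
                # broken or malicious suggestion cannot hang or affect the caller.
                result = subprocess.run(
                    [sys.executable, "-m", "unittest", "discover", "-s", workdir],
                    capture_output=True,
                    timeout=30,
                )
            except subprocess.TimeoutExpired:
                return False
            return result.returncode == 0

The same gate could just as easily live in a CI pipeline; the point is that generated code earns trust through tests and review rather than receiving it by default.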

"Culture eats strategy and policy for breakfast." – Dustin L.

For developers, the priority is often to make something work; security comes second, especially under tight deadlines. The most effective way to change this behavior is to make the secure path the easiest one, and that calls for a cultural change.

Developers need a reason to care about security, and that reason must stem from team culture, not a document. When secure development is reinforced by the people developers trust most, their peers, it becomes a reality. Community initiatives, like internal clubs, meetups, or security champions programs, are crucial for giving teams ownership over their approach to security and helping normalize it as a shared value.

Tools, for their part, must be built with secure defaults and with security checks integrated into existing processes, reducing the friction that makes best practices feel like a burden.
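As a small illustration of what a secure default can look like in code, the sketch below uses a hypothetical fetch wrapper, not a tool the panel discussed: the safe behavior (TLS verification and a timeout) is what callers get when they add nothing, and the risky path has to be spelled out explicitly.

    import ssl
    import urllib.request

    def fetch(url: str, *, verify_tls: bool = True, timeout: float = 10.0) -> bytes:
        """Secure by default: TLS verification and a timeout unless the caller opts out."""
        context = ssl.create_default_context()
        if not verify_tls:
            # Opting out is possible, but it must be written at the call site,
            # which makes the risky path visible in code review.
            context.check_hostname = False
            context.verify_mode = ssl.CERT_NONE
        with urllib.request.urlopen(url, timeout=timeout, context=context) as response:
            return response.read()

Because the insecure option requires an explicit keyword argument, reviewers see the deviation immediately, which is exactly the kind of friction reduction for the secure path the panel described.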

Looking Forward

The world of software development is undergoing a period of rapid change, and while the future holds immense promise of efficiency and growth, it also brings a sense of unease. As the initial wave of excitement begins to settle, it is time for a more measured approach. The goal is not to hinder innovation, but to ensure that AI, governance, talent, and culture evolve together to keep code secure.