Security Journey Blog

AI in Software Development: How Talent, Culture, & Governance Can Close the Security Gap in AI

Written by Security Journey/HackEDU Team | Jul 29, 2025 1:00:00 PM

To remain secure in today’s AI-driven software development landscape, organizations must prioritize three critical areas: cultivating talent at the individual level, fostering a strong culture at the team level, and strengthening governance at the organizational level.

That was the key takeaway from our powerhouse panel during the roundtable discussion, “Closing the Security Gap in AI,” featuring:

  • Chris Lindsey, Field CTO, OX Security
  • Gavin Klondike, Principal Security Consultant, AI Researcher and Educator, Netsec Explained
  • Dustin Lehr, Director of AppSec Advocacy, Security Journey
  • Mike Burch, Director of Application Security, Security Journey
  • Pete Goldin (Moderator), Editor and Publisher, DevOps Digest

AI is no longer just a buzzword; it is now a core part of the modern developer’s toolkit. But as innovation accelerates, a disconnect is emerging between developers and security. AI is introducing risks faster than organizations can respond, leaving them overexposed and underprepared.

Talent: Don’t Automate Away Your Future Experts

In the rush to adopt AI, many teams are jumping on bandwagons that promise speed and efficiency, such as vibe coding. But there’s a hidden cost: the erosion of developer growth.

Great engineers aren’t born writing perfect code; they’re forged through the messy, trial-and-error process of solving real problems. When AI automates away those learning moments, junior developers lose the chance to build the skills they’ll need to become tomorrow’s senior engineers.

As Dustin Lehr warned:

“The talent valley is coming.”

To avoid it, organizations must:

  • Treat LLMs like any other untrusted component: threat model, test, and document AI-generated code
  • Embed AI-specific risks into developer training
  • Empower junior developers to use AI as a learning tool, not a crutch

Culture: Make Security a Shared Value, Not a Slogan

Security isn’t just a checklist; it’s a mindset. That mindset is shaped far more by team culture than by top-down mandates. Successful culture change hinges on finding the right way to influence people. Merely asking developers not to paste proprietary data into an AI assistant is unlikely to sway their behavior, but incentivizing them not to do so can drive lasting, meaningful change.

When security is embedded into daily workflows and championed by peers, it becomes second nature, but when it’s seen as a hindrance, it is often bypassed.

As Dustin Lehr put it:

“Culture eats strategy and policy for breakfast.”

To build a security-first culture:

  • Make the secure path the easiest path
  • Launch peer-led initiatives such as security champions programs
  • Bake secure defaults into tools and workflows

Governance: Stop Writing Policy in the Dark

Too often, security policies are written in isolation, without input from the developers who are expected to follow them. This disconnect leads to frustration and the emergence of shadow AI practices.

Developers understand their tools, environments, and workflows better than anyone. If they’re not part of the policy-making process, policies will fail to reflect reality. AI tools are too powerful to be sidelined; therefore, the solution isn’t restriction, it’s collaboration. By involving developers in shaping policies, organizations can ensure safe, responsible use of AI while preserving productivity and innovation.

Mike Burch explained:

“Take a step back and consider: what are we doing right now?”

To close the governance gap:

  • Involve developers and security teams early in policy creation
  • Align governance with real-world workflows
  • Focus on enabling secure, responsible AI use, not just restricting it

The Bottom Line

AI is transforming software development, but without a parallel evolution in security, the risks will outweigh the rewards. By investing in talent, cultivating a culture of shared responsibility, and building governance that reflects reality, organizations can close the security gap and unlock AI’s full potential, safely.