To remain secure in today’s AI-driven software development landscape, organizations must prioritize three critical areas: cultivating talent at the individual level, fostering a strong culture at the team level, and strengthening governance at the organizational level.
That was the key takeaway from our powerhouse panel during the roundtable discussion, “Closing the Security Gap in AI,” featuring:
AI is no longer just a buzzword – it is now a core part of the modern developer’s toolkit. But as innovation accelerates, a disconnect is emerging between developers and security. AI is introducing risks faster than organizations can respond, leaving them overexposed and underprepared.
In the rush to adopt AI, many teams are jumping on bandwagons that promise speed and efficiency, such as vibe coding. But there’s a hidden cost: the erosion of developer growth.
Great engineers aren’t born writing perfect code; they’re forged through the messy, trial-and-error process of solving real problems. When AI automates away those learning moments, junior developers lose the chance to build the skills they’ll need to become tomorrow’s senior engineers.
As Dustin Lehr warned:
“The talent valley is coming.”
To avoid it, organizations must:
Security isn’t just a checklist; it’s a mindset. That mindset is shaped far more by team culture than by top-down mandates. Successful culture change hinges on finding the right way to influence people: merely asking developers not to input proprietary data into an AI assistant is unlikely to sway their behavior, but incentivizing them not to do so can drive lasting, meaningful change.
When security is embedded into daily workflows and championed by peers, it becomes second nature, but when it’s seen as a hindrance, it is often bypassed.
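One lightweight way to embed that kind of check into daily workflows is a guardrail that scans an outgoing prompt for sensitive markers before it ever reaches an AI assistant. The patterns and function names below are illustrative assumptions for this sketch, not any specific product’s API:

```python
import re

# Illustrative patterns for content that should not leave the organization.
# A real deployment would tune these to its own secrets and data formats.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),          # document classification marker
    re.compile(r"(?i)\bapi[_-]?key\s*[:=]"),      # credential assignments
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),          # AWS access key ID shape
]

def flag_sensitive(prompt: str) -> list[str]:
    """Return the patterns a prompt matches, so a client can warn or block."""
    return [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """True if no sensitive marker was found in the prompt."""
    return not flag_sensitive(prompt)
```

Wired into an editor plugin or a prompt proxy, a check like this can warn the developer in the moment rather than rely on a policy document they may never read, which is exactly the kind of workflow-embedded nudge the panel described.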
As Dustin Lehr put it:
“Culture eats strategy and policy for breakfast.”
To build a security-first culture:
Too often, security policies are written in isolation, without input from the developers who are expected to follow them. The resulting disconnect breeds frustration and gives rise to shadow AI practices.
Developers understand their tools, environments, and workflows better than anyone. If they’re not part of the policy-making process, policies will fail to reflect reality. AI tools are too powerful to be sidelined; the solution isn’t restriction but collaboration. By involving developers in shaping policies, organizations can ensure safe, responsible use of AI while preserving productivity and innovation.
Mike Burch explained:
“Take a step back and consider: what are we doing right now?”
To close the governance gap:
AI is transforming software development, but without a parallel evolution in security, the risks will outweigh the rewards. By investing in talent, fostering a culture of shared responsibility, and building governance that reflects reality, organizations can close the security gap and unlock AI’s full potential safely.