AI AppSec: Can Developers Fight Fire With Fire?

By Mike Burch, Director of AppSec at Security Journey

The adoption of AI continues to drive transformation across many industries, and software development is no exception. While the technology is still in its infancy, the risks associated with generative AI are significant and constantly evolving, and software development is not immune to them. In spite of this, organizations are moving quickly to adopt AI application security tools: Gartner data shows that 34% of respondents are exploring the use of AI for application security. With the UK AI Safety Summit due to take place next month, and the question of how to regulate AI being widely debated globally, security needs to be a central part of those conversations, and that includes the role of continuous security education and training.


More Tools Are Not The Answer 

Using AI tools to protect against the risks of AI-generated code is just another example of the tool chasing that has become all too common. Relying on AI to catch AI-generated gaps and code errors signals a failure of effective security training. If generative AI is being used to write code, there is even more impetus for security education and training within development teams, because developers must shift from writing code to reviewing code that an AI has generated. Development teams have always carried out code reviews to ensure everything developed makes sense, and a security checklist is recommended to make those reviews thorough. But if AI wrote the code, you cannot simply use another AI tool to check it for gaps, because the technology has limitations: there is a reason the AI missed the flaw in the first place, and it does not have the context that a human brings to a code review.
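To make this concrete, here is a hedged, illustrative sketch of the kind of flaw a human reviewer with a security checklist should catch. The function names and schema are hypothetical, not taken from any real AI output; the pattern, however, is a classic one: building a SQL query by string interpolation, which an AI assistant can plausibly generate because it looks correct and works on benign input.

```python
import sqlite3

# Hypothetical example of AI-style generated code: the query string is
# built by hand, so attacker-controlled input becomes executable SQL.
# A security-aware human reviewer should flag this on sight.
def find_user_unsafe(conn, username):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# The reviewed fix: a parameterized query, so user input is always
# treated as data rather than as part of the SQL statement.
def find_user_safe(conn, username):
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    # Small in-memory database to demonstrate the difference.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])

    # A classic injection payload: turns the WHERE clause into a tautology.
    payload = "x' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # leaks every row in the table
    print(find_user_safe(conn, payload))    # matches no user, as intended
```

An automated scanner may or may not flag the first function, but a trained reviewer understands *why* it is dangerous and can spot the same pattern in contexts a tool has never seen, which is the point of the argument above.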

We cannot afford to underestimate the importance of people in building security in from the start, and secure coding education is vital to ensuring a base level of security. Better training is required so that teams relying on generative AI are more capable of spotting these kinds of mistakes. Done well, it will also arm developers with the knowledge they need to use AI models more effectively.


Continuous Education  

Continuous education is crucial for maintaining a strong security posture and underpins every practical step organizations need to take to strengthen their SDLCs. At Security Journey, we offer tailored lessons covering both offensive and defensive techniques, so developers learn to identify and fix vulnerable code, including flawed code generated by AI. Through real-world development exercises, developers can practice breaking, fixing, and testing code within a secure application environment.

Learn more about how our training libraries support security awareness across all areas of the SDLC pipeline.