This article was written by John Campbell for DevOpsDigest.com.
Artificial intelligence (AI) remains a transformative force in organizations, providing decision-makers with an efficient and cost-effective way to enhance daily operations and drive business growth. This disruptive technology is making waves across all business sectors, but its influence is especially pronounced in software and product development. Developers are leveraging AI to accelerate the software development lifecycle, enabling them to automate repetitive coding tasks and generate substantial amounts of code in a fraction of the usual time.
However, despite the numerous production advantages that AI has brought to organizations, it has simultaneously made it easier for less skilled hackers to infiltrate company systems with AI-generated malicious code. This increased accessibility has drastically heightened security risks, requiring developers, who find themselves at the forefront of corporate innovation and responsibility, to fully understand the evolving security threats and know how to identify and "sniff out" insecure code. The need for this knowledge is more pressing than ever, as recent studies show that AI-driven attacks have affected 87% of organizations worldwide.
Developers play a pivotal role in designing and maintaining systems that are secure, ethical, and resilient. While AI is an incredible assistant, it is the developer who ensures systems are built with integrity and aligned with human values.
The Right Education Empowers Developers
Just 1 in 5 organizations is confident in its ability to detect a vulnerability before an application is released, meaning that the security knowledge in most development lifecycles is insufficient. In fact, few developers are ever taught how to code securely during their formal education: none of the top 50 undergraduate computer science programs in the US require it for majors.
Developers must adopt the principle of "trust no one, verify everything," which requires a thorough understanding of AI-generated code and of the tools they use, so they can proactively interrogate vulnerabilities, validate source code pre-deployment, and leverage AI responsibly.
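In practice, "verify everything" often translates into an automated gate that inspects code, including AI-generated code, before it can ship. The sketch below is one illustrative way to wire such a check in Python, assuming the open-source static analyzer Bandit is installed and that its JSON report format is used; the target directory and severity threshold are hypothetical and would need to match a real pipeline.

```python
"""Minimal pre-deployment gate: fail the build if a static analyzer
flags medium- or high-severity issues in the code under review.

A sketch only: assumes Bandit (https://bandit.readthedocs.io) is
installed and that the target directory 'src/' exists.
"""
import json
import subprocess
import sys

TARGET_DIR = "src"             # hypothetical location of code to scan
BLOCKING = {"MEDIUM", "HIGH"}  # severities that should block deployment


def run_scan(target: str) -> dict:
    # Run Bandit recursively and capture its JSON report from stdout.
    # Bandit exits non-zero when it finds issues, so check=True is not used.
    proc = subprocess.run(
        ["bandit", "-r", target, "-f", "json"],
        capture_output=True,
        text=True,
    )
    return json.loads(proc.stdout)


def main() -> int:
    report = run_scan(TARGET_DIR)
    blocking_issues = [
        r for r in report.get("results", [])
        if r.get("issue_severity", "").upper() in BLOCKING
    ]
    for issue in blocking_issues:
        print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
    if blocking_issues:
        print(f"Deployment blocked: {len(blocking_issues)} issue(s) found.")
        return 1
    print("No blocking issues found.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In a real pipeline a gate like this would typically run as a CI step, so a failing scan blocks the merge rather than relying on an individual developer remembering to run it.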
This requires the right education and ongoing, context-based learning around secure-by-design principles, common vulnerabilities, and secure coding practices. Developers' secure-code knowledge must be consistently updated and reinforced, given the rapid evolution of AI, so that they can stay one step ahead of the latest threats.
This approach also helps developers understand the ethical implications of AI and equips them to question biases and consider the broader societal impact of the technologies they create. Without this depth of education, AppSec and security teams are left carrying an unnecessary security burden, which ultimately means more time, more spend, and greater business risk.
Tailored and Measurable Knowledge
Surface-level coding knowledge is insufficient if developers want to write code securely, and there is no one-size-fits-all model. Training must go beyond the basics, be tailored to specific organizations and their daily operations, and be relevant to a developer's specific role and the languages they use every day. Hands-on practice spotting vulnerabilities in code and writing secure fixes, as in the exercise sketched below, also bridges the gap between theory and real-world application. By doing this, developers are more likely to embed vital architectural and technological knowledge, leading to more confident decision-making and applications that are hardened against attacks.
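A hands-on exercise of this kind is often as simple as contrasting a vulnerable snippet with its hardened counterpart. The example below is illustrative rather than drawn from the article: it shows a SQL injection flaw of the sort AI assistants can reproduce from insecure training data, alongside the parameterized fix.

```python
"""Illustrative training exercise: spot the flaw, then fix it.
Uses an in-memory SQLite database so the example is self-contained."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")


def find_user_insecure(name: str):
    # VULNERABLE: user input is concatenated directly into the SQL string,
    # so a crafted name such as "' OR '1'='1" returns every row.
    query = f"SELECT name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()


def find_user_secure(name: str):
    # FIXED: a parameterized query keeps the input as data, not SQL.
    query = "SELECT name, role FROM users WHERE name = ?"
    return conn.execute(query, (name,)).fetchall()


if __name__ == "__main__":
    payload = "' OR '1'='1"
    print("insecure:", find_user_insecure(payload))  # leaks both rows
    print("secure:  ", find_user_secure(payload))    # returns no rows
```

Exercises like this tend to stick because the developer watches the exploit succeed against their own code before applying the fix.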
Developers' ability to write secure code and detect flaws should always be measured, given the potential damage malicious use of AI can cause, and it is important to gather data that can be used to measure success. For example, one approach is to compare the number of vulnerabilities present in a developer's code before and after training, or to track the number of vulnerabilities a developer can detect and fix. This information highlights whether the developer is improving and helps keep them engaged with the training.
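As a rough illustration of how such measurement might work, the snippet below computes a before-and-after comparison from hypothetical scan results; the data structure and figures are invented for the example and would in practice come from a scanner or training platform.

```python
"""Sketch of training-impact metrics from hypothetical scan data."""
from dataclasses import dataclass


@dataclass
class DeveloperStats:
    vulns_before: int     # vulnerabilities found in code written before training
    vulns_after: int      # vulnerabilities found in code written after training
    flaws_presented: int  # flaws shown in hands-on detection exercises
    flaws_fixed: int      # flaws the developer correctly detected and fixed


def improvement(stats: DeveloperStats) -> dict:
    # Percentage reduction in shipped vulnerabilities plus detection rate.
    reduction = 0.0
    if stats.vulns_before:
        reduction = 100 * (stats.vulns_before - stats.vulns_after) / stats.vulns_before
    detection_rate = 0.0
    if stats.flaws_presented:
        detection_rate = 100 * stats.flaws_fixed / stats.flaws_presented
    return {"vuln_reduction_pct": round(reduction, 1),
            "detection_rate_pct": round(detection_rate, 1)}


if __name__ == "__main__":
    # Invented numbers purely for illustration.
    print(improvement(DeveloperStats(vulns_before=12, vulns_after=4,
                                     flaws_presented=20, flaws_fixed=15)))
    # -> {'vuln_reduction_pct': 66.7, 'detection_rate_pct': 75.0}
```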
We will most likely see a significant number of GenAI projects being abandoned after proof of concept by the end of 2025 due to inadequate risk control. However, by taking the necessary steps to foster and maintain fundamental security principles through continuous security training and education, development teams can successfully balance risk and reward, ensuring the secure deployment of AI in development and beyond.