As we celebrate National Coding Week, the spotlight is very much on AI, which is revolutionizing industries and driving innovation at an unprecedented pace. While the possibilities for AI seem boundless, we must not overlook the importance of security.
National Coding Week serves as an important reminder of the significance of secure coding practices. It is a time when we must not only celebrate the advancements in software development but also reflect on the long-standing responsibility that comes with developing new services and applications. In the realm of AI, where the lines between human and machine-generated code blur, continuous secure coding training and a firm grasp of security principles emerge as critical pillars of our digital future. While tools like ChatGPT and GitHub Copilot hold the promise of time savings and enhanced productivity, their benefits can rapidly erode if the code they generate is riddled with vulnerabilities and bugs.
The Challenges of AI-Generated Code
Despite the excitement surrounding generative AI, it is crucial to temper our expectations. The application of generative AI, particularly within large language models (LLMs), in software development can be a double-edged sword. While these technologies offer efficiency gains, they have also attracted the attention of malicious actors. Tools like FraudGPT have already surfaced in the cybercriminal underworld, enabling the creation of malware, hacking tools, and convincing phishing emails.
A significant challenge lies in developers placing undue faith in LLMs and failing to adequately assess the code they produce. Stanford University research highlights this concern, demonstrating that developers who utilize AI assistants are “more likely to introduce security vulnerabilities and often rate their insecure code as secure.” This overconfidence in the security of their code presents a significant risk. Similarly, a study from the University of Quebec revealed that ChatGPT frequently generates insecure code. Out of 21 cases examined, only five initially produced secure code, and even after explicit requests to correct the code, ChatGPT addressed the issues correctly in only seven cases. These statistics are far from reassuring for any development team.
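To make that risk concrete, here is an illustrative sketch (not drawn from either study) of a pattern reviewers frequently flag in generated code: building a SQL query by string interpolation, which is open to injection, next to the parameterized alternative a security-trained developer should insist on. The table and function names are hypothetical.

```python
import sqlite3

def find_user_insecure(conn, username):
    # Vulnerable pattern: user input is interpolated directly into the
    # SQL string, so crafted input can rewrite the query (SQL injection).
    cursor = conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    )
    return cursor.fetchall()

def find_user_secure(conn, username):
    # Parameterized query: the driver treats the input strictly as data,
    # never as SQL syntax.
    cursor = conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    )
    return cursor.fetchall()

# Demonstration with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"  # classic injection payload
print(len(find_user_insecure(conn, payload)))  # returns every row: 2
print(len(find_user_secure(conn, payload)))    # matches nothing: 0
```

Both versions look plausible at a glance, which is exactly why developers who "rate their insecure code as secure" are such a concern: the flaw is invisible until someone actively tests for it.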
The Role of Proper Training
Effective secure coding training programs are essential for all development teams, even those relying on generative AI. Continuous education and training ensure that security remains a top priority, even as threat landscapes and technology evolve. These programs should be inclusive, covering all stakeholders in the SDLC, including quality assurance (QA), user experience (UX), project management teams, and developers. New frameworks have been introduced that reflect the growing recognition of the need for security in AI and catalog the vulnerabilities associated with AI models, especially large language models. Specifically, the OWASP Top 10 for LLMs is a significant step toward improving the security of AI code: it identifies the most pressing vulnerabilities and risks in AI models and emphasizes the need for ongoing security efforts in the field.
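As a brief, hedged illustration of one risk class the OWASP Top 10 for LLMs describes, insecure output handling, the sketch below treats model output as untrusted input and escapes it before embedding it in HTML, rather than inserting it verbatim. The `render_reply` helper and the hard-coded "model output" are hypothetical, stand-ins for a real LLM response pipeline.

```python
import html

def render_reply(llm_output: str) -> str:
    # Treat model output like any untrusted input: escape it before
    # embedding in HTML, so injected markup is neutralized rather
    # than executed in the user's browser.
    return f"<div class='reply'>{html.escape(llm_output)}</div>"

# A hypothetical malicious completion carrying a script payload
untrusted = "<script>alert('xss')</script>Hello"
print(render_reply(untrusted))
# The <script> tag is rendered inert as &lt;script&gt;...&lt;/script&gt;
```

The principle generalizes: anywhere LLM output flows into a shell command, a SQL query, or a rendered page, it deserves the same sanitization a user-supplied form field would get.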
Tailored programs, such as our Diligent Developer Security Awareness and Education Program, that address the specific challenges faced by each group are vital. Furthermore, such programs should incentivize shifting security left, to the very beginning of software development, and foster security champions within an organization who can influence others organically.
While AI holds immense potential for revolutionizing software development, it should be approached with caution. The allure of faster coding should not blind development teams to the risks associated with AI-generated code. Continuous training and education are paramount, equipping professionals with the skills needed to assess, correct, and secure AI-generated code effectively.