As AI-powered development tools like GitHub Copilot become more widely adopted, engineering teams are naturally beginning to ask new questions about the future of secure coding education.
If tools like Copilot can automatically flag and fix vulnerabilities, do developers still need to be trained in secure coding practices?
It’s a great question, and one we’re hearing more often in demo conversations with technical leaders. During a recent discussion, a prospect shared that their team is starting to mandate Copilot in code reviews, and they're wondering what role training plays when AI can flag issues like insecure comparisons or dangerous coding patterns in real time.
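To make that concrete, here is a minimal sketch of the kind of insecure comparison such tools flag. The snippet is ours, written in Python for illustration; it isn't from the conversation. Comparing a secret with `==` returns at the first mismatched character, leaking timing information an attacker can measure, while a constant-time comparison does not.

```python
import hmac

# Vulnerable: == compares character by character and stops at the
# first mismatch, so response time reveals how much of the token
# an attacker has guessed correctly.
def verify_token_insecure(supplied: str, expected: str) -> bool:
    return supplied == expected

# Safer: hmac.compare_digest takes the same time regardless of where
# the inputs differ, closing the timing side channel.
def verify_token(supplied: str, expected: str) -> bool:
    return hmac.compare_digest(supplied.encode(), expected.encode())
```

An assistant can flag the first pattern, but only a developer who understands *why* it's dangerous will recognize the same flaw in less obvious forms.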
To help frame the answer, Michael Burch, Director of Application Security at Security Journey, offered insight grounded in both experience and practicality. Michael leads content strategy and education for Security Journey’s training platform, speaks regularly at universities on AI’s impact on development, and co-hosts the Security Champions Podcast.
Vibe Coding ≠ Secure Coding
Michael introduced the team to a term that resonated immediately: “vibe coding.” It describes developers relying heavily on AI to write and ship code without fully understanding what that code is doing.
“Vibe coding is the new low code / no code,” Michael explained. “You’re empowering people to build quickly, but they might not understand the security implications of what they’re building.”
In this context, Copilot and similar tools offer acceleration, but not assurance. Without proper training, a developer can unknowingly ship a vulnerability: the tool generated the code, but a human approved it.
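A hypothetical example of how that plays out (the snippet is ours, not actual Copilot output): an assistant suggests a query built with string interpolation, the developer approves code that visibly "works," and a SQL injection ships.

```python
import sqlite3

# The kind of pattern an assistant might plausibly suggest: the query
# runs fine in testing, but input like  x' OR '1'='1  rewrites its logic.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# What a trained developer writes instead: a parameterized query, so the
# driver treats the input strictly as data, never as SQL.
def find_user(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```

A trained reviewer spots the interpolated query on sight; an untrained one sees code that works and moves on.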
The Tightrope Metaphor: Why Training Still Matters
“You don’t send someone on a tightrope and hope the net catches them. You teach them how to walk the rope the right way. The safety net is just that, a backup.”
Michael explained that secure coding training provides the foundation that enables developers to work confidently with AI-assisted tools. Without that foundation, developers may blindly trust what’s generated and overlook deeper security concerns. That’s when risks escalate.
AI can suggest fixes. Static analyzers can flag vulnerabilities before code ships, and DAST tools can catch them in a running application. But the best outcome is when the vulnerability never exists in the first place, because a trained developer knows better.
From Code Generation to Code Review
The team discussed a clear industry trend: as AI generates more code, developers are shifting from authors to reviewers. That means code review becomes even more critical, and secure coding training becomes more relevant than ever.
“We’re seeing more code generation and less line-by-line development,” Michael said. “That means developers need to become excellent at reviewing code for correctness and security. AI can’t contextualize the data, business rules, or system architecture the way a human can.”
This is especially true in high-risk industries like healthcare, where sensitive data such as protected health information (PHI) is involved. The stakes are high, and tools alone aren’t enough.
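Here is a deliberately simplified sketch of that gap (all names are placeholders we invented for illustration). Both functions look clean to a scanner, with no injection and no dangerous APIs, but only the second enforces the business rule about who may view a patient's record:

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    patient_id: int
    care_team_ids: set[int] = field(default_factory=set)

# Clean by scanner standards, yet any authenticated user can read any
# patient's PHI simply by requesting a different ID.
def get_record_unsafe(records: dict[int, PatientRecord],
                      requester_id: int, patient_id: int) -> PatientRecord:
    return records[patient_id]

# The fix depends on a business rule no tool can infer from the code:
# only the patient or their assigned care team may view the record.
def get_record(records: dict[int, PatientRecord],
               requester_id: int, patient_id: int) -> PatientRecord:
    record = records[patient_id]
    if requester_id != record.patient_id and requester_id not in record.care_team_ids:
        raise PermissionError("requester is not authorized for this record")
    return record
```

Catching the first version in review takes exactly the contextual knowledge of data, business rules, and architecture that Michael described.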
Security Journey’s Approach to AI-Augmented Development
Security Journey is actively building for this shift. The platform already includes:
- Hands-on lessons in real sandboxes, where developers exploit and then fix vulnerabilities using the languages they work in.
- Role-based content, tailored to match a developer’s experience level, language, and technology stack.
- A security knowledge assessment, which lets teams benchmark skills and assign training that targets real gaps, saving time and improving relevance.
In response to the rise of Copilot and other generative tools, Security Journey is rolling out a dedicated AI-Augmented Development series, launching later this quarter. This includes:
- Secure coding guidance for teams using AI assistants
- Hands-on lessons with real LLM-powered vulnerabilities
- Future-facing training aligned with the new AI-augmented SDLC
Training + AI + Tooling: A Unified Strategy
One of the most powerful insights from the discussion came from the prospect’s own team:
“Some humans are going to miss some stuff. Some tools are going to miss some stuff. So the real question is: how do we align training with the tools and build a cohesive, efficient strategy?”
It’s a valid point, and one Security Journey helps teams operationalize. Rather than training in a vacuum, organizations can align learning paths with the tools already in use: Copilot, DAST, SCA, and more. This way, developers are equipped with both contextual knowledge and practical support for secure coding at every step of the SDLC.
Final Thoughts: The Human Edge
AI is transforming development workflows. But it hasn’t replaced the need for secure thinking.
“We’re not anti-AI,” Michael emphasized. “We’re pro-education. The most dangerous thing you can do is give someone a powerful tool they don’t understand.”
The future of development isn’t humans versus machines; it’s humans and machines working together. And secure coding training ensures that when the tools fall short (and they sometimes will), developers have the skills to build secure software from the start.