We published our first “Vulnerabilities Benchmark Report” last week, a synthesis of anonymized data from tens of thousands of students on our training platform, representing hundreds of companies across multiple industries. While developing the report, we were stunned by some of our findings.
One example is the fact that injection vulnerabilities have been either #1 or #2 on the OWASP Top 10 for 14 years! When we dug into why the statistic has held steady for so long, it made a lot more sense: injection vulnerabilities are the ones most often fixed incorrectly. So while development and QA teams may think they’ve taken care of a weakness in their software after it’s initially flagged, the reality may be different.
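To make the “fixed incorrectly” pattern concrete, here is a minimal Python sketch (the table, column, and function names are invented for illustration) contrasting a common flawed remediation, stripping quote characters, with the correct one, a parameterized query:

```python
import sqlite3

# Throwaway in-memory database with a hypothetical schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

def get_user_flawed(user_id):
    # A common *incorrect* fix: stripping quote characters. It does nothing
    # in a numeric context, where no quotes are needed to inject SQL.
    cleaned = str(user_id).replace("'", "")
    query = f"SELECT name FROM users WHERE id = {cleaned}"
    return conn.execute(query).fetchall()

def get_user_safe(user_id):
    # The correct fix: a parameterized query keeps user input out of the
    # SQL grammar entirely, so "0 OR 1=1" is treated as a plain value.
    return conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchall()

# "0 OR 1=1" slips past the quote-stripping "fix" and returns every row,
# while the parameterized version matches nothing for the same input.
print(get_user_flawed("0 OR 1=1"))
print(get_user_safe("0 OR 1=1"))
```

A scanner that re-tests the original payload in quotes may mark the flawed version as remediated, which is exactly how an injection weakness survives its first “fix.”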
Another startling discovery we made is that the most problematic vulnerability reported by our customers is the use of components with known vulnerabilities. It seems counterintuitive that companies would continue to use software components that they know have vulnerabilities. When we delve into the realities of the contemporary development lifecycle, however, we begin to understand why it’s a problem, and why it could remain an issue for a while.
Development teams are under increasing pressure to deliver more code, more quickly, than ever before. A Dimensional Research report titled “The Emergence of Big Code” states that over 50% of developers surveyed say they now work with 100x the volume of code they did ten years ago, and 90% report that they are under pressure to release code faster than ever before. Using open source or third-party code components is a common solution to the problem of doing more in less time. Since this demand is unlikely to ease (and will likely intensify), the use of pre-packaged code will likely persist.
The problems associated with shorter software development lifecycles can manifest in numerous ways. For the developers behind open source and third-party components, it could mean delays in developing and releasing patches to fix vulnerabilities because they’re focused on their roadmap. For the programmers who rely on these components, it may mean being slow to apply the patches or software upgrades because they lack the time to do so. Both scenarios perpetuate the status quo.
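In practice, the first step out of this status quo is simply knowing which pinned dependencies match a published advisory. Here is a minimal Python sketch of that check (the package name and advisory ID are invented; real tools such as pip-audit or npm audit query live vulnerability databases):

```python
# Hypothetical advisory data for illustration only; a real audit would query
# a vulnerability database such as OSV or the GitHub Advisory Database.
KNOWN_VULNERABLE = {
    ("leftlib", "1.2.0"): "EXAMPLE-2020-0001",
}

def audit_requirements(requirements_text):
    """Flag pinned dependencies that match a known-vulnerable version."""
    findings = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if "==" not in line:
            continue  # only exact pins can be matched against advisories
        name, _, version = line.partition("==")
        key = (name.strip().lower(), version.strip())
        if key in KNOWN_VULNERABLE:
            findings.append((key[0], key[1], KNOWN_VULNERABLE[key]))
    return findings

print(audit_requirements("leftlib==1.2.0\nrequests==2.31.0"))
```

Running a check like this in CI is cheap; the hard part, as the report suggests, is making time to act on the findings.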
A Common Theme
While our report explores various perspectives on the topic of vulnerabilities, one common theme stood out during our analysis: the vast majority of these vulnerabilities would not have existed had the developers been properly trained in secure coding practices. Developers either did not know that the code they wrote contained vulnerabilities, or did not know how to properly patch the vulnerabilities discovered by their automated security tools or security team. We did some additional research to find the root cause of this gap in knowledge, and discovered some interesting information.
A 2019 Forrester research publication titled “Show, Don't Tell, Your Developers How To Write Secure Code” reports that none of the top 40 coding programs in the United States, nor any of the top five computer science schools outside the United States, requires students to take a secure coding class. This indicates that secure coding habits aren’t being inculcated early in many developers’ coding journeys, and any poor habits they form related to application security go unchecked. But does that really matter? Surely developers’ employers will help fill the gaps through professional training? The Ponemon Institute’s “Application Security in the DevOps Environment” report gives us reason for concern: 53% of developers surveyed don’t get any training on secure coding practices.
It’s clear that there’s a strong need for more knowledge about secure coding principles. Universities have a massive role in ensuring that future developers are trained well in secure coding principles, but a much larger challenge looms: how do we tackle the lack of knowledge among developers who are already coding for a living? The instinctive response is "training", but what does that look like?
What Does Successful Secure Coding Training Look Like?
A successful training approach for developers in the workforce must take into account the current professional environment and the associated pressures they face. Based on our experience, in order for security and compliance leaders to overcome resistance to what a Fortune 500 CISO described to us as “a ‘tax’ on the job of the developer”, it’s critical to ensure that the training program meets the following criteria:
- Provides training that’s relevant to the student’s goals and objectives
- Motivates the student to learn
- Maximizes the student’s ability to learn, absorb and retain the information
- Fits seamlessly into the student’s work activities
- Provides feedback and measures progress and mastery of the subject matter
- Rewards the student, intrinsically or extrinsically
Adhering to these requirements will help ensure that the secure coding training is effective, resulting in a stronger security culture and fewer vulnerabilities in your code.