
How do you Train Developers in Secure SDLC Practices?



As the threat environment grows more serious, applications have become a more vulnerable part of the overall attack surface. To mitigate application-level risk exposure, it is necessary to embed security practices earlier in the software development lifecycle (SDLC)—the so-called “shift left” approach to secure software development. Making this work, however, requires secure coding training for developers. Secure SDLC practices don’t just happen. Indeed, even if developers have a keen understanding of security issues, they may not naturally understand where security fits into the SDLC. This article explores how secure code training works, what goes into it, and what developers can expect from the training process.

 

Understanding the stakes

Threats to applications are on the rise, with research from Contrast Security revealing that the prevalence of applications with serious vulnerabilities rose by 28% from 2019 to 2020. Their research also showed that the percentage of applications with at least one serious vulnerability went from 8% to 34% of all applications.

When exploited by an increasingly potent array of threats, these vulnerabilities can expose a business to serious impacts. According to a report from F5 Labs, 56% of the largest security incidents over the last five years can be traced back to web application security issues. Web application attacks were the leading pattern of data breaches in six of the previous eight years.

It’s costly business, in any event. Cyentia’s IRIS 20/20 Xtreme research looked at the 100 largest cyber loss events over the last five years. These incidents cost businesses $18 billion in losses and led to the compromise of 10 billion records. Of these 100 loss events, 30 could be attributed to application security problems. Specific attack vectors included application exploits, which cost $1.8 billion; backdoor malware, which cost $5.6 billion; and misconfiguration, which resulted in losses of $732 million.

 

Code as a source of risk exposure

Software is a source of security risk exposure. To get a full appreciation of the problem, however, it’s necessary to see software as more than just lines of code. A computer program is assembled from code, but the full application—and its vulnerabilities—come from the code’s relationship to other structures in the application and integrations with external sources. According to the Open Web Application Security Project (OWASP), vulnerabilities arise from a range of deficiencies, including application configuration and access control, as well as from flaws in the code itself.

Some of the most common software vulnerabilities include:

  • Injection flaws—Untrusted data is sent to an interpreter as part of a command or query. This mode of attack can trick a targeted system into executing unauthorized and unintended processes that may lead to the breach of protected data.
  • Exposure of sensitive data—Software applications may fail to secure sensitive information like passwords and account numbers, enabling malicious actors to access it without authorization.
  • Components that contain vulnerabilities—With the use of open source code on the rise, it is relatively easy to let frameworks and pre-written software modules containing known vulnerabilities into production code. This threat is particularly serious, as it can embed exploitable flaws deep in an application without anyone knowing they are there.

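The injection flaw described above can be made concrete with a short sketch. The snippet below is a minimal, hypothetical example using Python's built-in sqlite3 module: the unsafe version splices user input directly into the SQL text, while the safe version uses a parameterized query so the input is always treated as data, never as part of the command.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: untrusted input is spliced into the SQL string, so a
    # value like "x' OR '1'='1" changes the meaning of the query itself.
    query = "SELECT id, name FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: a parameterized query keeps data separate from the command;
    # the driver binds the input as a value, never as SQL syntax.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

malicious = "nobody' OR '1'='1"
print(len(find_user_unsafe(conn, malicious)))  # returns every row: 2
print(len(find_user_safe(conn, malicious)))    # returns no rows: 0
```

The same principle (parameterize, never concatenate) applies to any database driver or ORM, not just sqlite3.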
These are just a few of many different vulnerabilities. Cross-Site Scripting (XSS) is also a problem, as are broken authentication and access controls. Misconfiguration and insecure deserialization have led to many serious data breaches, too. Deserialization flaws can enable remote code execution, which in turn allows attackers to mount a variety of attacks on an application.
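Insecure deserialization is easy to demonstrate. The sketch below, using Python's standard pickle and json modules, shows why deserializing untrusted bytes is dangerous: pickle can be made to execute code the moment data is loaded, while a data-only format like JSON can only produce plain values. (The Exploit class here is a contrived illustration, not a real-world payload.)

```python
import json
import pickle

class Exploit:
    # pickle invokes __reduce__ when loading, so an attacker-supplied
    # payload can run arbitrary code during deserialization itself.
    def __reduce__(self):
        return (print, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Exploit())
pickle.loads(payload)  # the side effect fires here; nothing "called" it

# Safer: accept a data-only format such as JSON for untrusted input.
# json.loads can only yield dicts, lists, strings, and numbers,
# never executable objects.
safe = json.loads('{"user": "alice", "role": "reader"}')
print(safe["role"])
```

This is why security guidance generally recommends never unpickling data from an untrusted source.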

 

The problem with fixing code after it’s been deployed into production

It is possible to remediate security problems in software code after it’s been put into production. This is highly problematic, however. One big issue is delay. As F5 Labs' research shows, it takes an average of 254 days to discover an incident related to a web application exploit. In that span of nearly three-quarters of a year before detection, an attacker can cause immense damage. The other issue is cost. Fixing code that’s already up and running is far more time consuming and expensive than catching the flaw before the code is released, consistent with Boehm's Law.

 

Placing security into the software development lifecycle (SDLC)

How do security flaws in software code get launched into production without being detected? There are many reasons, but one of the most significant is the structure of the traditional software development lifecycle (SDLC). The SDLC consists of a series of steps, starting with the gathering of business requirements for an application and ending with the release of code into production. In between, developers write the code, testers test it, and IT operations people put it into production. Security screening is part of the testing process.

This approach works well for the traditional “waterfall” mode of software development. In this linear process, one logical step follows another, and getting a piece of software through the SDLC could take months, if not years. Things are quite different today. The modern SDLC moves a lot faster, for one thing. With agile methodologies and DevOps, which blends development, testing, and IT ops into a single, iterative collection of process steps, the SDLC may be releasing new apps, or pieces of apps, every day.

Bolting security testing onto the modern SDLC is a deficient practice. Things are moving too quickly for security testers to do the thorough kind of assessment needed in today’s serious threat environment. Instead, it has become necessary to embed security practices into earlier stages of the SDLC. This is known as “shifting left” on the SDLC, imagined as a left-to-right series of boxes on a page.

Shifting left may sound simple, but it isn’t. There are a number of challenges to embedding security into coding practices. Part of the problem relates to tooling. Software development tools generally lack robust security features out of the box; specialized add-on tools are necessary to inspect code for flaws, for example. The developers themselves can be a source of friction. Most developers have not been trained in or given exposure to security topics. Even among those who understand security well, the nature of their workloads and incentives may make it arduous to get involved in security. A developer might perceive security as a waste of their time. The lack of security training is a glaring issue, but one that is relatively easy to address.
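To make the tooling point concrete, here is a toy sketch of what a code-inspection tool does. This is not a real SAST product: it simply walks a Python abstract syntax tree and flags calls to eval or exec, the kind of dangerous pattern that production scanners such as Bandit report with far more sophistication.

```python
import ast

def flag_dangerous_calls(source):
    """Toy static check: walk the AST of some source code and report
    calls to eval/exec, a pattern real security scanners flag."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in {"eval", "exec"}):
            findings.append((node.lineno, node.func.id))
    return findings

snippet = """
x = input()
result = eval(x)   # dangerous: executes arbitrary user input
"""
print(flag_dangerous_calls(snippet))  # [(3, 'eval')]
```

In a shift-left workflow, checks like this run automatically on every commit or pull request, so findings reach the developer while the code is still fresh, not after release.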

 

Realizing secure SDLC practices

Making security a core part of the SDLC will involve training developers in secure coding practices. Even developers who are knowledgeable about security will benefit from this experience. Training methods vary, but the most effective approaches cover threats and how to mitigate them, structural issues in the application that affect code security, and offensive training—how to think like a hacker.

The HackEDU approach to training developers on application security starts with a thorough immersion into the workings of the predominant threats. These are based on the OWASP Top 10 threats, such as injection, XSS, and remote code execution. The program also covers how open-source libraries get compromised. Students learn how errors in configuration and password handling, to name two of many such factors, can drive increased risk exposure for software. Training also covers mobile security and API security, working across multiple development stacks.

Secure code training teaches developers to fix vulnerabilities in real time, dealing with security issues as they are revealed. In some situations, gamification can be a helpful motivator to keep developers interested in creating secure code. It makes the process of remediating vulnerabilities fun and competitive.

It’s worth remembering, however, that training developers will only work if the SDLC adapts to a new way of handling security. If developers get trained in security, but the organization still treats security like an afterthought and doesn’t give developers the tools or time to fix vulnerabilities, the training will have accomplished little. Making code secure is a holistic process.

 

Conclusion

Insecure code invites big trouble. Data breaches and system outages are known outcomes of unremediated code vulnerabilities, and the financial losses resulting from code-related security incidents can be astronomical. To ease this problem, the SDLC needs to adapt. Stakeholders need to “shift left,” embedding security into the development workflow and making it a priority earlier in the process. Operationalizing this idea means changing development practices and adding specialized tooling. It also takes training, so developers can become competent at fixing security vulnerabilities in real time during the SDLC. Once trained and equipped with the right tools in a secure development process, they can engage in secure SDLC practices.