For software-centered businesses, Application Security (AppSec) is one of the most critical enablers of cybersecurity’s broader mission. It addresses risk at the heart of innovation, within the software we build, maintain, and deliver every day. AppSec supports the cybersecurity endgame and increases software development quality by proactively reducing rework and vulnerabilities across the software development lifecycle (SDLC), enabling safe deployment practices, and empowering developers to build security into their daily workflows.
But AppSec isn’t just about "shifting left," one of the most overused and misunderstood terms in the industry. The term implies moving controls from the right (production) to earlier in the development lifecycle. I prefer to talk about adding security controls: start on the “right,” in production, then “expand everywhere,” embedding proactive security controls at every stage of the SDLC, from architecture and design through deployment and remediation. It’s not about relocating controls, but about multiplying and distributing them in a way that aligns with the unique flow of each team. This approach lets you demonstrate that real, tangible, and exploitable issues exist in production, so you can make the case for proactive controls earlier in the lifecycle. If you start on the left with proactive controls, you will struggle to justify them when executive leaders, or developers, ask why they are necessary.
In addition, security teams can’t do it alone. Security must be a shared responsibility. In the ideal state, security and quality expectations are officially integrated into job descriptions, performance reviews, and developer growth plans, not just relegated to one-off trainings or reactive clean-up efforts. The challenge, however, is that embedding this kind of ownership across an organization takes time and involves navigating complex approval processes.
That’s why it's crucial to pursue top-down enablement through leadership and bottom-up momentum simultaneously. If you can prove, through metrics, that your program drives value by starting with a volunteer-based model, you can more easily persuade leadership to formalize and scale it. To accomplish this, you need allies. You need champions.
Culture is the connective tissue that holds your AppSec program together. You can deploy the best tools and policies in the world, but your program only becomes effective when people internalize security best practices and apply them in their daily work.
And that requires change.
And change is hard.
But the good news is that there are some people, right now, who are already aligned, and want to see the changes that you want. Somewhere in your organization are developers, tech leads, architects, or managers who already care deeply about software quality and security. You just need to find them, and encourage them to get involved to spread the message.
This approach aligns closely with Diffusion of Innovations theory, which explains how new ideas spread through a population.
To reach a self-sustaining adoption level, you must begin with the left side of the curve: innovators and early adopters. These individuals are key influencers who will help evangelize and scale the movement. Once enough of them engage and succeed, you'll reach a tipping point where the early majority comes on board, and you may even get to the laggards!
Or consider Derek Sivers’ famous video, "Leadership Lessons from Dancing Guy," which illustrates how movements happen not just because of the leader, but because of the first followers.
In your security culture journey, your champions are those first followers. They’re not there to do all the security work; they’re there to influence change on their respective teams. They help shift norms, shape mindsets, and accelerate change by carrying the message as a known and trusted voice on their teams, not as an outsider. It may be hard to hear, but people listen more closely to teammates sharing security best practices than to the security team.
Regarding your first followers (champions), remember to:
The point is this: whether you’ve formalized a security champion program or not, your security champion strategy is not an ancillary “nice to have” component of your AppSec strategy; your security champion program IS your AppSec program. Whether they’re officially called “champions” or not, the followers you’ve influenced are carrying the right message to the right people at the right time, driving impact and contributing to the success of your program. They are the mechanism through which your culture change scales.
To illustrate this point: I recently spoke with a security leader who was having great success influencing senior engineering leadership to drive down vulnerabilities in their respective areas. When we dug further, it became clear that an internal influencer, a respected technical individual contributor on the engineering team, understood the changes and was the voice in meetings that ultimately led those leaders to take the initiative seriously.
To build a successful program through champions, metrics aren’t just dashboard décor; they’re strategic tools for influence. When used effectively, they:
Here’s a practical set of metrics principles upon which to build your metrics strategy:
Here’s an example of badges engineering teams can earn, which works better than a “red, yellow, green” score approach:
It is impossible to quantitatively demonstrate many security-focused behaviors’ direct contribution to reduced risk or cost. In these cases, aim to show correlation instead.
For instance, data alone cannot show that secure code training caused a developer to write fewer vulnerabilities. Short of asking them, you can never know that they were about to write a vulnerable chunk of code but suddenly remembered their training. You can, however, show correlation, and even calculate its strength using a correlation coefficient, which builds a credible, data-informed story for leadership.
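As a minimal sketch of that idea, the snippet below computes a Pearson correlation coefficient between two hypothetical per-team series: the percentage of developers who completed secure code training, and the vulnerability density found afterward. All figures are invented for illustration; in practice you would export these from your training platform and vulnerability tracker.

```python
from statistics import mean

# Hypothetical per-team data: % of developers who completed secure-code
# training, and vulnerabilities found per 10k lines of code afterward.
training_pct = [20, 35, 50, 60, 75, 90]
vuln_density = [8.1, 7.4, 5.9, 5.2, 4.0, 3.1]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(training_pct, vuln_density)
# A value near -1 means more training correlates strongly with fewer vulns.
print(f"Correlation coefficient: {r:.2f}")
```

A strong negative coefficient doesn’t prove causation, but it is exactly the kind of credible, data-informed signal leadership responds to.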
Use techniques like:
To receive results like:
Surveys are great for capturing perceptions, attitudes, and sentiment directly from the participants. But beware of overreliance. There’s often a disconnect between what people will tell you they want, what they think they want, what they actually want, and how they behave.
For instance, someone may rate your live brown bag secure coding sessions a "10/10," but never show up again. Surveys should complement, not replace, behavior-based observations and metrics. Be sure to combine all of them to get a full picture.
Finally, the moment you’ve been waiting for. Here’s where it all comes together!
When speaking with executive leadership, simplicity and clarity are key. Rather than overwhelming them with a long list of metrics, focus on three to four high-level indicators that represent the health of your AppSec program, powered by security champions. Consider creating your own based on your unique goals and culture, but here are the four I’ve landed on through many years of rolling out AppSec programs, listed in order of rollout/implementation (expanding from right to left in your SDLC!): 1. Connect, 2. Find, 3. Fix, and 4. Prevent.
This strategy creates a solid foundation that allows you to, over time, get to where you are preventing risk and technical issues across your organization by producing quality code, thereby accelerating your feature delivery by reducing rework. Imagine, developers not churning and wasting their time troubleshooting and fixing performance, functional, and security flaws in production, leaving more time to work on features! This is what maximizes productivity and keeps you within the boundaries of acceptable risk.
To get to this point, do not start with step 4 (Prevent); it will be very difficult to demonstrate, when leadership asks later, what rework and risk you are actually preventing. I see this mistake a lot: AppSec professionals start with a preventative measure like threat modeling “because it’s the right thing to do” without demonstrating that there were actual issues in production that the practice prevents. Instead, Start Right by putting reactive measures in place, then Expand Left until you Expand Everywhere over time.
Without further ado, here are the specific recommended metrics to measure, one for each of the steps:
What it means: How often people/teams initiate engagement with security, by count and averages quarter-over-quarter.
Why it matters: This measures the culture’s general trust and collaboration with the security team. It also reflects how integrated security is into the development workflow. A few things must be in place to see this behavior:
It takes a lot to get to a point where you’ve built a solid relationship to see this behavior, and this simple metric can show the strength of your relationship with the org.
How to capture it: Count outbound security-related messages, reports of potential security concerns, mentions and assignments of Jira tickets to the security team, Slack conversation/thread invitations, and voluntarily initiated requests, all per SDLC phase and role to capture the demographics of who is reaching out.
How to present it: “60 developers initiated security engagement in the design phase this quarter, up from 35 last quarter. The target is to maintain or exceed an average of 50 per quarter over the next year.” Have this calculated per team and department for more visibility and to understand opportunities for improvement.
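The counting itself is straightforward once engagement events are exported from your chat and ticketing tools. Below is a small sketch, using invented team and phase names, that tallies engagements per team and per SDLC phase so you can slice the metric the way described above.

```python
from collections import Counter

# Hypothetical engagement events exported from chat/ticketing tools:
# each record captures the initiating team and the SDLC phase.
events = [
    {"team": "payments", "phase": "design"},
    {"team": "payments", "phase": "code-review"},
    {"team": "search",   "phase": "design"},
    {"team": "search",   "phase": "design"},
    {"team": "mobile",   "phase": "deploy"},
]

by_team = Counter(e["team"] for e in events)    # who is reaching out
by_phase = Counter(e["phase"] for e in events)  # where in the SDLC

print("Engagements by team:", dict(by_team))
print("Engagements by phase:", dict(by_phase))
```

Run quarterly, these tallies give you both the headline count and the per-team breakdown for spotting improvement opportunities.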
What it means: The percentage of systems covered by automated monitoring and detection tools such as ADR, SAST, IaC scanning, vulnerable dependency checks, and so on.
Why it matters: It shows the breadth of visibility into your threat landscape. You can’t secure what you don’t know about.
How to capture it: Build an accurate inventory that includes an asset risk score based on asset attributes (public facing, data sensitivity, business criticality, interconnectedness [# of dependent systems]), then map monitoring and detection tool coverage across environments (pre-prod vs. prod), accounting for which tools each asset should reasonably have. Track percent coverage by risk score and environment type (pre-prod vs. prod). Start by setting targets of coverage for critical systems and work your way down.
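One way to sketch this calculation, assuming a hypothetical inventory where each asset records its risk tier, environment, and deployed detection tools:

```python
# Hypothetical asset inventory: risk tier, environment, and the set of
# monitoring/detection tools actually deployed on each asset.
assets = [
    {"name": "checkout-api", "risk": "critical", "env": "prod",     "tools": {"SAST", "ADR"}},
    {"name": "auth-svc",     "risk": "critical", "env": "prod",     "tools": set()},
    {"name": "batch-jobs",   "risk": "medium",   "env": "prod",     "tools": {"SAST"}},
    {"name": "checkout-api", "risk": "critical", "env": "pre-prod", "tools": {"SAST"}},
]

def coverage(assets, risk, env):
    """Percent of assets in a risk tier/environment with at least one tool."""
    group = [a for a in assets if a["risk"] == risk and a["env"] == env]
    if not group:
        return 0.0
    covered = sum(1 for a in group if a["tools"])
    return 100.0 * covered / len(group)

print(f"Critical prod coverage: {coverage(assets, 'critical', 'prod'):.0f}%")
```

A real implementation would also check that each asset has the specific tools it should reasonably have, rather than just any tool, but the percent-by-tier-and-environment structure stays the same.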
How to present it: “75% of Critical Risk production assets are protected by risk monitoring and detection controls, up from 40% last quarter. The target is 90% by the end of Q4.” Have this calculated per team and department for more visibility and to understand opportunities for improvement.
What it means: The average time it takes to resolve issues, broken down by severity level.
Why it matters: MTTR is a proxy for risk exposure. Shorter MTTR means less time spent in a vulnerable state and shows the maturity of the processes in place to detect, assign, triage, and remediate issues.
How to capture it: Pull from issue trackers like Jira or vulnerability management systems. Track open-to-close times by severity. Be sure to include tickets that are still open, so that long-running, unclosed tickets are reflected in the average.
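A minimal sketch of that calculation, using invented tickets: open tickets have no close date, so their age is measured against today’s date rather than being silently dropped from the average.

```python
from datetime import date

# Fixed "today" for a reproducible example; use date.today() in practice.
today = date(2025, 6, 30)

# Hypothetical ticket export: closed is None when the ticket is still open.
tickets = [
    {"sev": "critical", "opened": date(2025, 6, 1),  "closed": date(2025, 6, 15)},
    {"sev": "critical", "opened": date(2025, 6, 10), "closed": None},  # still open
    {"sev": "high",     "opened": date(2025, 5, 20), "closed": date(2025, 6, 1)},
]

def mttr_days(tickets, sev):
    """Mean days open for a severity; open tickets count up to today."""
    ages = [((t["closed"] or today) - t["opened"]).days
            for t in tickets if t["sev"] == sev]
    return sum(ages) / len(ages)

print(f"Critical MTTR: {mttr_days(tickets, 'critical'):.1f} days")
```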
How to present it: “MTTR for critical vulnerabilities decreased from 22 to 14 days over the last six months. The target is 10 days in the next 6 months.” Have this calculated per team and department for more visibility and to understand opportunities for improvement.
What it means: The estimated engineering time and thus cost saved by preventing vulnerabilities and reducing technical rework earlier in the SDLC.
Why it matters: Prevention is cheaper than remediation. Defects that escape from one phase of the SDLC to the next demand more attention and resources, and introduce more rework and risk downstream.
How to capture it: Estimate the true cost of fixes in production vs. earlier phases (design, code review, staging) by performing root-cause analysis (RCA), using the actual time spent resolving production incidents and vulnerabilities. Estimate the effort it would take to prevent the issue using preventative controls such as training, threat modeling, and so forth. Use average time-to-fix and resource cost models to show savings. Show how the resources could have been redirected to feature development that would have accelerated the delivery timeline.
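The savings arithmetic can be sketched with a simple cost model. The hourly rate and time-to-fix figures below are assumptions for illustration; you would substitute the averages your own RCA data produces.

```python
# Assumed cost model: loaded engineering cost and average hours-to-fix
# per SDLC phase. Replace these with figures from your own RCA data.
HOURLY_RATE = 120  # dollars per engineering hour (assumed)
FIX_HOURS = {"design": 2, "code-review": 4, "production": 40}

def savings(issues_prevented, caught_phase="code-review"):
    """Estimated cost avoided by catching issues before production."""
    per_issue = (FIX_HOURS["production"] - FIX_HOURS[caught_phase]) * HOURLY_RATE
    return issues_prevented * per_issue

print(f"${savings(30):,} saved by catching 30 issues in code review")
```

Pairing this estimate with the reclaimed engineering hours lets you translate the same number into additional feature points delivered, which is the framing leadership tends to remember.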
How to present it: “Estimated $150,000 saved this quarter by reducing production escapes by 45%. Most issues were resolved during code review or testing. This time saved resulted in approximately 35 additional points of features delivered to production and increased our average velocity by 50 points.”
Also, continue to perform RCA and demonstrate, for each production incident: “This production incident cost us approximately $300,000 because of the time our resources spent resolving it. To address this going forward, we will spend approximately $25,000 to add a check for this issue during our design reviews.”
Have this calculated per team and department for more visibility and to understand opportunities for improvement.
Preventative actions include threat modeling, secure design reviews, static code scanning, and developer education. Use data to show a drop in vulnerability volume late in the SDLC, and tie it directly to these investments.
These four steps, Connect, Find, Fix, and Prevent, together with their associated metrics, tell a clear story about trust, adoption of security best practices, and the effectiveness of your approach. They resonate with both technical and business stakeholders, and they remain relevant as your AppSec program matures.
AI is not a strategy. But it can amplify one. All of what has been discussed so far can be enhanced using AI, such as in the following ways, among others:
But always remember: AI is only as useful as the human processes it's enhancing. Use it to increase the efficiency of timeless best practices, not as a replacement for them.
The future of Application Security isn't defined by tools alone; it’s defined by transformation. True progress happens when we look beyond checklists and scanners and start shaping the systems, behaviors, and mindsets that influence secure software development at scale.
Security Champions are more than participants; they're the catalyst for meaningful change. When equipped and empowered properly, they help bridge the gap between security and development. Your metrics become the evidence that this transformation is real, revealing where progress is happening and where it needs further support. And ultimately, your culture, the shared values, behaviors, and habits across teams, becomes the system that sustains it.
Modern software security isn’t just about defense. It’s about enabling developers to move faster without compromising on quality, saving time and cost in the long run to reinvest in feature development. It’s about embedding security into the rhythm of software delivery in a way that feels natural, not forced.