
The Magic of Developer Empathy: A Thoughtful Conversation About Product Security

Security Champions Podcast

Reflections on my recent Security Champions Podcast episode with Jacob Salassi 

As someone who’s spent years working at the intersection of software engineering and application security, I find myself constantly drawn to people who challenge norms, cut through noise, and speak honestly about what actually works in real-world engineering teams. 

That’s why my recent conversation with Jacob Salassi on the Security Champions podcast stood out in a big way. 

Jacob brings a rare blend of deep technical experience, strong systems thinking, and an almost stubborn insistence on aligning incentives. Not for the sake of process, but for the sake of people. His career path has taken him through network engineering, distributed systems development, and ultimately into AppSec leadership. And at every turn, he’s asked the hard questions about how software should be built and secured. 

But there was one part of our discussion that really stuck with me, and I think it’s worth pausing to reflect on it more deeply.

The Magic Box: Automating Security Requirements with Context 

At one point in the conversation, Jacob dropped this gem from his time at Snowflake: 

“Now, the magic we invented was how do you automate the creation of the security requirements deliverable? Or how do you make it super easy for the developer to get through the threat modeling process or whatever you want to, you don't really care. It's a magic box that understands the design and spits out relevant security requirements on the other side.” 

This idea, a system that understands an architecture or design well enough to generate relevant security requirements, is deceptively powerful. It goes beyond the tooling conversation; it's a philosophy shift. 

Here’s why it resonated so much: 

1. Empathy in Practice 

It’s not just about automation; it’s about lowering the barrier for developers to do the right thing. Instead of expecting them to become experts in threat modeling or compliance checklists, we build tools that meet them where they are. It’s empathy in action. 

2. Using Markdown as the Interface 

One of my favorite details Jacob mentioned was the use of Markdown as part of this process. Why? Because Markdown is developer-native. It’s the format engineers already use in README files, design docs, and wikis. By embedding security logic into Markdown, rather than forcing developers into a bespoke form or unfamiliar system, Jacob’s team made it feel natural. Developers could document their decisions, trigger requirement generation, and show evidence of coverage all within a format they already use. 

This is a subtle but game-changing example of developer empathy. When security tooling adopts the developer’s language, literally and figuratively, it starts feeling like part of their own process, not an external “tax” that just needs to be paid. 

3. Shifting from Policing to Enabling 

In the traditional model, AppSec teams are often seen as the “gatekeepers,” reviewing design docs late in the process, flagging missing requirements, and throwing tickets over the wall. Jacob’s approach flips this. It says: what if we embed security logic directly into the developer’s design flow and make it self-service? 

4. Binary Simplicity with Traceable Evidence 

Jacob described two possible outcomes from this “magic box”: 

  • Either: 
    “You risk assessed and said no new requirements, all requirements are met, and you had evidence of that.” 
  • Or: 
    “You said, whoopsie, there's no requirements. So I created these new ones. Here's how I know they're good. And here's how I show you I met them.” 

That framing is brilliant. It’s not just output; it’s decision-making with traceability. Security becomes an audit trail of contextually relevant decisions, not just a checklist of controls. 

How it Worked in Practice 

OK, all well and good, but let’s dive into the details of what this all means and how it works. For maximum clarity, this section was written by Jacob himself: 

For most product orgs, requirements are the currency of product development. Requirements go in, products come out. Security is just another stakeholder who needs to come to the table with relevant and actionable requirements for product teams. 

Understanding what the product is doing and how it does it are table stakes for defining requirements.  

In the beginning, each project feels new and requires deep thought and discussion to understand and produce relevant requirements. As time goes on patterns emerge and create opportunity for abstractions. Good engineering teams create useful abstractions to make their work faster and more reliable. 

A good example of this in security is web frameworks that mitigate XSS by default.  

  • In the beginning, you have to look into each feature and find out that XSS could be a problem. 
  • Eventually you realize that all or most web features can have XSS. 
  • You adopt a framework that mitigates (or abstracts) XSS by default. 
  • You now know that web features can mitigate XSS by using a web framework. 
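
That abstraction can be made concrete with the escaping such frameworks apply by default. This minimal sketch uses Python's standard-library `html.escape` as a stand-in for what a real web framework does automatically for every interpolated value:

```python
from html import escape

def render_comment(user_input: str) -> str:
    # A framework that mitigates XSS by default applies this kind of
    # escaping to every interpolated value automatically; here it is
    # done by hand to show the mitigation itself.
    return f"<p>{escape(user_input)}</p>"

print(render_comment('<script>alert(1)</script>'))
# <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Once the framework owns this step, "mitigates XSS" becomes a property of the technology choice rather than a per-feature review finding.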

A question every product manager and engineer faces is whether or not they have all the requirements they need. 

To answer whether or not you need security requirements, the inputs are technology and use-case. We call this risk assessment.  

In the best case, you have seen the technology or use case before and right away know the requirements, and can exit the process - think XSS and web frameworks. In the worst case, you have never seen it before and now need to do deep thinking and discussing - think threat modeling. 

Making this deterministic and traceable followed three basic phases: crawl, walk, run. 

Risk Assessment 

Crawl  

  • Create a markdown template that identifies the primary technology and use case by enumerating all known values and asking the user to select (GoSDL was my first inspiration for this). 
  • Fill out the template by creating a pull request for your project’s “risk assessment.md” in a centralized security-review repository. 
  • If you are using known technology and known use cases with known requirements, no further review is required. 
  • If any part is unknown, or has no requirements, you need to create ones specific to your technology & use case with threat modeling. 
  • Get mandatory peer review from security partners and teammates. 
  • PR & merge are linked to JIRA tickets and provide a durable security review evidence chain. 

Walk 

  • Create team-specific versions of the assessment and the ability to maintain and extend them. 

Run 

  • Replicate git data in a relational database. 
  • Abstract filling out the template, commit, and peer review behind a web app that orchestrates it behind a single pane of glass. 
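
The "replicate git data in a relational database" step can be sketched with an in-memory SQLite table; the schema, field names, and sample values here are assumptions for illustration, not the actual system's data model:

```python
import sqlite3

# Each merged "risk assessment.md" PR becomes one queryable row,
# preserving the evidence chain (PR <-> JIRA ticket).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE risk_assessments (
        pr_number   INTEGER PRIMARY KEY,
        project     TEXT,
        technology  TEXT,
        use_case    TEXT,
        merged_at   TEXT,
        jira_ticket TEXT
    )
""")
conn.execute(
    "INSERT INTO risk_assessments VALUES (?, ?, ?, ?, ?, ?)",
    (42, "billing-service", "postgres", "store-customer-data",
     "2021-06-01", "SEC-123"),
)
rows = conn.execute(
    "SELECT project, jira_ticket FROM risk_assessments"
).fetchall()
print(rows)  # [('billing-service', 'SEC-123')]
```

With review data in a database instead of scattered across git history, the web app can answer "what has this team already assessed?" in a single query.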

Ending value proposition: The system knows who you are, what you and your team typically work on, and asks you a set of relevant questions that quickly help you exit the process safely with the necessary evidence. 

But what happens if you are not using known technology for a known use case? We have to have a way to create relevant requirements on demand, and store them for future discovery and use.  

Threat modeling is one way to analyze a given tech stack and use case to discover any necessary security requirements. 

Threat modeling 

Crawl  

  • Create a markdown template that guides the reader through creating a data flow diagram and running a classic STRIDE per element exercise against it. 
  • Threats and mitigations are expressed using the Gherkin BDD language and looked something like this: 
    Because of [risk] causing [loss] 
    Then [mitigation] 
  • Fill out the template by creating a pull request for your project’s “threat model.md” and “data flow diagram.png” in a centralized security-review repository. 

Walk 

  • Automate RTMP: 
    • Shift DFDs from flat images (PNG) to structured data (XML from draw.io). 
    • Provide scripts that read the XML, apply RTMP, and output partially completed Gherkins, such as: 
      Because entity A accepts traffic from less trusted entity B over the open internet 
      Then [spoofing mitigation] 
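
That script step can be sketched as follows. The XML shape is a simplified stand-in (real draw.io exports use mxCell elements and richer geometry), and the trust attributes are illustrative assumptions:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for a structured DFD export.
DFD_XML = """
<dfd>
  <entity id="A" trust="internal"/>
  <entity id="B" trust="internet"/>
  <flow source="B" target="A"/>
</dfd>
"""

def partial_gherkins(xml_text: str) -> list[str]:
    """Read the DFD, flag flows from less trusted zones, and emit
    partially completed Gherkins for the developer to finish."""
    root = ET.fromstring(xml_text)
    trust = {e.get("id"): e.get("trust") for e in root.iter("entity")}
    gherkins = []
    for flow in root.iter("flow"):
        src, dst = flow.get("source"), flow.get("target")
        if trust[src] == "internet" and trust[dst] != "internet":
            # STRIDE: a less trusted sender implies a spoofing threat.
            gherkins.append(
                f"Because entity {dst} accepts traffic from less trusted "
                f"entity {src} over the open internet\n"
                f"Then [spoofing mitigation]"
            )
    return gherkins
```

The developer's remaining work is filling in the bracketed mitigation, not discovering the threat from scratch.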

Run 

  • Replicate git data in a relational database. 
  • Abstract drawing the diagram behind a web app that orchestrates risk assessment, threat modeling & peer review behind a single pane of glass. 
  • Data mine all past Gherkins and use semantic similarity to suggest fully completed Gherkins that may be relevant for a design. 
  • Categorize Gherkins by team and technology to enrich risk assessment and deflect future threat models. 
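
The suggestion step can be sketched as a similarity search over past Gherkins. A production system would use semantic embeddings; plain string similarity from Python's `difflib` stands in here to keep the sketch dependency-free, and the mined Gherkins are invented examples:

```python
from difflib import SequenceMatcher

# Past Gherkins mined from merged threat models (illustrative data).
PAST_GHERKINS = [
    "Because the service accepts traffic from the open internet "
    "Then require mutual TLS",
    "Because the queue stores customer data "
    "Then encrypt messages at rest",
]

def suggest(design_text: str, threshold: float = 0.3) -> list[str]:
    """Rank past Gherkins by similarity to a new design description
    and return the ones above a relevance threshold."""
    scored = [
        (SequenceMatcher(None, design_text.lower(), g.lower()).ratio(), g)
        for g in PAST_GHERKINS
    ]
    return [g for score, g in sorted(scored, reverse=True)
            if score >= threshold]
```

Every suggestion the developer accepts is one less threat they had to model from scratch, which is exactly the "never threat model the same thing twice" payoff.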

Ending value proposition: In exchange for drawing a diagram in our tool we will give you 80% of the threat model and mitigations. Never threat model the same thing twice. 

Read more about this journey on the Semgrep blog I wrote with Clint Gibler: https://semgrep.dev/blog/2021/appsec-development-keeping-it-all-together-at-scale/ 

– Jacob Salassi (https://www.jacobforthe.win) 

Bridging the Gap with Better Questions 

What Jacob’s talking about here touches something much deeper than just a clever technology solution. It asks questions like: 

  • Why do we assume developers need to manually create security artifacts in 2025? 
  • What’s stopping us from building systems that understand the structure of what’s being built and can infer what’s needed? 
  • How can we take the burden off of teams without compromising accountability? 

These are the questions that security teams should be asking if we want to scale without burning out our developers, or ourselves! 

Closing Thoughts 

This episode reminded me why I got involved as a host of the Security Champions podcast in the first place: to create space for honest conversations that challenge assumptions and inspire change. 

Jacob’s vision isn’t just about automation. It’s about compassion, clarity, and elevating the role of security in the developer experience. He’s not advocating for less rigor, he’s advocating for smarter rigor, designed around how people actually work. 

If you haven’t listened yet, I highly recommend checking out the full episode: 
Developer Empathy: A Thoughtful Approach to Product Security 
Available now on all podcast platforms. 

Let’s keep building systems, and cultures, that help developers easily do the right thing by design. 

– Dustin Lehr

 

Dustin Lehr

I started my career as a software engineer and application architect, spending over a decade writing code before transitioning into cybersecurity leadership. Today, I specialize in building security programs that drive real behavioral change—leveraging motivation, psychology, and gamification to create sustainable security cultures.
