Application Security Podcast

JC Herz and Steve Springett -- SBOMs and software supply chain assurance

January 12, 2021

Show Notes

JC Herz is the COO of Ion Channel, a software logistics and supply chain assurance platform for critical infrastructure. She is a visiting fellow at George Mason's National Security Institute and co-chairs a Department of Commerce working group on software bills of materials for security-sensitive public and private sector enterprises. JC and Steve Springett join us to talk all things software bill of materials. We define what an SBOM is and what it's used for. We talk about the threats that SBOM counters, who started it, and what the OWASP tie-in is. JC concludes our time by explaining why now is the time YOU must care about SBOMs. We hope you enjoy this conversation with JC Herz and Steve Springett.

Transcript

Chris Romeo  00:00

JC Herz is the COO of Ion Channel, a software logistics and supply chain assurance platform for critical infrastructure. She's a visiting fellow at George Mason's National Security Institute, and she co-chairs a Department of Commerce working group on software bills of materials for security-sensitive public and private sector enterprises. JC and Steve Springett join us to talk about all things software bill of materials; we define what an SBOM is and what it's used for. We talk about the threats that SBOM counters, who started it, and what the tie-in is with OWASP. JC concludes our time by explaining why now is the time that you must care about SBOM. We hope you enjoy this conversation with JC Herz and Steve Springett. At Security Journey, we believe security is every developer's job. We work with our customers to help them build long-term, sustainable security culture amongst all their developers. Our approach is to provide security education that's conversational, quick, hands-on, and fun. We don't do lectures. Instead, we let the experts talk about what's important. Modules are quick, 10 to 20 minutes in length. We believe in hands-on experiments, builder and breaker style, that allow your developers to put what they learned into action. And lastly, fun. Training doesn't have to be boring. We make it engaging and fun for the developers. Visit www.securityjourney.com to sign up for a free trial of the Security Dojo. Hey, folks, welcome to this episode of the Application Security Podcast. This is Chris Romeo, CEO of Security Journey and co-host of said podcast; I'm also joined by Robert Hurlbut. Hey, Robert.

Robert Hurlbut  01:55

Hey, Chris. Yeah, good to be here. I am a Threat Modeling architect. Glad that we have a couple of guests with us today.  

Chris Romeo  02:00

Yeah, definitely. We're going to talk about a subject that I thought I knew something about, but now I'm realizing maybe I don't. I'm looking forward to being educated here as well today. So, JC, we're going to start with your security origin story. It's always where we begin; we jump right in. We want to know, how did you get into this crazy world of security?

JC Herz  02:20

My security journey begins in the Pentagon. I was working in the CIO's office to help create some policies around open-source software. Cast your mind back to when we were having these wars about whether open-source was allowable within enterprise systems. There were a bunch of big vendors running around trying to spread fear, uncertainty, and doubt about open-source: it's GPL; if you use it, you'll have to put all your military code out on the internet; there was a lot of that going on. We brought lawyers, guns, and money: mission owners, lots of lawyers, and some funding to do an assessment of, okay, how much open-source is used in these systems? What would we do if we had to live without it? Those answers were not the answers that the proprietary-only crowd wanted to hear. We created a lot of policy to make the Defense Department safe for open-source and vice versa. This led to this whole business of supply chain, because the one thing that open-source did not have that the vendors had was someone standing behind it to say, we'll send you updates, we'll send you patches, we're going to be continuously responsible for maintaining this, or at least we say we are. The whole business of operations and maintenance, O&M in the geeky acquisition speak, came down to: all right, we won the war, open-source won, it's everywhere. Now we have to get away from this business of open-source versus proprietary, because open-source is in everything; that distinction is no longer useful. We have to start making distinctions about open-source versus open-source: how do we treat these projects as suppliers, which is what they are? These communities and developers who are either maintaining or not maintaining their software are suppliers. The analogy that I like to use is that a Gemological Institute of America certified diamond and a grocery store vending machine ring are both technically jewelry, but you wouldn't necessarily want to put them in the same category. If we are serious about supply chain, which is in the news and an issue for all kinds of reasons related to telecommunications and other things, we need to start to understand the quality and the maintenance histories of what we're using, and whether those things are defective, or whether they're vulnerable, or whether they're actively maintained. That led to the founding of Ion Channel, which I help run, which has to do with the continuous analysis of what's going on in the open-source supply chain, so that we can help our customers, who are generally in critical infrastructure and energy and telecom and defense, understand not only the CVE checking, which is very cute and needs to happen, but what are some of the leading indicators of risk that tell you that maybe not today, maybe not tomorrow, but sometime, you may want to not use that open-source component and use another one instead? Or require that people meet certain standards for supply chain risk management. Some of this gets bureaucratic and into these high-assurance standards. But part of the work that I've been doing with Steve Springett at OWASP, with SCVS, has to do with how do you make all that stuff lightweight, so that you have a way of assessing your capacity and your posture without having to hire five people to meet NIST 800-53 controls, which none of the small, innovative players can afford to do.
I think the ambition is to bring up the security posture to where a lot of critical infrastructure wants it to be, because with all the suppliers, you're getting down to some very small organizations, but without crushing the small innovators who can't afford these heavyweight, bureaucratic, costly compliance regimes to get certified for this or that, which they're not going to do anyway.

Chris Romeo  06:52

You mentioned the fact that Steve is also here, and we're happy to welcome Steve back for his third appearance on the Application Security Podcast, which puts him in a very distinguished group; I was thinking about some of the other people who are in it, Jim Manico and Adam Shostack. If you want to hear Steve's previous appearances, you can hear him talk about Dependency-Check and Dependency-Track; that was in season three. In season five, he did a discussion with us about an insider's checklist for software composition analysis. Steve, we're glad to have you back with us as well.

Steve Springett

Thank you very much. Glad to be here and very privileged to be in such awesome company.

Chris Romeo

JC, let's jump into this. When you were telling us, as we were preparing, you were talking about how some of the government's cyber assessment tools have critical software vulnerabilities. That was a pretty shocking statement, as a big picture idea here for me, so help us to understand more about that situation.

JC Herz  07:51

When COVID started, when the whole lockdown started, Ion Channel began analyzing and monitoring a bunch of critical infrastructure capabilities, a lot of healthcare capabilities. While we were at it, we said, let's take some of the open-source software that the government is maintaining and put it into analysis to make sure that it's good. I'm sure it will be. Well, it wasn't. There were tools that the government was putting forward as cyber assessment tools. One of them belongs to the Cybersecurity and Infrastructure Security Agency; it's the CSET tool. The other belongs to the National Institute of Standards and Technology, NIST. It has a supply chain interdependency tool, which is a piece of software that you're supposed to use internally to put in who all your suppliers are, which is operationally sensitive data. Then it will tell you something, visualize for you what your supply chain interdependencies are. The problem with this is that it has critical vulnerabilities in it, both critical and high CVEs, as well as a whole bunch of other supply chain risks, which people don't even govern on. It is being made available as a binary download from a website maintained by the federal government. Now, if you're maintaining an open-source project as research, and you say, okay, here's the source code, if you want to take the source code and build it yourself and run your SAST and do your assessment, that's fine. That's cool. That's great. The minute you compile something and you make it available as a binary from your website, that is not inspectable. You are a supplier, and you should not be a risky one. Ion Channel emailed them to tell them about these findings. The response that we got was, oh, well, you know, it's a research project, and there's a workaround, because if you run this tool on a computer that's not connected to the internet, it's not a problem. I, as diplomatically as I could, responded with the fact that if someone downloads a binary piece of software, an executable, from a website and can install it on their computer in their workplace, it means A, there's very little in the way of system security in that enterprise, and B, it's connected to the internet. Why don't we go ahead and fix it? When you get down to it, it turns out that the Boston Consulting Group, which helped build this, hadn't updated it in 11 months, and a lot of these vulnerability issues come down to maintenance. We wrote a white paper about this: essentially, boring is sexy, and the biggest risk is boring. It's not about shadowy actors coming up with these custom Stuxnet-type attacks on enterprises they've done reconnaissance and surveillance on. It's a whole bunch of potholes not being filled. Everyone's talking about backdoors, Huawei backdoors; you don't need backdoors. When you read the Finite State report on the Huawei software, there were so many non-maintained components in that system, like a double-digit number of major versions behind, that you didn't need to put a backdoor or any malware in these systems. It would be stupid to do so, because if someone finds the malware, they can attribute it. It's much better from an information theory perspective to have plausible deniability; no one can prove that you intentionally failed to maintain your software in such a way that you knew exactly how to exploit it. Yet these are the facts on the ground. Maintenance is, well, no one gets promoted for it; this is the problem. Everyone gets promoted for new features.
They get promoted for, we've rolled out this new capability. No one gets promoted on, wow, that's a very robust and well-maintained capability you have there. It's not getting done, so software ages like milk, not like fine wine. These things are crumbling, and these incredibly sexy demos are projects that are not maintained. If you run your pipeline SAST, the tools that we've seen the federal government maintain, they have build passing badges on their GitHub repos, but the last time that build passing was done was seven months ago. I found one of those at a different federal agency. You can be passing at a point in time, but unless you're monitoring continuously, which, I mean, vested interest, is what Ion Channel does, but unless you're monitoring continuously, you have build passing but you're critically vulnerable. You're worse off than before, because ignorance is not as bad as the illusion of knowledge. This is what we have to deal with, and we maintain the maintenance records for all these components. There's one, Open Hospital, that's a great one. We did all these healthcare capabilities, and there's an open-source project called Open Hospital. If you Google it, you'll see a website; it's everything you need to run a hospital in an austere environment. The website has all these pictures of these gorgeous African children. It's maintained by Informatici Senza Frontiere in Italy, a great nonprofit. These people claim that this hospital software is in 13 countries with 23 installations and has processed 425,000 patient records. If you scan it today, it comes up green; there are no highs or criticals, there are no viruses. We started monitoring this in April, and they had three Trojans in that software for six solid months. You can't know that from a scan today, but up until two months ago, this thing had three viruses in it, and all those patient records, which include vaccinations, births, healthcare data, that was all compromisable. The maintenance, and days passing, days failing, mean time to remediation, which is a proxy for every other meaningful cybersecurity metric; if you don't have that, and you're not measuring that, and you're not holding people to that, you're nowhere. I think this is what the SBOM enables: at least know what you have, like a software bill of materials, which is now required as a condition of FDA approval for medical devices. We're working with medical device manufacturers to figure out what it means for them to produce one of these things and for a hospital system to consume one. There's now contracting language. Mayo Clinic, whose information assurance department I would stack up against any three-letter agency's, they're amazing, they're elite, they now have terms and conditions in their contracts that say, if you want to use open-source components in your vendor solution that you're selling to the Mayo Clinic, that's awesome, but none of those components are allowed to be non-maintained. By non-maintained, we mean it hasn't been updated for a year, there's no point of contact in the package manager, or if a security issue is identified, it is not remediated within 30 days. If you have one of these components in your vendor solution, and you want to keep using it, congratulations, buddy, you get to maintain it.
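The Mayo Clinic contract criteria JC describes (no update in a year, no point of contact in the package manager, security issues open longer than 30 days) lend themselves to an automated policy check. The sketch below is only illustrative: the ComponentRecord fields and the example data are hypothetical, and real metadata would come from package managers, repositories, or a monitoring service.

```python
# Illustrative sketch only: the ComponentRecord shape and field names are hypothetical;
# the thresholds mirror the contract terms described above.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional


@dataclass
class ComponentRecord:
    name: str
    last_release: date                          # most recent published release
    maintainer_contact: Optional[str]           # point of contact in the package manager, if any
    oldest_open_security_issue: Optional[date]  # oldest unremediated security issue, if any


def non_maintained_reasons(c: ComponentRecord, today: date) -> List[str]:
    """Return the reasons a component fails a maintenance policy like the one above."""
    reasons = []
    if today - c.last_release > timedelta(days=365):
        reasons.append("no update in over a year")
    if not c.maintainer_contact:
        reasons.append("no point of contact in the package manager")
    if c.oldest_open_security_issue and today - c.oldest_open_security_issue > timedelta(days=30):
        reasons.append("security issue open longer than 30 days")
    return reasons


if __name__ == "__main__":
    comp = ComponentRecord(
        name="joes-library",  # hypothetical component
        last_release=date(2019, 11, 1),
        maintainer_contact=None,
        oldest_open_security_issue=date(2020, 8, 15),
    )
    for reason in non_maintained_reasons(comp, date(2021, 1, 12)):
        print(f"{comp.name}: {reason}")
```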

Steve Springett  15:59

A number of different thoughts come to mind from what you were going through there; I was kind of smiling a little bit, because it's a meta idea that there was a supply chain solution that had supply chain problems in the tool itself. Then their suggestion was, you should run this thing in the basement. I've been in security for going on 24 years at this point, and I remember when that used to be a recommendation that we made in the early days of security. People would say, well, you know, we're gonna take it, we're gonna bury it ten floors below ground in a bunker, and we're gonna run the computer there. That's why we don't have to care about authentication; there are going to be armed guards and everything else. If there are going to be armed guards, you're gonna have all these other things. Then somebody invented this thing called the internet, and that strategy of protecting stuff was no longer possible.

JC Herz  16:54

Yeah. You probably had to do some work to get a piece of software onto that computer ten levels down, instead of downloading it with your browser.

Chris Romeo  17:02

Yeah, yeah, it's a different world now. I was involved with Common Criteria when it first came to the US. I have a unique listing on my resume: I was part of the first commercial company that did a Common Criteria evaluation. We worked with the government as they oversaw it; how they oversaw it would be a whole episode right there by itself. That was a different world, though, than the one we live in now. It's amazing that somebody's answer was, let's take the guidance from 20 years ago and try to run it not connected to the internet.

JC Herz  17:45

Again, look at CMMC. This is the cybersecurity maturity model certification for all defense contractors, or so we're told, and it's a three-year certification. Someone does an audit on you; they give you the stamp of approval; there's some holy water that someone from the Defense Acquisition Service sprinkles on you, and then you're good for three years. I think one of the SCVS principles that's in the controls, and these are real controls that people in technology and industry in the commercial world can use, is: is this automatable? Is this a human telling you something? Is it a document or a journal? Or did a machine process create this information? Because at Ion Channel, we take in bills of materials that are thousand-row spreadsheets with the package name and version, which are the nicknames for software that some vendor is using from their repo. Part of the value proposition is solving the naming problem: to be able to actually resolve, say, Joe's library version 1.2 to the Joe organization and the URL where this thing actually lives, and it's actually not called Joe's library, it's called my favorite library, in the supply chain. Unless you can do that, you're nowhere for vulnerability management. We see software bills of materials that are filled with the output of people's spreadsheets, so it's beautifully structured data; the structure of it is gorgeous, but the quality of the underlying data is gibberish. It's not going to match anything unless someone goes through by hand to resolve it: let me check in the, tap, tap, tap, let me look in the NVD. That doesn't work either, and that's the tragedy; people are burning hours on, let's write some Python scripts, that's the first one, or, we know the Python scripts aren't really working, let's do it by hand. But by hand doesn't work either, because the NVD's data is not complete; not only are there a bunch of CVEs that come from other places, but even the CVEs in the NVD aren't complete, because there are third-party inclusions in the packages that are not covered. Joe's library may not have any CVEs against it, but there's a whole bunch of things in a container for Joe's library, or in a package in the package manager that's a runtime dependency for Joe's library, that is actually vulnerable.
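The naming problem JC describes, resolving a vendor's nickname like "Joe's library 1.2" to the place the component actually lives, is what canonical coordinates such as package URLs (purl) are meant to address. A minimal sketch, with a purely hypothetical alias table and component names:

```python
# A minimal sketch of the naming problem: vulnerability matching only works once a
# component's nickname is resolved to canonical coordinates (ecosystem, name, version),
# which a package URL (purl) encodes. The alias table below is purely hypothetical.
ALIASES = {
    # vendor's spreadsheet nickname -> (ecosystem, canonical package name)
    "Joe's library": ("npm", "my-favorite-library"),
}


def to_purl(nickname: str, version: str) -> str:
    """Resolve a spreadsheet nickname to a package URL, e.g. pkg:npm/my-favorite-library@1.2.0."""
    ecosystem, canonical = ALIASES[nickname]
    return f"pkg:{ecosystem}/{canonical}@{version}"


print(to_purl("Joe's library", "1.2.0"))  # pkg:npm/my-favorite-library@1.2.0
```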

Steve Springett  20:38

That's because Joe's library is made up of component pieces, Joe took pieces of other people's open-source and put them together into his own library.

JC Herz  20:48

That's an issue; when we go through, we ingest the graph of everything in GitHub, everything in a package manager, all this other stuff. The problem, and this is especially an issue with containers, is that when you have a container, and to some degree binary packages as well, you have the piece of software that you want, the thing that is the label on the box. You're getting Joe's library, and Joe's library has been assured; you've done this independent assurance process on Joe's library as a name, you've looked it up somewhere, and you're good to go; the thing in the box is perfect. But the packing materials are toxic, and these vulnerability databases don't track the packaging materials. The packaging materials actually contain lead and all kinds of heavy metals and are radioactive, but they're not registered as contents. We'll look at a package, and there's a runtime dependency on the package. Essentially, it's a sidecar dependency; it's not in the thing, it's around the thing, but it comes with the thing. If you take the box and put the box in your enterprise, you have that vulnerability, and the NVD won't tell you that. There's a huge amount of false negatives because our new paradigms for packaging software are not taking into account the fact that the packing peanuts are actually contents. If you want to take a box that has packing peanuts in it, you'd better assure those packing peanuts, because if they're vulnerable, so are you.
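One way to picture the "packing materials" problem: assurance has to cover the transitive closure of what ships in the box, not just the labeled component. A small sketch with a hypothetical dependency graph:

```python
# Sketch: enumerate everything that ships "in the box", not just the labeled component.
# The dependency graph below is hypothetical; in practice it would come from an SBOM
# or from resolving the package in its package manager or container image.
DEPENDS_ON = {
    "joes-library@1.2.0": ["parser@2.1.0", "runtime-helper@0.9.1"],  # runtime "sidecar" dep
    "parser@2.1.0": ["string-utils@3.0.2"],
    "runtime-helper@0.9.1": [],
    "string-utils@3.0.2": [],
}


def everything_in_the_box(root: str) -> set:
    """Transitive closure of the dependency graph starting at root."""
    seen, stack = set(), [root]
    while stack:
        component = stack.pop()
        if component in seen:
            continue
        seen.add(component)
        stack.extend(DEPENDS_ON.get(component, []))
    return seen


# Assurance (and vulnerability matching) has to cover all of these, not just the root.
print(sorted(everything_in_the_box("joes-library@1.2.0")))
```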

Chris Romeo  22:34

You mentioned that people, in your experience, are creating the list of the open-source that's in their stuff manually. They're putting that in a spreadsheet; is that a common practice right now?

JC Herz  22:49

I think that there's a bias when you get a bunch of technology people together as experienced and astute as you and Steve, that we live and breathe technology. We're generally, if not on the bleeding edge, on the leading edge of what's going on. We assume that when you create a piece of software, you're going to set up a Git repo, you're going to put it into a CI/CD pipeline, and you're going to have these automated manifests. That's how the world works. That is not the vast majority of deployed software, particularly in legacy systems. The rest of the whole world is very manual. We even see things like manifests and requirements.txt and all kinds of dependency files, even in GitHub repos, that are made by hand. They're these artisanal things, and the hard, boring problem that we solve is, how do you go from that list of things that someone made by hand, which could be what they call a FLOSS list? It's the Excel spreadsheet that contains your open-source components. Your lawyers make you keep that list because you don't want to get crosswise with GPL. That's where a lot of this information is coming from, because the enterprises that are maintaining these capabilities and selling them, and these can be global consulting companies, have no provenance or chain of custody on components. They have no idea where that stuff came from; they have a bunch of JAR files that are lying around. As for going in and forensically determining where their stuff came from, you can't do it; the data isn't there. The person who put it there hasn't worked at the organization for four and a half years. That's the reality of the rest of the huge amount of software that exists in the world, not among the forward-leaning fast followers or early adopters who are all on this podcast.

Chris Romeo  25:02

That's a good reminder for us, and I never actually thought of that. But it's a good reminder that not everybody does approach software from the same perspective that we do, meaning they're not doing those things that you talked about. Steve, I want to get you to come in here and talk about SBOM. We've mentioned SBOM a couple of times. That's the thing that I said I thought I understood before we started the conversation. I'd love to get some more perspective of what that is to help us and our listeners.

Steve Springett  25:34

Yeah, it is the list of ingredients. When we talk about SBOM, we are referring to a software bill of materials. Like any bill of materials, it is simply a list of ingredients: if I have a piece of software, what is in that thing? The majority of things that are going to be in there are going to be open-source components, but SBOMs can also describe all your first-party components as well: all your third-party open-source components, all your first-party components, and potentially services that you also depend on, depending on whether you care about including those dependencies as well. It is the full list of ingredients. Once we have that, then we can do some really interesting things with that information, including a lot of the supply chain type things that JC has been referring to, which you can't do without the full SBOM. We're really looking for that full picture: not only my direct dependencies, but all of my transitive dependencies and all of my runtime and environmental dependencies as well; those really should be included in the SBOM. Once I have all that information, then, like I said, you can do some really interesting things, as JC was alluding to, with different forms of analysis that you can't get with a lot of tools today.
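As a rough illustration of that list of ingredients, here is a minimal sketch of an SBOM loosely following the CycloneDX JSON shape (one of the formats that comes up later in the conversation); the application and component names are hypothetical:

```python
import json

# A minimal, illustrative sketch of an SBOM as a "list of ingredients", loosely
# following the CycloneDX JSON shape. Component names and versions are hypothetical.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.2",
    "version": 1,
    "components": [
        {   # first-party component
            "type": "application",
            "name": "patient-portal",
            "version": "4.1.0",
        },
        {   # direct open-source dependency
            "type": "library",
            "name": "my-favorite-library",
            "version": "1.2.0",
            "purl": "pkg:npm/my-favorite-library@1.2.0",
        },
        {   # transitive dependency pulled in by the one above
            "type": "library",
            "name": "string-utils",
            "version": "3.0.2",
            "purl": "pkg:npm/string-utils@3.0.2",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```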

Robert Hurlbut  27:01

I'm curious, JC, with SBOM, when I have that, what kind of threats could I counter if I have that list of ingredients, as Steve mentioned. What would that help me with?

JC Herz  27:13

In the cybersecurity world, for folks who focus on threat, there's a specific definition of threat, which is particular actors who are trying particular things. Threat and vulnerability are two sides of the same coin. If you have a software bill of materials, first of all, on the very basic level, you can figure out whether there are any known vulnerabilities associated with your stuff. You do a CVE check. This is table stakes at this point; it's not even fancy. Beyond that, what you can start to do is understand what the supply chain risks are associated with each of those components. There are different kinds of analysis you can do; you can go all the way from a fully automated platform that's looking at maintenance histories, which we do, to a person researching who these developers are, in cases where that's pertinent. One of the things that we do is flag change-of-control events. If you have an open-source component in your stuff, and you know it's in your stuff because you have a software bill of materials, you're using a third-party capability like ours, or you're doing it yourself, to look at whether this component has changed control. Is there someone new who is now maintaining it? That's not necessarily a bad thing; oftentimes, it's a great thing; you have a project that becomes an Apache project. There are graduation exercises for open-source projects that become important enough, and those are great. However, sometimes you have a broadly used component with a maintainer who doesn't care anymore, who's bored with it. Some nice person offers to take it off his hands, and that is a security-relevant event that, if you are maintaining an infrastructure that needs a high security posture, you should be aware of as soon as possible. That's the monitoring that you can then do. The other, more sophisticated stuff is, we're looking at leading indicators of risk to say, what are the components that are going to have CVEs eight to 12 months from now? Because the whole CVE process, vulnerability management as CVE remediation, I call it, and this is a technical term, the insane whack-a-mole gerbil wheel. Because the CVE, by definition, is a lagging indicator of risk, and there's no way you can ever win that; it's a Red Queen game. What we're looking for, based on a lot of machine learning, starts with the publication of a CVE, which could also take eight months, because someone has to develop an exploit, they document it, they have to submit it, and then the vendor or the maintainer has to do a fix before it's published. That's a long time. There's a lot of exposure time. If you roll back the tape, what are some of the indicators or combinations of indicators in the supply chain that tell you that there is a high likelihood that an exploit already exists for this component that will be revealed to the world in eight to 12 months? That's what you want to do, because if you can do that, you don't have to do the insane whack-a-mole gerbil wheel of CVE remediation, because those components are not in your system. That's where people should want to go, and it's where a software bill of materials helps get them started. I think on a first pass, without using anything ultra-elite, you can start to look at your level of technical debt. Because even aside from which of these components are going to have CVEs or not, or Ion Channel telling us there's bad juju in here, you can see, well, if this thing has a vulnerability, how easy is it going to be to fix it?
If there's so much technical debt in your enterprise system that it's going to be hard for you to refactor it, because the security update is going to break it, that is the first clue that you should be doing something different. From a threat perspective, I think we all have a lot to learn from the left-pad scenario: you have a broadly used component that, for one reason or another, is taken offline, no one can update, and while they can't update, amid the shock and all of that, all of these other vulnerabilities are exploitable. That would be a smart threat actor's approach to compromising systems wholesale. Unless you know that there's a left-pad in your system: why are we dependent on a library for indent, again? This is supply chain risk management; this is not vulnerability CVE checking, although you have to do that. That's the crawl and walk before you can run. If you don't have a software bill of materials that tells you what's in your stuff, you really cannot analyze what your supply chain risk is the same way a manufacturer would. If I'm a manufacturer of a physical product, and I have a critical component coming from a single factory on the tropical coastline of a politically unstable country, the part might be fine, but there is risk there. That's a supplier risk that I need to understand. I think what the software bill of materials gives you is a starting place to say, what is in this thing that I'm buying or this thing that I'm making? How can I then start to look at the known vulnerabilities and all those transitive dependencies that Steve mentioned, as well as some of the other risks that I can either monitor, or buy a service to monitor, or do by hand, or understand some other way?
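The "table stakes" step JC mentions, checking SBOM components for known vulnerabilities, can be sketched against a public data source. The example below uses the OSV query API purely as an illustration; it is an assumption on our part, not the data source any tool mentioned in the episode necessarily uses, and the component queried is just an example.

```python
import json
import urllib.request

# Sketch of "table stakes" vulnerability checking from an SBOM: query a public
# vulnerability source for each component. OSV's query API is used here only as
# an example source.
OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulns(purl: str) -> list:
    """Return known vulnerability IDs for a component identified by its package URL."""
    payload = json.dumps({"package": {"purl": purl}}).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return [v["id"] for v in body.get("vulns", [])]


# Components would normally be read out of the SBOM; this one is just an example.
for purl in ["pkg:npm/lodash@4.17.11"]:
    ids = known_vulns(purl)
    print(purl, "->", ids or "no known vulnerabilities")
```

The leading-indicator analysis JC describes (predicting which components will have CVEs months from now) goes well beyond a lookup like this; the sketch only covers the lagging-indicator check she calls table stakes.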

Chris Romeo  33:30

I want to switch gears and ask the OWASP question in a second. JC, I'm curious, from your perspective, why is now the time for SBOM? Application security has been around since the early 2000s. We've had this problem for as long as software has existed; I think you could argue all the way back to the beginning of software, as soon as someone invented a library, you had these dependency and vulnerability problems. Why is now the right time? Is there some flashpoint or something that's happened, or what's the driver for SBOM?

JC Herz  34:06

I think it's a combination of things. One is that the threat landscape has changed dramatically. If you look at the notion of maintenance, and you look at the ransomware attacks that have happened because people do not maintain their systems or because products aren't maintained, that's real to people now in a way that it wasn't before, when you have a city of Baltimore or school systems or hospitals. The other thing is we've moved on from these attacks that have to do with the compromise of confidentiality. The Target attack, the Equifax attack, as horrible as it was, was a bunch of data about consumers that was exfiltrated. They compromised confidentiality. They did not compromise availability or reliability. When you start getting into these other ilities, especially in things like electricity and power and utilities, that's a different realm of what people are worried about. My motto as a professional is preventing the zombie apocalypse every day. That is different. The other thing is that there's regulatory pressure. When you have to submit a software bill of materials to get FDA approval for a medical device, roughly a kajillion dollars' worth of industry now has to pay attention. We're seeing the same requirements play out in critical infrastructure and energy and anything nuclear and defense and finance. In finance, the banks will require a software bill of materials if you want to sell them some awesome AI/ML packages for their trading desk. If you don't give them one, there's one bank that will immediately require a 40% discount to reflect the higher cost of assurance, if they buy your product at all. And there are certain libraries that, if they're in your stuff when you give them an SBOM, mean you are not going to sell your product to JPMorgan Chase; it's not going to happen, because these guys are not willing to assume that level of third-party risk. When some of these large, highly security-aware organizations begin to exert their requirements on their suppliers, as a condition of procurement or as a condition of acceptance, which is how people get paid, that's when it starts to get real. The thing about these requirements is that they tend to flow down. Now you have to figure out how to exert this level of situational awareness with regard to your Eastern European contract outsourced software developer or your Bangalore app developer. Now it's your problem. That's how supply chains work; they are multitier. When the people who are the downstream consumers, who have the money, start to exert these requirements on their first concentric layer out, those people are then forced to exert, or at least attempt to exert, those requirements on the next layer down. Now, the fortunate thing for open source is that it's open. The more open-source that we use, especially open-source that is responsibly maintained, the less opacity we have to deal with. The thing that I find concerning is not that so many of these vendors are throwing up all this fear and uncertainty about open source; Sonatype, Black Duck, all these guys are like, you should be scared, we can protect you. That is ridiculous FUD. The real problem is when you have a subcontractor who folded in some licensed, proprietary, third-party component that he doesn't want to tell you about because he views that as his competitive advantage. It's the unknown unknowns. With open-source, even if you only have a first-level dependency list,
at Ion Channel, we resolve those all the time to a full transitive dependency analysis. There's a whole bunch of open-source tools that can do that, too. But the thing that is not disclosed because it's a licensed component, that is the one, that's the thing that's going to kill you. I think that monitoring open-source components and having them in our SBOMs is all very important. But I don't think we should let a bunch of SCA vendors try to scare us into thinking that open-source is the problem, because we all have the tools that we need to become aware of it and to manage it. It's time to grow up and not be cowed by these requirements, but to equip ourselves for the task and make the cultural decision that we're going to have to slow down the delivery of new features. We're going to have to maintain this stuff, and that's the hardest thing of all.

Chris Romeo  39:14

Yeah, it's always a challenge trying to slow things down. But it sounds like now is the right time. It's probably been the right time forever for this particular problem, but there's enough attention, there are enough things happening, that people are dialing in; like you said, we always follow the money. When money drives the need for these types of things, that's when people start to take a lot more action.

JC Herz  39:41

I think that Steve, you speak so well about the requirement for automation. I would love to have Steve lay out the case for SCVS as a metric for automation, to assess the quality and the comprehensiveness of your data, because I think that's very important.

Steve Springett  40:00

Indeed. As JC mentioned, there's manual work being done in legacy systems, especially for legacy languages and whatnot: C and C++ and assembly. Device drivers are part of the software stack that we care about. But a lot of the things, especially around SCVS, which is a verification standard, the majority of requirements there are automatable. This is important because it allows modern development shops, it will allow any shop, to create some tests that can continuously monitor an organization's maturity in regards to what their supply chain capabilities are, and that's important.
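To make the point about automatable requirements concrete, here is a sketch of the kind of continuously runnable check a shop might wire into CI; the specific assertions are illustrative stand-ins, not requirements quoted from SCVS.

```python
# Sketch of turning supply-chain requirements into automatable checks that can run in CI.
# The assertions below are illustrative stand-ins, not quotations from SCVS itself.
import json
import sys


def check_sbom(path: str) -> list:
    """Return a list of failed checks for the SBOM file at `path`."""
    failures = []
    try:
        with open(path) as f:
            sbom = json.load(f)
    except (OSError, json.JSONDecodeError) as exc:
        return [f"SBOM could not be read or parsed: {exc}"]

    components = sbom.get("components", [])
    if not components:
        failures.append("SBOM lists no components")
    for c in components:
        ident = f"{c.get('name', '?')}@{c.get('version', '?')}"
        if not c.get("version"):
            failures.append(f"{ident}: missing version")
        if c.get("type") == "library" and not c.get("purl"):
            failures.append(f"{ident}: library has no package URL")
    return failures


if __name__ == "__main__":
    failed = check_sbom(sys.argv[1] if len(sys.argv) > 1 else "bom.json")
    for f in failed:
        print("FAIL:", f)
    sys.exit(1 if failed else 0)
```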

Chris Romeo  41:00

SCVS is a relatively new OWASP project; that's within the last year? What does adoption look like for that? Are people starting to lean into it and see it as something that, hey, I can bring this into my company and we can start to use it? Is this a standard that's trying to influence regulators? What's the purpose of SCVS?

Steve Springett  41:21

That's interesting; there's a lot of related OWASP work that's out there. You've got SCVS, which is the Software Component Verification Standard; you've got Dependency-Track, which is a way to analyze SBOMs, also an OWASP project. OWASP is not a standards body. CycloneDX, for example, is one of three SBOM formats; it comes out of the OWASP world but is not an OWASP project. There are a lot of related things, and the commonality between these efforts is really about shifting the focus away from the whole whack-a-mole thing, which is not a strategy, into thinking in terms of the supply chain. The OWASP Top 10 has A9; A9 has been there since 2013. It needs to go away. A9 is a subset of a much larger problem. I hope the OWASP Top 10 folks take that into consideration. I created a ticket in their GitHub repo a year and a half, two years ago, to hopefully turn A9 into more of a category type of thing, where we're talking more about supply chain risk. But the OWASP projects that I'm involved with, whether it be Dependency-Track or SCVS, are trying to shift the focus towards supply chain, SCVS being the most recent one, where we give close to 100 requirements, about 90 or so. Like all OWASP verification standards, it has pretty much three levels. Level one: I know how to spell security. Level two: I care about security. Level three: this stuff is important. We try to make it approachable and lightweight. We want organizations to be able to measure where they're currently at and work on improvement plans. Historically, when I would tell people in the security space, other technologists, that at one of my former employers it took us two years to secure Maven, two entire years to go from running Ant and having dependencies checked into a version control system to using Maven with the same level of assurance, it took us two years to get there. The activities that we had to undergo are requirements in SCVS. It's more about the holistic supply chain thing. If you use a lot of these things out of the box, they will work, and that's the intent of them, but they're not going to work securely. SCVS is a way to shift the focus to talking about supply chain things and make it approachable, so organizations can adopt it and improve over time.

Chris Romeo  44:46

We're about out of time for our conversation today. JC, I'll start with you; Steve, I'll come back to you. What's the key takeaway or call to action? What do you want our listeners to do as a result of our conversation today?

JC Herz  45:02

I think there are two things. One would be software bills of materials: can you produce one for your system, whatever you're developing, if you are a developer? If you're a consumer of software and you're not developing, are you requiring a software bill of materials from the people whose software you're using? That's number one. Then number two, I think it's worth looking at the OWASP Software Component Verification Standard as a benchmark to say, well, where are we here? Do we have half of the level ones? A quarter of the level ones? Let's pick something and work on it as a way to start on the journey from the Shire to Mordor that we're all on, implementing all this stuff at speed and scale. It's worth looking at because it's rigorous and yet not 480 pages, which is what NIST does, as a way to say, okay, where are we at, and how can we start? Because everyone starts in a place, and everyone can get to a better place. We can all be constructive about this without the emotional baggage of shame and guilt because all our systems are garbage.

Chris Romeo  46:17

Yep. Steve, did JC take all the good key takeaways?

Steve Springett  46:21

No, I will second the whole SBOM adoption. SBOMs are already important, and they're going to be increasingly important in the near future. It is a sign of organizational maturity; organizations that can produce one through automated means have a certain level of development maturity, and that kind of maturity is going to be an expectation going forward. Adopting SBOMs is a strategic advantage, regardless of how you produce them. Being able to do so is going to have many technical and economic advantages for you.

Chris Romeo  47:03

All right. Well, JC and Steve, thank you so much for educating us about supply chain risk and SBOMs and SCVS, and I learned a lot of different new things today. We'll have to do another conversation in the future to continue this because I feel like there are a lot more things we can talk about, but thank you for your time today. Listeners, you have some homework there. Get going with this SBOM thing; check out SCVS; you've got your assignments. Thanks for being here today.  

Steve Springett  47:34

Thanks, Chris. Thanks, Robert.  

Chris Romeo  47:35

Thanks for listening to the Application Security Podcast. You'll find the show on Twitter @AppSecPodcast and on the web at www.securityjourney.com/resources/podcast. You can also find Chris on Twitter @edgeroute and Robert @RobertHurlbut. Remember, security is a journey, not a destination.
