Chris Romeo, Tiara Sanders, Robert Hurlbut, Josh Grossman
Chris Romeo 00:01
Josh Grossman has over 15 years' experience in IT risk and application security consulting, and he's also worked as a software developer. He currently works as CTO for Bounce Security, where he focuses on helping organizations build secure products by providing value-driven AppSec support and guidance. In his spare time, he's very involved with OWASP: he's on the OWASP Israel chapter board and is a co-leader of the OWASP Application Security Verification Standard, or ASVS. He's also contributed to various other projects, including the OWASP Top 10, the Top 10 Proactive Controls, and Juice Shop. Josh joins us to talk about high-value AppSec scanning programs. This is a new idea that he and the folks at Bounce Security are developing and are running some training courses around. We explain the basics of the tools that we use in AppSec, then Josh takes us through some of the challenges that developers have with these tools and what you can do to be more successful in building a high-value AppSec scanning program. We hope you enjoy this conversation with Josh Grossman.
Tiara Sanders 01:12
You're about to listen to the AppSec podcast. When you're done with this, be sure to check out our other show, Hi/5.
Chris Romeo 01:22
Hey, folks. Welcome to another episode of the Application Security Podcast. This is Chris Romeo, CEO of Security Journey and co-host of said podcast. Also joined today by my friend, Robert Hurlbut. Hey, Robert.
Robert Hurlbut 01:35
Hey, Chris. Robert Hurlbut, Principal AppSec Architect at Acquia. Really glad to be here.
Chris Romeo 01:42
Yeah, that's a new title for you. I always used to call you the threat modeler to the stars, or whatever it was, with Hollywood people checking in with Robert for some threat modeling. Let's consider how this all comes together.
Robert Hurlbut 01:56
I am still focused on threat modeling. That hasn't gone away.
Chris Romeo 01:59
It's hard to imagine you focusing on anything else; I think I met you at a threat modeling talk you were delivering in good old Detroit, Michigan. Our guest today is Josh Grossman. This is his second visit to the show. The first time he was with us was back in the early days of the AppSec podcast, back in 2019. I was looking in the Wayback Machine, trying to remember. I had a chance to interview Josh at AppSec USA in San Jose; that's how long ago it was. We talked about AppSec in Israel and the talk that he was doing at that conference. So it's great to have Josh back with us again. Josh, what have you been up to since 2019?
Josh Grossman 02:42
Thanks so much for that intro. Really great to see you again. The last time we saw each other was AppSec USA. That was a really great conference, a really interesting experience, my first sort of major conference. What I was talking about at that conference was penetration testing, which was what I was doing at the time; I was doing a lot of application penetration testing, sort of delivering and leading. I've worked for that company and other companies since then, in more of an application penetration testing role. What I was doing more and more over the last couple of years was on the internal side, working with development teams, looking at how to help developers build secure software from the beginning. Then a couple of months ago, Avi Douglen, who I think we actually interviewed at the same time on that previous podcast, reached out to me and asked if I wanted to come work with him, and it sounded interesting, so I had a chat with him. It became clear that the direction his company, Bounce Security, is going is very much what I was looking for in terms of working with developers, working with development teams, and trying to build security in early on rather than just coming in as a breaker. I made that move, and now I'm very happy working with him on security.
Chris Romeo 04:03
That's cool. Obviously, he is a good friend of ours and a co-conspirator in the Threat Modeling Manifesto. We've gotten to spend a lot of time together now. I want to dig in a little bit to this transition you've made from breaker to someone who's focused on helping builders, because it's a bit of a soapbox for me. As our audience knows, they're like, oh, get ready, here comes a soapbox moment. As an industry, we put so much focus on red teaming, pen testing, breaking. Every time I speak to a university CS student who's thinking about security, I'm like, hey, what do you think you want to do? I want to break stuff, I want to hack the planet on my skateboard or whatever. For you, as somebody who's made that transition from being more of a breaker into a builder, why make that transition? And what are some of the things that you've learned about breaking and building through that little journey you've been on?
Josh Grossman 05:06
The why of it was about the impact. I enjoyed security as a whole; it was always a hobby, and it became a job. I think a lot of people in this industry feel the same way, that part of this is fun, and you feel very connected to the industry. What I saw on the breaking side is that it's fun, it's interesting, it's constantly moving, but we see the same thing over and over again; we see the same issues over and over again. I think the root cause is that it's not easy to build secure software, because developers are saying the same thing on the other side: things are moving very fast, issues keep coming up, and new problems arrive all the time. It's not something they are immediately equipped to deal with. I think there's value in taking that passion and that excitement about security and applying it the other way and saying, look, here's how you can make things more secure. I think that's where the real impact is. The impact of actually moving the needle, of making things more secure and building software that is more secure, is only going to come from working with developers. It's almost an extreme example, but I was at a local conference here, Blue Hat, which is run by Microsoft, and it's very much a hardcore hacking conference. I was wandering around, and I was like, yeah, this is a cool, stylized conference, but I realized that the big people to be talking to are the developers; they're the ones who have to build stuff securely in the first place. We can break as much as we want, but I felt like the real connection needed to be with the developers themselves.
Chris Romeo 06:36
I've summarized that in various conference talks over the years by saying you can't hack yourself secure. Everybody has this big focus on, let's break, let's red team, let's pen test. At the end of the day, that's a security control, a check that happens after you've developed the software; if you haven't put the effort in early in the lifecycle, you're going to be able to break it. You can break anything, but it comes down to this: the true place we can change things and have an impact is earlier in the process. Kids, stay in school, study AppSec. Don't believe the hype that everybody has to be a breaker. Listen to Josh's story here and join us in the AppSec universe.
Josh Grossman 07:25
All the same, I think penetration testing is still important and has value. That knowledge is important as well, but it's important to see that breaking isn't necessarily the goal. Breaking is a great way to learn, especially early on, but it comes to a point where, if you want to make a real impact, that needs to be on the building side.
Robert Hurlbut 07:45
Josh, today's topic, which you brought to us and have been looking at, is the high-value AppSec scanning program. What do you mean by that?
Josh Grossman 07:57
Once upon a time, a few years ago, when you told a company, oh, you need an application security program, they'd be like, oh, yeah, we do application penetration testing; that's our application security program. Certainly, we've moved on a little bit, and I think one of the directions we've moved in is saying, well, we need more application security processes. A lot of companies have just run to say, okay, let's get tools in: let's get a SAST tool, let's get static code analysis, let's get software composition analysis, let's get dynamic analysis. Let's get all sorts of tools in, because I think there's an expectation that, oh, it's a tool, it's automated, you sort of crank the handle, and things will become secure. What gets missed a lot of the time is that these tools have to be integrated into an overall plan: okay, we're going to use this tool, and this is how we're going to use it. This is what we're going to do with the results from the tool. This is how we're going to prioritize and interact between the different tools we've got. That's something that often falls between the cracks. The idea is that you've got all these tools, but how can you build a program around those tools so that you know what you're doing and exactly how you're going to react to the output of those tools as well? It's not just cranking a handle and something runs; it's, what am I going to do about that afterward?
Chris Romeo 09:24
When I think about AppSec programs, it's people, process, tools, and governance. Robert, you'll remember our friend Alyssa Miller is the one who introduced us to adding the governance piece to that. Thanks, Alyssa; it's stuck with me since we had that interview a couple of years ago. People, process, tools, and governance. You're talking about the tools, obviously, and maybe a little bit of the process as well, plus the people, plus the governance; there are probably pieces of all of those in what you're thinking about here.
Josh Grossman 09:55
I think that's one of the interesting things that comes up. Building on the governance piece Alyssa added, what I've seen quite often is that you've got tools; we've got all these tools we've now implemented and are expected to use. You've got governance, because someone somewhere is saying you need to have zero findings in this tool, or you need to have zero criticals and highs in order to be able to release. But they're missing the process in between, and the people aren't sure what they're supposed to be doing because that process isn't there. I think that ties nicely into the overall idea.
Chris Romeo 10:25
Before we dive deeper into this, it would do our audience well not to assume everybody knows all of these four-letter and three-letter tools. Why in AppSec do we have to have four-letter tools? We have SAST, DAST, IAST, and RASP, and then we have SCA. Couldn't we add one more letter onto that? SCAS? I don't know, I'm just trying to make up something new. Josh, maybe you can give us just a quick one- or two-sentence definition of each of those types of tools that are part of this AppSec scanning program, just in case we have folks out there who maybe don't know all of the tools, maybe they just know one. We can educate them before we start talking about how we bring them all together.
Josh Grossman 11:08
There are lots of different types of scanning tools, especially some of the more modern ones that are looking at particular problems. From my perspective, what I've seen most frequently are four key types: three, plus penetration testing. I think penetration testing also has a place in this whole program, but it's a little bit of a different thing. If we look at the three other tools: you've got SAST, which is static application security testing. That's usually a scanner that will go through the code itself, or the compiled binaries, and try to look for patterns or flows that indicate there might be a vulnerability at the code level. That's looking at the code that you, as the developer, or you as the organization, have written. There are SCA tools, software composition analysis. They're looking at the libraries, the third-party code you're bringing into your product. In general, they're looking to see, okay, what library are you using? What version is it? And do we know of any known issues with this library? That might be known vulnerabilities in the library that have been reported and have a CVE identifier. Or it may be that the library is licensed in a particular way such that, to comply with the license, you'd have to open-source your product as well. The third key type of tool is DAST, or dynamic application security testing. That's testing at runtime, while the application is running: sending malicious payloads to the application, seeing how it reacts, and seeing if you can deduce from how the application reacts whether it's got a vulnerability in it or not. People call it automated penetration testing, and I get very upset because I'm like, no, this isn't penetration testing. Penetration testing is one thing. This is DAST, and it's effectively a form of pattern matching: following a pattern and seeing how the application responds.
The key thing here is that it's happening at runtime, whereas something like SAST or SCA is static; the application isn't running, and you're just scanning the application code, binaries, or libraries.
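To make the SAST description concrete, here is a minimal sketch, not from the conversation, of the pattern-matching idea Josh mentions. The rules, regexes, and sample line are all invented for illustration; real SAST engines use parsing and data-flow analysis rather than bare regexes:

```python
import re

# Toy rule set: each rule flags a source pattern that often (but not
# always) indicates a vulnerability. Invented for this example.
RULES = {
    "possible SQL injection": re.compile(r"""execute\(.*["'].*\+"""),
    "possible command injection": re.compile(r"os\.system\(.*\+"),
}

def scan_source(source):
    """Return (line_number, finding_name) pairs for lines matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

# String concatenation inside a query call trips the first rule.
sample = "cursor.execute(\"SELECT * FROM users WHERE name = '\" + name)"
print(scan_source(sample))  # [(1, 'possible SQL injection')]
```

Even this toy version shows why untuned tools produce noise: the regex would also flag safe concatenation, which is exactly the false positive problem discussed later in the episode.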
Chris Romeo 13:09
Would you lump IAST then, interactive application security testing into that DAST bucket where you had the three buckets?
Josh Grossman 13:20
I'd put it as an advanced form of DAST. From what I've seen, in order to make IAST work, you need to have the application running and you need to have traffic going to the application, and the IAST can detect when that traffic hits certain parts of the application or certain vulnerable functionality and say, oh yeah, you hit that, and there's a vulnerability here. It's DAST-plus in some ways, because you're scanning dynamically, but you've got a bit more information about what's going on behind the scenes in the code itself.
Chris Romeo 13:55
As far as RASP goes, because RASP is in production, are you going to leave that off to the side a little bit in this conversation?
Josh Grossman 14:05
RASP usually gets lumped in with these a little bit, but it's slightly its own thing. It's more production-time; it's runtime detection and runtime response. My main focus at the moment is looking more at development-time practices and what's happening while the application is being built.
Chris Romeo 14:27
Now that we've laid the foundation of what each of these classes of tools is, where did this idea for this topic come from? Did you wake up one morning? I imagine people have ideas; they wake up, and they're like, oh, I had this idea, I've got it. We've got to put together a solid AppSec scanning program. Where did this idea come from?
Josh Grossman 14:48
Like I said, over the last couple of years I've been working a lot more with development teams, and I just saw how difficult this was for them. A lot of these teams had these tools in place already, but they were struggling with them; they weren't sure exactly how to use the tools, and they often weren't familiar with exactly what the differences between the tools were. They sometimes had very unrealistic expectations of the tools; what they thought they'd get out of the tools was very different from what they were actually going to get. Once it came to handling the output, they were completely unprepared for that as well. If you think about a penetration test, you might get 20 findings, hopefully quite well explained and stepped through. You run an untuned static code analysis tool, a SAST tool, and you might get 1,000 findings, and suddenly someone's left with these findings, thinking, what am I going to do with this? They had the tools; they didn't have the process. Even in very large organizations, they may have quite regimented development processes: working agile, they've got sprints, they're using JIRA, they've got tickets, they've got epics, whatever else. But they hadn't planned out, okay, how do the tools fit into this? Where do the security scanning reports fit into this? What sort of process, what sort of activity, are they? They didn't have that process in place. A pain point I saw very often at the same time is that these tools ultimately end up with developers; I don't think there are enough security people in the world to deal with these tools on their own. Most of the advanced vendors now are focusing these sorts of tools on developers. When it comes to talking to developers about security, it's either, okay, here's how you write secure code, or here's the OWASP Top 10, or maybe even here's how you put tools into your CI/CD pipeline, the automation aspects.
Again, I don't think there's much information out there about the actual process aspects themselves, but eventually, someone is going to have to look at a report and decide what to do. Someone's going to have to figure out, okay, where does this enter into my process? I think those sorts of processes are crucial for actually getting value from these tools and getting security benefits, and I think a lot of organizations are struggling with that.
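The triage step Josh describes, turning a thousand raw findings into something a team can act on, might be sketched roughly like this. The severity scale, field names, and sample findings are assumptions invented for the example:

```python
# Map severity labels to sort ranks (lower rank = more severe).
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings, min_severity="high"):
    """Deduplicate raw scanner findings by (rule, file, line) and keep only
    those at or above the severity threshold, most severe first."""
    cutoff = SEVERITY_ORDER[min_severity]
    seen = set()
    kept = []
    for f in findings:
        key = (f["rule"], f["file"], f["line"])
        if key in seen:
            continue  # same issue reported twice, e.g. by overlapping scans
        seen.add(key)
        if SEVERITY_ORDER[f["severity"]] <= cutoff:
            kept.append(f)
    return sorted(kept, key=lambda f: SEVERITY_ORDER[f["severity"]])

raw = [
    {"rule": "sqli", "file": "a.py", "line": 10, "severity": "critical"},
    {"rule": "sqli", "file": "a.py", "line": 10, "severity": "critical"},  # duplicate
    {"rule": "weak-hash", "file": "b.py", "line": 3, "severity": "medium"},
    {"rule": "xss", "file": "c.py", "line": 7, "severity": "high"},
]
for finding in triage(raw):
    print(finding["severity"], finding["rule"])
```

The point is not the code itself but that someone has to decide the threshold and the deduplication rules up front; that decision is the process piece that Josh says is usually missing.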
Chris Romeo 17:03
I can confirm what you're saying about developer knowledge of tools. One of the things we do here at Security Journey is we have content on SAST, for example. We don't talk about a specific tool. What I found is that just introducing a developer to the idea of static application security testing and letting them know, what is it? What's the value for you as a developer? What does the workflow look like? I'm not even talking about a specific tool, but just having that understanding and that knowledge changes the game when they have to use the tool. So many times, the security people were like, hey, we bought all these tools, and developers, you have to run them, this is the way that you have to do it. And yes, there are 10,000 entries that came out of it; I don't care, you're going to run it with everything turned on, you're going to eat your vegetables and like it, and you're going to deal with all these things as best you can. We don't think developer-first; we think security-people-first, and I'll admit it, I've been guilty of this in the past. This is how security used to think, and in this new world it's got to change. To your other point about the number of developers versus the number of security people out there, we don't have enough security people to run SAST tools for all the stuff that's out there. We have to enable these developers to be successful, to understand what they're doing, and to gain value from the tools. Because we also know that if you lose developers on a tool, you're not getting them back. If they have a bad experience with the tool and it somehow messes with their system, or it generates 10,000 bugs for them automatically, they're never going to trust it again. That's just been my experience in getting these tools in front of developers as well.
Josh Grossman 18:48
I think that last point, especially, is key. These tools can suddenly become security; suddenly, the vast majority of a developer's day-to-day security experience becomes, I've got more results from this tool that I need to deal with, oh no, not again. I've had organizations come to me and say, oh, we really want to get our developers excited about security so that they'll be more motivated to deal with the output of these tools. Well, you're never going to get developers excited about security from these tools; you just need to find ways to get them through these findings faster. Hopefully, that will free up time to think about other things and other aspects of security as well. That's how you're going to get them interested. That's how you're going to get them more bought in. These findings are never going to get exciting, but the process can be made easier.
Robert Hurlbut 19:36
Josh, we've talked about the importance of these tools, and none of us denies that. Certainly, I know developers can look at this and say, okay, I can see some value here. In terms of this AppSec scanning program, can you describe how you're developing that idea? How are you turning it into a program and helping developers with the things they may be struggling with?
Josh Grossman 20:02
This issue has always been on my mind; it was something I was acutely aware of, and then I started working at Bounce with Avi. He's obviously done a lot of training courses related to threat modeling and to .NET security; he has been successful and well known for that. He asked me in passing, oh, by the way, have you got any ideas for training courses? Because that's one of the things we do here. I went home and thought about this a little bit, and all these thoughts suddenly fell out, all this stuff I'd been thinking about while working with these organizations and seeing the struggle. It fell out onto a piece of paper, and I was like, wow, there's quite a lot going on here, quite a lot of ideas. I thought this was something that would be valuable to take to developers and to organizations. I sat down, did some more work on it, and did a lot of iterations with Avi and also with Adi Belinkov. She's very experienced in application security, also here locally in Israel, and she was working with us until a month or so ago. By the end, we reckoned we had a rough outline of a few days' training covering all these different tools, thinking about how we can use them better and how we can understand them better. We'd thought maybe we should just write this down in a very long document or a book or something, but I think there needs to be an interactive element as well; even if we're not going to make this fun, we need to try to make it engaging in some way. We need to guide developers through this process and find ways to help them work through it. In the end, we came down to a few key areas we wanted to talk about and what we think we need to get across to developers.
The first is, as you said, understanding the tools better: saying, yeah, this is the tool, this is how it works, and digging a little bit deeper into what's really going on behind the scenes and the different features and differences in functionality, completely vendor-agnostic. There are a lot of shared, common features between them, and I think it helps developers to understand that context and that information. The second is configuring the tools. A lot of the more complicated tools have very different ways they can be run, different modes they can be run in, and understanding that is also very important. Even within the same organization, the different products being developed may have very different profiles. One may be a modern web application using all the latest frameworks, and one may be a very old monolithic Java application. They may need to think about the best way of running the tool on each particular type of application. Again, that's knowledge you might be able to get from a particular vendor, depending on how good your vendor is, but there's still a lot of commonality that it's important developers actually understand. The third key pillar is stepping back and saying, at the end of the day, we are finding application security vulnerabilities here. Developers don't have any innate training in, okay, here's how you assess vulnerabilities, here's how vulnerability ratings are calculated, here's the thought process that goes into deciding how bad this vulnerability is and how it affects us. Another important part is presenting them with that and helping them: okay, here's the mindset to work through to evaluate one of these vulnerabilities, both from a generic perspective and also thinking about the specific types that come out of each tool.
Again, one type of tool is looking at vulnerabilities in your code, and one type of tool is looking at vulnerabilities in third-party code that are publicly announced but may not actually affect you. It's about how to approach that process and how to bring developers into it. Like I say, the commonly held wisdom is that you don't want to give developers 1,000 findings; you want to do all the triage yourself first. In some organizations even that's not realistic; maybe there's some basic triage to be done, but it will still fall to people whose primary day job and immersion isn't security. At the moment, we've got a one-day version of the course scheduled for virtual AppSec EU, which is happening in June. We also ran it past Jim Manico, and he really liked the idea and put it in his catalogue as well, which is really interesting. The key thing for us is, we obviously very much love OWASP and the open resources model, and we're thinking about what we can do to release this sort of information more widely. At the moment, it looks like the main way we can do that is to support the exercises that will be part of the course. We're trying to prepare worksheet templates to help people think through: okay, here's what I need to think about if I'm implementing one of these tools, here's what I need to think about if I'm evaluating one of these tools, here's what I need to think about if I'm evaluating vulnerabilities. The goal is to release those more widely to build up this thought process and make this information easier to access and more available. We need to find a way of communicating this; I think we can't build a one-size-fits-all, here's-how-everything-works, here's-how-you-will-do-it-in-your-organization guide, because every organization is different. But we can certainly help developers with the ideas and the thought process behind it.
Chris Romeo 25:16
The idea of an implementation guide is something that doesn't exist in the OWASP world. We've got cheat sheets, which are issue-specific. One of the things that I've done in the past in helping other companies build SDLs is to create implementation guides in the early days: a set of process steps and information. You could do an implementation guide about a SAST tool specifically, to say, here are the things that you have to do, here are the things you have to look out for. It's almost like a cheat sheet, but a cheat sheet has a specific definition in the world of OWASP, and this isn't a cheat sheet. That's neat to hear that you're thinking about that. It sounds like a well-needed topic that a lot of companies struggle with; they need that type of perspective. I want to change gears a little bit and understand some of the specific examples that you've seen of companies struggling with tools. I love to hear real-world case studies, and I think it helps other practitioners as well, because there are probably some people who are going to hear what you're about to say and go, oh, it's not just me. Everybody has this problem, too.
Josh Grossman 26:30
That's a great part of being a consultant: you get to see lots of different organizations and lots of different environments, and you get to bring out war stories you can anonymize, which I think are useful to share. I've seen lots of different examples of this. A few key ones. There was an organization I was involved with where the QA team wanted to start using DAST. Someone suggested to the QA team, a pure QA team, not a security-focused team, that they should start using a DAST tool, because they were already effectively performing dynamic application testing; they had a whole QA suite of automated and manual tests against a running application. This would be a very neat insert into their processes. Unfortunately, the big challenge was that they had very unrealistic expectations about what they'd get out of the DAST. Their entire life, they breathe bugs; they're like, okay, we find bugs, we fix bugs, we find bugs, we send them to be fixed. They were completely focused on, are we finding bugs, are we finding bugs, are we finding bugs? I was trying to walk them through: okay, this is what DAST does, and it will find certain vulnerabilities if they exist. You have to look at how much of the application you're covering, what tests you want, and what bugs are interesting to you. It was very hard to shift them away from this mindset of, does it find bugs or does it not find bugs? Often, you'll scan something with DAST and, depending on the complexity of the application, it might not find issues. DAST is very dependent on how well it manages to navigate the application and how much coverage it can get. In the end, I think they got tired of it. They said, we're not seeing the bugs; we're not seeing loads of value from this.
So they didn't want to use it, because I think they had unrealistic expectations upfront. Maybe if they'd had slightly more understanding of what this tool does and what the situation was, with more joint work between the QA and security teams rather than it being put on QA alone, it would have potentially been more successful. Another organization I was working with, I was helping with the backlog on their SAST tool, their secure code scanning tool. They had a process where they had a large list of code-level vulnerabilities they needed to fix. They were tracking metrics every month: okay, now we've got 600, now we've got 500. They were gradually burning down this backlog. I'm helping them with this, and one day I go into the tool, and there are thousands of vulnerabilities. I said to someone, what's going on? Where did all these vulnerabilities come from? This is much, much more than what you've got in the metrics. They said, oh no, you're looking at the wrong view; you need to change the filter to that view. I was like, why? They said, because that's the view that we use. I was like, why is that the view that you use? Well, that's the view that we've always used. I asked a few questions, and it became apparent that at some point in the lifecycle of this tool, it was decided they were going to focus on particular types of vulnerabilities, a particular set of vulnerabilities, and they had a filter put in to show them those. That was it, and they were working those vulnerabilities, and years had passed. It had become tribal knowledge that this is the view we use, and here be dragons, pretty much, for everything else, because they hadn't thought upfront: okay, how are we going to work through this? At which stage do we want to take a stricter policy? None of that knowledge got passed down; it had never been reviewed. Was it still valid? Were they still looking at the right set of vulnerabilities?
They were stuck on this original view that had been set up years ago. Another good example is with software composition analysis. I've seen organizations where so many different libraries were being reported that they had to go through all of them and, even before looking at the vulnerabilities themselves, ask, are we using this library? Are we not using this library? This particular organization had set the tool to be quite sensitive. There are a lot of possible ways it can match a library, and often it would match on something and say, well, this file exists, so you must be using this library, when it was a relatively common file in use by multiple libraries. They were spending a long time trying to get rid of false positive libraries, never mind evaluating the vulnerabilities. Their concern, which is also valid, was that they didn't want to start missing libraries; they wanted the tool to be sensitive, and they didn't want libraries they were using to go undetected for some reason. I was trying to push and say, look, you need to stop for a second and decide: okay, how exactly are you going to tune this tool to get the right level of detail? Go through all these libraries, get rid of the ones that are false positives, and then move on from there. If something new comes up, hopefully you've got a list to go through, and you can say, okay, we're expecting this new library, because we had a new feature that needed something new, or we're not expecting this new library, so where did it come from? Is it a false positive or not? Interestingly, one of the things that almost came out of that was that they were spending a lot of time on this, and there was some discussion: is this the right tool? Should we use a different tool? There was a lot of pushback along the lines of, well, if we change tools, suddenly we'll have to redo all this work all over again, and we'll have to start from scratch.
If you're spending this much time on it anyway, it may be that over the long term another tool would take less time. I've seen that happen: an organization did some evaluation, moved to a different tool based on it, and established that it was going to take them less time going forward. They had fallen into the sunk cost fallacy.
Chris Romeo 32:26
I was gonna say, the sunk cost analysis is where you have to ask yourself the question: if I was starting brand new today, would I buy that tool again? Forget all the investment I've made, all the knowledge, and everything that I think I have. If I had to make the decision today, would I still buy it? If the answer is no, you've already made your decision: move on, find a new tool.
Josh Grossman 32:47
Yeah, completely. When you're putting this much effort into it on a periodic basis, it's time to put some effort into evaluating: is this really doing what I need it to do?
Chris Romeo 32:59
There's a lot of room for innovation still in the world of AppSec tools. I think we're still in our infancy of what the tools can do and where they can go. I look forward to the next few years as these tools continue to get better and evolve. There are some new folks in a lot of these categories doing things faster and with a more innovative approach. It's going to be fun to watch them push the edge of the industry and drag some of the early companies that were part of this to either step up and match what they're doing or fall out of whatever magic quadrant they sit in.
Josh Grossman 33:41
Completely. One feature that came up when I was talking through the content with Avi and Adi is reachability analysis in software composition analysis. You've got this vulnerability in this library; based on the way you're using the library, would an end user, would an attacker, be able to get to that functionality in the first place? If you've got that, it can be a massive time saver, because that's a lot of analysis that suddenly you don't need to do manually. You can say automatically: well, there's a vulnerability in this library, but it doesn't affect us, so it's not going to be our first priority, because right now no one can get to it. Not every tool supports that, and I'm sort of hesitant to talk about it; it's like, well, does every tool support that now? Is it useful? But we wanted to talk about some of the newer things as well, because these things are going to be big, big time savers and big helps.
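The core idea behind reachability analysis can be sketched as a graph search: given a call graph of the application and its dependencies, does any path from an application entry point reach the vulnerable function? Real tools build far richer models; the toy call graph and function names below are entirely hypothetical:

```python
# Hypothetical sketch of reachability analysis over a toy call graph.
# Function names are made up; real tools derive the graph from the code.
from collections import deque

# caller -> list of callees
CALL_GRAPH = {
    "app.handle_request": ["lib.parse_header"],
    "lib.parse_header": ["lib.decode_utf8"],
    "lib.unsafe_deserialize": ["lib.exec_gadget"],  # vulnerable, never called
}

def is_reachable(entry, target, graph=CALL_GRAPH):
    """Breadth-first search from an entry point toward the vulnerable call."""
    seen, queue = {entry}, deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        for callee in graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# The library ships a vulnerable function, but nothing in the app calls it,
# so this finding can be deprioritized.
print(is_reachable("app.handle_request", "lib.unsafe_deserialize"))
```

This is the automated version of the triage Josh describes: the vulnerability is real, but if no path from your code reaches it, it drops down the priority list.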
Chris Romeo 34:35
Push the envelope. I wouldn't buy an SCA tool today that didn't have reachability analysis. That'd be one of my first questions in the demo. If they said, well, no, we don't, but we're working on it: okay, next, bring me to the next demo. If I'm not vulnerable, why would I want to expend my resources and have people wasting development cycles trying to fix a problem that I don't even have? I've got enough problems that I have; I don't need to fix the problems that I don't have. Josh, I'll throw one out here. I don't know if you have any more on your list, but I'll throw one out because you'll probably get a kick out of it. Maybe you can add it to your list of struggling with tools. I'm not going to identify anyone involved; I'm just going to tell the story generically. I was listening to a story about a static analysis deployment that a large enterprise had done. They were using one of the top industry providers of static analysis, and they decided they wanted to deploy it in the cloud. They were going to put all of the services required to make this managed SAST scanning service for their internal enterprise operating in the cloud. The bill for AWS was more than the license they paid for their enterprise to do it. It took that many resources. They had to more than double their budget to make this managed scanning service, because they needed so many resources in the cloud to even have a trickle of performance in being able to process lines of code. It made me chuckle, and it makes me think of that same category of struggling with tools. That was a struggle, because you should be able to deliver SAST without needing more in cloud services than the cost of your entire enterprise license. That was my take.
Josh Grossman 36:31
I've definitely seen that sort of thing as well. I think a lot of the time, the sort of analysis being done can suddenly become very resource-intensive and very memory-intensive. One thing that I saw, which is sort of on the border of software composition analysis, is container scanning. A tool that says, oh, we'll just ingest the whole container and scan it; but it turns out the containers are quite large, and once you start ingesting a few containers, it's a tool that gets quite unhappy quite fast.
Chris Romeo 37:01
That's another problem, and that's a whole other topic; we didn't even talk about CVA, container vulnerability analysis, as a category. That speaks back to how you build containers. People still build these big, giant, fat containers, and once again, all three of us have been in security for a long time; we know to limit our interfaces from the very beginning. The fewer interfaces we have, the fewer ways someone can compromise whatever it is that we're building. But the average container you see these days is one of these monolithic, gigantic monstrosities. Why do we need the entire Ubuntu operating system? Why do we need X Windows in our container? I don't know; it's just always been how we've built it. We've always had X Windows in our container. Maybe we need a GUI in our container; maybe we want to access our container remotely over X. I don't know. That's a whole other conversation. I'm about to go on another diatribe; I'd better circle it back around here.
Robert Hurlbut 38:03
Josh, it has been great to talk to you. As we come to a close, what are some key things that you want people to know about or think about this topic? Maybe even a few key takeaways?
Josh Grossman 38:17
As we've seen, there's a lot we could say about this; I could talk about this all day, and I'm planning on doing that at the OWASP conference. But there were a few key ideas that came out of this, key things people can think about that I think are important to be aware of and have in mind. One of the first things is that we think: oh, we're going to buy this tool, it's going to cost us this much in license fees, so that's what we're going to spend. You forget that there's a big cost in people's time to work with this tool. You can look at the license fee, but that's just one part of it. The amount of time you have to put into these tools is potentially higher than you'd expect. People might start screaming at me, but I think if you're using open source, where you have little or no license fee, then potentially that time cost is even higher, because there's potentially more work you have to do to get it set up. Even automated tools incur a lot of manual work. You have to be ready for that, prepared for that, and prepared to take it into account. How we can cut down that work and make it more efficient is a key goal of the course. I think we covered this as well, but it's important to highlight: you can spend a lot of time on these tools, you can waste a lot of time, and the face of security for developers can become, oh no, not more findings from these tools. If you don't have an efficient way of dealing with these findings, an efficient way of processing them, that's just going to wreck developer morale, wreck developer attitude towards security, and make it a lot harder to promote security in general within a development team. So there's a lot of value in making these processes efficient.
I think there's also a lot of value in the wider security context of how we want to operate our application security program in the organization. Part of that is going to be making sure there's not a massively negative view of security caused by frustration with these tools. The final thing to take into account is that a lot of these tools are very developer-focused, or they should be developer-focused, and we want developers to be using them. We have to realize that different people are going to have to solve different problems and be involved in the processes around these tools. DevOps people are going to have to be involved when it comes to how we put this into the automation process. Developers are going to have to do the analysis of the findings from the code and library perspective, because they're the ones who know it best. You don't want security to end up getting gummed up on how to run this tool and what to do with the findings, when a lot of the tasks might belong to developers as well. There's a certain amount of sharing the love that you want to do, making sure the right people are engaged on the right topics. A lot of understanding how we're going to build processes for this is understanding who's going to do what; a lot of the worksheets are about saying, well, who's going to be responsible for this part, who's responsible for that part, and making you think about which personas should be doing each piece. As part of that, it also gives you an opportunity to say: well, we want developers to do some of this, and maybe we can push more of the vulnerability assessment and reviewing of vulnerabilities to developers. That pushes into security champion territory, where we want certain developers to be more familiar with security, which in itself is a good goal. You do have to make sure that it doesn't all fall on security and doesn't all fall on developers, but that the right people are doing the right processes.
Chris Romeo 41:51
Very cool. Josh, thanks for sharing this insight with us on high-value AppSec scanning programs; say hi to Avi and the rest of the Bounce team for us. I'll leave our audience with our key takeaway: OWASP Global AppSec Europe this year is virtual, so you can sign up for Josh's class. It doesn't matter where you are on earth; it's going to be virtual for everybody. Check that out. Josh, great to talk to you again. We look forward to seeing you in person at a conference sometime soon; maybe it's Global AppSec US in San Fran in October or November, whenever that is happening. Once again, great to see you. Thanks for sharing your insight and knowledge with us.
Josh Grossman 42:30
Thanks. Great to see you guys again. Thanks so much. Really great conversation.
Chris Romeo 42:36
Thanks for listening to the Application Security Podcast. You'll find the show on Twitter @AppSecPodcast and on the web at www.securityjourney.com/resources/podcast. You can also find Chris on Twitter @edgeroute and Robert @RobertHurlbut. Remember, with application security, there are many paths but only one destination.