Chris Romeo 00:00
Hey folks, for this episode, Robert and I decided to talk about an article I wrote called DevOps security culture: 12 fails your team can learn from. We hope you enjoy this walkthrough of the 12 fails. If we missed any, hit us up on Twitter and let us know what we should add to the list. At Security Journey, we believe security is every developer's job. We work with our customers to help them build a long-term, sustainable security culture among all their developers. Our approach is to provide security education that's conversational, quick, hands-on, and fun. We don't do lectures. Instead, we let the experts talk about what's important. Modules are quick, 10 to 20 minutes in length. We believe in hands-on experiments, builder and breaker style, that allow your developers to put what they learned into action. And lastly, fun. Training doesn't have to be boring. We make it engaging and fun for the developers. Visit www.securityjourney.com to sign up for a free trial of the Security Dojo.
Robert Hurlbut 01:12
Hey, folks, welcome to another episode of the Application Security Podcast. This is Robert Hurlbut. I'm a threat modeling architect and co-host of the Application Security Podcast. I'm joined here today by Chris Romeo. Chris, nice to have you.
Chris Romeo 01:28
Thanks. This is Chris Romeo, CEO of Security Journey. It's odd, Robert, to be sitting on perhaps this side of the microphone; I guess it's the same side of the microphone I always sit on. It feels like the other side, given that we're about to talk about DevSecOps, and you're going to interview me in this process.
Robert Hurlbut 01:50
Recently, I noticed that you wrote an article on DevOps, or DevSecOps, however we may want to refer to it. I really liked a number of things that you mentioned in here. We were talking about, hey, what's something that we can think about and talk about, and this is the article that came back for me. It's just, wow, there's a lot of really great stuff in here. Here we are today, to take a look at it and talk about it.
Chris Romeo 02:15
I got the opportunity to write this article for TechBeacon, and it's called DevOps security culture: 12 fails your team can learn from. In this article, I poked a little bit of fun at a number of things in the DevOps world. Hopefully, I won't get any hate mail at the end of this, but if I do, it's okay; I can take it. Security culture, for me, is a big part of how we change how an organization approaches security. If we're going to do that at the speed of DevOps, we've got to be thinking about what a DevOps security culture is. A couple of different facets that I talked about in the article that I'll mention here: when I think about what a DevOps security culture is, it starts with knowledge. I mean, our developers and operations folks and all the people that are working together have got to have knowledge; they've got to learn and understand how to build secure software fast. We always think DevOps is about building software fast, but if we don't have the secure part in there, we've got some other problems. A DevOps security culture is also about experience: how do we improve the process? How do we get the tools to be even better? There's a bit of art and creativity in this as well. We want people to have the ability to be innovative and come up with new security ideas, not just do status quo type stuff. We know DevOps is about a bit of science because whatever matters gets measured. How many times have we heard that said in the last 100 years? Probably at least 100. In the end, it's this connection between knowledge, experience, art, creativity, and science. These are the core pieces of a DevOps security culture.
Robert Hurlbut 03:56
That makes sense. Diving into the article, the first thing you talked about, which I just mentioned a moment ago, is this idea of the naming. Tell us about that: the DevSecOps or DevOpsSec, or whatever it may be.
Chris Romeo 04:15
I'm not the first person to make this joke about a security fail in the world of DevOps being driven by name and brand. We created this term DevSecOps as an industry. We really did create a monster here because you'll hear people say DevSecOps, SecDevOps, SecOps, SecOps with Dev, Dev with a side of security. It's become almost a distraction for us as an industry. We spent all this time thinking about the perfect marketing term. Let's just call it DevOps and teach everyone that there's no DevOps without security integrated. I'm not the one who came up with that; Julien Vehent, who's been a guest on the podcast before, that was a tweet from him probably five years ago, and I took a screen grab of it and saved it in my archive because it's priceless that he came up with that whole idea. Maybe it wasn't five years ago, maybe a little less, but just a priceless idea. We don't need to worry about the name and the brand behind this. That's all marketing. DevOps should have security integrated at every step of the way. That should be the only way we ever think about it.
Robert Hurlbut 05:19
Makes sense. I do like the DevSecOps swear jar. I need to put in at least a few dollars already because I've said it several times in this podcast.
Chris Romeo 05:29
That's okay. In the article, I talked about how to change culture for each of these as well, because that's a big part for me. It's not just about pinpointing fails that exist in something; it's more about what we're going to do to change it. One of the things I said, tongue in cheek, is create a DevSecOps swear jar, and anytime somebody uses that term, make them throw in a quarter, or a dollar, whatever you want it to be. It's really a cultural play here for name and brand: we should just call it DevOps. We all know it has security built into it.
Robert Hurlbut 06:03
Makes sense. Number two, you mentioned the Infinity graph. So what is that?
Chris Romeo 06:11
The infinity graph; I'm going to throw it out there. Somebody created this idea of an infinity loop to show DevOps as this never-ending thing that always goes on forever. The challenge is that that's not how DevOps works. If that were the case, we'd never push to production; it would just be a continuous loop that kept going around. DevOps is a pipeline; that's the word we use to describe this. It's a pipeline visualization: code comes in on one side, it makes its way through the process, and it goes through a whole bunch of tools. If everything goes as expected, that code gets pushed to production at the end of the day. There's no infinity graph here. Of all the ones I'm going to poke some fun at here, this is going to be the biggest one; people have got to stop using the infinity graph. I'm guilty of it, too. I used it in the beginning; I was like, ooh, fascinating, what a great way to show DevOps. The more I think about it, and the more I talk with practitioners in the field, it's like, no, we've got to ban the infinity graph from our world. Use a pipeline illustration; it makes sense for what it's doing, and people can understand it.
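The pipeline model described here can be sketched in a few lines of Python. The stage names, the commit representation, and the check functions are all illustrative placeholders, not taken from any particular CI system:

```python
# A minimal sketch of the pipeline idea: code enters on one side, passes
# through an ordered series of stages (build, tests, security checks),
# and either reaches production or stops at the first failing stage.

def run_pipeline(commit, stages):
    """Run each stage in order; stop at the first failure."""
    for name, check in stages:
        if not check(commit):
            return f"stopped at {name}"
    return "deployed to production"

# Illustrative stages for a commit represented as a simple dict.
stages = [
    ("build", lambda c: c.get("compiles", False)),
    ("unit tests", lambda c: c.get("tests_pass", False)),
    ("SAST scan", lambda c: not c.get("static_findings")),
    ("deploy", lambda c: True),
]

print(run_pipeline({"compiles": True, "tests_pass": True, "static_findings": []}, stages))
# → deployed to production
```

Unlike an infinity loop, this has a clear entry and a clear exit: a commit either makes it out the far side to production or it doesn't.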
Robert Hurlbut 07:20
That helps them think about a beginning and an end of sorts: I'm starting, and then I deploy something. It's not a continual, I'm-never-getting-off-this-train situation.
Chris Romeo 07:31
Yeah. The other frustration I have with the infinity graph is that I saw so many pictures where people would take their standard DevOps infinity graph, write security, and put a box around it. Then they would say, look, we have security in our DevOps. No, you don't; you just wrote security and put it in a box. I see the game you're playing here. That's why I love the pipeline visualization. I'm expecting to see security pieces on that pipeline visualization. Then I can say, oh yeah, this is a DevOps that has security properly built in.
Robert Hurlbut 08:04
Makes sense. Number three, security as a special team and lack of collaboration. That's a big one.
Chris Romeo 08:13
Alright, security people out there, I'm talking to you now. Get ready to be potentially offended, depending on how long you've been in security, because I'm guilty of this as well. That's the funny part about this. A lot of these I'm coming back to, going, I've been in security for a couple of decades now, and I'm guilty of having this mindset of security as a special team. We have all the answers about security, and did you not know that I'm from the security team? Look, I have a badge that says security team on it. Of course, I have all the answers. That attitude, and I'm pointing at myself like the rest of my security friends around the industry, has gotten us in trouble because we're known as people that don't want to collaborate. That's a fail: if we think of ourselves as a special group of people with all the answers, and we don't want to collaborate with the developers we need to influence, we're pushing forth this continued approach of siloed organizations, and it's not a good way to work in the future. The modern way we work right now should not be about everybody having their lane; don't step in my lane, that's my lane. No, let's all work together to build secure software.
Robert Hurlbut 09:20
One sentence you mentioned is that another failure with security teams is having team members who do not know how to code. That's an interesting one because, of course, my background is coding. I've done that for many years and switched over to doing more software security and application security. But I've also seen back and forth on that in Twitterville and other places: should you have to know how to code to be a good security person? Talk about that a little bit.
Chris Romeo 09:56
I've come to the conclusion now that if you want to play in the application security or software security space, you've got to know how to code; I'm sorry, you have to. You're trying to influence people who spend the bulk of their day doing what? Coding. You're going to go and try to talk to them about a particular feature or some change they should make from a design perspective. They're going to say, hey, look at the code, and you're going to go, sorry, I don't understand, I don't know how that works. A big thing for me in 2021, and I started talking about this in 2020, and I'm going to keep talking about it in 2022, is developer empathy. We, as security people, have to start seeing things through the eyes of the people we're trying to help and trying to influence. We've got to stop saying, we're security, we have all the answers, we don't need to know how to code; you do all the coding, I do the checking-of-the-box things at the end. That doesn't work. We have to ask, how does this new security thing I'm trying to do impact the developers? If I could get every security person to go spend a week job shadowing a developer, we would have a different approach to application security as an industry, because those people would go, holy cow, we make you run that, we make you do this with all these tools, and then this garbage comes out, and you have to sift through all this stuff. When you start to see that, your eyes start to open to what the developers are going through. I think security being able to code is part of that; it's being able to say, hey, I care about helping your mission as a developer enough that I'm willing to spend the time to ensure that I understand the basics of the language, object-oriented concepts, and things like that. I'm an example. I've been in security for almost 25 years at this point. You don't want my code running in production.
You don't; you're not going to be like, hey, Chris, you're the guy that needs to write this new feature. But I understand the concepts behind code; I understand object orientation. I've written code in Java, I've developed in Ruby on Rails, and I've developed in some other languages that are older than those. I'm not a great coder, but I can follow you. If you're walking me through a code example, I know what you're talking about. If you're like, hey, from a security perspective, what should I do here? I've seen enough to do that. I'm not saying you have to become a senior developer to be successful in application security; I'm saying you've got to put the time in to support the people that you're trying to influence. That's what I summarize as developer empathy: walking a mile in those developers' shoes and putting forth the effort so that they're like, oh, you know what, these security people care about our mission because they took the time to figure out how to code, at enough of a level that we can have intelligent conversations.
Robert Hurlbut 12:50
Alright. Number four, vendor-defined DevOps.
Chris Romeo 12:53
All right, vendors out there, get ready to get mad at me. Robert, what's your email address in case they need to send us something? As I look across the industry, and I've used a couple of different cloud providers, I'm realizing there's no DevOps standard; there's no IETF, and there's no RFC for DevOps. Who gets to define DevOps and how it works? Do you know who does that? It's all the vendors that provide DevOps solutions; they define what DevOps is. If you're getting your cloud from a certain cloud provider, they have a view of what DevOps is. It's funny how it always comes back to their product space, though; DevOps is defined by the product solutions they can stitch together to provide you with this experience. My advice to practitioners that are new to DevOps and integrating security is that the vendors don't have to be the ones who set your agenda as far as what DevOps is. You can figure it out from a best-of-breed perspective, and you may find a cloud provider and say they have a best-of-breed solution from end to end. Great. But don't think that there's a DevOps standard and they're somehow implementing it. No, they're putting their products together in such a way as to try to provide you with a solution. Remember, there is no DevOps standard out there; you can define your own DevOps. You don't have to let the vendors define it for you in your context.
Robert Hurlbut 14:23
Yeah, I like that you mentioned there's no DevOps standard. I never thought about that until I saw it. We hear things, but a lot of this is coming from the vendors defining it; that makes sense. Number five, marketing term infatuation; all the terms, lots and lots of terms, it seems.
Chris Romeo 14:44
We've got shift left, we've got shift, I heard someone say shift up and shift down, and apparently, we're shifting in every direction. I wrote this article before our interview with Jim Routh, the previous interview on the AppSec podcast, if you want to go back and haven't heard it yet. He provided the best definition of shift left as far as what it is, so much so that I'm going to skip over it here and say, no, it's not a marketing term. Go listen to how Jim defined it, because he changed my thinking on something I would have called a marketing term. Yes, I believe in the idea of starting security on the left. Another person, Matt Coles, I heard him say a long time ago, it's not shift left, it's start left. From a secure development lifecycle perspective, shifting left was a marketing term, and start left was the idea of what we want to do, until Jim changed my thinking about that. When I think about the fact that we do sometimes get caught up in these marketing terms, that's why I'm throwing this out as a fail; being infatuated with these terms doesn't change the security context of your product. Start left; we want to build security in from the beginning. That's what we used to call it back in the 2000s, build security in; it's the same idea. Then we got some vendors who said, oh, shift left is cool, why don't we shift right? These are the people providing solutions that work in production, and we need those tools; it's all good. But the way I think about it is meeting in the middle; we call that the secure development lifecycle. It's starting left; it's starting right. It's a process; what is old is new again, but it's still the secure development lifecycle at the end of the day.
Robert Hurlbut 16:29
Number six, big-company envy. I like this acronym, FAANG.
Chris Romeo 16:37
I didn't come up with that. That's other people smarter than me in the industry who said Facebook, Amazon, Apple, Netflix, Google; we want to refer to them as a group using this FAANG. This one's all about, a lot of times people will look at that and go, well, I heard a talk about how Netflix is doing this with DevOps, and wow, they're just killing it. They've been at it for a long time. If you're brand new to this, a fail is for you to have big-company envy and go, ooh, they've got it all figured out. Maybe they do, but that doesn't change the day for your program. Your program is not going to be the same as what Netflix is doing, or Facebook, Amazon, Apple, or any of these companies. You're a different company with different requirements and different perspectives. I'm saying don't have that big-company envy.
Robert Hurlbut 17:24
Makes sense. Number seven, overcomplicated pipelines and doing everything now.
Chris Romeo 17:31
When you think about DevOps and using pipelines, it is easy, very early on, to say we've got to have everything in this: we have to have all of the categories of tools, we might need two of some of the categories, make sure we've got all the right coverage. Overcomplicating anything sets you up for failure. I believe in the keep-it-simple method; I apply it to everything I do in life, which means I apply it to everything I do in security. If I'm going to build a pipeline, I'm going to start with a simple pipeline with one or two security tools. Once we get that rolling, then we can add more and adapt it. Why take everything, throw it all together in a bucket, and say, let's hope it kind of works out in the end? That's the doing-everything-now part of this as well: saying, well, we have to have SAST and DAST, we have to have SCA, and we have to have other four-letter acronyms that we haven't even invented yet included in our pipeline. When we do that, we build complexity, and complexity is ultimately less secure than simplicity at the end of the day. You're never going to convince me that a complex system is more secure than a simple system, because if it's simple, I can explain it to everyone, and I can explain what's happening from a security perspective. When you try to explain your complicated system, it takes you an hour, and you're still not past the high-level architecture, trying to tell me all the pieces. Keep it simple; don't fall into the trap of saying you've got to do everything now. You can phase your approach to DevOps, where you build in different pieces of security as you go.
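The phased approach described here, start simple and add security stages only once the pipeline is rolling, can be illustrated with a small sketch. The stage names are hypothetical and not tied to any real tool:

```python
# Sketch of phasing security into a simple pipeline: begin with the bare
# build/test/deploy flow plus one security tool, and add more categories
# (DAST, SCA, ...) later, once the simple version works and is trusted.

def add_security_stage(pipeline, stage, before="deploy"):
    """Insert a security stage just before the deployment step."""
    i = pipeline.index(before)
    return pipeline[:i] + [stage] + pipeline[i:]

# Phase 1: the simplest possible pipeline with a single security tool.
pipeline = ["build", "unit tests", "deploy"]
pipeline = add_security_stage(pipeline, "SAST scan")
print(pipeline)  # ['build', 'unit tests', 'SAST scan', 'deploy']

# Later phases, added only after phase 1 is rolling smoothly:
# pipeline = add_security_stage(pipeline, "SCA scan")
# pipeline = add_security_stage(pipeline, "DAST scan")
```

Each phase is a small, explainable change to a pipeline that already works, rather than everything thrown in a bucket at once.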
Robert Hurlbut 19:03
This next one, number eight, security as gatekeeper. That's similar to the others, but this one in particular: we are the ones, we're the department of no, the buck-stops-here type of thing.
Chris Romeo 19:16
Yep. That is a fail for a reason; security as gatekeeper is something that you don't want to do. DevOps is set up to go fast and deploy often. If you're standing in the way with some manual process, all you're doing is taking away the joy from your development team, because they want to move at a DevOps pace. They want to build cool software, have it tested, and watch it go to production. If you try to stand there and be the roadblock, all you're going to do is build animosity between security and development, and we've already had enough of that historically; we don't need more of it. Let's try to find solutions where we all work together. Once again, developer empathy; don't try to be the person guarding the gates saying, hey, if you want to get to production, you've got to go through us. Work with the team to get the pipeline set up to the point where you've got the security tools you need to gain assurance about the build; then you can move at DevOps speed.
Robert Hurlbut 20:12
Number nine, noisy security tools. How are they noisy? Are they making lots of noise? Lots of stuff, right?
Chris Romeo 20:21
There's no data center anymore. In a cloud world, I can't even go to a data center to hear any of the noise. It's nothing to do with that; it's about the output. I've told this example probably 100 times; I don't care, I'm gonna tell it again. What does the average team do whenever they get a new security tool? How do they configure the policy for a brand new security tool? Let me just throw out there, I spent six figures on this tool; how do they configure it?
Robert Hurlbut 20:46
Let's lock everything down. Look for everything and spit it all out.
Chris Romeo 20:51
Look for everything is the key phrase; look for everything, and you know what happens? What do we send the developer? We send them a 10,000-line report, that's 116 pages of all the deficiencies in the code they just committed. What does the developer do with that? They have a bonfire when we're not looking. Nobody's reading 116 pages of findings that came out of this. That's a fail, to have these noisy security tools, and you can have noisy security tools later in the lifecycle too; you could change policies and make it bad. But I see this happen a lot, where people start with a new tool and they're like, I'm going to get my money's worth, I paid six figures for this, we're checking for everything, because we're going to get down to a penny per finding. What they've forgotten about is the cultural element: the developers are going to hate this tool from the first time it runs in the pipeline. They are going to do everything they can to work around this tool to minimize the impact it has on their flow. If we get a new tool and we install a minimal policy, we can build this thing called trust between the developers and the security team, where they're like, hey, they ran that tool, and it didn't overwhelm me; it found one or two things. But when I went and looked in the code, that problem was there, so I could fix it, and our code's more secure. Now you've got some trust built into that tool. When we increase the policy a little bit, you've made some deposits with those developers, so they're like, okay, they turned the policy up, and it's returning a few more things, but they were so good about the original findings they were sending me that I'm okay with it. Now you're changing culture, because it's not an us versus them; it's a we, working together on how we're going to find security problems in our code.
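The minimal-policy idea can be sketched as a severity floor that starts high and gets lowered as trust builds. The finding format and severity levels here are made up for illustration, not taken from any real tool:

```python
# Sketch of "start with a minimal policy": surface only the highest
# severities from a new tool on day one, then lower the threshold as
# trust between developers and the security team builds up.

SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1, "info": 0}

def apply_policy(findings, min_severity="critical"):
    """Return only findings at or above the policy's severity floor."""
    floor = SEVERITY[min_severity]
    return [f for f in findings if SEVERITY[f["severity"]] >= floor]

findings = [
    {"id": 1, "severity": "critical"},
    {"id": 2, "severity": "low"},
    {"id": 3, "severity": "medium"},
]

# Day one: one or two findings, not a 116-page report.
print(apply_policy(findings))
# Later, once trust is established, turn the policy up a little:
print(apply_policy(findings, min_severity="medium"))
```

The point is that the tool's raw output and what developers actually see are two different things, and the gap between them is a policy decision the team controls.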
Robert Hurlbut 22:31
I do like that part where you say, tune the existing security tools. That takes some time. Like I said, you can't just buy it, throw it out there, turn everything on, and let it run. It does take some time. There's some effort to make sure that you find the things that matter, set the things that make sense, and never waste anyone's time with findings that don't matter. Alright, number 10, I think one of our favorites, lack of threat modeling. Not that the lack of threat modeling is our favorite.
Chris Romeo 23:10
It's a fail if you don't do it. Can you believe it? I put it all the way down at number 10? Wow, what was I thinking?
Robert Hurlbut 23:15
I know, where is it? Where is it? Oh, there it is.
Chris Romeo 23:17
It should have been the number one pick on this list. Well, I had an order that I used, as far as how I was trying to flow down from the high-level industry, down into the business, down into the nitty-gritty details, and I think threat modeling is a nitty-gritty detail. But I think it's a fail not to do threat modeling. Some people will say, well, DevOps doesn't support threat modeling because it moves so fast. Wrong answer. You can threat model anything; it comes down to where you fit it into your cycle, into your process. You and I both love threat modeling. We want to see threat modeling be something that's integrated into the pipeline; maybe you fit threat modeling in at the point where someone grabs a new feature and starts to think about what they're going to do with it. That's when they threat model; then they start to code, then they commit their code, and then it goes into the DevOps pipeline. See, DevOps and threat modeling do work together; they're not mutually exclusive.
Robert Hurlbut 24:12
I like the reference to our Threat Modeling Manifesto that's out there, too. Definitely check that out if you haven't already.
Chris Romeo 24:20
Yep. I'll have another article coming out next month for TechBeacon talking about an insider's guide to the threat modeling manifesto, and how you can actually use this to take it to the next level and roll it out in your enterprise.
Robert Hurlbut 24:32
Very cool. Number 11, vulnerable code that's in the wild.
Chris Romeo 24:37
This is the classic: open-source, third-party software that's got vulnerabilities built in, and nobody knows about it. We barely even have to talk about this here for our audience that lives, eats, and breathes application security. They're running their software composition analysis tools, whether that's open source, like Dependency-Check from OWASP, or whether they're running a commercial tool. They're getting this one. I just wanted to throw it in there as a fail to have a DevOps pipeline and not be checking for this vulnerable code that's in the wild.
Robert Hurlbut 25:09
Recently, that's been very relevant: just all the kinds of things that can come in and impact our companies and what we're trusting and so forth. Very relevant. Number 12, lack of security retrospectives.
Chris Romeo 25:23
What? No drumroll? Come on; this is number 12. All right, lack of security retrospectives. It's a fail if security doesn't lead the charge into DevOps, saying we need to look at any security issues we've had, perform a retrospective, and figure out what went wrong: a blameless retrospective. Security people, I'm talking to you, and I'm talking to myself, too. But I'm specifically talking to us as security people because, for a lot of years, we loved to point fingers at people and pass the blame and say, okay, who's responsible for this? It's not me, because I'm from security; it's one of these people here. You don't have a retrospective where anybody grows, where the product gets better, or where the process gets better if you go in pointing the finger. A fail is when we don't have a security retrospective; the positive move is to leave the blame at the door, figure out how to grow together, and always practice security retrospectives when we have security failures, just like we would with any other failure. Let's get better from it; let's not try to pass judgment or pass blame. Those are my 12 fails. There are probably more out there. If you've got more, hey, hit us up on Twitter and tell us what they are so we can add them to our list here.
Robert Hurlbut 26:38
Excellent. All right, Chris, thank you. Great article, and I hope our folks listening will go check it out as well. There are a lot more interesting things in there that you can take a look at, some more details: DevOps security culture: 12 fails your team can learn from. Good talking to you about it, Chris. Thank you.
Chris Romeo 26:59
All right. Thanks, Robert. Thanks for listening to the Application Security Podcast. You'll find the show on Twitter @AppSecPodcast or on the web at www.securityjourney.com/application-security-podcast. You can also find Chris on Twitter @edgeroute and Robert @RobertHurlbut. Remember, security is a journey, not a destination.