Application Security Podcast

Joern Freydank -- Security Design Anti-Patterns Limit Security Debt

January 25, 2022

Show Notes

Joern Freydank is a Lead Cyber Security Engineer with more than 20 years of experience. He is currently establishing the Threat Modeling Program at a major insurance company. Joern joins us to talk about security design anti-patterns. He defines the term, explains security debt, reviews the categories of anti-patterns, and walks us through the example of a common role misconception. We hope you enjoy this conversation with...Joern Freydank.

For more from Joern, check out his talk, Security Design Anti-Patterns -- Creating Awareness to Limit Security Debt, from Global AppSec.


Chris (00:00):

Joern Freydank is a lead cybersecurity engineer with more than 20 years of experience. He is currently establishing the threat modeling program at a major insurance company. Joern joins us to talk about security design anti-patterns. He defines the term, explains security debt, reviews the categories of anti-patterns, and walks us through the example of a common role misconception. We hope you enjoy this conversation with Joern Freydank.

Intro (00:29):

You're about to listen to the Application Security Podcast. When you're done with this, be sure to check out our other show, Hi/5.

Chris (00:40):

Hey folks. Welcome to another episode of the Application Security Podcast. This is Chris Romeo, I'm the CEO of Security Journey and co-host of the podcast, as well as an application security lover. Robert Hurlbut joins me today as well as my co-host. Hey Robert.

Robert (00:56):

Hey Chris. Yeah, it's Robert, threat modeling architect and also a lover of application security.

Chris (01:02):

Another t-shirt we should have. We definitely need a t-shirt archive. There are so many things we say on this podcast that feel like they belong on t-shirts. I fear that nobody would actually buy one, though, other than family members.

Oh, we're joined today by Joern Freydank, and Joern is a person that I got connected to through the OWASP Global AppSec conference. I was going through and looking at some of the different talks that were happening, and I read his title, I looked at the slides, and I said, this is somebody that the application security audience wants to hear from. But Joern, before we get to our topic at hand, we always have to start with our guest's origin story. So, how did you get into this crazy, wild world of application security?

Joern (01:58):

Yeah. Well, thank you, Chris and Robert, for having me on the podcast. So, I started out at the end of the nineties as a software cracker, because you can't really put that in the resume, but that's how I started out: by removing basic dongles from software and training myself on the most expensive software of the time. One of those programs I showed to my colleague, and he became a manager on a security project that developed a certified hardware module that contained money for franking machines, for stamps. At that point, they developed the module that contained 2 million dollars, and that was certified by the U.S. Postal Service. That got me in the door of the security space, because that was different from what everybody was expecting: at that point everything was open, nothing was secret. I got my head start in the security world based on that project.

Chris (03:04):

So, you came to security from more of the breaking perspective?

Joern (03:09):

I did, yep. Reversing assembly language, reversing code, looking at what is there, mapping out no go points, things like that.

Chris (03:24):

Now can you write assembly today? Still?

Joern (03:28):

<Laugh> I don't know; that ability depreciates over time. But I was able to read a lot in a very fast way, you know, it was more like that. Not sure if I can still do it, I haven't tried. Yeah.

Chris (03:41):

I've never been able to write assembly, and probably not even read it either. So, how'd you go from hardware to AppSec then? How'd you make the transition to software?

Joern (03:52):

This was a C/C++ project, so it was software and hardware both. Then in the early 2000s, I switched to a bank in Germany, which is where I'm originally from, and worked on a finance portal and then a backend system for ATM manufacturers, and then actually moved to the States. There was what I consider the Silicon Death Valley of security, with nothing really going on until mid-2014; in between there were little on-and-off projects. Then I was working for a company that did data analytics, and that morphed into a cybersecurity platform. Since then, with security on my resume, a lot of companies and projects picked that up, and that's how I made the transition into that space. With the developer background, I was, you know, prone to do AppSec.

Robert (04:38):

So, one of the topics that we're going to be talking about today is anti-patterns, security anti-patterns, design anti-patterns. Could you help us and our listeners understand what that is? What is a security design anti-pattern?

Joern (04:52):

Yeah, so it's similar to when developers go out and fetch some code off Stack Overflow, with all the vulnerabilities in it, you know, to make something run. There's another level to that: they also copy-paste setups or patterns at a higher, architectural level, between teams or projects, or from somebody on the internet showing how certain things are done. With that, they also copy-paste the potential flaws and the things that need to be fixed into those projects. So that's a higher-level pattern; it's the same thing, copy-pasting how certain things are done. The problem we are facing with open source is that they usually stop once things run, at the very minimal level, like authentication, for instance, and they don't go deeper, because that's not what they're intending to address. That's usually the level of setup that you would typically see when developers use certain setups of components working together.

Chris (06:02):

When I think Stack Overflow, and because you mentioned it, I'm going to draw out that example a little bit more. I think one of the big challenges with Stack Overflow is that the code that gets copied and dropped into production applications has been known, at the end of the day, to have various flaws and vulnerabilities in the code that's put on Stack Overflow. There are even a few academic studies that have gone out and looked at this and said, you know, just because something's the highest voted doesn't necessarily mean it's secure.

And they've got lots of data to back that up. So, you're saying a security design anti-pattern is the same thing. Developers are going out and finding architectural blueprints, for example. But what you're saying is that those are flawed blueprints that they're finding, and then they're building on top of them.

Joern (06:55):

They might be flawed a little bit, but they're mostly lacking certain things, right? They do not fully consider all the security variations that we would have to consider. Because when they look at the blueprints, the biggest documented area is how to get this running, right? So, it's the system they're looking at in the blueprint, not how to get it running in a secure way with all the variations.

Chris (07:18):

So, it's missing pieces versus bad architectural decisions being laid out on the page. It's just that the architects who are the source for these aren't even considering security, and then developers are grabbing those and running with them. So they're ending up with something; it's the age-old problem of, you know, where do security requirements come from and how do they fit in? They don't get considered early in the process, something gets built, and it doesn't take into consideration the basic security functionality that we need.

Joern (07:51):

Yep. That's the missing part. Then it becomes painful later, you know, when they have to fix it. Right. The missing feature shows up, basically, in the application.

Robert (08:00):

You know, when I think of one example of a design pattern, a security pattern, and then the anti-pattern: the pattern is that you always authorize after you authenticate. So, you don't just assume that authentication is enough. The anti-pattern is: I've authenticated, that's enough, that's basically the same as authorization, let's go on. So, you can get yourself in trouble by simply missing, as I understand how you're defining it, some key aspects. What you think may be enough for a secure design is not, simply because it's not correctly implemented or thought through, with follow-through on all the missing pieces that need to be in there.

Joern (08:48):

Yep. Exactly that. Yep. Agree.
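The authenticate-then-authorize pattern Robert describes can be sketched roughly as follows. This is an illustrative sketch only, not something from the episode; the user store, role table, and function names are all hypothetical.

```python
# Hypothetical sketch of "authenticate, then authorize".
# The anti-pattern is stopping after authenticate(); the pattern
# adds an explicit permission check before performing the action.

USERS = {"alice": "s3cret"}                  # identity store (demo only)
ROLE_PERMISSIONS = {                          # role -> allowed actions
    "customer": {"view_own_account"},
    "csr": {"view_own_account", "issue_refund"},
}
USER_ROLES = {"alice": "customer"}

def authenticate(username: str, password: str) -> bool:
    """Proves who the caller is -- nothing more."""
    return USERS.get(username) == password

def authorize(username: str, action: str) -> bool:
    """Separate check: is this identity allowed to do this action?"""
    role = USER_ROLES.get(username)
    return action in ROLE_PERMISSIONS.get(role, set())

def issue_refund(username: str, password: str) -> str:
    if not authenticate(username, password):
        return "denied: unknown user"
    if not authorize(username, "issue_refund"):
        return "denied: authenticated but not authorized"
    return "refund issued"
```

With only the "customer" role, a correct password gets Alice authenticated but still blocked from issuing refunds; granting the "csr" role is what authorizes the action.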

Chris (08:52):

So, you also talk about security debt. This is one of those loaded terms, right? Technical debt, security debt; people throw them around all the time. One of my concerns with these types of terms is that I don't think people are always talking about the same thing. When you think of security debt, what's your definition of it? And where does security debt come from? What are the sources?

Joern (09:19):

When I talk about security debt, I'm considering features, the security features of the system. I'm looking at a system, and my frame of reference is that business systems implement business functions, and security then prevents threats to those business functions. The mitigation of those threats is a control implementation, and a control is like a feature, right? When you delay this implementation over time, you are owing a workload item to the system that still has to be implemented to make it function properly. In the beginning, this might not make much of a difference, but if the whole business unit grows or the system grows, that can have a big impact, up to the point where you must throw the whole thing away and redesign the whole system, because you can't retrofit it anymore. So, the retrofitting piece is actually the key point: if you have debt, you might not be able to retrofit. Or the worst case, the bankruptcy case or its equivalent, is that management goes and asks how long it takes to fix something, and the developers say, well, it takes too long. Then they come to their own conclusion and say, well, this tech stack is something we can't maintain anymore. They throw it out, spinning up a parallel project with the next promising tech stack and switching the whole team out.

Chris (10:52):

That’s one of the things that I think about, and you just got me thinking about this in more depth. I'd never really thought about the fact that security debt is something or technical debt even is something that could eventually like die on the vine. I guess I'm an eternal optimist. So, I'm thinking like if it's security debt, we'll eventually take care of it over a certain period of time. Maybe that's an infinite period of time. Maybe that's the problem with me as an eternal optimist. The line is infinite to when we actually can close all that debt, but you're making me think about this a little differently in that there could be a time where we just declare security debt bankruptcy or security bankruptcy, and we throw everything out. But it almost seems like, are we any better with the next tech stack though in your mind?

Joern (11:39):

Sometimes you are, because some of those issues can be addressed as a blanket pattern in a new tech stack. Think cross-site scripting, for instance: in Java or JSPs, the protection is totally a custom implementation, while in other languages it's built in. So yeah, there are certain things, of course, that can be fixed in a newer tech stack. But then there are other things, like Go, for instance, which didn't have a versioning system in terms of vulnerabilities (well, now they do), where they're revisiting all the issues that had already been addressed in other languages or tech stacks before.
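The cross-site scripting point, custom output encoding in one stack versus built-in escaping in another, can be illustrated with Python's standard library. A minimal sketch, assuming a hand-rolled template as the anti-pattern; the render helpers are hypothetical names.

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # Anti-pattern: raw interpolation into markup, as in hand-rolled
    # templating -- attacker-controlled input becomes executable markup.
    return "<p>" + comment + "</p>"

def render_comment_safe(comment: str) -> str:
    # Pattern: contextual output encoding before interpolation.
    # Frameworks with built-in auto-escaping do this step for you.
    return "<p>" + html.escape(comment) + "</p>"

payload = "<script>steal()</script>"
```

The unsafe version returns the script tag intact, so it would run in a browser; the safe version entity-encodes it into inert text.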

Chris (12:17):

Yeah. So, you almost end up with some trade-offs there between new tech stacks that are going to be able to eliminate some of your old tech debt, but you also have a scalability problem because if we're talking a team, you know, an engineering team size of 10, we can do anything. We can throw the tech stack out and we can rewrite this whole thing and go by the end of the week. If you're talking about a 25,000-person engineering team, it's going to take us 10 years to migrate to some other new upstanding language that has better solutions and things. So, there's definitely a trade-off there between scalability and being able to just declare security debt bankruptcy.

Joern (13:03):

Correct. Yep, and that has to be baked in upfront, right? That's why this whole threat modeling thing, doing it at the beginning, is so important.

Robert (13:10):

Yeah, and I was thinking of a similar trade-off with risk. What's your acceptance of risk? If you go along and you know that you can't fix that security debt, or you're not fixing it, then you are potentially increasing your risk as you go along, or accepting more risk as you go along. In a recent talk that you did, you listed some categories for these anti-patterns. Could you help us understand some of the ones that you've identified?

Joern (13:44):

So, in that talk, I was talking about groups of patterns and went through certain themes. One is the common role misconception, which I started out with, because it covers a lot of other patterns, technical patterns; then authorization anti-patterns; then something that has to do with timing, time-related anti-patterns; and then systems that don't mix well, and scalability anti-patterns. Those are the areas. I'm sure there are more out there somewhere, but that is the scope of what I was talking about.

Chris (14:17):

So, I'm sure our audience has some perspective on what these things are. Give us a one- or two-sentence definition of each one, starting with the common role misconception, just to set the stage so that people can connect it with other things they've already heard of.

Joern (14:37):

So, the role misconception: I started out with that. I didn't know if it was an anti-pattern until I started describing it more. It's a conceptual anti-pattern about how the roles in the business are laid out; it has to do with the superuser role and the business role. The authorization anti-patterns come from the shift in design from monolithic systems to web apps and web services deployed on multiple clustered systems. So, there are a lot of opportunities to drop authorization at different stages in that pattern. Then time, or timing-based, anti-patterns. I used that term as a red-flag term for communication, and it has to do with time and deferred execution.

So, it's actually one pattern that covers a whole group of patterns. While I was describing it, I discovered it's actually the same pattern in different variations that surfaces in different areas. It has to do with somebody putting an instruction into some form of code somewhere, and then another system picks it up, and there's a switch. Okay, that's what I call this time- or execution-based anti-pattern. Systems that don't mix well: that's something that, in the design phase, we talk to teams about. There are certain systems that are hard to mix, just from a features perspective. Then of course there are the scalability anti-patterns. Those are systems that work currently, but once you scale up, like doing a lot more of it, it becomes an issue. That's why I put that on as another one. I'm sure there are more categories, but that's the natural flow of how I grouped them together.
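The deferred-execution idea Joern describes, where one party writes an instruction into a store and a different system later picks it up and acts on it, can be sketched like this. All names are hypothetical; the unsafe worker trusts the stored string as code, while the safer worker dispatches only through a fixed allow-list of handlers it owns.

```python
import queue

jobs: "queue.Queue[dict]" = queue.Queue()  # the hand-off point between systems

# Anti-pattern: the worker eval()s whatever instruction was queued, so
# whoever can write to the queue gets code executed later, as the worker.
def worker_unsafe() -> object:
    job = jobs.get()
    return eval(job["code"])  # deferred, attacker-controllable execution

# Pattern: the queue carries data only; the worker maps a verb onto a
# fixed allow-list of handlers and rejects everything else.
HANDLERS = {
    "resize": lambda n: n * 2,
    "archive": lambda n: f"archived-{n}",
}

def worker_safe() -> object:
    job = jobs.get()
    handler = HANDLERS.get(job["verb"])
    if handler is None:
        raise ValueError(f"unknown verb: {job['verb']!r}")
    return handler(job["arg"])
```

The "switch" Joern mentions is the gap between enqueue and execution: authorization has to be enforced where the instruction runs, not just where it was written.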

Chris (16:38):

So how did you derive this list then? You know, I'm envisioning, like, did you have a dream one night? You just woke up, you're like, give me a pen and paper, kind of write these things down. Like this is the anti-pattern dream, or like, how'd you, how'd you get to this list?

Joern (16:51):

Good question. So, it wasn't pen or paper. I was asked the question: why do you want to do threat modeling? Why don't we just do an assessment later on? I was like, well, if you do A, B, C, D, or if you miss A, B, C, D, you'll experience pain later on. So, I started making a list on my phone, actually, not on paper, thinking, okay, next time somebody asks me, I'm going to bring up one of those things and just ask them if they're using that tech stack or that pattern. It was not until I wrote up the proposal for the talk that I grouped them together and saw that I could actually group them. So, that came out of a practical sense: just collecting them and putting them into groupings around the pain points that developers would experience. You know, listing authentication is not really direct pain, but, you know, future pain, I guess.

And the other ones, systems that don't mix well and scalability, that's current pain, can't do it right now, versus deferred pain later in the future. That's how I grouped them together.

Robert (17:58):

So, talking about roles: we mentioned there's a malicious insider admin role that I think can sometimes be overlooked. How does that play into some of these, and what are your thoughts on that in terms of the different use cases and anti-patterns?

Joern (18:17):

Yeah. So, when you do a threat model, that threat model goes through different stages. At some point you end up with what I would call an 80% inventory threat model of everything that's there. Then you hit the stage, the 80/20 rule, where you add the threats to the system as threat use cases. Usually that starts with an external malicious actor, but there's also the internal malicious actor. That has become more significant over the last few years. There are two trends that contributed to it being very relevant. One is the offshore trend, right? You have the classic offshoring that started around the mid-2000s, where they put the development teams in other countries, and the value of the data versus the salary is very high.

So that's one trend: you have a lot of people who potentially have access to data whose value is high relative to their salary. In the Western world, where the salary is high and the relative data value is low, that trade-off is not as relevant. The second trend is the CI/CD automation in the DevOps movement, where you have a lot more people, on top of developers, that have those internal roles, those internal admin roles, for instance, with access to those systems indirectly through automation scripts or other systems. With that, we are not saying that the user per se is a bad person. We don't assume that, but statistically, we have to assume that one of those roles flips over or the system gets captured. Think Log4j, right: going into the first hop, and then what is actually the blast radius, or the access, at that point. So that's why the malicious insider becomes more and more relevant.

Chris (20:14):

Yeah, and you think about the couple of examples you shared, like the site reliability engineers that are responsible for the DevOps build pipeline, for example. A lot of times, we still don't focus enough on the trusted insider role, in this case the malicious insider role. You know, when I'm threat modeling, I'm often thinking outside-in as my primary motivation. I think that's a good reminder you're sharing, Joern: we have to consider the inside-out as well and start looking at what type of controls we have between the different functional people who are supporting a DevOps pipeline. Because there are a lot of organizations, and I'll dare say most organizations, where a malicious admin insider could do almost whatever they want at this stage without anybody really detecting it.

You know, it's not like they're doing code reviews of configuration changes. And maybe somebody's doing that out there; listen, if you're out there and you're like, no, wait, my organization does code reviews of every configuration change that has anything to do with our DevOps pipeline, send us a message. I'd love to interview you and understand how that works, because it seems like that would be very slow going in a DevOps world. So, I think the malicious insider is something we need to pay more attention to from a threat modeling perspective. Robert, from your threat modeling experience, is the malicious insider somebody you're looking at often, or how does that fit into your world?

Robert (21:49):

Occasionally, certainly. I mean, like you said, mostly from the outside in, but certainly in terms of processes, business processes that are really critical. Absolutely, you're looking at the inside, because the thing we sometimes forget is that typically one of an attacker's tasks, one of their objectives, is to try to look like an insider that you will completely ignore. So, if you look at the insiders, you may think, well, we trust everybody here, but how do you know it's not an outsider that's now posing as an insider? So you do need to look at it, and certainly critical operations are important in terms of that insider access and so forth.

Chris (22:36):

Okay, Joern, dive into the common role misconception as an example here. I want you to run through this one in a lot more depth and explain it to us closely. We're not going to do this for all of them; we are going to provide a link to Joern's OWASP Global AppSec talk, so you can go listen to the entire talk and hear the deeper explanation of each of these. But I did want to explore one of them, just so folks have an idea of what they're going to get when they go listen to the full talk.

Joern (23:04):

Yeah, thank you. So, this common misconception, as I mentioned before, leads to a conceptual anti-pattern. It comes from a question, often from an attacker's point of view: which role is the most valuable? If you ask developers that question, they just say, well, yeah, the name of the game is: become root. You know, every capture-the-flag event is aimed at that. Now, if you think about value, though: the sysadmin or root role certainly has the ability to access computing power that leads to value, like hijacking or crypto mining or something like that, and also some data. But since we are securing a business function, there's usually a business function that's higher. In real-world terms, if you call into the company and want a refund for your credit card transaction, there's this power user you talk to, the customer service representative.

I call it the power user, or the business fairy: the person that can make it rain, right? They have direct access to money and also direct access to a lot of data. They can, for instance, rename you in the system if you get married or something; they have access to a lot of broad data and can potentially kick off direct transactions. In that sense, capturing that role is actually the more realistic goal, where an attacker would try to get access to one of those roles, either by social engineering or by flipping the system that runs that role. So, the failure to plan for this higher-level role, that's an anti-pattern I identified, and it has an impact on certain other areas of implementation. When you consider this, you have to plan for a role that's higher than a sysadmin, and treat the sysadmin role as a helper role in the middle, not as the highest top-level business function role.

Robert (25:06):

What’s the corresponding pattern essentially? There's an anti-pattern, so what’s the corresponding pattern?

Joern (25:10):

So, the pattern where this surfaces first is the zero-trust architecture pattern, right: double-check, and double-check again, that somebody is running in a certain role. Then this whole anti-pattern deals with scoping the blast radius if somebody gets access to a system and can actually run in the highest-level role. For instance, think about the refund system. If you consider the highest level, the root role would have access to the keys of the mounted systems, but they don't have to have access to all of those keys. For instance, you would definitely separate the signing keys for transactions from the key storage that deals with the systems that interact with each other internally. The outcome of that would be that you have, maybe, dual responsibility for the vault that holds the transaction-signing key, and require, say, a business manager together with the admin to go in and change something, so that no single person can do it by themselves.

So, you have to think about what it means if you give an admin access to the systems that run those business functions. In the ideal case, the business systems just run and the admin doesn't have visibility into anything that's critical, but of course that's not the case, because secrets are mounted on multiple systems. But then not all secrets should be mounted equally across the whole backplane, right? You have to separate that out. Separating the key spaces out, for instance, is one of those patterns you would have to think about for the business users, for those critical business roles. The other thing is break-glass: whenever somebody logs into those applications that deal with production data or transactions, somebody relevant from the business unit gets notified. That's another pattern that you can use for this.
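The dual-responsibility vault idea, requiring a business manager together with an admin before the transaction-signing key is released, plus the break-glass notification, could look roughly like this. Purely illustrative; the vault class and role names are hypothetical, and a real deployment would lean on a secrets manager's built-in policies rather than application code.

```python
class SigningKeyVault:
    """Toy vault: releasing the signing key needs two distinct people
    covering two distinct roles, and every release raises an alert."""

    REQUIRED_ROLES = {"admin", "business_manager"}

    def __init__(self, key: str, notify) -> None:
        self._key = key
        self._notify = notify  # break-glass hook: alert the business unit

    def release_key(self, approvals: dict) -> str:
        # approvals maps username -> role held by that approver
        roles = set(approvals.values())
        if not self.REQUIRED_ROLES <= roles:
            missing = sorted(self.REQUIRED_ROLES - roles)
            raise PermissionError(f"missing approvals: {missing}")
        if len(approvals) < 2:
            raise PermissionError("two distinct people required")
        self._notify(f"signing key released, approved by {sorted(approvals)}")
        return self._key

alerts: list = []
vault = SigningKeyVault("sign-key-123", alerts.append)
```

An admin alone is refused; an admin plus a business manager gets the key, and the business unit sees the notification either way the key actually moves.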

Chris (27:12):

So, as we start to bring this conversation toward a close, I wanted to explore a little bit about what we can do to limit security debt. You've talked about this catalog of anti-patterns; how do we put it into action? For somebody who's listening to this saying, hey, I've got 250 developers in my organization, I've got 10 architects that drive this, and I'm trying to get us to a better security posture: what do they do with these things that you've built here to help limit security debt?

Joern (27:50):

Yeah. The biggest thing, actually, is awareness, right? You have to know that those things exist. That means you create a place where they are documented, in Confluence for instance. A single pattern that deals with smaller-scoped systems, like a queue, for instance: those descriptions can go into the threat library, if you do threat modeling. The rest can go into a wiki or Confluence page, or you can create some starter threat models that have those patterns in them. Now, one thing that's relatively important is that when those patterns are documented, you have to create an easy entry point for the developers that don't have that abstraction-level knowledge. So, something like a jump page with red-flag terms, like queue or timing: something they get reminded to look at, and then read and check off the pattern if it's actually relevant to them. That helps create trust that they can actually identify it.

Then the next thing is on us as security personnel: we need to get feedback from developers about what is hard to implement. The complexity is not the same everywhere, so we have to track it and ask them, hey, do your libraries now support A, B, C, D? For instance, end-to-end payload encryption: that's definitely harder because it involves two parties with mounted keys and all kinds of things in the middle, so that you don't lose visibility. If that, for whatever reason, would come up early on, you have to ask: would you ever require end-to-end encryption? Then please put that in right away. That's one of those things: just knowing from the complexity that they have to do it early on, and put a placeholder in their code or something like that, so it's already there.

On the operations side, we have a trend toward doing infrastructure as code with modules, for instance, so we can have services that are fully configured. Think about downloading something like a SQL database appliance: they usually already have scripts in there that separate the regular user from the power user, sorry, the admin, and you can build your own platform services accordingly now too, with fully configured services. That's one of the more practical things you can do. Or use pre-configured infrastructure modules that work across multiple different modules when they work together. That's another practical approach: having a pre-configured queue, for instance, that turns certain things off, or, when you're dealing with interactions with other systems, once they include other modules, you already have a set of pre-configured variables that have to be filled in for it to function properly.
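The pre-configured service idea above can be sketched as a module factory with secure defaults: callers must fill in the required variables, and insecure options simply are not exposed. All names here are hypothetical, a sketch of the approach rather than any particular platform's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QueueConfig:
    """A 'fully configured' queue: security-relevant settings are
    baked in as defaults and cannot be turned off by callers."""
    name: str
    dead_letter_queue: str
    encrypt_at_rest: bool = True       # secure default, not a knob
    allow_anonymous: bool = False      # secure default, not a knob

def make_queue(name: str, dead_letter_queue: str) -> QueueConfig:
    # The required variables must be filled in for the service to
    # function; there is no flag to disable encryption or open
    # anonymous access, so the insecure choice is unavailable.
    if not dead_letter_queue:
        raise ValueError("a dead-letter queue must be configured")
    return QueueConfig(name=name, dead_letter_queue=dead_letter_queue)
```

Teams consume `make_queue("orders", "orders-dlq")` and get the hardened shape by construction, which is the "taking away insecure options" idea Chris picks up next.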

Chris (31:07):

So, there's awareness at play, in that we need developers and architects to understand all these different categories of things to consider. I heard you talk about inputs to other processes like threat modeling: having these things that people can use, whether that's with template threat models. And then there are also reference-service approaches, where you've got a series of best practices being applied, maybe in a standard way with a template, so that you're countering an anti-pattern with a pattern that provides all the things needed to be successful. You know, it's that secure-by-default approach, which, throughout my career, I've watched us get closer and closer to: taking away the options for people to make insecure choices, for example, when rolling out a new service. I've seen a lot of progress in 25 years, but the funny thing is we're still not there. I don't know that we'll ever get to the point where there is no insecure option, but we can certainly dream. So, Joern, what do you see as a call to action or a key takeaway for our audience? What do you want them to do as a result of learning about security design anti-patterns and security debt? I'm hoping it's not to declare security debt bankruptcy. I'm hoping that is not your key takeaway.

Joern (32:32):

Well, no <laugh>, because we already talked about it, so it's a good start, right? We need to spread the message. That's the key takeaway: hey, there's a different level that we need to address. It's not the little bitty threats that everybody's already throwing around; there's a bigger scope, and we need to address it in a more organized fashion. But we need the feedback from the developers for this. We can't do it alone as security personnel, because we don't know what's all out there, right? So, we need that feedback loop for them to point and say, hey, what about this? Where does this fall into play? Where is this a pattern? We need to raise that awareness and document it somewhere, maybe an OWASP project, I don't know; think about that.

The categorization is a bigger challenge, because you need to have the data in multiple variations before you categorize it. Right? My scope is also limited to my own career, so we need the community to feed into it too. What would be a good categorization, you know, a taxonomy for this, in order to then point to the correct patterns? That's the goal there: you name the anti-patterns, and then you point to the ones that would fix certain things. So, I would say documentation and awareness is a good start, plus asking for feedback; we need that feedback from the community for this.

Chris (34:09):

Very good, Joern. Thank you for taking the time to share the security design anti-patterns approach and security debt, and, you know, all the experiences and things that you've shared with us here. Thank you for providing that for our audience. Their call to action is to go dive deeper into this topic. So once again, thank you for the time today, thank you for the education, and keep chasing down these security design anti-patterns.

Joern (34:34):

Will do. Thank you. And thank you for having me on the show. Thank you.

Chris (34:49):

Thanks for listening to the Application Security Podcast. You'll find the show on Twitter @AppSecPodcast, and on the web at www.securityjourney.com/resources/podcast. You can also find Chris on Twitter @edgeroute and Robert @RobertHurlbut. Remember, with application security, there are many paths but only one destination.
