Application Security Podcast

Neil Matatall -- AppSec at Scale

February 9, 2022
Season 9, Episode 15

Show Notes

Neil Matatall is an engineer with a background in security. He has previously worked at GitHub and Twitter and is a co-founder of the Loco Moco Product Security Conference. Neil joins us for his second visit to discuss account security at scale. He describes the underlying principles behind security at scale, how he worked to build a sign-in analysis feature, and how attacks were detected. We ended the conversation with an authentication lightning round, with Neil responding to various statements about authentication off the cuff! We hope you enjoy this episode with Neil Matatall.

Check out our previous conversation with Neil Matatall.
https://www.buzzsprout.com/1730684/8122595-neil-matatall-content-security-policy

Transcript

Neil Matatall -- AppSec at Scale

SPEAKERS

Chris Romeo, Robert Hurlbut, Neil Matatall

Chris Romeo  00:00

Neil Matatall is an engineer with a background in security. He's previously worked at GitHub and Twitter and is a co-founder of the Loco Moco Product Security Conference. Neil joins us for his second visit to the podcast to discuss account security at scale. He describes the underlying principles behind security at scale, how he worked to build a sign-in analysis feature, and how attacks were detected with that feature. We ended the conversation with an authentication lightning round, with Neil responding to various statements, true or untrue, about authentication completely off the cuff. We hope you enjoy this episode with Neil Matatall.

You're about to listen to the AppSec podcast. When you're done with this, be sure to check out our other show Hi/5.

Chris Romeo  00:49

Hey, folks, welcome to another episode of the Application Security Podcast. This is Chris Romeo, CEO of Security Journey and co-host of said podcast. I'm also joined by Robert, who appears to be dressed for winter. Robert, are you in a cold area or something? Where are you?

Robert Hurlbut  01:05

We're getting some snow here this weekend. So yeah, it's been very cold. Robert Hurlbut, good to be here as well. Threat modeling architect and really looking forward to our conversation again with a new guest, or actually a returning guest.

Chris Romeo  01:24

Returning guest, who is in a place where they don't know what snow actually is. At least, I don't think they do; maybe one time it's snowed. But yeah, Neil had a chance to be with us in a previous conversation, and we'll put a link to that in the show notes so you can go back and listen to the first conversation. But today we're going to talk about account security. Since Neil's been here before, you can listen to the other episode for his origin story, but he's gonna tell us a different story as a way to get started. So, Neil, I understand you were at GitHub, which is what I always think of when I think about companies that are doing security and doing it at scale, with a lot of different users, a lot of load, and stuff that's going on there. I'd love for you to just give us kind of the background story on account security and what you were dealing with at GitHub.

Neil Matatall  02:14

Sure. Just one quick clarification. This is not a well-known fact, but Hawaii very much knows about snow. We have a mountain called Mauna Kea, which literally means White Mountain, because it's frequently covered with snow just because of the altitude. Fun little fact, not a correction, but definitely not intuitive. But anyway, I really enjoyed my time at GitHub. I think the time when I was working on the account security story was probably the best part of my entire career. I was really glad that I got to do it. That's kind of a little hint at the culture of GitHub back then, where ownership of things was very nebulous, and if you wanted to work on anything in the product, you could. I joined just as they had introduced managers, as they had introduced teams, as opposed to just everybody doing everything. So, there was officially a security team, and then eventually, you know, we were getting good at stopping XSS, we were getting good at stopping SQL injection, we had a pretty good story around kind of the classical AppSec problems. But we had a pretty big problem with our authentication stack. Not that it had vulnerabilities in it that could be bypassed, but it wasn't doing a whole lot to protect people. Now, the classic response is to use two-factor authentication, right? That'll make your account secure; if your password is leaked, it won't be usable. But telling everyone to use two-factor authentication is, for a lot of products, not all, but probably most, not a realistic scenario. A lot of people think of GitHub as having a very technical user base, and I do think that that is more true than most places. But we have plenty of people who are not technically savvy using GitHub every day. Maybe they're not writing code, but they're still using GitHub, and again, just saying "use 2FA" totally dismisses how difficult two-factor authentication actually is.
I think people who would be listening to this podcast might have a different opinion about the difficulty of using 2FA, since it's more likely that we are using it on all the services we use. But telling everyone to use 2FA is just not practical today. Just to give you an exact number, GitHub's technical user base still only enrolled about 15% in two-factor authentication. So, the vast majority of people were not using two-factor, and we're pretty confident in that number too, because we limited it to, you know, accounts of a certain age that have been active in certain periods. So, not these bot or spam accounts, or accounts that just sign up and never use the thing again. It's a pretty accurate number in our opinion. We had to think, okay, 2FA is not the answer. Where do we go from here?

Chris Romeo  04:58

Yeah, that's a good reminder. I often forget this, and I'm sure a lot of people listening forget, to your point, that we just take 2FA and MFA for granted; of course we do that. But we're security people; we're not the normal user profile. So, that's a great reminder right off the start of this conversation that our users are not as security savvy as we are. If you just assume that they're going to embrace whatever security control or security technology you put in front of them, you're opening your system up to a whole plethora of threats when there are other things that you can do. So, the system's got to protect the users, especially those non-security-savvy users. Yeah, so any other thoughts from that perspective about the user experience there?

Neil Matatall  05:52

Yeah, I mean, it has to be perfect, or else it'll be terrible. You know, we even had a big problem with just the way we did 2FA setup, where people would abandon the process halfway through, and then they'd be left in this state where they had downloaded their new codes, but the codes weren't actually valid. Since they had overwritten their previous installation, they just got locked out of their account. There are just so many ways it can go wrong, and human beings do various things. Some people share phones, some people just use a burner phone, some people seem to reformat their phone on a weekly basis and lose credentials all the time. We can't tell people to stop doing this. So, we have to make their life easier. I think GitHub did a lot of things, and they're still doing things today. I don't know if you saw, they recently released push-based authentication if you have the GitHub mobile app installed. That's not as strong as WebAuthn, but it's certainly convenient, and if you have trouble retaining your OTP, your one-time password codes, in your application, for example, this is a great backup. I can talk a little bit more about that in detail if you want, but there was a lot of thought put into the backup scenario, because recovery is more important than having the ability to use 2FA.

Chris Romeo  07:12

Yeah, I think we'll unpack that as we get a little deeper here. But still, I'm pondering that quote you said: if it's not perfect, it's terrible.

Neil Matatall  07:21

I mean, well, I guess, did I say that? I don't remember saying something like that.

Chris Romeo  07:28

Well, I was gonna borrow it and use it as the name of my third book. Perfect, it's terrible.

Neil Matatall  07:33

Well, 2FA is not perfect, and therefore it is terrible in the context of securing all your users. How about that?

Robert Hurlbut  07:39

Well, my take on it was that you're saying that, you know, your customers want to be successful. Right? They want to feel like, oh, you give me something to do; okay, I want to be successful at doing what you're asking me to do. But if it's terrible, they will feel like they failed. And they won't look at themselves as failures; they will look at the company that's imposing this as failing them, if that makes sense. That's how I took it. I don't know if that was your intent, in terms of trying to get it to be perfect, otherwise it's bad.

Neil Matatall  08:13

Well, it's also that so many people have been turned off of 2FA because of some bad experience they had somewhere. You know, some people will never use it again. Even speaking of the security crowd, I've seen people in the security industry whose Twitter account got popped, and it's like, well, you didn't have 2FA. It's just a terrible experience, and I do think things like WebAuthn are going to help in this scenario, but I think we're still years and years away from ubiquity.

Chris Romeo  08:42

Before we go to the next question, I know Robert's got another question, but we've mentioned WebAuthn two times. So, let's get a definition in front of everybody: what do you mean when you say WebAuthn?

Neil Matatall  08:52

So, the primary protection that I'm referring to when I talk about WebAuthn is that you have a credential that is bound to an origin, and that binding is enforced by the browser. With 2FA, you get a text message, or you have an app where you enter a code, but there's nothing protecting you from typing that code into a third-party site, like a phishing site. 2FA does not protect you from phishing; it just adds an extra step to the phishing process. But you could have a WebAuthn credential instead, which could be a security key, like a FIDO key; YubiKeys are very, very common. Also, nowadays, your web browser can act as a credential. So, Face ID, for example, can be an origin-bound credential that you can use to log into GitHub, and it's phishing proof. There's just absolutely no way it's ever going to happen. The system was designed very, very well. You know, "it's never gonna happen" is always a stupid thing to say, but I'm saying this is the strongest protection we have available to us today, barring a bug in the specs or the implementations. Honestly, I think the browsers are going to take over the share of the WebAuthn landscape, because everybody has them in their pockets at this point. I think Android support is a little bit off, and even with Windows Hello, you can do it as well. These systems are very cryptographically strong and fairly usable.
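The origin binding Neil describes can be illustrated with a small sketch. In a WebAuthn assertion, the browser packages the origin it actually served into the signed clientDataJSON, so the server can reject responses that originated on a phishing domain. This is a simplified illustration of that one check, not GitHub's implementation; the function name and inputs are hypothetical, and real verification also checks the signature over the authenticator data:

```python
import json

def client_data_is_valid(client_data_json: bytes,
                         expected_origin: str,
                         expected_challenge: str) -> bool:
    """Check the browser-reported origin and challenge in a WebAuthn
    assertion's clientDataJSON (signature verification omitted)."""
    data = json.loads(client_data_json)
    return (data.get("type") == "webauthn.get"
            and data.get("origin") == expected_origin      # phishing sites fail here
            and data.get("challenge") == expected_challenge)

# The browser records the origin it actually served, so a credential
# exercised on a look-alike domain produces a mismatched origin field.
legit = json.dumps({"type": "webauthn.get",
                    "origin": "https://github.com",
                    "challenge": "abc123"}).encode()
phished = json.dumps({"type": "webauthn.get",
                      "origin": "https://github.example-login.com",
                      "challenge": "abc123"}).encode()
```

This is the contrast with an OTP code: a code has no notion of where it is typed, while the origin field above is written by the browser and covered by the authenticator's signature.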

Robert Hurlbut  10:18

You've sort of spoken to this in terms of, you know, making sure that customers feel good about their experience, and so forth. But what were some of the other underlying principles that GitHub was trying to apply, or think through, as they made decisions about how to improve some of these ways of authenticating users?

Neil Matatall  10:40

So, we always had to balance friction with return. You know, we could sit here and have the most draconian sign-in process with 19 steps, where you have to mail in a verification letter, and then no one would ever want to use GitHub. I've been in situations where things like first-page render, or first click to action, were so important that you could never even consider adding friction. But GitHub definitely erred on the side of: a little bit of friction is okay, but let's not do this just because it's friction; let's think about this. Let's try not to just do the base thing, and really try to focus on the experience of everyone. This leads me to our first principle, which actually seems a little bit counterintuitive based on what I just said. We didn't want to give you a lot of security choices; we just wanted to make you do the thing, or not do the thing. You can opt into 2FA, but that's pretty much the only bit of security you can opt into on GitHub, because we just do it. Also, things like CAPTCHAs have historically been banned at companies I've been at, because people think they're just a horrible user experience. You know, they're not going to slow down someone using some sort of Mechanical Turk system to manually brute force all these things, but they're pretty darn good at preventing lazy attacks, and lazy attacks are incredibly successful if you're not doing anything about them. And as I said, GitHub wasn't doing anything about them. So, underlying principle number one is: we're just going to do the right thing for everybody, and we're not going to ask them to opt into anything. Because we didn't give anyone the choice, we really had to focus on the experience. So, principle number two was: we can't overload our support team with hate mail.
So, if people are getting very upset with the things we're doing, we should react to them, we should listen to them, and we should respond to them. But we should still do what we think is best. Every once in a while, there were some hard decisions that led to special situations; for example, us responding to the support tickets ourselves, because the support team did get overwhelmed at one point by one of the changes we made. So, think about how it's going to affect our support team as much as it's going to affect the people using the product. You know, don't give them the choice, but listen to them.

Chris Romeo  13:13

So, how did you build this sign-in analysis piece? Because when I think about GitHub, it's not like five users a second are coming in. This is monumental, you know, a system that's delivering at scale, and sure, you're not authenticating people on every request or anything like that. But still, it's a level of scale above what a lot of people have ever even experienced. So, how did you build this analysis system to be able to determine if something bad was happening?

Neil Matatall  13:45

Yeah, I think it's a pretty interesting thing, because it's definitely something that's repeatable in any size organization. We did run into some issues with the scaling of some of these things, and you know, I did bring GitHub sign-in down more than once in the process of doing this, but it's actually very rudimentary. There's no AI involved, there's no tuning of models involved. It's a very, very basic system, and it's kind of the foundation. It originally came from the idea of: okay, I work out of the US, I don't really use a VPN, but I have, and my VPN, you know, spits me out in France or something. If I sign in from South Africa, that's probably not normal. So, the idea was to send an email, sort of a reactive process, just like, hey, you might want to review this sign-in. That was kind of the first step we took. To do that, all it took was: every time you sign in, we just create a database record. It tracks, you know, the IP address, and what we call a device ID cookie, and it's tied to your user session, which is also a database object in our infrastructure. And that works great, unless you work in Europe, where you might sign in from Spain and go work in France and travel through some... well, bad example, because there's no country in between them. But a typical person in Europe might travel through multiple countries a day, and borders are a little bit of an imperfect science, but they're okay science in that regard. So, if you change IP addresses, we would create another record of that too, and again, it would all be tied to the session. You can actually see this in the GitHub UI: if you drill down into an individual session that has been to multiple countries, we have a very poorly drawn map, but it is the map that we use in the product. So, you can geographically see where your session has been.
So again, if you just happen to sign in from a specific spot where your train was that day, even though you go through that path every single day, we don't want to tell you about that. I think that product was well received. Most people were pretty appreciative of it, and because we had put some thought into the different use cases of people who cross borders, it didn't seem to really upset anyone. No one was saying all these notifications are low quality, you know, "why doesn't GitHub know I come here all the time" sort of deal. We did have to put a little bit of thought into abuse: we put a limit on the number of individual records that can be tied to an individual session, because we had bots that were going over hundreds of thousands of IPs every second, and that was a bit of a problem. We also have people who will just sign in incessantly, millions of times a day, so eventually, if you're signing in from an unknown location like that, we just don't really need to keep tracking that session. So, from there, I might be skipping ahead a little bit, but I think this is all related. I mentioned the device ID cookie as well. This is a unique value that's generated for every browser session, and it sticks around forever. If you have multiple users sharing the same computer, they will all share that device ID. Obviously, if you come from an incognito browser, it's a fresh cookie every time. Most people actually don't come in from a fresh browser; from what we saw, most people will sign in on the same computer multiple times, and they won't clear their cookies. There are people who use incognito every single day, but they are very much in the drastic minority. I want to say something like at least 60% of sign-ins would come in from a known device.
And so we thought, okay, 60% of sign-ins come from a known device, but that still means 40% come from an unknown device. Do we really want to send out emails for 40% of sign-ins? That's not going to be a very good user experience. So we thought, well, what if we just add some friction in that case? The first thing we jumped to was email-based sign-in challenges. So, if you're going to sign in from an unknown device, we'll send you an email with a code that you then have to enter into the browser to sign in, a very common implementation across the internet. Certainly, you see this even at some of the smaller companies, too. I do think it's becoming a standard. I've never read the NIST standards, but I wouldn't be surprised if email-based challenges for suspicious logins are just a standard now. That was the most impactful thing we ever did. It was also the hardest thing we ever did. I did say that the incognito people are the minority, but there are still millions of them, and if you're getting an email every time you sign in from the same IP address, that's really, really annoying. So, we said, okay, if we don't recognize the device, but we recognize the IP, as in you've signed in from here before, not just you've been here before, we will allow you to bypass that. That was most of it; there were a few other criteria that would let you bypass the challenge, but those were only in the transition. A year or so after we had implemented this, we had very basic rules: if it's not from a device or an IP we recognize, you're going to get an email challenge. That was not something that was out of band; as soon as you hit login, we're going to run all this analysis and check. So, it had to be incredibly fast, and we went through a few iterations of tuning it and the way we were querying the data, but eventually, it worked out pretty well.
The performance impact was just negligible. That really helped prevent the password spraying attacks. Before we put in this device challenge system, we had all this data, and we were just watching mass account takeover happen. Now, is every anomalous sign-in an account takeover situation? Absolutely not. But when you see that number spike, like 10x, for hours on end, that's not normal. That's basically someone taking a password dump from a third-party site, trying it on GitHub, and being incredibly successful. Thankfully, our system was able to stand up to that increase in activity, so we could just see these massive account takeovers happening on the regular. But as soon as we put this device verification in place, among other things we did as well, it virtually eliminated that via the web.
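The decision rule Neil outlines can be sketched in a few lines. This is a hypothetical reconstruction of the behavior described in the episode, not GitHub's actual code; the function and data structure names are invented:

```python
def needs_email_challenge(device_id: str, ip: str,
                          known_device_ids: set,
                          known_ips: set,
                          has_2fa: bool) -> bool:
    """Return True if this sign-in should be challenged with an emailed code.

    Mirrors the rule described above: a recognized device ID cookie or a
    recognized IP bypasses the challenge, and 2FA users skip it entirely.
    """
    if has_2fa:                        # 2FA users bypass the challenge
        return False
    if device_id in known_device_ids:  # device ID cookie seen on a prior sign-in
        return False
    if ip in known_ips:                # you've signed in from this IP before
        return False
    return True                        # unknown device AND unknown IP: challenge
```

Because this runs inline on every login attempt rather than out of band, the real lookups had to be fast; the set membership checks here stand in for indexed database queries.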

Chris Romeo  20:13

Yeah, the thing I love that you're bringing out in this conversation is the fact that you guys didn't just come to this problem going, hey, we're security, and we're going to do these various things about authentication the way the requirements say we have to; like you said, grab the NIST standard, and we're all going to implement this. You stepped back and said, hey, let's build usable security. Let's focus on not disrupting, not making the user shake their fist and say, "the security people," you know. We still see that even in this modern day, where there isn't that collaboration between dev and security, where it should be: hey, let's work together to find the best way to serve our users so that they don't hate us because of a security feature we're trying to put in place. As you're describing this, I'm like, that is exactly how you guys were doing this here: you were putting the user first and ensuring that you balanced how much extra effort they were going to have to put in to maintain their security. I'm guessing you didn't hear from a lot of people who were screaming very mad about having to do device verifications and stuff like that, but I'm sure there was always a percentage. I'm gonna guess it wasn't 20 or 30% of people complaining; people were, you know, just absorbing it, because most people were only going to hit it once or twice in a couple of months.

Neil Matatall  21:37

Yeah, when the dust settled and things were in a steady state, I think maybe one in every three sign-ins was challenged. If you had 2FA, you could bypass this entirely. But yeah, it really was, I think, a feel-good example: this wasn't the security team telling someone else to do something. This was the security team doing something in partnership with support. I was talking to support daily; every time we wanted to make a change, we'd say, based on your hunch, do you think this is going to be a rough change? The allowance for the incognito users that I mentioned was not in our original plan. We were going to put our foot down and say, if you're an incognito user signing in every day, you're probably okay doing two-factor or something. You know, you can bypass this with SMS, which is way faster than email, by the way, so maybe you should consider that. Not that everyone should go hand over their phone numbers. But yeah, we had to acquiesce, and we saw the support tickets drop back to a reasonable level. People will say very mean things on the internet, and they will say very mean things to support, and there were a lot of people that were very upset about this move. But I think a different change that was also part of this story actually upset people even more. That was when we started banning compromised passwords, from Have I Been Pwned, for example. We started with Have I Been Pwned, and, you know, again, because we have all this telemetry, it was like, whoa, look, there's a strange correlation between a high number of anomalous sign-ins and sign-ins that seem to be using passwords in this Have I Been Pwned dataset. There's a lot of, well, if you ban 500 million passwords, what about the 500-million-and-first? Do you go to 600 million? Do you keep going? The answer is yes.
If you're using an actually randomly generated password that's strong enough, it's not going to be in this dataset. We've since integrated other datasets that I believe get their information from the FBI. Anyway, it was very successful. While the device verification stuff was more impactful, we still saw a big number of what could have been malicious sign-ins blocked. This is another story about usability and balancing security, too. I don't remember what our initial thought was, but the initial rollout was: we're not going to block people from signing in with these passwords; we're just going to let them know about it. Well, how do you let them know about it? You send them an email. No, I don't think that's the best way to do it. How about a banner at the top of the page? Okay, that's reasonable. Now, should we allow people to dismiss this banner? Hmm, I don't know. So, we decided not to. We'll make this banner big, red, and obnoxious; people will want to make it go away. I think that's a good idea. So, we got the hate mail: why are you broadcasting to my co-workers and everyone sitting around me that I have a bad password? This is really embarrassing. How dare you? Then we'd respond with, well, this is what happened. Some people would say, I use this password everywhere; how dare you make me change it?

Robert Hurlbut  24:55

I've used this for twenty years; why do I need to change it now?

Neil Matatall  24:59

Exactly. Or, you know, there were some people who straight up gave us their passwords in email, which was ridiculous. Some people would refuse to admit that it was used anywhere else. Some people were like, you know, the passwords I generate in my head are super strong. Well, then how do we know it? It was a little tough. I definitely had to turn my empathy power up to the max when I was reading these things. It was a little bit difficult for me; I'm not really used to that kind of exposure. But some of it was valid. Our original message was something like "your password sucks," and that's not really accurate; it's more that your password was found on a third-party site. Then we had to tweak the language some more to say, no, we aren't talking to any third-party sites, we're not sharing your password with other people. We didn't use the Have I Been Pwned API; we pulled down the data, ingested it, and queried it through a database. So, there were no privacy concerns at all, but it was really hard to convey that in a universally understood way, because the GitHub site is only in English, and that's not our user base. So, we iterated on it, and we got to a place where we seemed to get less hate mail. But, you know, we were still seeing people's accounts get popped through these methods. This was actually before the device verification codes; this was when account takeovers were still pretty rampant. So we decided for you: you can no longer use a password we consider compromised on the site, and if you try to log in with one, we'll force you to change it. Again, that set off another round of hate mail, but it really, really helped get our users into a better state.
So, when we started ingesting the new data feeds from the FBI sources, we also had to think about: if your password was good yesterday, and it's no longer good today, and we're forcing you to change it today, that's inconvenient. So, if your password was in the new dataset, we set basically a 30-day countdown, where you still get the banner and you can still use the site, but after those 30 days, we're gonna force you to do the reset. That was, I think, a crucial step in making this much less painful, because there was much less surprise. People had plenty of time to take action on it, and unless you were a bot that can't read, you did take action on it.
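The policy Neil describes, a local lookup against an ingested breach corpus plus a 30-day grace window for newly flagged passwords, might look something like this sketch. The function and parameter names are hypothetical; storing SHA-1 digests matches how Have I Been Pwned distributes its downloadable Pwned Passwords corpus:

```python
import hashlib
from datetime import datetime, timedelta
from typing import Optional

GRACE = timedelta(days=30)  # countdown before a forced reset

def password_policy(password: str,
                    breached_sha1: set,
                    first_flagged: Optional[datetime],
                    now: datetime) -> str:
    """Return 'ok', 'banner' (warn but allow sign-in), or 'force_reset'.

    first_flagged is when this password first appeared in a newly
    ingested dataset; None means it was in the original corpus.
    """
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    if digest not in breached_sha1:
        return "ok"
    # Newly flagged passwords get the 30-day countdown with a banner.
    if first_flagged is not None and now - first_flagged < GRACE:
        return "banner"
    return "force_reset"
```

Querying a locally ingested copy of the data, rather than calling out to a third-party API at sign-in time, is what avoided the privacy concern: the password never leaves the service.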

Robert Hurlbut  27:20

So, you've talked about quite a few ways of detecting the different attacks. But did you see any trends or shifts, and did that change the strategy in some ways?

Neil Matatall  27:34

Yeah, I definitely hinted at that when I said that we stopped password spraying attacks against the website. But GitHub has an API, GitHub has a Git interface, and both of those accepted passwords at the time. Given the success with things like device verification and the compromised password ingestion, we had to make a decision. What do we do for the API? Do we ban bad passwords first, or do we just go straight to no passwords at all? It took me, I think, almost two years to convince people it was the right thing to do. Two years earlier, I had wanted to get rid of passwords on the API before these projects even started. Two years later, when we were in the middle of all this, we came to the decision: it's time to just get rid of passwords in the API. Not even have the intermediate step, not have to think about notifications, because humans don't see API responses for the most part. How do we put a banner in an API response? So, we sent out communications. We would email people to let them know that they were using a deprecated form of authentication. We had two brownouts, where we temporarily turned off support in advance of actually turning it off, to give people sort of a simulation of what would happen. We did it so that one would happen in the Europe workday and one would happen in the US workday. The reason we did all this is because as soon as we fixed the web, we saw all the traffic just shift to the API. It's like, oh, look, there's a ton of successful API calls at an anomalous level, at a sustained rate, for a couple of hours. I know what that looks like; the translation was almost exact. If you think about it, it's way more efficient to test passwords against an API, because you're not rendering a whole page and all that HTML. That's where I would have been doing it from day one. So, that deprecation period was very long, because it was disruptive.
But in the end, when we turned it off, it was probably our least controversial change ever. We got no hate mail. For the most part, the internet celebrated this as a good thing. It felt very good. So, we had all the confidence to do the same exact thing for the Git interface. The Git interface was definitely less of a threat, because it's a little bit harder to test credentials via Git, but it was still an avenue where someone could try out a password dump. Same thing: we were very scared, we had a long deprecation period and lots of communication, we turned it off, and nobody cared. At that point, you know, a leaked password was not very useful.

Chris Romeo  30:17

So, using the API example, let's play this one out a little bit, because I think it would be beneficial for our listeners to understand. With the API, GitHub banned passwords. What did you replace them with? Why did you replace them with that? And how is that new thing secure?

Neil Matatall  30:35

Oh, yes. So, GitHub offered two ways to authenticate to the API, three ways for Git. For the API specifically, you could use a password, or you could use what's called a Personal Access Token; if you're an OAuth app, you can use an OAuth access token too. The Personal Access Tokens are 160 bits, securely randomly generated. It might not be stronger than your password, but it's stronger than practically anyone's password. It looks like a SHA-1 hash, so it's kind of convenient. The idea is that these two credentials are interchangeable for the most part. For kind of funny historical reasons, GitHub actually had an API that required passwords so that you could create Personal Access Tokens. As soon as I learned about that, I definitely said, that does not seem like a good idea at all. Using a credential to create more credentials is a little bit scary in that sense. But if we're going to remove password support from the API, that means that API is going away, too. So, there were a couple of cases where we just had to remove support for certain APIs and couldn't support them, some that had to be re-architected, and some that required the introduction of what's called the OAuth device flow, which is a little bit more like authorizing a client versus authorizing an application. All these things had to be done in support of it, because we couldn't just turn things off without offering alternative solutions. Now, a Personal Access Token, if that gets leaked, is a pretty darn powerful thing, depending on the scopes that have been granted to it. I guess that's another difference: a password can't have scopes, it's everything, but a token, or Personal Access Token, can be limited in what it can do. So, that's very powerful in itself, and GitHub also recently released support for adding expiration dates to these Personal Access Tokens.
So, you know, if you just leave it on some server, it's not still valid 10 years later.
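Neil's description of a Personal Access Token as 160 securely random bits that happen to look like a SHA-1 hash can be sketched in a few lines. This is an illustrative sketch, not GitHub's actual generator; the 40-hex-character shape simply follows from rendering 160 bits as hex.

```python
import secrets

def generate_pat() -> str:
    """Generate a PAT-style credential: 160 bits from a CSPRNG.

    160 bits rendered as hex is 40 characters -- the same shape as a
    SHA-1 digest, which is why these tokens "look like a SHA-1".
    Illustrative only, not GitHub's real implementation.
    """
    return secrets.token_hex(20)  # 20 bytes == 160 bits == 40 hex chars
```

Unlike a user-chosen password, every token drawn from a CSPRNG carries the full 160 bits of entropy, so there is no "weak token" the way there are weak passwords.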

Chris Romeo  32:39

When we summarize the value of the PAT, the Personal Access Token, it's the fact that it's generated using a strong cryptographic function. So, there's a certain amount of entropy, or randomness, in the token itself. The user isn't able to set it themselves; they can't put "password1234" or something. The system generates a strong cryptographic artifact. You still have some of the same challenges on the other side of the transaction, in that if you don't protect the PAT, for example, if you expose it in a running Docker container as an environment variable or something, you could potentially have some of the same challenges you would have with a password. It gets you maybe 60 or 70% of the way there; I don't think I would say it's 100% better, because the other side, not the GitHub side but the client side, still has to deal with it as a credential. But it is definitely better than letting the user create a credential and potentially create something weak.
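Chris's caveat, that the client side still has to treat a PAT as a secret, can be sketched as follows. The environment variable name and file path below are hypothetical; the point is to load the token at runtime rather than baking it into an image, while remembering that container environment variables are still visible via `docker inspect`.

```python
import os

def load_token() -> str:
    """Load a PAT at runtime instead of hardcoding it.

    GITHUB_TOKEN and /run/secrets/github_token are hypothetical names
    used for illustration. Note that env vars still leak via
    `docker inspect` or /proc/<pid>/environ, so a mounted secrets file
    is generally the safer of the two options shown here.
    """
    token = os.environ.get("GITHUB_TOKEN")
    if token is None:
        # Fall back to a mounted secrets file (e.g. Docker/K8s secrets).
        with open("/run/secrets/github_token") as f:
            token = f.read().strip()
    return token
```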

Neil Matatall  33:40

Yeah, I do think if this account security project had kept going in the direction it was going, we would have looked at things like third-party OAuth. PAT compromises were kind of the next big thing that we wanted to solve. If you use an integration and you give a PAT to a third party, they might lose it, they might give it to somebody else. These things are very powerful. A PAT is also better than a password in that if you accidentally commit it to a GitHub repo and it's public, we'll revoke it immediately. We would scan all the incoming commits for things that looked like Personal Access Tokens, and because we can actually verify what they are, we can decide whether to revoke them. That was very convenient, and that's a service GitHub offers to other providers as well. But obviously, I can't tell if a Slack token is valid; I can just tell you if it looks like a Slack token.
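The commit scanning Neil describes boils down to pattern matching plus provider-side verification. A minimal sketch of the matching half, assuming the classic 40-hex-character token format (GitHub's current tokens use distinctive prefixes like `ghp_`, which make matching far less ambiguous):

```python
import re

# 40 hex chars: the shape of a classic PAT -- and also of a commit SHA,
# which is exactly why matches are only *candidates*. As Neil says, the
# pattern tells you something *looks like* a token; only the issuing
# provider can verify whether it actually *is* one and revoke it.
CANDIDATE_TOKEN = re.compile(r"\b[0-9a-f]{40}\b")

def find_candidate_tokens(diff_text: str) -> list[str]:
    """Return strings in a commit diff that look like classic PATs."""
    return CANDIDATE_TOKEN.findall(diff_text)
```

In a real pipeline, each candidate would then be sent to the provider's verification endpoint, which decides whether to revoke it, mirroring the service GitHub offers to partners.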

Chris Romeo  34:32

Yeah, that's neat, that GitHub detects those as they come in on a commit and then immediately shuts them off. That's security behind the scenes, making the world a bit more secure. I didn't have to take an action, like, oh, let me just push the button; it just happened behind the scenes. Well, we've got to kind of come to a conclusion in our conversation here. So, we're gonna do a little lightning round. This is maybe the second time we've ever done a lightning round; the last time we tried it, the lightning round went for a few minutes per answer, so this is a bit of an adventure. But we've got a couple of statements about authentication, and I'm going to read these out, and you're gonna have to give us like 15 or 20 seconds, right to the point. How are you going to refute this statement? Or maybe you agree with it? I don't know. Let's give it a shot and have some fun here. The first one says: SMS 2FA is not secure.

Neil Matatall  35:23

2FA over SMS is secure enough for most people, but not every application; your threat model matters. Normal people can use SMS.

Chris Romeo  35:32

Okay, wow, that's the best lightning round answer I've ever heard in the history of the Application Security Podcast. The next one says: WebAuthn is for everyone.

Neil Matatall  35:41

WebAuthn used to be very inaccessible. But nowadays, with things like your iOS browser and Windows Hello and other systems making this free and easy and accessible, that is going to change in two years.

Chris Romeo  35:55

Okay, requiring a phone number backup is bad for privacy.

Neil Matatall  35:59

It's so good for recovery, though, and recovery is such a big problem. Personally, it's okay for me.

Chris Romeo  36:09

Okay, and then we have: storing OTP, one-time password, codes in 1Password isn't 2FA.

Neil Matatall  36:18

Oh, that's kind of a tougher one. It is 2FA; even though the credentials are in one place, they're still doing two separate things. 1Password had a really good write-up on this, and I definitely agreed with their opinion, but I'm having a hard time citing it right now.

Chris Romeo  36:34

Especially in a lightning round, where you only have like five seconds; you're in the pressure cooker here. So, final one: not requiring 2FA to disable 2FA is always wrong. I've got an opinion on this, but I'm going to let you go.

Neil Matatall  36:48

Sorry, this has to be a longer answer. One thing that we talked about at GitHub was, if you have a session, you're logged in, you're able to disable 2FA without having to do a 2FA challenge. We had discussed adding a 2FA challenge in the past, and based on the implementation it was actually more difficult than we originally thought, so we kind of scrapped the idea for a little while. Then we actually thought, well, this is actually really good for recovery. If I lost my phone, and I have a session somewhere, and support can tell me about that, I can realize that I'm logged into my iPad and disable 2FA there. That's really convenient for me. We actually never got any data on how often it was used. We even considered productizing it to the point where it's like, hey, we noticed you're having trouble signing in, you're logged in here, would you like to approve that sign-in on the other machine? But that felt a little bit weird. It was like, if you need to disable 2FA and you have a session, you can do it there. Now, if someone has access to your machine and they're able to disable 2FA, I think you have bigger problems.

Chris Romeo  37:48

Yeah, I was gonna argue it the other way, but I'll go with your answer here, because you focused on user-focused security, which has kind of been the angle that we've been discussing. My answer, just saying, of course they have to do 2FA to disable 2FA, is maybe more of a classic security answer that makes it harder for the user to be successful using the security feature. So, Neil, we thank you again for your second visit to the Application Security Podcast. I'm definitely brainstorming a new show called The Lightning Round with Neil. That's a working title; I'm working through it a little bit, you know, brainstorming, kind of creatively working on it. But we appreciate your history at GitHub with account security and the insights that you were willing to share with us, and also just knocking down some of those things that people throw out like they're all truths, about SMS, WebAuthn, and stuff; that was great. So, thanks for taking the time, and we'll look forward to a future conversation. We'll find something else cool to talk about. We didn't even get to security at scale, which is something else we wanted to talk about, so maybe in the future. But Neil, thanks for your time today.

Neil Matatall  38:47

Thanks for having me. Really enjoyed it.

Chris Romeo  38:49

Thanks for listening to the Application Security Podcast. You'll find the show on Twitter @AppSecPodcast and on the web at www.securityjourney.com/resources/podcast. You can also find Chris on Twitter @edgeroute and Robert @RobertHurlbut. Remember, with application security, there are many paths, but only one destination.