Deconstruct - Seattle, WA - Thu & Fri, Apr 23-24 2020



(Editor's note: transcripts don't do talks justice. This transcript is useful for searching and reference, but we recommend watching the video rather than reading the transcript alone! For a reader of typical speed, reading this will take 15% less time than watching the video, but you'll miss out on body language and the speaker's slides!)

[APPLAUSE] Hello, everyone. It's always good to have technical difficulties with an audience of hundreds of people watching you as you desperately try to find the play menu. But I do hope you're enjoying the conference so far. If you're a coffee person, I know it's a little early in the morning; hopefully you managed to grab a coffee earlier. I am not a coffee person. But I am pretty high energy, so hopefully by the end of this talk everyone will be awake, coffee or not.

I'm going to talk a little bit today about two factor authentication. I'm not going to tell you that you should use it. I generally think that's true, but there are situations in which it doesn't make the most sense. Instead, I'm going to talk about two factor authentication as it's being used in the real world. So this will mean when it's being used by your friends and your family, or maybe by your users.

And the sorts of problems that they're going to run into, the sorts of usability issues that become a problem when you're asking people for a second factor. But I wanted to start by telling you a story. The story starts many years ago, back when I was in university. I had a friend who ran an online learning site, and I had an account on this site. And one day we were chatting about password security.

And I described the general shape of the passwords that I tend to use. I said my password is a word, followed by a number, then a word, and maybe a special character. And my friend was starting to get interested in security, and he took this as a challenge. Now, his site hashed passwords in a way that was considered generally secure at the time. It was salted, so there's a slightly different hash per user even for the same password. And it used SHA-1, which is a fairly fast hash.

What this means is it's very quick to tell whether or not a password is correct. But you can't just know what it is by looking at the database. So he used a dictionary and generated every possible password he thought could ever be mine based on the shape that I described. And he ran that dump against his own database. And he found a match. So he thought he knew what my password was.
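(Editor's note: the attack the speaker describes can be sketched as a dictionary attack against salted, fast hashes. Everything here is hypothetical: the salt, the word list, and the target password are invented for illustration; the real site's details aren't in the talk.)

```python
import hashlib
import itertools

# Hypothetical stored record: a per-user salt plus SHA-1(salt + password).
salt = "x9f2"
stored_hash = hashlib.sha1((salt + "koala7wombat!").encode()).hexdigest()

words = ["koala", "wombat", "dingo"]   # dictionary words
digits = "0123456789"
specials = ["", "!", "?"]

def crack(salt, target):
    # Generate every "word + digit + word + maybe-special" candidate,
    # matching the password shape described, and hash each one.
    # Fast hashes like SHA-1 make trying every candidate cheap.
    for w1, d, w2, s in itertools.product(words, digits, words, specials):
        candidate = f"{w1}{d}{w2}{s}"
        if hashlib.sha1((salt + candidate).encode()).hexdigest() == target:
            return candidate
    return None

print(crack(salt, stored_hash))  # -> koala7wombat!
```

The salt stops precomputed lookup tables from working across users, but it doesn't stop someone who knows the shape of your password from enumerating candidates, which is exactly what happened here.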

But he had a problem, how is he going to tell if this was the password that I used on other sites? Well, he did what anyone would do. He tried to log into my Facebook account. And it worked. Yeah, I shared a password between his site and Facebook so he managed to get in.

Unknown to him, though, was that I had notifications for new logins turned on on Facebook. So I was sitting at home one day and I got an email saying, someone's logged into your Facebook account. It definitely wasn't me. So I go off, I change my password. And then I take a look at this email. And it's got a location on it.

Now, geolocation is often a bit hit and miss, especially when you're based in Australia, but in this case I was lucky and it actually had a very accurate geographic location on it. And it was just some random suburb of Sydney. It's not a suburb that has any data centers in it. It's just residential.

And it's a suburb I only have one friend in. So I sent my friend a message and we had a very awkward conversation. Now, luckily enough for me, he wasn't trying to do any harm. He didn't look at any of my pictures as far as I know, he didn't change any of my information. But it still felt really violating. It feels weird to have someone walk around your house or your online account without your permission, especially when you don't know that they're going to be there.

But unfortunately, this situation is actually fairly common in the real world. We typically call it an account takeover in the industry. And it's when someone other than the legitimate user of an account manages to log in and take action on it. My story, it's a very realistic description of how most accounts are compromised these days.

You have a less secure site. It's compromised somehow. Its password database is leaked, and then those passwords are used on a more secure site. For me, my childhood was a little bit ruined when my Neopets password was leaked online in plain text. But the thing is, this isn't really how we usually think about this problem. We often think that our passwords are going to be brute forced.

And if you look online for advice about how to stop this from happening to you, there's really only two things you'll come across. One of them is that you should use a unique password on every site. And that's absolutely true. If you use a unique password on every site, this sort of attack can't be run against you.

But it's not really very realistic advice. If you talk to different password manager companies, you'll get a lot of wildly varying answers. But generally it's accepted that people have between 90 and 190 accounts online these days. It's not realistic to expect users to remember a unique password for more than 100 different services that they use.

Sure, they could use a password manager but that's expensive and for most users it's not a realistic solution. It's technically difficult and they're not willing to spend the money necessary to actually do that. Up on the screen, these are just some of the services that I've used in the last 10 years that have actually had breaches that have compromised their password databases and required users to reset their passwords. It's just not a particularly realistic solution.

The other piece of advice you'll run into is to set up two factor authentication. So let's talk about that. Let's really talk about 2FA. And we're going to do this in two stages. First of all, we're going to talk about the different types of 2FA and the different usability and security trade-offs associated with them. And then we're going to talk about how to figure out which form of 2FA is right for you and maybe the other people that you might be talking to.

But let's start with what it actually is. You've probably seen it before in your bank. Looks a little bit like this if you log in with Google. And essentially when you log in from a new device, after you enter your password, you're prompted for a second credential. This might be an SMS to your phone. It could be a number generated by an app.

But notably, these methods of authentication, your password, and this other code, they need to be independent of one another. Sometimes I see people ask, why can't I just get a message to my email account that contains a number in it? And the answer is because you can probably reset your password using your email account, as well. We see a lot of compromises where an email account is compromised, and then that's used to take over other accounts.

But even if we ignore email addresses, there are at least four completely different types of 2FA that are currently in use these days. And that's a lot. And they've each got very different trade-offs, both from a security and a usability perspective. So we're going to need to go through them one at a time to understand each of the trade-offs and what sorts of compromises you're making.

So let's travel back in time and understand how we got to this point. How did we end up with four different types of 2FA, and why was each one created? Now, as a disclaimer, you might notice I'm pretty young. It can be fairly hard to find information about this online. So if I get any of this wrong, I would love to know. Come up and tell me afterwards or send me an email.

So we're going to go back to the early 2000s. It's 20 years ago now, so just to remind you: denim, it's all the rage. Everyone's got a Nokia 6610. And if you work in a bank, you have an RSA SecurID token. This token was really the first broadly available 2FA device. It was launched in 1986, and by 2003, 70% of the 2FA devices on the market were of this type. But it was really only available to bank employees until about the mid-2000s.

And that's because of this graph. This is the number of data breaches per year over the last 15 years. You might notice that it actually starts in 2005, and that's because there's no information on this prior to 2005. No one was collecting data on how many data breaches there were per year. Because really, the first large scale data breach, first prosecution for it, actually, only occurred in 2004. And it was an AOL engineer who intentionally leaked 92 million customer email addresses to a marketer.

There really just wasn't a large consumer need or demand for 2FA because there weren't any of these sets of credentials online to use against other services. So that meant that the 2FA method that we were using, it could be pretty high barrier and it could have a higher cost. And that's where those secure ID tokens came in.

There's a six digit code that's displayed on the screen. When you want to log in, you enter your password, you enter the code, and you're all good. Banks actually started off using them because banks really like to use PINs as passwords. And PINs are only made up of digits, and they're usually pretty short. So they're really easy to shoulder surf or guess.

The way the tokens work is they've got a secret baked into them. And then they also have a clock. There's a magical algorithm that happens, and we end up with a six digit code. The server also knows the secret for each user and it also knows the current time, so it's able to figure out what the six digit code being shown on the token at any point in time actually is and compare them. If you can get the right code, you're let in.
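(Editor's note: the secret-plus-clock scheme the speaker describes is standardized as TOTP in RFC 6238. RSA's SecurID algorithm is proprietary, but it's built on the same idea. A minimal sketch:)

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    # Both the token and the server derive the code from the shared
    # secret plus the current 30-second time window (RFC 6238).
    counter = int(at // step)
    msg = struct.pack(">Q", counter)               # window as 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"   # shared at enrollment time
now = time.time()
print(totp(secret, now) == totp(secret, now))  # server and token agree -> True
```

Because the code depends on the time window, it changes every 30 seconds, and the server only needs the shared secret and a reasonably accurate clock to check it.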

And banks really loved them. Because it meant that even if those PINs were like 1-2-3-4 or 9-8-7-6, the user's account couldn't be compromised. There were some drawbacks to the whole thing. Users tended to lose their tokens, and sometimes the clocks would get out of sync. But it mostly wasn't a big issue, because these were people who worked for a bank. They had an IT team dedicated to helping them get back into their accounts.

But there's another really interesting way to cut that data I just showed you. And that's if we cut it by industry. And what you'll see here is the purpley, bluey kind of colored line on screen. Actually looks about the same, even though we've done some dividing and put things into different categories. Now that line is showing us the number of breaches occurring at private companies. So not health care, finance, the education industry, that sort of area.

And that's because back in the 2000s, really almost all breaches were actually identity theft. So a lot of the numbers that we're seeing here are the theft of SSNs and addresses, maybe medical records. But they're not actually compromises of password databases. That large spike that we start to see around 2012, that's actually leaks of password databases causing substantial issues, in that attackers can now use them.

The very first instance of this that really got a lot of publicity was the RockYou data breach. RockYou was a company that let you buy widgets for your WordPress site. Unfortunately, they were hacked. Somewhat more unfortunately, they'd chosen to store their email addresses and passwords in plain text. And someone decided it would be a great idea to make a text file of all of these and put it on the internet for everyone to see.

And this was really great if you were an attacker, right? Rather than having to brute force passwords, which was difficult because sites would often rate limit, now you could just look up what someone's password was and use it on another service. In September 2010 and February 2011, Google launched support for 2FA for Google Apps and Gmail users. And it's really hard, given the timing, not to see this as a response to this breach.

But Google had two problems that the banks did not have. First of all, Google wanted to incentivize their users. They're not a bank. They don't employ these people. They can't force them to use a method of authentication that they're not happy with. And if they charge people for it, it's not very incentivizing.

But the other problem that Google had is that they're a global company. A bank might employ people and maybe a couple of different areas of the world, but generally, it's a fairly small number. Google has users all over the planet. They really can't afford to ship these devices out for free everywhere.

And so they decided to build a software version of those physical tokens, and they called it Google Authenticator. It looks a bit like this. You can still get it on the App Store. I still use it. And it's really exactly the same idea as that physical key. You have a six digit code. It changes every 30 seconds. But rather than having the secret hard-coded into a device, when you turn on 2FA you scan a QR code, and the secret gets stored on your phone. And it works pretty great.

Now, it would be really great if when you moved to a new phone that secret moved, too, the same way all your other data does. And that's how Google wrote this implementation originally. But in 2011, iCloud launched and Apple thought it be fantastic if rather than having to back your phone up to your computer, you could back it up to the Cloud. So people did.

But now you had this single point of weakness. Because if I could compromise your iCloud account, I could restore a backup of your phone to my own phone. Suddenly, I'd have access to your email account via the tokens on there. And also access to your 2FA codes. So suddenly, it's not really a second factor anymore. You have this single point which you can compromise.

And for a long time, Apple didn't support 2FA on iCloud, so we did see a lot of compromises that looked like this. You might remember back in 2014, there was a big deal about celebrities' photos leaking, and people thought iCloud had been hacked. No, it was just that people shared passwords between that service and other ones.

Similarly, when Sony was hacked by North Korea, that was a phishing attack that targeted iCloud accounts. Because again, they had so much sensitive information. But Google's other problem was that back in 2010, feature phones were much more common. Not every user had a smartphone. And so it wasn't really enough to just say, go download this app from the app store. So they did something pretty novel. They added support for SMS 2FA.

And the principle here was basically: what if, rather than having a secret stored on every phone that we use to generate a six digit code, we just come up with a random one, send it to the user, and see if they can tell us what it was? And this meant that you could receive a text message or a call to a standard landline phone, phones that had existed for decades at this point. And that was really quite novel at the time.
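(Editor's note: a minimal sketch of the server side of SMS 2FA as just described. `send_sms` is a stand-in for a real SMS gateway call, and the user name and phone number are hypothetical.)

```python
import secrets
import time

pending = {}  # user -> (code, expiry); a real service would persist this

def send_sms(phone_number: str, body: str) -> None:
    # Placeholder for a real SMS gateway.
    print(f"to {phone_number}: {body}")

def start_challenge(user: str, phone_number: str, ttl: int = 300) -> None:
    # Generate a fresh random six-digit code and text it to the user.
    code = f"{secrets.randbelow(10 ** 6):06d}"
    pending[user] = (code, time.time() + ttl)
    send_sms(phone_number, f"Your code is {code}")

def verify(user: str, submitted: str) -> bool:
    # Codes are single use (pop) and expire after the ttl.
    code, expiry = pending.pop(user, (None, 0))
    return (
        code is not None
        and time.time() < expiry
        and secrets.compare_digest(code, submitted)
    )

start_challenge("alice", "+15550100")   # hypothetical user and number
code, _ = pending["alice"]              # in reality, read from the SMS
print(verify("alice", code))            # -> True
```

Unlike TOTP, there's no shared secret to enroll: the "secret" is just a fresh random value each time, and possession of the phone number is what's being tested.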

When Google announced the service, they said it had a 15 minute SLA for SMS delivery. Can you imagine waiting 15 minutes to log into an account? Our users would be so angry at us these days. But it caught on really quickly. Facebook added support for SMS 2FA within a couple of months, as did WordPress and a bunch of other different services.

And it worked really well for the average user. Their password might be leaked, maybe they'd shared it between sites, but the attacker wouldn't control their phone. And so one of these forms of 2FA would protect them. Unfortunately, compromising accounts is a very lucrative market. As many companies that deal with fraud would tell you, it can be quite difficult to tell the difference between a good account that's suddenly gone bad and a good account that's just traveled overseas.

It's especially difficult compared to telling whether an account was malicious from the very beginning. And so attackers started to iterate. If they can make money, they'll do it. So let's take a step back and pretend that you're a high value target.

Back in 2011, this might have meant that you were a journalist reporting on a regime that doesn't have freedom of the press. Maybe you were a whistle-blower exposing government corruption. These days you're generally a lot less exciting. You just hold a lot of cryptocurrency.

There's a flaw with SMS 2FA. You've probably heard of it. And it's essentially the issue that it's not a message to your phone. It's a message to your phone number. But people lose phones and they move between telecommunication companies. And when they do so, telcos don't want them to get stuck. So if I call up your telco and I say, I'm you, here's my full name, here's my address, here's my date of birth: none of those pieces of information are necessarily difficult to come by, but often they're enough to move your number to a different SIM. Especially if you're very upset and you say that you've lost your phone and you desperately need to get back into it.

And when this happens, your SIM is going to stop working. But it's easy to chalk that up to a network issue, for a couple of hours at least. And during that time, an attacker will be able to receive all of those 2FA codes. This isn't a big problem for the average user. But if you're a high value user, if you've got a lot of cryptocurrency, or you're doing things that maybe you don't want the government to know about, this is a really big issue for you.

So companies said, OK, what if I could send an SMS but to the actual phone, rather than to the phone number? Let's take the telco out of the picture. Most of these companies already had proprietary apps at this point, so shipping support for them to be used as a method of 2FA was pretty easy. You just send a push notification with a code in it and like, ta-da, it's just like SMS.

You've got to be a little careful about iCloud, but that's really about it. There's no standard for what these apps look like, so it's a little difficult for me to show you exactly what you might see. This is what Duo looks like. It's fundamentally just a yes/no button. And you're hoping that the user is actually checking that they just logged in and that the location matches.

Some of them, like Microsoft Authenticator, add a little bit more friction. They ask you to check that the number on screen matches the one online. That's designed to make the user slow down and really think about what they're doing. And others function much the same way as SMS. It's just a push notification.

But these forms of 2FA, they suffer from a problem that's plagued the internet since the beginning of its existence, which is that humans are fallible. It's really difficult to tell the difference between a legitimate site and a fake one. The screenshot on the left is the real GitHub login flow, and the one on the right is a phishing page. They're actually both hosted on domains that are owned by GitHub. The one on the right is a GitHub Pages site.

And there's nothing about 2FA, in any of the forms we've talked about so far, that prevents those credentials from being phished by an attacker who can use them quickly. There are actually tools online that let you do this trivially. There's a piece of software called Evilginx that will proxy all connections to your site off to the real site and collect credentials for you.

Or if you're not as technically sophisticated, you can pay someone on the dark web to do it for you. Google and UC San Diego released a paper a couple of months ago that showed it generally only costs $1 to $500 to do this for any individual user. So then we came up with this final form of 2FA. We were like, OK, what if we could remove the human from the loop?

We wanted it to be the case that even if a user tried to give their credentials away, they couldn't do so. From a user experience perspective, the way these keys work is you have a device on your keychain, you plug it into your laptop, or maybe it lives in your laptop the whole time. When you want to authenticate, you tap the gold button. That's it. It's very simple. It's very easy to describe.

On the back end, it works very similarly to those time based keys that we saw. There's a secret. It's hard coded on the key. But rather than having a clock, it's got a nonce: the service sends down some random value and the token responds to it. And most importantly, one of the other inputs to this algorithm is the domain of the page you're currently on.

This means that even if I convince you that my site is the real one, and I manage to get you to tap your key, you're going to get a response that isn't valid on the legitimate servers, because the domain names are going to differ. And that was really cool. This is actually the only form of 2FA we know of that's immune to this phishing attack.
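(Editor's note: a simplified sketch of the domain binding just described. Real U2F/WebAuthn keys use public-key signatures rather than a shared HMAC secret, but the crucial point is the same: the origin reported by the browser, not by the user, is mixed into the response, so a response minted on a phishing domain never verifies on the real one.)

```python
import hashlib
import hmac
import secrets

key_secret = secrets.token_bytes(32)  # stands in for the secret baked into the key

def key_respond(challenge: bytes, origin: str) -> bytes:
    # The browser supplies the origin automatically; the user can't be
    # tricked into "typing the code" on some other site.
    return hmac.new(key_secret, challenge + origin.encode(), hashlib.sha256).digest()

challenge = secrets.token_bytes(16)   # the server-chosen nonce

real = key_respond(challenge, "https://github.com")
phished = key_respond(challenge, "https://github.io")  # attacker's page

# The server verifies against its own origin, so only the real response passes.
expected = hmac.new(
    key_secret, challenge + b"https://github.com", hashlib.sha256
).digest()
print(hmac.compare_digest(real, expected))     # -> True
print(hmac.compare_digest(phished, expected))  # -> False
```

The nonce stops replay of an old response, and the origin binding stops phishing; neither the user nor the attacker ever handles a code that could be relayed.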

So OK, those are the forms of 2FA that are currently on the market. But you might notice something pretty interesting, which is that we've actually come full circle when it comes to cost. A decade ago we tried really hard to make it that you didn't need specialized hardware for 2FA. We've improved some of the security guarantees in the interim. We've said now it's a little bit harder to be phished, and maybe it's a bit cheaper. It's only $20 for one of those Yubico YubiKeys.

But it's still not feasible for the vast majority of users. A, the cost is generally prohibitive for most people. But also this is a really US centric view of the world. I actually live in Sydney. Australia wasn't up on the slides earlier, which was a little disappointing to me, but I've tried to buy security keys in Australia. There are actually only two authorized resellers of Yubico keys in Sydney. Yubico is obviously not the only company that sells these keys, but it is generally the largest.

And I've tried to buy them. And it's so difficult that I actually decided to wait until I came to the US this trip to do so. Even if I wanted my friends and family to be using these keys, I would have to hand carry them back from the US for them. And even if I did that, they would suffer from the same problems that they had when they were using a time based one. If they lose that token, they're kind of screwed. There's no way to get back into their account short of talking to the service provider.

If you lose your phone, you can call someone to be let back in. That's a security issue. But it's also a usability solution for most users. And this is something we really don't talk about enough when we consider 2FA. We don't talk about the usability of it for the user, rather than for the attacker.

If an attacker were to target our site and take it down such that users couldn't log in, there'd be a [INAUDIBLE] so it would be considered a security issue. But when we talk about 2FA and we lock our own users out, we think that's totally acceptable, which I think is a little bit weird. And I think this really gets to the human part of 2FA, which is that it's not worth securing a service that you don't actually use because it's too inconvenient or it's too painful.

And you just get really frustrated by 2FA a lot of the time. I've seen it happen with my own friends and family, and with users of my company's software. So let's talk about the most common reason that people get locked out. And when I say locked out, I mean, the only way for you to get back into your account is to call the service provider or email the service provider and do a complete identity verification flow in order to get back in.

So we're going to see those same four methods we saw before. And I've already talked to you about how maybe you could lose your token. If you lose this time based token, you're going to have to talk to maybe the IT team or some other way of getting back in. But remember that iCloud issue where if you backed your codes up to iCloud your account could get hacked? Google solved that. They just stopped backing those codes up to the Cloud, which, yeah, fixes the security issue. But now when September rolls around and you get a new iPhone and you restore from a backup, if you wipe your old one and give it to a friend, how are you going to get back into your accounts in say 30 days time?

For SMS we don't have that problem, but instead we've got the issue that maybe you're traveling internationally and you don't have international roaming. Again, how are you supposed to get into your account if you can't receive a text message? For those mobile apps, again, you've got the same like lost or maybe it's September, if they've configured things correctly. Or maybe they've decided that they don't care about the iCloud compromise and so maybe September isn't an issue.

And finally, again, security keys have the lost-token problem. They've also got the problem that it's a USB device. It's a little bit difficult to plug into a phone. Yubico is actually working on this. They've got a key in beta that allows you to plug it into an iPhone. But neither Android nor iOS natively supports the protocol that these security keys speak. And that means that you get very different security guarantees on a phone compared to what you get in a browser.

That means that there's really not a perfect solution here, right? We don't have a method of 2FA that's somehow usable but also secure. And that's why there's this huge proliferation of different types that you can use. We really need a way of thinking about this, right? Like what's the right trade-off for us and our friends and maybe our users?

And it can seem a little abstract at times. So I want to make this a bit more concrete. Let's imagine a friend pays you back at work and you need to deposit the money at a bank. Would you feel comfortable carrying that cash to the bank? For me the answer is yes. I work in a fairly safe area of town. The bank is also in a fairly safe area of town. Carrying a small amount of cash, I'd feel pretty comfortable about that.

I have a friend who really likes skydiving, though, and he sells a lot of parachutes on Craigslist. This means he ends up with thousands of dollars of cash at a time that he needs to take from work to the bank. Does that change the way you think about the problem, if you're carrying a lot of money? Probably.

If you're carrying the money in a literal bag with like a dollar sign written on the side of it, probably also changes the way you think about the problem. Just like if you were doing it at night, or if you were going through an area that wasn't safe and maybe was known for muggings. This process of thinking about the different risks that exist to us is something we do every single day of our lives. But in security, we give it a really fancy name because we're like that.

We call it threat modeling. So we say what are the different threats that exist, and how do we mitigate them or try and work around them? If you're talking about this case where you're walking to the bank, the thing my friend tries to do is to ask someone to go along with him. If there's two people, you've got a friend with you, it's probably less likely that you'd be mugged. If you were carrying a lot of cash, I don't know, tens of thousands, hundreds of thousands of dollars, maybe you pay for a bodyguard to come along with you and make sure you stay safe.

But in order to think about this, you really need to know one thing, which is, are you a target? This is something we need to be thinking about when we figure out what forms of 2FA are appropriate. Think about the area of the internet you're operating in. Is it an area that's known for its safety? If you're dealing with cryptocurrencies, you're literally carrying around a bag of cash with a dollar sign written on it. Maybe you should consider a more secure form of 2FA.

But really for any account that has a lot of PII or monetary value, you should be thinking about whether or not you're OK with the trade-offs you're making here. If your account isn't super valuable, you probably don't need to have 2FA on it, or you can maybe choose a weaker form of 2FA. And that works great for us, right? We're educated. We can make the right choices here. We know what's appropriate.

But our users and our friends and our family, they're just not going to have the same grasp of the issues. They're not going to understand how they could get locked out. They're not going to understand what sorts of security guarantees they're getting. And the problem gets more complex as a result of this. Because it's not the case that we control everything that these people do. We can make suggestions, but they have their own minds and they'll make their own choices.

And if you have less exposure to those security issues, you're not going to think it's something that could ever affect you. It's like the difference between someone who knows that the bank is in an area known for muggings and someone who doesn't. If your friend asks you, should I walk from work to the bank with a lot of money? And you know that it's an area known for muggings, you need to think about what's a realistic piece of advice that they'd actually be able to action.

And it needs to take into account whether or not the request is reasonable. Yeah, you could tell your friend that they should hire a bodyguard, and that would probably be the safest way for them to get from work to the bank with the money intact. But they're not going to do it, so there's really not that much of a point telling them about it, unless you think that it's something that they might actually reasonably choose to do. Instead, it's probably better to say something that they might actually be able to action. Suggest that they take a friend with them.

This is the concept of harm reduction. It's thinking about how we can limit the amount of damage that's done in a given situation, even if the solution isn't perfect. Yes, it's true that SMS 2FA can be hacked. But it's the take-a-friend option of the 2FA world. It's much better than having no 2FA at all. And unless you think you're going to reasonably be able to convince someone to use another form of 2FA, you should be recommending SMS 2FA.

So some key takeaways. If you're running a cryptocurrency site, absolutely recommend that your users use security keys or a time based 2FA method. They're much more secure than SMS 2FA in a lot of cases. But if you're running a site with fairly little PII, where it wouldn't matter much if a user's account was compromised, probably recommend SMS 2FA first, because it's much easier for users to access and it's much more likely that they'll be able to turn it on.

And keep in mind the sort of harm reduction principle, right? Any form of 2FA is going to protect them against that password reuse attack much better than no 2FA ever would. So let's go back to that story I told at the beginning. I said that we had a very awkward conversation. My friend apologized, as you would hope. But I was still really angry with him for a long time. We did end up staying friends. We both went on to work in the security industry. He still breaks things. I chose to fix them instead.

But I think most importantly, after he hacked my Facebook account, I started using a password manager and I started turning on 2FA for as many accounts as I could, which is why I have so much experience with all the different things that are available. And I like to think that hopefully, even though I had a very negative experience here, that it means I can help people out in the future and try and stop this from happening to other people in ways that are a lot more damaging.

We don't have time for any questions today, but I've put my Twitter handle up on the screen and most of my email address. And you're obviously welcome to come up and ask me questions afterwards. Otherwise, I hope you're all feeling nice and awake and you're all excited about the rest of the conference talks to come. Thank you.