The Current State of Cyber Security
It’s almost a given that you or your company will be hacked one day. How quickly and how well you react is what makes the difference. Eleanor Saitta explains the ins and outs of an attack and what you should have in place to weather it successfully.
Intro
Aino Vonge Corry: Hello, and welcome to this GOTO Amsterdam 2022 Unscripted interview. Today I have Eleanor Saitta with me, and I'm looking very much forward to this interview.
Eleanor Saitta: Thank you. It's nice to be here.
Aino Vonge Corry: It's great to have you here. So, I was listening to your talk about security and what you can do in an organization. You had a lot of tremendously good advice, small things and big things, and I really liked that you could take away what you needed for your organization. I thought that was very helpful.
Eleanor Saitta: I never know who's in the room. So, I try to give a bit of range, so everybody gets something.
Aino Vonge Corry: Definitely. And I think it really worked, at least for me.
Eleanor Saitta: Thank you.
Immutable and ephemeral security
Aino Vonge Corry: So, one of the things that I thought about a lot was that thing about immutable and ephemeral because I hadn't really thought about that. I'm not really into security and such, but that was one thing that I hadn't thought about before. And I thought that was brilliant. Could you put some words on what it means, and why you want it?
Eleanor Saitta: So basically, the idea with having your individual service workers, or really any given VM or container running in your ecosystem, be both immutable and ephemeral... well, immutability means a number of things. One is that if an attacker gets onto that system, they can't change it, right? As the attacker, I can't reconfigure existing services to expose more vulnerabilities. I can't download attack tools and write them to disk. So, in this case, immutable means no writable disk, and ideally no other writable config stores either. If you need to change anything in that container, you redeploy the container. It's fully baked. It's never going to change.
Aino Vonge Corry: So, what you're saying is that even if you get access, you can't change anything, you can't run anything?
Eleanor Saitta: Yes. I can't change the images it's running; I have to load anything I want into memory during that process. And ideally, you're running a container that only has one executable in it. You don't have any other tools sitting around. This is kind of another version of least privilege, right? If it's not code that's required to do the things the container is supposed to be doing, it shouldn't be accessible to the attacker, because that additional code is a privilege in that context: a tool I can reach out, pull in, and use.
Aino Vonge Corry: Yes.
Eleanor Saitta: If I want to do it, the code has either got to be already in the image, or I've got to download it and patch the binary on the fly, that kind of thing. And you've made the attacker's job much, much more difficult, right? I mean, you've selected out any attacker who isn't up for doing actual in-memory exploitation. Depending on the language and the environment, that's going to be more or less difficult. So, that's one half of it.
Then the other half of it is ephemeral. It depends on the startup cost for that container, but in an ideal world, it would take, I don't know, 5 or 10 microseconds to spawn a new instance of a container, right? That kind of pushes us more towards the lambda world, where every single function call is a new container lifetime. That's the ideal, because it means: okay, great, a call comes in, and I compromise that container. Well, there's a timer running now, and in 15 minutes the timer for that function call's max lifetime is going to expire, and then the container goes away. If I'm doing reconnaissance, I have to exfiltrate all my data immediately. I can't gather data, keep it around, and act locally on it, because I don't have a data persistence point, because it's immutable. I have to keep re-compromising.
So, compromises are generally relatively noisy. Maybe you get lucky and it's actually one shot, one packet in, and you've got your code running. That's pretty rare, right? Normally, if you're going to get a local compromise, it's maybe statistical, maybe it's 20 packets, maybe it's 50,000 packets, and then, okay, now I need to start injecting some tools. All of this adds a bunch of traffic noise, which adds more logging noise, which makes you more likely to be detected. And in this setup, in order to maintain persistence, you have to compromise a container a minute, right? Now you're generating a ton of noise. I'm probably just going to start seeing that on traffic graphs and be like, okay, wait, why is this one host sending so much? This doesn't add up.
Recommended talk: Security Styles • Eleanor Saitta • GOTO 2022
Aino Vonge Corry: You're going to get suspicious.
Eleanor Saitta: Yes, exactly. Or just you're more likely to just notice it all.
Aino Vonge Corry: Because it's weird traffic.
Eleanor Saitta: It's weird traffic. It's not expected behavior. And so, like that combination puts you in a place where it's much harder for an adversary to do anything if they get in, and it's much harder for them to stay in. Because that's the goal of the adversary, right? They don't want to just get an initial compromise, they have goals that they're trying to accomplish in the world, whether that's exfiltrating data, encrypting a bunch of stuff, and demanding a ransom, all of these kinds of things. And all of those take time. If I need to go encrypt all of your databases and delete all of your backups, those commands take time to run. The database has to physically read all the tables and these aren't necessarily fast processes. So, I need to stick around for a while.
Aino Vonge Corry: That's interesting. So, you're actually using the physicality of the world so to speak?
Eleanor Saitta: Yes, exactly.
Aino Vonge Corry: To protect yourself.
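The immutable-plus-ephemeral setup Saitta describes maps fairly directly onto standard container runtime flags. As a rough sketch, assuming Docker and the coreutils `timeout` command, and with a purely illustrative image name and lifetime cap, one could assemble the invocation like this:

```python
# Sketch: build a `docker run` command line for a container that is
# immutable (read-only root filesystem, no extra capabilities) and
# ephemeral (auto-removed on exit, killed after a hard max lifetime).
# The image name "billing-svc:1.4.2" and the 15-minute cap are examples.

def hardened_run(image: str, max_lifetime_s: int = 900) -> list[str]:
    return [
        "timeout", str(max_lifetime_s),        # hard lifetime cap (coreutils)
        "docker", "run",
        "--read-only",                         # immutable: no writable disk
        "--tmpfs", "/tmp:rw,noexec,size=16m",  # scratch space, nothing executable
        "--cap-drop", "ALL",                   # least privilege at the kernel level
        "--security-opt", "no-new-privileges",
        "--rm",                                # ephemeral: deleted when it exits
        image,
    ]

cmd = hardened_run("billing-svc:1.4.2")
```

In a real deployment the orchestrator (Kubernetes, a lambda platform, etc.) would own these settings rather than a helper script, but the two properties, no writable state and a bounded lifetime, are the same.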
The least capacity principle
Eleanor Saitta: There's another related idea from the talk: least capacity. We're familiar with least privilege, where you don't let people do things they don't need to do. Least capacity says: let's say you have a database cluster that auto-scales, and I find a nice convenient SQL injection, and you've got a lot of data, and I want all of it. Great, your database cluster will literally auto-scale to let me exfiltrate all of your data. Whereas if you say, okay, yes, it auto-scales, but the auto-scaler isn't based on database load, it's based on queries coming in at the front-end that are validated in application context, then I'm going to start overloading that database. Which is great, because it's going to fail, and it's going to annoy somebody, and somebody is going to look at it and figure out: wait, why is this database overloaded? Let's look at the queries. I don't recognize this.
Aino Vonge Corry: Something weird is going on.
Eleanor Saitta: Excess capacity in a system is literally also a form of privilege, because it's something that allows the attacker to do a thing, but you don't need it for anything that the application is intended to do.
Aino Vonge Corry: Yes, I like that, really.
Eleanor Saitta: Again, it's using the materiality of the system against them.
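The least-capacity idea can be made concrete: drive the database tier's scale target from validated front-end request volume, with a hard ceiling, so injected queries cause a visible overload instead of quietly earning the attacker more capacity. A minimal sketch, where the per-replica throughput and the cap are made-up illustrative numbers:

```python
import math

# Sketch of "least capacity": size the database tier from *validated*
# front-end request volume, not raw database load, and cap it. An attacker
# pumping injected queries then hits the ceiling and overloads the database,
# which fails loudly, rather than auto-scaling to serve the exfiltration.
# requests_per_replica and max_replicas are illustrative numbers.

def target_db_replicas(valid_requests_per_s: float,
                       requests_per_replica: float = 500.0,
                       max_replicas: int = 4) -> int:
    wanted = math.ceil(valid_requests_per_s / requests_per_replica)
    return max(1, min(wanted, max_replicas))
```

The design choice is that the scaling signal lives outside the attacker's reach: they can inflate database load all they like, but only legitimate, application-validated traffic moves the target.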
Aino Vonge Corry: Exactly. I was thinking when you gave that talk about ephemeral, I was thinking about, it's a bit like when you want to find a ripe avocado that hasn't gone bad yet. You have that small window.
Eleanor Saitta: Yes. It's about 13 seconds.
Aino Vonge Corry: Yes. Exactly. It's the same with avocado.
Eleanor Saitta: In Finland, where I live, it's about minus three days, where they almost always rot before they ripen, but that's a specifically Finnish problem.
Aino Vonge Corry: But Finland is good in other ways?
Eleanor Saitta: Yes.
A scary security anecdote
Aino Vonge Corry: Yeah. Great. So, I was thinking maybe you could tell us a very scary anecdote about security so that people will need to listen to your talk to avoid getting in that situation.
Eleanor Saitta: So, one of the things I found working in the security world for a long time, a lesson I learned specifically doing training for NGOs and news organizations being targeted by nation states, is that when you scare people with a story, the brain stops forming long-term memories. Until they get to a point where they're no longer scared, which is basically where they feel like they have agency in the world again, they don't actually process much of what you're telling them. I do fractional CSO work for a bunch of startups, and there are times when I have to explain the risk picture of a company to the board, to the rest of the C-suite, that kind of thing, to let them know that we're going to need to start spending some money, and I know they don't want to, but here's why. I try to be realistic in those conversations, because we do need to talk about the actual risk structure, but I also try to make sure that I'm not actually scaring them: here's the bad thing, and here's the agency that you have to stop the bad thing from happening.
There are lots of terrifying stories and statistics out there. The numbers may have changed since I last looked, but the average time to detect a compromise is around 210 days, and most companies find out from someone else: hey, you're sending us weird traffic, that kind of thing. There's plenty of bad news all over the place. Especially if you're running a more traditional IT environment and you see this ransomware epidemic, it's easy to get a bit fatalistic and think, well, there isn't really anything I can do. And to a certain extent, that's true. Everybody gets owned eventually. However, that's not the end of the story, because what happens after you get compromised is what determines whether or not this is a problem, right? How quickly can you recover? How quickly can you limit access? How quickly do you detect all of these kinds of things? There's a lot you can do both to make it harder to be compromised in the first place, and also to make sure that you can recover as a resilient team, get the attacker out, limit the damage, and detect what happened. Both halves of that pie are important. And that's the thing worth telling people: yes, you're absolutely going to get owned eventually. But it's not the end of the world, assuming you do the work now.
And I will also say that if you are a startup that's still in a seed round, there are some choices you should make very, very early on that can lower your overall risk, but a bunch of this stuff is not your problem yet. You need to make sure that you have product-market fit and all that kind of stuff. I do meet founders who are at a very early stage, we're two developers, and one of them is staying up sleepless nights thinking about security. I'm like, no, this is not your job right now. Go build a company first, you know.
Aino Vonge Corry: Yes. They end up in analysis paralysis about it, and they don't get anything done.
Recommended talk: AWS Cookbook: Recipes for Success on AWS • John Culkin, Mike Zazon & Kesha Williams • GOTO 2022
Eleanor Saitta: They're worried about it, but they don't have any actions to take.
Aino Vonge Corry: Yes.
Eleanor Saitta: This is where my recommendations come in. Ditch your Windows boxes, ditch Office, and, for the love of God, write in a type-safe language with automatic memory management. Don't write in C, you know...
Aino Vonge Corry: Help yourself.
Eleanor Saitta: As much as I hate to say it, don't write in Python, because type safety is your friend. You want to enforce type safety, or at least use type hinting anyway. There are some choices you can make now so that when you get to the point where you need to deal with the problem for real, you're going to be in a better place. The caveat, of course, is that even as a very small company you are still responsible for the risks that you put your users into, right? If you're taking real user data, if you're running in the world, you need to make sure that you're not overselling yourself and that you're not putting those actual users at risk. It's this kind of flip where user risk management matters as soon as you have real users, while risk to the company matters much less then. Risk to the company only matters once there's a company worth putting at risk.
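Her point about type hinting is cheap to act on: annotate the boundaries of your code and let a checker such as mypy refuse raw values where a domain type is expected. A small illustrative sketch, with hypothetical names:

```python
from dataclasses import dataclass

# Sketch: a tiny domain type plus annotations lets a checker like mypy
# reject callers that pass a raw int or str, instead of the mistake
# surfacing at runtime deep in the data layer. Names are illustrative.

@dataclass(frozen=True)  # frozen: the value can't be mutated after creation
class UserId:
    value: int

def load_profile(user: UserId) -> dict[str, str]:
    return {"id": str(user.value)}

profile = load_profile(UserId(7))  # a checker would flag load_profile(7)
```

The hints change nothing at runtime on their own; the payoff comes from running the checker in CI so that whole classes of mix-ups never ship.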
Aino Vonge Corry: But in the beginning, worry about the users and their data.
Eleanor Saitta: Worry about the users, don't worry about anything else. And then later on, we'll go back and fix the problems, you know, because you're always gonna have some tech debt.
Aino Vonge Corry: I think that's a good way to end this interview, worry about the users.
Eleanor Saitta: Yes.
Aino Vonge Corry: I think that's a very empathetic way to think about it. And thank you very much for...
Eleanor Saitta: Thank you. This has been fun.
Aino Vonge Corry: ...joining me in this interview. And I hope everybody will go online and watch your talk because as you said, everybody will get something out of it. I definitely did.
Eleanor Saitta: I certainly hope so.