
State of the Art of Container Security

Charles Humble & Adrian Mouat discuss container security: why distroless images matter, how Chainguard builds from source for zero CVEs, and lessons from the XZ Utils and Shai-Hulud attacks.


About the experts

Charles Humble (interviewer)

Freelance Techie, Podcaster, Editor, Author & Consultant

Adrian Mouat (expert)

Author of 'Using Docker'


Charles Humble: Hello and welcome to this new mini series for GOTO called State of the Art. This series will explore generative AI, but also other emerging trends in things like platform engineering, new practices, and new programming languages. In this particular episode: security. I'm Charles Humble. I have 30 years of experience as a programmer, architect, and CTO, and I'm currently working mainly as a freelance consultant and podcaster. I'm joined by Adrian Mouat. Adrian is a DevRel engineer at Chainguard, a company that makes secure container images. He also wrote Using Docker, which was one of the very first books on Docker published by O'Reilly. He was previously chief scientist at Container Solutions, where he led their research projects, and that's also where he and I met for the first time. Welcome to the show. Thank you for doing this.

Adrian Mouat: Thank you very much for having me.

Early Adoption of Containers

Charles Humble: I was struck when I was putting this together that you were very early in spotting the usefulness of containers. What was it that enabled you to see that? What was the thing that you saw in terms of the usefulness of them?

Adrian Mouat: I do have to thank Jamie Dobson, who was one of the founders of Container Solutions, because it was actually him that was first playing with it and showed me. The first thing I used it for was Python. I don't know if you've used much Python, but there's this thing called virtualenv, and it actually has a similar role to containers. It gives you a virtualized environment where you can have different dependencies. But I always hated virtualenv for various reasons. I immediately started using Docker for my Python environments and found it much cleaner. The big thing was it's very easy to give to somebody else. I can put my stuff into a Docker container and somebody else could build and use it, and I wouldn't have to worry about their version of Python and so on.
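
To make that concrete, here is a minimal sketch of swapping a virtualenv for a container. The base image tag, requirements file, and entry point are placeholders rather than anything from the conversation:

```bash
# Sketch: a disposable Python environment as a container instead of a virtualenv.
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
EOF

# Anyone can now build and run it without worrying about their local Python setup.
docker build -t my-python-app .
docker run --rm my-python-app
```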

The Problem of Outdated Packages

Charles Humble: It's been an issue with containers for a long time that the packages can be really slow to update. Can you talk about that? Why does that happen and why is it a problem?

Adrian Mouat: That's a very good question. When Docker started, they created something called Docker Hub where they stored all their images and anybody could upload images. It quite quickly turned out that you generally want the same sort of images to build on top of. We had these base images for the Linux distributions—Debian base, Ubuntu base, Alpine base, and so on.

The problem is, for a base image to get updated, we need the packages in the operating system to be updated, which means Debian, Alpine, etc. have to do some work to update the packages and release them. Then we need Docker or whoever is responsible for building the base image to release a new version of the base image with those updated packages. Until both those things happen, you can have a base image that's out of date.

I'd actually go further and say that it's really just the way we work has changed. We used to have these long-lived virtual machines that we installed and tried to keep up to date over time. That's no longer the case. Now we have smaller containers that we update all the time—well, they're replaced all the time. So it's a different world, but most of our Linux distributions are still set up for the older world of VMs, if not actual hardware.

Charles Humble: There's a similar parallel with Java and the JVM where the JVM was designed to be this very long-running process, super slow to start up, and then it sits there forever. Then microservices happened and the whole nature of how code gets deployed is different. We had to retool the JVM to make it work in that space.

Adrian Mouat: I should explain why it's a problem. You end up with outdated software in your containers or VMs. In some cases that will be a security risk—you'll have old versions of software with potential vulnerabilities in them. Also, you're not using new features.

Charles Humble: It's very easy to forget that when you're working with traditional enterprise and you've got lots of applications running, just having operations that update things in real time can be a real problem for a lot of companies.

Adrian Mouat: It's a different way of working. I think it's very important for organizations to get their heads around all the systems and processes around this idea of keeping up to date as much as they can. But we're definitely not there at the minute.

Understanding Scanners and Vulnerabilities

Charles Humble: I wanted to talk about scanners because I think it's important context for this discussion. Lots of companies have invested in tools like Snyk or Docker scan. What I've seen is there's an issue where you run the scanner and it finds a whole list of vulnerabilities. But for a lot of organizations, it's not really possible to address all of these. I'm not trying to criticize the tools—I think they do a useful job—but I wondered if you agree with that assertion and if you could contextualize it a bit for us.

Adrian Mouat: That's absolutely a problem. That's why Chainguard is a successful company. What a scanner will do is it will create a map or index of all the software that it can find in that image. The better scanners will get better indexes and find more things in an image. At the base level, what they will do is go and ask the package manager—your APT or your YUM or your APK—about all this software installed via the package manager and the versions.

That's not everything. Better scanners will also look at binaries. With binaries, you can also investigate which versions of libraries are linked in there and so on. There are things like NPM and PyPI where you can also ask those systems for a list of packages. So you already get the idea that some scanners are better than others.

Once they've got a list of all the software, they will compare it to a database of known vulnerabilities. The primary one is NVD. GitHub also has one—GitHub Security Advisories—and then all the distributions have their own advisories as well. By cross-referencing those two, you get a list of the vulnerabilities that are potentially in that image.

The problem is, especially with the older scanners, all they did was dump out a list and you're left thinking, 'What do I do with this?' Some of the better scanners will tell you, 'This vulnerability was added in this layer of a Docker image,' so you can see if it's in a layer that you have control over. If it's in your own application code, then you can go and fix it. That's on you to update to a library or use something else. But if it's in the base image that you pulled from the Docker Hub and you've got the latest version, what can you do? There's almost nothing.
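
As a rough illustration of that workflow, this is what running a couple of open source scanners against a public base image looks like (the image tag is just an example):

```bash
# Sketch: index the software in an image and cross-reference it against
# vulnerability databases (NVD, distro advisories, GitHub Security Advisories).
grype debian:bookworm

# Trivy does a similar job and can also show which layer introduced a finding.
trivy image debian:bookworm
```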

I'm sure it still goes on—some companies have security departments that if there's a vulnerability, especially a high or critical one, they will send it back to the developers and say, 'Tell me why this vulnerability is there.' They have to investigate it and come up with a reason as to why it's there and also why it doesn't affect the application.

One of the most annoying things about CVEs and vulnerabilities is a majority of them will not affect the application. Very few vulnerabilities are going to be exploitable in the wild. So we've created this industry with all this work, but it's a very low signal-to-noise ratio.

Charles Humble: Given that, what's the relevance of reducing image size in that context?

Adrian Mouat: One of the best things you can do is reduce the size of images. This gets into ideas of distroless that we'll come onto. Basically, the less software you have in an image, the less software there is to have vulnerabilities and CVEs, and also the less there is to keep updated.

Google Distroless and the Birth of Chainguard

Charles Humble: Can you talk about the Google Distroless project and how that led to the creation of Chainguard?

Adrian Mouat: Let me explain what distroless is first. I'm not entirely sure how many people were involved in the creation of Google Distroless, but certainly Dan Lorenc and Matt Moore, who are the CEO and CTO of Chainguard, were involved while still working at Google.

The idea was: let's take the Debian base image and just pull everything out of it that we don't need for the typical Linux application. This is quite an interesting thought. Naively you might think, 'Well, I program in Go—I'll just have a Go binary that's statically linked and put that into an image. I don't need anything. I'll use a scratch image with absolutely nothing in it.'

That is true for some applications. But what you find is for very many applications, it's not enough. The first thing you'll come across is TLS. Nearly all applications need to make an HTTPS call, and for that you're going to need CA certificates. So now you have to add certificates into your image. Then you start finding things like you have to have a temp directory because you can make calls in Linux libraries that require a temp directory. Then you start realizing you also need stuff under /etc and a couple of other small things, and time zones come in as well.

They made an image with those things in it. That's enough to run the vast majority of Linux applications. By having this really small—a couple of megabytes—image with absolutely minimal content, there are no CVEs to be found in it as long as we keep it up to date. That was fantastic. They also created slimmed-down versions of Python and Java images at the same time.
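
A typical way to use an image like that is a multi-stage build: compile a static binary, then copy only that binary onto the distroless base, which carries the certificates, /tmp, /etc basics, and time zone data Adrian mentions. This is only a sketch; the module path and binary name are placeholders:

```bash
# Sketch: statically compiled Go binary on a distroless base with no shell or package manager.
cat > Dockerfile <<'EOF'
FROM golang:1.23 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server ./cmd/server

# The :nonroot variant runs as an unprivileged user by default.
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /out/server /server
ENTRYPOINT ["/server"]
EOF

docker build -t my-server .
```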

We went from requiring a 50-megabyte or 200-megabyte image for running an application to requiring a few megabytes, which was an absolutely huge difference, not just in terms of security and CVEs, but also in what we're transferring and being quick to start up and so on.

At Chainguard, Dan Lorenc and Matt Moore took that to the next level. We said, 'Let's create these distroless-based images for all the software out there.' That's the premise of Chainguard.

Charles Humble: If I run a scanner on a Chainguard image, what should I see?

Adrian Mouat: Hopefully you'll see zero CVEs. There are a couple of caveats there. There will be—I'm not sure what you could find today—but you will be able to find some images with one or two CVEs that we're either currently investigating or there's no known fix for.

One thing to be aware of is if you scan an old image—let's say I pull an image from Chainguard's registry today, wait a month, and then scan it—you will find CVEs because all the software in that is now slightly out of date, and there will be one or two CVEs found somewhere for something that's been published in the meantime.

The interesting thing is, if you compare that to another image that had very few CVEs but was a larger image, what you'd find is that the larger image will have many more CVEs, even if they started with the same number or very close to zero. If you left it for a month or two, because there's more software in there, you'll definitely see a higher CVE count compared to the smaller image later on.

Charles Humble: I've seen occasionally I'll get a CVE that's detected by the scanner, but then Chainguard says it's essentially not affected. What's going on there?

Adrian Mouat: Most scanners should now pick up the Chainguard security feed, so it shouldn't actually tell you that—it should just be removed from the list. But if there's a CVE you're interested in, on images.chainguard.dev, you'll find security advisories for all the images. There's a separate security page as well. You can look up any CVE and you'll see the status of it—not affected, fixed, that kind of thing.

'Not affected' means, for example, one thing that happens relatively commonly is there's a Windows version of the software and obviously containers are for the Linux version. So there might be a CVE reported against the Windows version, but the database isn't clear enough, and the scanner also marks it against the Linux version. We will issue a 'not affected' saying, 'We're not running this version.'

Wolfi and Building from Source

Charles Humble: You also have Wolfi. What is Wolfi and why do you have it?

Adrian Mouat: That's the underlying operating system for Chainguard. The way Chainguard works is we build everything from source. There are arguments around this that you can go and look at from competitors, but honestly, I think this is the only real way to fix software and produce secure packages—to build them from source.

We create our own Linux distribution to build absolutely everything from source down to the glibc level. We're not trying to patch an existing binary or something. We are actually fixing things at the source level and publishing them. That also allows us to be very granular. We can figure out how to split a package up so that you're not pulling in modules that you don't need or whatever, because package size is sometimes a problem. I believe for Debian, they typically have larger packages, whereas our APK packages would typically be a lot smaller.

To solve this problem of CVEs, we have to go back to the really low levels of Linux and create our own Linux distribution and our own tools to turn those Wolfi APKs into container images. We're not using Docker build. We're using a tool called apko that's open source, and you can try it out. That will assemble APKs into a container image, which is pretty nice. It's purely because we have to build everything from source to overcome these security things, which is why we created our own Linux distribution called Wolfi.
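
For a feel of what that looks like in practice, here is a minimal apko configuration assembled into an image. The package names, repository URLs, and tag are illustrative rather than a recommended setup:

```bash
# Sketch: declaring an image as a set of Wolfi APKs and assembling it with apko.
cat > image.yaml <<'EOF'
contents:
  repositories:
    - https://packages.wolfi.dev/os
  keyring:
    - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
  packages:
    - wolfi-base
    - ca-certificates-bundle
entrypoint:
  command: /bin/sh -l
EOF

# apko builds the OCI image directly from the APKs; no Dockerfile involved.
apko build image.yaml wolfi-demo:latest wolfi-demo.tar
```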

Charles Humble: If I'm running a Chainguard image and I need to update it, what's the process for doing that?

Adrian Mouat: You start again. You can, you know—we have Chainguard VMs now, which is a follow-on product. We have virtual machine images that are the same approach as container images. But what we would really advise is not to run apk to update a package in place, but to replace the image with a new version. It's just a level of immutability.

We used to do this—I'm sure you remember configuration management like Puppet and Chef and so on. That was a mess. I remember using Ansible and Puppet and Chef and it was just pain. The second you used Docker, you could just replace a container instead of trying to make sure 13 containers that should be doing the same thing all had the same versions of packages; they were never quite the same and you'd never work out why. Docker just got rid of that problem. That configuration management problem really just got wiped out. You replace it, and then you're absolutely sure that all these containers are the same. That was a momentous change. That's what you do. You don't update. You replace.
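
In practice, "replace, don't update" looks roughly like the sketch below: rebuild against a freshly pulled base image and roll the new image out, rather than patching packages inside running containers. The registry, tag, and deployment names are placeholders:

```bash
# Sketch: rebuild with a freshly pulled base image, then replace the running containers.
docker build --pull -t registry.example.com/myapp:v2 .
docker push registry.example.com/myapp:v2

# Roll the replacement out (Kubernetes shown here as one example).
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2
kubectl rollout status deployment/myapp
```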

Charles Humble: Your recommendation is generally just start again each time rather than trying to update stuff.

Adrian Mouat: Absolutely. There might be some exceptions like data or config. But there's also an idea—the company was founded by Google engineers—so at Google they apparently have this idea of a build horizon. Say you've got a microservice running at Google and it's been running for weeks, maybe two or three weeks. The code hasn't changed, so there's not really any need in terms of features or compatibility to replace that image or container or service.

But they will pull that service down and replace it. The reason being that although that code hasn't changed, there may well be libraries inside it—there may well be utilities or operating system stuff there that's slightly out of date. So they replace it and put a new one up. That keeps things secure. You're not running containers on a year-old image and suddenly finding out you have critical vulnerabilities inside.

Charles Humble: It's quite funny to me because it's a very different way from how a person might traditionally think about this.

Adrian Mouat: We're still going through this change for the industry. I've never worked at Google, but I do get the impression that people like Google—possibly some of the other big four or five tech companies—really were ahead of the curve. One thing you certainly see is companies being born from people that have left those companies, and they missed the internal tooling, so they built a company that's a recreation of the internal tooling. Prometheus might be an example of that.

Charles Humble: The whole observability thing—Charity Majors started Honeycomb. She came from Facebook and was basically creating Honeycomb to replicate the tooling she had available from an observability point of view. We see this a lot. That's a really healthy thing, I think.

Software Bill of Materials (SBOM) and Attestations

Charles Humble: Can you talk a bit about SBOMs? Chainguard can automatically create an SBOM. Can you explain how that happens? How does the SBOM get created?

Adrian Mouat: Again, we have an advantage because we build everything from source. We know what's going into images. I should say an SBOM is a Software Bill of Materials. It basically describes everything inside your image and the versions of it and so on. It came out before containers and can be used to describe any sort of software.

If you go and look at an SBOM, there are too many standards—SPDX and CycloneDX—and it's a big document. It's not the nicest thing to read. They get very verbose, but at its fundamental level, all they're doing is trying to give you a list of all the software inside the image and all its versions.

That does get more complicated and difficult when you get down to transitive dependencies. If I install a package, but that package installs other stuff, can you pick that up? At Chainguard, we create SBOMs for all our packages, and that gets put into the final image. That also means that we're creating our SBOMs at build time, whereas you might find that a lot of other people build them after the fact.

If you look at tools like Syft—Syft and Grype are both from Anchore; Grype is the vulnerability scanner we talked about, and it uses Syft under the hood. What Grype will actually do is run Syft to create what's effectively an SBOM. I think it has an internal format that can be converted to an actual SBOM format, but it will create effectively the index of all the software in the image, which gets passed to the scanner. But that's created after the fact, after the software is built. We have the advantage of being able to create this when we build the software.
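
The after-the-fact approach Adrian describes looks roughly like this: generate an SBOM with Syft, then scan that SBOM with Grype. The image name and output file are placeholders:

```bash
# Sketch: create an SPDX SBOM from an existing image, then scan the SBOM for CVEs.
syft debian:bookworm -o spdx-json > sbom.spdx.json
grype sbom:./sbom.spdx.json
```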

Charles Humble: I was trying to think of an analogy. It's a bit like the food industry or something like that, where if you buy a ready meal, you have a list of ingredients, or if you're buying a piece of meat, you should have a label that tracks it back to the farm where it started and all of that. It's the same idea at some level—what are all the things that make this up?

Adrian Mouat: Exactly. In the food industry, if you realize this ingredient is going out and it's poisonous or something like that, you need to figure out where it's all been and who's all got that. It's the same thing, or the idea is similar in the software industry. Can we figure out all the instances where this CVE or this version of the software that's potentially vulnerable exists?

We're not really there yet, to be quite honest. If you go to any enterprise, companies are not going to be able to tell you all the versions of all the software they run because there's so much and it's in all different formats, different places. Hopefully we are getting there. That's certainly something the industry is working on.

But until we get there, SBOMs, I have to say, have limited use. Okay, I have a list of the software in this container. Does that actually help me? Maybe, maybe not. In the future, with more tooling, and when we have greater coverage of our platforms and systems, and when you have better tooling that sits on top of SBOMs, then I think it will be useful.

Charles Humble: There is some skepticism about SBOMs in the industry. I think you're alluding to it a bit. It's only useful if everybody is doing it.

Adrian Mouat: Effectively. The other thing is you get a lot of third-party software. If I'm using third-party software in my platform or systems, I also need to know all the versions of the software in there; otherwise I could be vulnerable to something and not realize it. That's not to say it isn't useful. I shouldn't imply it isn't useful. It certainly can be, but I think its usefulness is limited until we get better coverage.

Charles Humble: We're talking about SBOMs. There's this term that we use, which is attestation. Can you talk about that? What are attestations? How do they work?

Adrian Mouat: Attestations are more than SBOMs and so on. An SBOM is an attestation. It's basically the idea that I can assert and prove that something happened or something is something. We have attestations attached to all our container images. The signature itself, I think, is an example of an attestation. Our images are signed, and that signature is an attestation to the image.

But you can also have attestations like: this image was built on this build platform, this image passed these tests on this date, for example, and things like that. Attestations can be very useful. They allow you to prove things about artifacts, where they came from. Provenance is another big word that we allude to quite a lot in security.

Charles Humble: Maybe to make this a bit more concrete. Everyone knows about Log4j. This is a fairly old story at this point, but it was a zero-day exploit that was discovered in Log4j, a very popular Java logging library. There were a whole bunch of different versions—I think it was 2.14.1 or something—but all of those versions were affected. It was patched pretty quickly by the Log4j team. But there was an issue, and that issue is Log4j was everywhere. It was the de facto standard for logging in Java, so every Java app seemed to be running it. How would having SBOMs help with something like that situation?

Adrian Mouat: Potentially. If you think of a smart TV, Log4j is in a whole bunch of things. Smart TVs are very difficult to update. There are still vulnerable IoT devices with Log4j exploits out there. Potentially, yes, it could definitely help. Imagine you could see the SBOM for your TV and see if it's vulnerable. I guess you wouldn't do that yourself. You'd need all these TVs to phone home and automatically update or whatever. But it's getting to things like that. Can I tell if my devices are vulnerable as well as artifacts?

Defense in Depth and Best Practices

Charles Humble: I think where it gets easier is this idea of defense in depth, that you can't rely on one thing. Maybe you can talk about that advice and how that would work.

Adrian Mouat: Let's talk about defense in depth. That's the idea that we don't rely on any one layer for security. Twenty years ago, when I was working for the university, I remember being involved in a security incident. One of our sites got hacked for a European project. The hosting company—they shall remain nameless—but they were a partner, and they denied responsibility for a long, long time. They said they were basically secure because they had a firewall. The firewall only permitted port 80. That was true. The hackers hadn't gone through SSH.

But they were running an ancient version of PHP, so the attackers immediately got a shell through PHP. It was quite an interesting attack, and it took us a while to notice. We only noticed because if you Googled for our sites, the Google index would show you all this spam—sell cars and more nefarious things. We're like, 'Where's this coming from?' But when you went to the site, it was fine.

What they'd done is they put a lot of redirects in. So if you made a Google request, you got one set of results. If you were a normal visitor, you got the normal site. That's why it took a little while to notice. But it was fascinating.

So defense in depth is not being that daft and saying, 'Well, I've got a firewall. We're secure.' It's about saying, 'Okay, you've got a firewall, but we still need to keep things up to date. We still want to keep things contained so that if somebody breaks into this part, they can't get into that part of our system.' Never just relying on one level of security.

Our CTO likes to talk about the Swiss cheese model, which is very similar. You think of a slice of Swiss cheese with the holes in it, and those holes are vulnerabilities. Any slice of Swiss cheese is quite vulnerable. But if you stack up a whole bunch of slices of Swiss cheese from different cuts of cheese, then you quickly get to a point where there's no single hole through all the cheese. That's a very similar analogy.

I guess SBOMs are actually less about security in the sense that defense in depth is—they're more about being able to figure out what has happened later. SBOMs certainly have a place in security, but it's less in the preventing an attack and more in the figuring out if you have had something exploitable at some point.

Charles Humble: Obviously having a minimal container is also helpful. But what are some of the other things that you would recommend people consider? What are some of the standard things that you've seen?

Adrian Mouat: Immutability is actually a big one. We talked about it earlier—we no longer just update things and prefer to replace things. Immutability can be great. You can say things like, 'This file system shouldn't be writable,' and that can limit what an attacker can do.

There are also basic things that people still don't do, like signing container images, which I used to complain about a lot. Signing the images can help you verify that you are getting the correct image from the supplier, that it's not been tampered with. Nobody's changed it unexpectedly. But people still don't do that even though we have tools like Sigstore. Go and check out Sigstore, which will let you easily sign an image. You don't have to worry about private keys as long as you have an OIDC identity. If you're in GitHub, you can use the token in the GitHub action and almost automatically sign images.
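
A keyless signing flow with cosign, roughly as Adrian describes it, might look like the sketch below. The image reference, identity, and issuer values are assumptions; in a GitHub Action the identity would be the workflow and the issuer GitHub's OIDC endpoint:

```bash
# Sketch: keyless signing with Sigstore's cosign, using an OIDC identity instead of a long-lived key.
cosign sign --yes registry.example.com/myapp:v2

# Verification pins the identity and issuer the signature must have come from.
cosign verify \
  --certificate-identity-regexp 'https://github.com/example/myrepo/.*' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  registry.example.com/myapp:v2
```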

There are things like not running as root. Definitely do not run your containers—the processes in your containers—as a root user. That's really the first thing a lot of us look at, and then capabilities and RBAC would be good ones. Definitely the size of images. The smaller the image, the fewer tools there are for an attacker to exploit.
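
Several of those controls can be applied at run time. The sketch below shows one possible combination of a non-root user, a read-only root filesystem, and dropped capabilities; the UID and image name are placeholders:

```bash
# Sketch: run-time hardening, using an unprivileged user, immutable root filesystem, and no capabilities.
docker run --rm \
  --user 65532:65532 \
  --read-only \
  --tmpfs /tmp \
  --cap-drop=ALL \
  my-server
```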

One thing we talk about sometimes is 'living off the land' attacks. That's where an attacker gets into a VM or container and they use tools that already exist—things like sed or awk or curl—to further exploit and get higher levels of privilege. If you don't have those tools in the image, the attackers can't use them to further their privileges or whatever. There's plenty more to go on with.

Charles Humble: Oh, yeah.

Adrian Mouat: Another good one worth mentioning because it's quite low-hanging fruit is TruffleHog. You get tools like TruffleHog that you can run on your images, your containers, or whatever, and it will tell you if there are any secrets in there. Sometimes we accidentally check in a secret to Git or we add a secret into a container image and we didn't mean to, or we haven't realized we've done it. That will find those and warn you. But attackers can also use those. We actually saw that in the Shai-Hulud attack recently.
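
Running TruffleHog against an image or a repository is a one-liner; the targets below are placeholders:

```bash
# Sketch: look for leaked secrets in a built image and in a Git repository.
trufflehog docker --image registry.example.com/myapp:v2
trufflehog git https://github.com/example/myrepo
```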

Charles Humble: Can you talk a bit about that? Because that's really interesting.

Adrian Mouat: Shai-Hulud was—there's actually a second wave right now that was announced today. I don't know when this podcast will go out, but we're talking about late November 2025. Shai-Hulud was so named because the sandworms in Dune were called Shai-Hulud.

This was an attack against the NPM ecosystem. What it did was they managed to get the first victim through some sort of spear-phishing attack. They got their credentials and changed a package and uploaded the malicious version of the package to NPM. Then anybody that downloaded that version of the package also got compromised. If the victim had access to any NPM packages, those were uploaded with malicious versions too. That's why it was a worm, because it self-propagated. Every time it found a victim, it would infect more people. That's the definition of a computer worm, which is where the Shai-Hulud name came from.

One of the interesting things was it ran things like TruffleHog to try and find secrets inside your laptop. I think it also affected GitHub projects. It would search for these secrets so the attackers could use them later and further compromise people.

Charles Humble: That's great. It was really, really interesting. I'm amazed how often people leak credentials and retired credentials and that sort of thing. Stuff we would think would be redacted, but it's not.

Adrian Mouat: I wish I'd said that when I was talking about security earlier. I would say the number one thing that we talk about at Chainguard is have short-lived credentials or avoid credentials if possible, because a lot of the time you can use things like OIDC now. You don't need to retain credentials. But if you do, don't have them a year long—have them like an hour long. That's quite a big one.

The XZ Utils Backdoor

Charles Humble: I can't let you go without talking about XZ Utils, or however we pronounce it, because it's such an extraordinary story. For people who aren't familiar, there's a really good episode of Oxide and Friends, which is another podcast, featuring the person who actually found it. I've got a link to that in the show notes because it's superb.

But it was an incredibly nefarious attack in the CI/CD pipeline. The very short version of it is this was a campaign to insert a backdoor into the XZ Utils project. It was about three years of effort, from November 2021 to February 2024. The user going by the name of Jia Tan gained access by basically getting to a position of trust within the project. I think we don't know if it's one person or several people under one identity, but either way, somebody or several somebodies were involved.

It's an extraordinary story. It's an astonishing amount of effort. But I guess my question for you would be: would any of the things that Chainguard does have helped in that kind of scenario?

Adrian Mouat: Yes and no. There are a few very interesting things there. The amount of work they went to—I think it's very hard to think of an open source project that would be immune to that, to be perfectly honest, especially a small one, because they did their due diligence, right? This person had been around for a while. They gained the trust by creating PRs that were reasonable. In that sense, that's not something Chainguard can help with.

There are a few interesting things. Part of the exploit was only in the uploaded binary, the released file—it wasn't in the source code. Some of it was in tests and stuff, but part of the exploit was only in the release process script. I think it was only in the tarball. It wasn't in the actual GitHub repo. Can't remember if it used GitHub, but it wasn't in the actual source code.

One thing we try to do at Chainguard is build everything from source. That also goes for all the Shai-Hulud attacks, right? Because in the Shai-Hulud attacks, they did an npm publish or whatever the command is to upload a malicious version of the package. That package no longer matches the version in GitHub.

One thing we do at Chainguard with container images is build everything from source and try to avoid using tarballs. We try and do a Git checkout to make sure we get the latest source code and use that and avoid using the tarball, which means we wouldn't have got that specific part of the malicious code.

We actually have Chainguard Libraries now. We've just launched recently and we're working on NPM, which I believe is still in beta—maybe by the time this podcast goes out it won't be. I'm not sure. But in that, what we do is we build everything from source. A Shai-Hulud-style attack won't affect us because the malicious version only exists on NPM and not in the GitHub source. The attacker would need to affect the GitHub source before we'd pull it in.

I think that's the main place where we would protect against that attack. I should say, although we protect against the NPM case, XZ Utils is a C library, and we don't offer a libraries product for C.

Charles Humble: Got it. Thank you. If listeners want to learn more about Wolfi and Chainguard images and all of that, where should they go? What learning resources do you have?

Adrian Mouat: Obviously there's the Chainguard dev site. What you'll find there is edu.chainguard.dev. That's the academy site. I've been heavily involved there, along with the team I work with. We've created a whole bunch of tutorials and help on using Chainguard images, but also distroless images and open source tools. There's a lot there, not just for customers, but also new users and people that want to find out more about distroless images and security.

Charles Humble: That's brilliant. Again, thank you very much indeed for taking the time to chat to me on this episode of GOTO's State of the Art.

Adrian Mouat: Thank you very much for having me, Charles.