Learn Docker in a Month of Lunches
Bret Fisher & Elton Stoneman on why Docker still matters in 2025 and what's new in Learn Docker in a Month of Lunches 2nd ed. 45% off with code GOTOstoneman3.
Transcript
Intro
Bret Fisher: Elton, tell us who you are, a bit of your background, and why you're the person people should be learning Docker from. You have an origin story with Docker, so I want to make sure people know about that. I am Bret.
Elton Stoneman: My name's Elton. I'm a freelance consultant and trainer — I've been doing that for a long time, freelancing before Docker was even invented. I've always worked in the Microsoft stack, in the .NET world. Way back in 2014, two things were happening at the same time: there was this new stuff around containers, which was very Linux-focused, and at the same time .NET was moving to be cross-platform. I got really interested in the idea that you could take the same stack you'd been using for a decade or more and package it up into a tiny little thing and run it on Linux anywhere — cheaply, scalably, fast, all that sort of stuff.
I was doing a lot of going to conferences and Docker meetups. Remember the days when there was a Docker meetup in every town in the world? I was doing a lot of that with the London team, and then a job came up at Docker starting in developer advocacy — helping people learn about Docker in the early days, what it could do for you. I've always been a consultant at heart, so there was also pre-sales work and working with companies to explain what Docker was all about. I was with Docker for three years, and my role evolved — I ended up working heavily with partners like Microsoft and AWS, seeing what they had coming down the line and trying in some way to marry that with what Docker had coming. It was a great time. When I left Docker, I went back to consulting, and I've been using Docker, Kubernetes, and the cloud every day for other companies ever since.
Bret Fisher: I think that's how we connected — we've been on very similar career paths, both freelance consultants, both with a Windows background before the Linux world took over. There's a small band of us all connected by the world of Linux and Windows merging and how open source just dominated. Docker and Kubernetes are now over ten years old, and yet this book is still absolutely necessary, because it's amazing how many people have yet to discover the wonders of containers.
Bret Fisher: What was the big reason to update Learn Docker in a Month of Lunches? What's changed in the last five years of containers?
Elton Stoneman: The first edition came out in 2020, and this second edition just came out in September 2025. On a philosophical level, Docker is now kind of ubiquitous — which is what everyone was trying to achieve when I worked there. People know what it is. We've gotten over the challenge of explaining it with those terrible diagrams that tried to show the difference between a VM and a container. We can kind of skip past that now.
But the main thing is that the landscape around Docker has changed so much. It's really all Kubernetes now. Every cloud gives you a hosted Kubernetes platform, so you can spin up your application on your local machine using Docker Desktop, run it in Kubernetes on your machine, get it all working, and push it to a managed service in the cloud that scales to thousands of CPU cores within minutes — using the same application model, the same packages in your Docker images, the same model in Kubernetes.
A lot of that was missing from the first edition, because I was still talking about Docker Swarm and running Docker Compose on-premise, and people have largely moved away from that now. There's also been movement back from the cloud when bills get out of hand — I've had a lot of cloud migration projects that turned into the reverse when costs spiralled — and being able to spin up a Kubernetes cluster in your local data center and run the same apps with very few modifications is really powerful. So a lot of the new content is about being able to run the same things anywhere.
Why Docker Fundamentals Still Matter
Bret Fisher: Docker and Kubernetes are not new — they're over a decade old — and yet we still need people coming into the industry understanding the fundamentals. You and I both have that background of managing Windows servers manually, setting up OSes by actually putting DVDs in server drives and installing that way. I used to call myself an IT pro back when I was managing Windows servers. When we first met, the reason neither of us could stop talking about it and teaching it was because of how transformational the idea was: taking apps your teams are making and packaging them into a series of tarballs that are universally downloadable and accessible from every OS, with every dependency correctly installed and configured, in a way that ensures the host environment can never break them.
We both lived in the world of runbooks and manual installation steps, where a specific version of Python had to be installed, and if you updated the OS that might affect the Python version, and suddenly everything was broken. Then this seemingly innocent little open source project showed up and we saw the potential: separating an application and its dependencies from the host operating system so that the two, in theory, should never meet. That's still the goal. And we're over a decade in, and nothing has shown up to improve on it. It's still the best we've got for a consistent industry standard to package, deploy, and update software.
Elton Stoneman: The main thing that's changed is the ability to do this anywhere. Those core components — being able to say in your Dockerfile that you need this version of Python, install this version of this dependency, and have that baked in and portable — have been around since the beginning. But the "anywhere" part has become more and more true. The cloud container platforms let you bring your image, whatever it is, spin up a container in near-zero time, and pay very little because you only pay for the CPU you actually use.
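A minimal sketch of what that "baked in and portable" idea looks like in a Dockerfile, with the runtime and dependencies pinned to exact versions (the app layout and file names here are hypothetical):

```dockerfile
# Pin the base image so every build starts from the same runtime
FROM python:3.12-slim

WORKDIR /app

# Pinned dependency versions live in requirements.txt, so the image
# is reproducible wherever it's built
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

Copying the dependency list before the application code also plays well with the build cache: the install layer is only rebuilt when the dependencies actually change.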
And Kubernetes takes your Docker images and lets you build a whole application model around them — ten of these, fifteen of those, they communicate like this, that component doesn't talk to that one — and that model is also portable. You can take it to any cloud, to a different container platform, or bring it back on-prem.
The explosion in portability across different CPU architectures has also been huge. When ARM was around early on, most people had an Intel machine and were deploying to Intel machines in the cloud — maybe playing with a Raspberry Pi at home. I was on the Docker partner team working with the ARM people, secretly thinking I wasn't quite sure where it was all going. Then all of a sudden AWS offered an ARM processor in the cloud at 60% less cost. And a couple of years later, all the Apple hardware moved to ARM. Being able to have one set of source code, one application manifest, build it for different architectures and operating systems, and ship it wherever you like — that's been the real breakthrough.
Bret Fisher: And still today, across Mac, Windows, and Linux — especially across different Linux distributions with their different package managers — the best we've got for a universal install-and-run utility is still the docker run command. There are all these other little tools out there that are OS-specific or distribution-specific or language-specific, but Docker has remained the single way to run any cross-platform binary on any of these operating systems with one command. The goal for anyone I'm advising now is to avoid SSH as much as possible — let the platform handle the machines, and let Docker ensure the app runs identically wherever it lands.
The Gap Between Docker Beginners and Experts
Bret Fisher: When you think about the difference between a Docker beginner who can do a basic build and run and thinks that's enough, versus a true Docker expert — what separates them?
Elton Stoneman: When Manning asked me to refresh the book, I thought about who a Docker expert is in 2025. I keep hearing the same story from people who come into my Kubernetes courses, where some Docker knowledge is a prerequisite. First thing we do is talk about everyone's experience, and I always hear: "Yes, I know Docker — I joined a project, I had to learn the Dockerfile and the Compose file, and that was it. I didn't have time to stop and actually understand what those things really are. I just had to pick up the basics and run with it." That's what happens on a project. But the problem is you miss out on all the important things.
You miss the multi-platform support, which is still new to a lot of people. You miss optimization — structuring your Dockerfiles so they work closely with the build cache, making your images build quickly, making them small, tagging them in a way that's usable for your consumers. Security comes in too: because the Docker image format is structured and open, there are tools that can inspect your image and tell you if you have a version of a component that's been compromised. Tools like Trivy have been around for a long time, and a lot of that capability is now built into Docker itself. The idea that you can say "here's my application package, tell me what's vulnerable" — and it will give you a list of CVEs and tell you which ones you actually need to fix and what versions to update to — is a big deal.
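In practice those scans are one-liners. Both tools below are real, but they need to be installed and pointed at an image you've built; the image name is hypothetical:

```shell
# Scan a local image for known CVEs with Trivy
trivy image myapp:1.0

# Docker's built-in scanning (Docker Scout) from the CLI
docker scout cves myapp:1.0
```

Either command lists the vulnerable components, the CVE identifiers, and the versions that fix them.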
And the last one, which is boring but essential, is configuration management. You want a single image that runs everywhere — the exact same binaries in production as in your test environment, so any bugs that come up aren't caused by version differences. You inject different configurations from your platform at runtime. You can get a Dockerfile from Stack Overflow and get Claude to write you a Compose file, but you're going to miss all those finer details.
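One common shape for that pattern in a Compose file: a single image, with environment-specific settings injected at runtime rather than baked in. The service and variable names here are hypothetical:

```yaml
# compose.yaml (hypothetical service and variable names)
services:
  web:
    image: myapp:1.0          # the exact same image in every environment
    environment:
      # Values come from the shell environment or an .env file at runtime,
      # never from the image itself
      LOG_LEVEL: ${LOG_LEVEL:-info}
      DB_HOST: ${DB_HOST:-localhost}
```

The same binaries run in test and production; only the injected values change.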
Bret Fisher: That's a three-step program for container adoption. The config point is really important — it's easy to do it wrong without realising it. Making a separate image for each environment with settings hard-coded in is not ideal, and you might not even know you're doing it. A great companion resource for understanding the why behind all of this is the Twelve-Factor App methodology at 12factor.net — it goes back a long way in the lore of this stuff, but it helps explain why Docker exists and why these patterns matter.
Elton Stoneman: I do reference Twelve-Factor in the book. Look at the date of that document — people were laying out these principles a long time ago, but it's obviously still completely true.
In my consulting work, I still have one foot in the developer world, but I also work with a lot of vendor applications shipped in containers, and their images are terrible. It's not unusual to have a vendor image that's eight gigabytes, and then the next version — just a tiny point release — is another different eight gigabytes, with none of the layer sharing you'd expect. The structural understanding of how Docker images are built is something a lot of people and organisations are still missing.
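The usual fix for the eight-gigabyte problem is a multi-stage build: compile with the full toolchain in one stage, then ship only the output on a minimal base. A sketch for a Go application, with hypothetical paths and names:

```dockerfile
# Stage 1: build with the full toolchain (large image, discarded afterwards)
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Stage 2: ship only the compiled binary on a minimal base image
FROM alpine:3.20
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

The final image carries the binary and nothing else, and point releases only change the small top layer, so consumers share everything underneath.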
Bret Fisher: Why have you thrown an entire operating system into this one image? Back in the very early days, when even the people at the forefront of the technology were still figuring it out, I ran into an image once that was a Java app — and Elasticsearch was also installed inside the same image. Elasticsearch is not a small application, and the Java app wasn't small either. The combined image was bigger than the server operating system we were putting it on. Not ideal. Split those things out.
Docker Compose, Multi-Platform Images, and What's in the New Edition
Bret Fisher: Can you quickly explain for those still learning — what's the difference between a multi-platform Docker image and a regular Docker image, and why would you say one over the other?
Elton Stoneman: When I was writing the book, I wanted everything to just work on every machine — Windows laptop, Raspberry Pi, whatever. But that's not really how computers work: you compile an application for a specific CPU architecture, translating your code into something the CPU can understand. Something built to run on Windows on Intel won't run on Linux or on an ARM chip. The same is true with Docker.
What Docker lets you do is build a multi-platform image — Nginx is the classic example. When you do docker run nginx, that name is really an umbrella. Underneath it there's a Windows/Intel package, a Linux/Intel package, a Linux/ARM package, and so on. When the runtime — whether that's the command line, Compose, or Kubernetes — is about to run a container, it knows the OS and CPU architecture of the machine, and it pulls down the correct variant automatically. The multi-platform image is an umbrella that contains those separate builds. It's up to the publisher of the image to make that happen, but BuildKit makes it straightforward to produce, and the consumer never has to think about it. They just run the same command everywhere and get the right thing. That's really powerful when you're moving to the cloud and you want to use those ARM instances that are 40% cheaper — you just run the same software without changing any code, and often without changing your Dockerfile either.
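Producing that umbrella is a single BuildKit command. The registry and image name below are hypothetical, and the commands need a Docker daemon with buildx available:

```shell
# Build one image name covering Intel and ARM Linux in one command,
# pushing the multi-platform manifest to a registry
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:1.0 \
  --push .

# Consumers never see the variants; the runtime picks the right one
docker run registry.example.com/myapp:1.0
```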
Bret Fisher: Understanding that, and understanding how those manifests connect to the layers of a Docker image — there's a kind of tree structure of files sitting on a web server in the cloud, which is really all Docker Hub is — that's part of what separates a beginner from an expert. BuildKit, the open-source library that's now the standard way to build Docker images, made multi-platform builds much easier and enabled building for different architectures even in parallel. We can now design all of this so that developers can just type docker run, docker compose up, or kubectl apply and know they're getting exactly the right image for their platform, whether that's Linux ARM or Linux Intel.
Elton Stoneman: Yes — and the new edition covers CI/CD too, with GitHub Actions as the example. The idea is that your Dockerfile and Compose file are the foundation, so a developer can run the same build command on their laptop and the output will be identical to what comes out of GitHub Actions or Jenkins or whatever you're using. Portability not just at runtime, but at build time too.
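A minimal sketch of that idea in GitHub Actions: the workflow just runs the same docker build a developer runs on their laptop, so the outputs match. The workflow and image names are hypothetical:

```yaml
# .github/workflows/build.yml (hypothetical names)
name: build
on: [push]
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The same command a developer runs locally
      - run: docker build -t myapp:${{ github.sha }} .
```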
The book is also structured so you can pick up any chapter you like. It builds from zero if you're completely new to Docker, but if you're already comfortable with the basics you can jump straight in. There are 12 chapters — each one is designed to be something you can work through in a lunch hour. There's some theory, but not too much; it's mostly practical exercises, so you learn through doing. For this edition I've gone through and made sure all of those exercises work across Windows, Mac, Intel, and ARM — everything just works.
It really tries to give you an authentic journey — from getting things running on your machine all the way through to your options for deploying to production on different cloud platforms. It covers Kubernetes in depth, but also the cases where you don't have to go straight to Kubernetes. I love Kubernetes, but when I first saw it coming from Docker Swarm I thought "whoa, this is very good but just so complicated." The more you learn it, the more you see that the complexity is really just your gateway to the power of the tool. But it doesn't fit every situation, so the book tries to help you understand where each option makes sense.
The Road Ahead for Containers
Bret Fisher: Docker Compose has also seen major updates over the last five years. We went from docker-compose to docker compose, and for a while people were typing one and secretly getting the other without even knowing it. Is there an update in the book about Compose?
Elton Stoneman: Yes, there are several chapters on Compose. The main thing with Compose is that it's a great way to have a cut-down version of a complicated distributed application running locally. You can run things at scale, make sure the stateless parts work correctly — but it's also a great build tool. Even if you're not going to run Compose in production because you're using Kubernetes, Compose is a great way to say "here are all the components, here's how we build them together." A single docker compose build command builds them all, pushes them all to your registry, and gives you that consistency: the Dockerfile defines each component, the Compose file explains how the whole application fits together.
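The build side of that looks something like this, with one Compose file tying each component's Dockerfile together (service names and registry are hypothetical):

```yaml
# compose.yaml: each service points at its own Dockerfile,
# so one command builds every component of the application
services:
  api:
    image: registry.example.com/shop/api:1.0
    build: ./api
  frontend:
    image: registry.example.com/shop/frontend:1.0
    build: ./frontend
```

Running `docker compose build` builds all of them, and `docker compose push` publishes them to the registry in one go.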
I did have to go through and remove all the hyphens and make sure every command worked the way it used to. The core structure — describing your applications, networking, storage, and configuration — hasn't changed. It's a really nice way of modeling your app locally, and a great stepping stone to Kubernetes, Google Cloud Run, or whatever you're using in production. Some cloud platforms even support the Compose format directly — you can take your Compose file, send it up, and get a working starting point. The learning path is still: Dockerfile, then Compose, then whatever you're doing in production.
Bret Fisher: For those learning Docker now — is this still a smart career investment? The signs seem strong to me. All our content around Docker still gets consumed, and the community keeps growing. Whether you're an engineer or an operator, it's a good thing to learn.
Elton Stoneman: Absolutely. Because Docker is such a fundamental building block, when new patterns come and go, Docker is still there underneath. When serverless was the big thing — which people talk about a lot less now — Docker was powering it. Docker Desktop has embraced AI in ways you'd never have imagined. If you plug in your AI model as a separate component, you've got your MCP servers that you're going to run in Docker now. There's really no end to that runway.
I remember telling my wife after writing the first Docker book that I wanted to write something similar about Kubernetes. I looked at the amount of content I'd have to write, all the effort involved, and I thought: this is a lot, but probably five years of people still using Kubernetes. That was ten years ago, and it's stronger than ever.
Whatever comes next is going to have to support the Kubernetes API. Whatever the next platform looks like, you'll be able to take your Kubernetes models and your Docker images and run them there — these ecosystems bring everyone with them. Learning this stuff is going to be good for you until we all get replaced by Google.
Bret Fisher: Yeah — might not be that long.
Bret Fisher: Use the discount code GOTOstoneman3 for 45% off the book at Manning. You can find Elton Stoneman at his blog, blog.sixeyed.com, or on LinkedIn — he's stepped away from most other social media and says his life is better for it. Thanks so much for being here, Elton.
Elton Stoneman: Thanks, Bret. Take care.
About the speakers
Elton Stoneman (author)
Bret Fisher (expert)
Docker Captain, DevOps Trainer and Consultant