Kubernetes at the Edge
Hannah Foxwell talks Kubernetes at the Edge with Charles Humble - from farm drones to stroke care AI, sustainability, and why the tech industry must do better.
Transcript
What Is Edge Computing - And Why Does It Matter?
Hannah Foxwell: Welcome to this episode of the GOTO Book Club. My name is Hannah Foxwell and I'm going to be talking to Charles Humble. Charles speaks and advises on how to build software better. He's passionate about sustainability and ethics and is a specialist in cloud computing, remote working, and diversity and inclusion. His mission is to inspire the next generation of developers. He's also a podcast host — you probably know Charles already as the host of the GOTO Podcast. But for those who don't, Charles, please introduce yourself.
Charles Humble: I've been interested in computers since the 1970s, when I was a small boy still wearing short trousers. I've been involved in the industry professionally for over 30 years, during which I've done pretty much everything you can do. I started in desktop support and made my way up to programmer, senior programmer, architect, and CTO.
In 2018 or so, I switched to writing about software more than writing software. I did that first as chief editor at InfoQ and then at a company called Container Solutions. Now I write and talk about technology for various publications — I write for InfoQ, I speak a lot at conferences, and I run a training and consulting business mainly around green software, but also increasingly around the kind of skills needed for a long-term career in tech. Those are often what people call soft skills, though I hate that term.
I also write the AI for the Rest of Us newsletter, which everyone should subscribe to and share. And in my copious spare time, I'm a musician and composer — half of an ambient band called Two Fish. We're more than halfway through what will probably be our second album, which I'm very excited about. Computing and music are two of the things I enjoy most.
Hannah Foxwell: We can't wait for the next Two Fish album! It's an absolute pleasure. Charles and I write a newsletter together every week, and it was on a podcast — where Charles was actually interviewing me — that we first met. So it's lovely to switch sides today. Thank you.
Hannah Foxwell: Today we're talking about your book, Kubernetes at the Edge. If you're listening to this, you probably have an idea about what Kubernetes is and what we mean by edge. But having read the book, I realise the edge can be many different things in different contexts. Charles, when you talk about edge computing in the book, what do you mean by it?
Charles Humble: It's actually a really annoying term because it means so many different things, and we're very bad at naming things in this industry. In the book, when we talk about edge, we really mean a location — a place where computing and storage happens. We're mostly focusing on what we might call far edge: infrastructure placed in immediate proximity to devices and users. That's things like the base of a cell tower or a point-of-sale system in a retail or restaurant context.
We also talk about device edge — sensors and controllers with specialist functions, like industrial sensors or medical imaging scanners. We don't talk so much about the near edge, which is small localised data centres, although that's an area I'm doing quite a bit of research on currently in the context of AI and federated learning. And we talk about edge in the sense of users accessing a network via a VPN or similar, which is edge in the network topology sense, as distinct from edge in the physical location sense.
Hannah Foxwell: When you asked me to interview you about this book, my immediate thought went back to a project I was asked to help scope when I was working at VMware — putting Kubernetes on a boat in the middle of the ocean. Could we put Kubernetes in the little server room on an aircraft carrier? It came with all sorts of interesting challenges, like not being able to rely on internet connectivity. It was important that the computing systems on these vessels could work independently and offline. That was my first taste of the very different concerns and challenges you encounter at the edge. Have you come across any particularly surprising edge deployments?
Charles Humble: There are a huge number of interesting places where Kubernetes or edge devices are used. Funnily enough, I did have a boat story too — in a military context where they had essentially two mini server rooms physically located in different parts of the ship, so you could failover from one to the other. It's a miniature disaster recovery setup: if one "data centre" floods, you carry on operating.
Some of my favourite examples are ones you might not think of — like agriculture. I have a precision agriculture case study in the book. Tractors and combine harvesters these days are basically just huge computers on wheels, and the technology underpinning them is super interesting in terms of what it allows us to do with crop yield. There's also a lot of cool stuff in renewables — wind turbines and those kinds of things. It's a very broad category of very interesting applications.
Inside the Book: Use Cases, Vendors, and Architecture
Hannah Foxwell: The last time you were on the Book Club, you were talking to Trisha Gee about professional skills for software developers — a fairly different topic. How did you end up writing about Kubernetes at the Edge?
Charles Humble: The honest answer is that The New Stack asked me to. But the reason I said yes was actually personal. I mentioned falling in love with computers in the 1970s. My father worked for IBM, and when I was about eight or nine years old, he took me to the machine room at Greenford. They had something called a Mass Storage System — a robotic tape library. It stored tape in bulk cartridges arranged in a hexagonal formation, like a honeycomb, and a robotic arm would come along, grab a cartridge, disassemble it, and read it. Watching that as a nine-year-old was just the coolest thing I'd ever seen. I find computers and the way they interact with the real world genuinely fascinating.
I'll also be honest that at the point The New Stack asked me to write this, I was in quite a gloomy place about our industry — particularly around generative AI. Working on this book helped me realise there is a lot of really good work out there having a genuinely positive impact on humanity and the natural world. That was quite an inspiring thing to discover, and it's a big part of why I took it on.
Hannah Foxwell: One of the things I like about the book is that it's short and practical. As an engineer, you might pick it up and think: this is how I should be thinking about this problem. For folks who haven't read it, how would you describe the structure?
Charles Humble: It's a short e-book you can read in a morning if you sit down and focus. We split it into four chapters. We start with the definition — what is the edge, what is an edge device. Then we go into advantages and disadvantages. The advantages include reduced latency, fast response times, improved network bandwidth efficiency, and a lower carbon footprint. The disadvantages are largely about constraints: edge devices are getting more capable, but they still have size, weight, and power limitations, which means less CPU and GPU than you're used to. And as you mentioned with the boat context, you also have intermittent network connectivity. A lot of software is built assuming always-connected networks, and that's simply not the reality at the tactical edge.
We also cover some history, which I love — the first IoT devices, the first webcams. It nicely illustrates the fact that our industry moves forward largely because programmers are lazy. Many of our breakthroughs came from a programmer thinking: I don't want to walk over to the coffee machine to see if there's coffee — I'll just invent a webcam so I can check from my desk. That appeals to my sense of humour.
The second chapter is a set of case studies — precision agriculture, JYSK which is a furniture retailer, and several others. The third chapter is effectively an analyst report. I started with a shortlist of twelve vendors in the Kubernetes edge space and narrowed it down to seven, then did detailed comparisons of their strengths and weaknesses. If you're in a greenfield situation and wondering which vendor to choose, that chapter should help you draw up your shortlist. The final chapter is about setting you up for success: patterns you can use for edge deployments, the skills you need in the team, how to structure that team, the importance of data operations — all of that.
Hannah Foxwell: It's very practical, not theoretical. I really appreciated the section on team design and the considerations that come with it. Can you go deeper on some of those use cases — particularly why industries like agriculture or healthcare actually need Kubernetes at the edge?
Charles Humble: Agriculture is a fascinating one. The background is a human population story. The UN reckons the global population will peak somewhere around 10 or 11 billion in the 2080s — we're at about 8 billion now. As that population continues to grow, farmers are also under pressure to produce food with less reliance on fossil fuels, fewer pesticides and herbicides, and against a backdrop of climate change — more extreme weather events, degrading arable land, and increasing water scarcity in parts of the world.
Meanwhile, in most parts of the world we've been able to massively increase crop yield per hectare — but that hasn't happened in places like sub-Saharan Africa. So there's a really interesting question: can you use technology to increase yields in a smarter way, rather than just the brute-force approach of pumping vast amounts of fertiliser onto fields?
What you can do with drones and image processing, for example, is build a detailed map of a field and run an ML process that can detect weeds or identify when a crop is stressed, then apply water, nutrients, or treatment to a precisely targeted area rather than the whole field. Given climate change, the ability to farm more precisely and efficiently is going to be really, really important.
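The targeted-treatment idea can be sketched very simply: given a per-cell "stress" map of a field, treat only the cells above a threshold rather than the whole field. The grid values and threshold below are made up for illustration; a real system would derive the map from drone imagery and an ML model.

```python
# Illustrative sketch of precision agriculture: treat only the stressed
# cells of a field instead of applying inputs to the whole area.
# All data here is invented for illustration.

stress_map = [
    [0.1, 0.2, 0.9],
    [0.1, 0.8, 0.7],
    [0.0, 0.1, 0.2],
]
THRESHOLD = 0.5  # above this, a cell is considered stressed (assumption)

# Coordinates of cells that actually need water, nutrients, or treatment.
targeted = [(r, c) for r, row in enumerate(stress_map)
            for c, v in enumerate(row) if v > THRESHOLD]

total_cells = sum(len(row) for row in stress_map)
print(f"Treat {len(targeted)} of {total_cells} cells: {targeted}")
print(f"Input saved vs. blanket treatment: {1 - len(targeted)/total_cells:.0%}")
```

Even this toy version shows the economics: treating three cells instead of nine saves two thirds of the water or chemical input for that pass.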
I live in a fairly rural part of Surrey, and talking to local farmers, every year it's the same: this is the worst year we've ever had, the wettest at planting time — things we're just hearing on the ground over and over again.
Hannah Foxwell: It's also an inspiring example because the challenge isn't just whether the technology exists — it's whether it can be deployed cost-effectively. Does the return on a one or two percent yield increase justify the investment? Then it becomes about the efficiency, durability, and resilience of the technology itself. How do we actually get it into the hands of the farmers who feed us all?
Charles Humble: There's another example I love: RapidAI. They're a healthcare company focused on ischaemic strokes. What they've worked out is that they can use ML-based image processing from MRI scans to extend the treatment window for stroke victims. Historically, that window was about six hours — after which the chances of recovery dropped dramatically. RapidAI have extended that window to around 24 hours, which is phenomenal. If you have a stroke in a rural area far from a specialist hospital, that extended window is absolutely profound.
A skilled clinician can do the assessment manually, but it's very hard to scale. And it turns out that image processing and machine learning are as good or better at it. There's no obvious downside. That's the kind of application that genuinely excites me.
Hannah Foxwell: I have a perhaps less inspiring story from retail. Physical stores still exist, and the technology expectations of customers have kept rising. I started my career at Tesco and spent my first month working in a store — as I think all product people should. At the time, they still had in-store servers with green screens managing orders and stock takes. Old-fashioned even then.
Then came a wave of centralisation — those applications were rewritten in modern technologies. But a new challenge emerged: supermarkets are essentially Faraday cages. Internet access to those centralised systems across all the stores was unreliable. So there had to be a second wave of change where every store's network was upgraded, just so the technology on the shop floor could actually connect. And eventually, we started discussing whether the point-of-sale systems should be deployed as Kubernetes clusters at the edge. I'm not entirely sure where that conversation landed — but the point is, there are often multiple waves of innovation that all have to come together before you can deliver a seamless experience to the customer.
Charles Humble: I love the fact that you had to go and work in store for a month. You see so much you wouldn't otherwise see. I did a couple of projects with B&Q — like Home Depot for Americans — and one of them came directly from watching the night crew work. Head office would send down a pricing update. The store would print a load of labels. The night crew would then go and put them all up — except the labels were printed in alphabetical order, not in store location order. So they were spending hours walking back and forth across the entire store for each pricing change.
We worked out that if we could sort the labels into even a rough approximation of physical store order, we could save those people hours every single time. It turned out to be harder to solve than I expected — but the point is, we only spotted the problem because an IT person was actually in the store, watching what was happening. So many technology problems come down to: talk to your users. Go and spend a day in the life of your user.
Hannah Foxwell: Timeless advice. Sticking with retail — one of the case studies in the book is JYSK. You talk about day-two operations, which are often underestimated. What should people think about when planning an edge deployment of Kubernetes?
Charles Humble: This is a great question, and the JYSK team were very honest about it: they initially got some of it wrong. A lot of it comes down to automation — or a lack of it. In smaller edge deployments, what you often see is companies relying on physical deployment: you assemble a bespoke set of resources for each application, truck a device out to every individual site, and have an installer set it up. That creates friction every time you deploy a new application, and you end up with a disparate collection of systems you have to manage and monitor separately.
A slight step up is virtualisation — two virtualised hosts, a couple of switches, a storage area network. Better than physical truck rolls, but still 'pets rather than cattle' in the DevOps sense. You still have a lot of custom configuration.
What you really want to do is extend your infrastructure-as-code practices to your edge environments. Use a common platform. Version-control your infrastructure deployments. Streamline configuration. Reduce deployment times. Have common monitoring so everything works in a predictable way. It's fundamentally about taking the best practices from the DevOps world — continuous deployment, continuous integration, lots of automation — and extending them to edge environments so you're not relying on people physically visiting stores or wherever your edge devices are.
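The declare-desired-state approach Charles describes can be sketched as a reconciliation loop: the desired state lives in version control, and an agent at each edge site repeatedly diffs it against actual state and applies the difference. All names below are illustrative, not taken from any specific product.

```python
# Minimal sketch of GitOps-style reconciliation for an edge site:
# diff desired application versions (from version control) against
# what is actually running, and emit the converging actions.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to converge actual state on desired state."""
    actions = []
    for app, version in desired.items():
        if app not in actual:
            actions.append(("deploy", app, version))
        elif actual[app] != version:
            actions.append(("upgrade", app, version))
    for app in actual:
        if app not in desired:
            actions.append(("remove", app, None))
    return actions

# Hypothetical store: desired state from the repo, actual state from the site.
desired = {"pos-frontend": "2.4.1", "stock-sync": "1.9.0"}
actual = {"pos-frontend": "2.3.0", "legacy-report": "0.7.2"}

for action in reconcile(desired, actual):
    print(action)
```

The point of the pattern is that the same loop runs unattended at every site, so rolling out a new application is a commit, not a truck roll.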
Hannah Foxwell: Absolutely. Being able to respond to an incident in a remote location requires thinking about the right tools and processes — especially in sensitive environments where physical access isn't freely granted.
Let's talk about the nuts and bolts. You compare a number of vendors in the book. For someone embarking on an edge project, how do you compare them, and how do you decide which is right for you?
Charles Humble: It depends a lot on your individual circumstances. I list seven vendors in the book, having narrowed down from a longer list of about twelve. Those seven are: Red Hat's Device Edge, Spectro Cloud, Talos Linux (from Sidero Labs), k3s, SUSE, Scale Computing, and Roxanne.
If you're in a greenfield situation, I'd start by looking at the specialists. The two I was most impressed by were Sidero Labs and Spectro Cloud. Sidero Labs' Talos Linux is very opinionated — an operating system deliberately engineered for edge deployments. A side effect is that it has quite a steep learning curve. If you're used to Linux, you'll arrive and find there's no bash, no SSH, no systemd — none of the usual Linux conventions. Think of it as an operating system that follows the GitOps principle: you declare the desired state, and the platform figures out how to get there. The security posture is extremely strong. I'd have it on my shortlist almost regardless.
Spectro Cloud is also an extremely strong product. The one area that was slightly limited when I reviewed it was workload acceleration — GPU support. That said, RapidAI, the healthcare company we mentioned, use Spectro Cloud for all of their deployments, so it's clearly not a dealbreaker for them.
The enterprise players like Red Hat and SUSE are leveraging their existing Linux heritage. If you're already a Red Hat or SUSE shop, using their edge offerings makes a lot of sense — you know the technology, you have the purchasing relationship, and they're genuinely strong products. The slight caveat is that they don't have quite the same focus that a specialist like Sidero Labs or Spectro Cloud has, because they're doing many things of which edge is just one. But if you're already in that ecosystem, it may well be the right path. In the end, think about your workload: if you're doing a lot of machine learning, how good is the GPU support? That varies quite a bit across vendors.
Hannah Foxwell: As with all technology decisions, the answer is: it depends.
Charles Humble: Yes — every consultant's favourite answer.
Sustainability, Green Software, and Edge Computing
Hannah Foxwell: I want to switch to one of your favourite topics: sustainability. You're genuinely one of the most informed people I know on this subject — you've done talks on sustainability in software development and in machine learning. You've also written The Developer's Guide to Cloud Infrastructure Efficiency and Sustainability. In that book, you mention that network traffic accounts for roughly half of the IT industry's carbon emissions. There's an obvious relationship here — can you talk about how edge computing helps with that?
Charles Humble: More than half of all the energy the IT sector is responsible for comes from network transmission. So if you can run processing locally and not send everything up to the cloud, you reduce the amount of network traffic you're generating. That's fairly obvious.
If you think about machine learning specifically — the traditional model for training is to gather all your data, push it up to some data centre, and go from there. But there's another way: training on the device itself, using a technique called federated learning. It originally came out of privacy research, because keeping data on the device means only the user can access it. But it also has sustainability benefits, partly due to networking and partly due to cooling.
On cooling: a significant share of all the energy a data centre uses goes on cooling the hardware. A typical facility has a Power Usage Effectiveness — total facility energy divided by the energy delivered to the IT equipment — of about 1.6, meaning that for every unit of energy the computing hardware uses, a further 0.6 goes on cooling and other overhead. That's also why we tend to favour water-based cooling, because water is a more efficient coolant. But that creates a problem in water-scarce regions.
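As a quick worked example of the PUE arithmetic (PUE is defined as total facility energy divided by the energy delivered to the IT equipment; the kWh figures below are invented):

```python
# Worked example of Power Usage Effectiveness (PUE).
# PUE = total facility energy / energy delivered to IT equipment.

def pue(total_kwh: float, it_kwh: float) -> float:
    return total_kwh / it_kwh

def overhead_fraction(pue_value: float) -> float:
    """Share of total facility energy spent on cooling and other overhead."""
    return (pue_value - 1) / pue_value

# A facility drawing 1600 kWh to deliver 1000 kWh to its servers:
print(pue(1600, 1000))                        # 1.6
print(round(overhead_fraction(1.6), 3))       # 0.375
```

So at a PUE of 1.6, roughly 37.5% of the total electricity bill is cooling and other overhead, and a perfectly efficient facility would have a PUE of exactly 1.0.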
In a federated learning context, you send a foundation model down to a cohort of devices, train them locally, and then send back just the deltas — the differences. You're not shipping nearly as much data. It's slower to converge, but it can work out very well.
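The send-back-just-the-deltas idea is the core of federated averaging (FedAvg). The sketch below shrinks it to a one-parameter toy model with a made-up squared-error objective, purely to show the mechanics: each device computes a local update, only the delta leaves the device, and the server averages the deltas.

```python
# Minimal sketch of federated averaging: devices train locally on private
# data and return only weight deltas; the server averages them.
# One-dimensional toy "model" — the data and objective are invented.

def local_update(weights: float, data: list, lr: float = 0.1) -> float:
    """One gradient-descent step on a toy squared-error objective,
    returning only the delta (new - old) — the data never leaves."""
    grad = sum(2 * (weights - x) for x in data) / len(data)
    return -lr * grad

def federated_round(weights: float, device_datasets: list) -> float:
    deltas = [local_update(weights, d) for d in device_datasets]
    return weights + sum(deltas) / len(deltas)  # average the deltas

w = 0.0
devices = [[1.0, 1.2], [0.8, 1.0], [1.1, 0.9]]  # one private dataset per device
for _ in range(50):
    w = federated_round(w, devices)
print(round(w, 2))  # → 1.0, the mean of all the device data
```

The convergence is slower than pooling all the data centrally, as Charles notes, but the raw data — and most of the network traffic — stays on the devices.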
One thing I'm researching at the moment — and I'm genuinely not sure how it'll pan out — is the use of near-edge data centres: small data centres in municipal areas, perhaps next to a swimming pool. The idea is you put the data centre close to where people are, and the heat it generates as a side effect can be used to heat homes or local facilities. These won't be running large language model training — you don't want that heat next door, oddly enough — but paired with federated learning, they could form a distributed training network that's genuinely more efficient.
Hannah Foxwell: There was actually a BBC article about the move towards smaller, distributed data centres just a couple of weeks ago. There's also a growing trend around data sovereignty and security concerns — self-hosting is starting to look more appealing to a lot of companies.
Charles Humble: Yes. The geopolitical landscape in Europe has changed significantly in the past couple of years, for reasons that are self-evident. A side effect is that many companies I speak to are now talking about data sovereignty — sovereign cloud — in ways they weren't a couple of years ago. I find this interesting because I think there's an opportunity for Europe to develop more of its own technology, rather than remaining quite so reliant on the big players in the US and China. Whether we'll take that opportunity, I don't know. But from a privacy point of view, having your data in a region where you understand the rules and how it's being used is just self-evidently a good thing.
Hannah Foxwell: In your sustainability book, you argue that operations-first trumps coding efficiency when it comes to sustainability. Can you explain that, and how it relates to edge?
Charles Humble: The default programmer response when thinking about efficiency is to make the code more efficient. And most of the time, that's not actually the right first move. The reason has to do with something called energy proportionality — which is a terrible name because it should really be called energy disproportionality. When you turn a server on, it has a static power draw, and that draw is significant. A server sitting completely idle is already using large amounts of electricity. For most servers, the optimal utilisation window is roughly 50 to 80%. Above about 80%, you start getting CPU contention and other issues. Below about 50%, you're using disproportionately more electricity relative to the work you're doing.
In most data centres, more than half the hardware isn't doing anything at all. You could literally turn it off and no one would notice. No amount of making your code more efficient will make a meaningful difference when you have a data centre full of idle machines. So in most cases, operations is absolutely where to start.
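The energy-disproportionality point can be made concrete with a toy model: if a server's idle draw is a large fraction of its peak draw, the energy cost *per unit of work* falls steeply as utilisation rises. The wattage figures below are assumptions for illustration, not measurements of any real server.

```python
# Toy model of energy (dis)proportionality: static power draw dominates
# at low utilisation, so energy per unit of work is worst on idle-ish
# machines. Numbers are illustrative only.

IDLE_WATTS = 200.0   # static draw at 0% utilisation (assumption)
PEAK_WATTS = 400.0   # draw at 100% utilisation (assumption)

def power(utilisation: float) -> float:
    """Simple linear interpolation between idle and peak draw."""
    return IDLE_WATTS + (PEAK_WATTS - IDLE_WATTS) * utilisation

def watts_per_unit_work(utilisation: float) -> float:
    return power(utilisation) / utilisation

for u in (0.1, 0.5, 0.8):
    print(f"{u:.0%} utilised: {watts_per_unit_work(u):.0f} W per unit of work")
```

At 10% utilisation this model burns 2,200 W per unit of work versus 450 W at 80% — which is why consolidating workloads and switching off idle machines usually beats micro-optimising code.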
The other dimension is the language question. The most energy-efficient programming languages are basically C, C++, and Rust. There are good reasons to choose those — if you're writing an operating system or speed is genuinely the primary constraint, they're the right tools. But most business applications are written in Java, C#, Go, or Python, because time to value matters more than raw execution speed. If you go to your business stakeholders and say 'we want to be more environmentally responsible, so we're going to rewrite all our Java in C++,' you're not going to get sign-off. And frankly, they'd be right not to.
At the edge, there is more hardware constraint — you do have less CPU and GPU, so code efficiency matters somewhat more. But it still matters far less than making sure you have a good asset inventory, understanding what you've got running, decommissioning hardware that's past its useful life, and thinking about embodied carbon. That's all fundamentally operations work.
Hannah Foxwell: Over-provisioning for the 'just in case' scenario is completely the norm. Utilisation is incredibly low because no one is good at modelling what they actually need, and everyone worries about an event where they suddenly don't have enough. I'm seeing it now with GPUs. Teams procure them because everyone's heard they need GPUs, and then they sit idle because the projects stall before reaching production — which is very common with AI projects that are still in the innovation and experimentation phase.
Charles Humble: It's absolutely a problem at the edge too. If you don't have a reasonably comprehensive asset inventory, it's amazingly easy to lose track of what you have running in remote locations. I'm sure you've had the experience of walking into an office somewhere and finding a server running in a corner that's been there for years, and nobody quite knows what it does.
Having an asset inventory sounds unglamorous, but it makes an enormous difference. Something as simple as a spreadsheet with device type, manufacturer, model, MAC address, IP address, physical location, installation date, and who's responsible for it — that's a really good start. Think about end-of-life dates so you can identify devices that are no longer receiving security updates. And do that because edge devices on your network are wonderfully attractive targets for bad actors. Regular audits — monthly or quarterly — comparing actual connections against your inventory and investigating undocumented or inactive devices. Walk through premises and ceiling spaces looking for forgotten printers, cameras, and old hardware that's just sitting there. Network segmentation and zero-trust approaches help on the security side. But a lot of it is simply knowing what you've deployed, when you deployed it, and where.
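The audit Charles describes — comparing what's actually on the network against the inventory — is simple enough to sketch. All fields, MAC addresses, and dates below are invented; a real inventory would live in a spreadsheet or CMDB and the "seen" set would come from network scans.

```python
# Sketch of a periodic edge-device audit: flag devices seen on the network
# but missing from the inventory, inventoried devices that have gone quiet,
# and hardware past its end-of-life (no more security updates).

from datetime import date

inventory = {
    "aa:bb:cc:00:00:01": {"model": "POS-Terminal", "site": "Store 12",
                          "installed": date(2019, 3, 1), "eol": date(2024, 3, 1)},
    "aa:bb:cc:00:00:02": {"model": "Sensor-Gateway", "site": "Store 12",
                          "installed": date(2022, 6, 1), "eol": date(2027, 6, 1)},
}

# MAC addresses observed in this month's network scan (illustrative).
seen_on_network = {"aa:bb:cc:00:00:02", "aa:bb:cc:00:00:99"}
audit_date = date(2025, 1, 1)

undocumented = seen_on_network - inventory.keys()
missing = inventory.keys() - seen_on_network
past_eol = [mac for mac, dev in inventory.items() if dev["eol"] < audit_date]

print("Undocumented devices:", sorted(undocumented))
print("Inventoried but not seen:", sorted(missing))
print("Past end-of-life:", past_eol)
```

Each of the three buckets maps to an action: investigate the undocumented device, confirm the missing one hasn't been quietly abandoned, and schedule the end-of-life hardware for replacement or decommissioning.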
Hannah Foxwell: It's familiar from software delivery projects too. You do a migration, you do a careful cut-over so you can roll back, the new platform goes live, and everyone gets excited about what's next. The decommissioning of the old thing gets de-prioritised because there's always more to do on the new platform. And you end up just adding, never subtracting.
Charles Humble: It's much easier to add than to take away. There is a recognised approach to decommissioning hardware, though — the scream test. You turn it off and see if anyone screams. I met someone at a conference in Copenhagen who worked for a bank. They have a Chaos Monkey-style system that monitors server activity for a week, and if a server hasn't done anything in seven days, it turns it off automatically. And rarely does anything happen. That's just brilliant. The reason organisations don't do it is because there's always this lingering worry that even if you shut a server down and it wasn't needed, bringing it back up might not restore it to exactly the same state. But that's a solvable engineering problem.
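The automated scream test Charles mentions reduces to a very small rule: if a server has shown no recorded activity for seven days, it's a candidate for shutdown. The server names, timestamps, and threshold below are stand-ins for whatever monitoring data a real system would use.

```python
# Sketch of an automated "scream test": select servers whose last recorded
# activity is older than a threshold, as candidates for powering off.
# Activity data here is invented for illustration.

from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(days=7)

def servers_to_power_off(last_activity: dict, now: datetime) -> list:
    """Return servers idle for longer than the threshold, sorted by name."""
    return sorted(name for name, seen in last_activity.items()
                  if now - seen > IDLE_THRESHOLD)

now = datetime(2025, 6, 15)
last_activity = {
    "batch-01": datetime(2025, 6, 14),   # active yesterday: keep running
    "legacy-07": datetime(2025, 5, 20),  # idle for weeks: shutdown candidate
}
print(servers_to_power_off(last_activity, now))  # → ['legacy-07']
```

In practice you'd pair this with a reliable way to bring a machine back to its previous state if someone does scream — which, as Charles says, is the solvable engineering problem that stops most organisations doing it.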
As an industry, I feel we need to take our responsibilities more seriously. None of this is particularly hard — some of it is genuinely interesting from an architectural perspective, but the principles aren't difficult. We ought to be doing better.
AI, Responsibility, and the Future of the Industry
Hannah Foxwell: The explosion of interest in AI and large language models over the past couple of years feels like it's taken us a step or two backwards on sustainability. There was a moment where FinOps and GreenOps were almost the same thing — being more sustainable meant lower cloud bills. But now we're in a world where people are reaching for large language models for everything because it's the shiny new tool. What's your perspective, and what do you think a more sustainable approach to AI engineering might look like?
Charles Humble: You're absolutely right. Two years ago I was having a lot of conversations with companies taking both their costs and their carbon footprint seriously. And then the generative AI wave hit, and those conversations largely stopped. It's a bit depressing. The underlying logic still holds — in every company I've looked at, you can meaningfully reduce the cloud bill just by getting rid of things you don't need. Cost isn't a perfect proxy for carbon, but it's not a bad one: if you halve your cloud bill, you've probably halved your emissions too.
The large language model landscape is complicated. There are things that are genuinely interesting and exciting. But as a software developer — and I think most people listening are, or work closely with developers — I don't think you have the option to simply opt out. This is a bit like the shift from punch cards to keyboards and monitors: it's one of those changes in how software is built that's a really big deal. We haven't had a shift like this for a very long time.
In terms of making AI more sustainable, there are standard green techniques that apply. First: do you actually need a large language model, or would a smaller, more specialist model work better? In many business cases, that will actually perform better for your specific task anyway. There's also good research — we referenced it in our newsletter — showing that smaller, specialist language models produce greater reliability and more accurate execution, particularly in agentic workflows. The proof-of-concept instinct is to reach for the model that does everything. But what you actually want for production is the model that does the thing you need, and that may well not be a general-purpose large language model.
There's also a lot of active work on compression techniques — distillation, quantisation, pruning — that are allowing us to shrink existing models. Mixture-of-experts approaches, where a small model handles most queries and a larger model handles the exceptions, will help too. But I do think we need to be much more intentional as an industry: is this technology solving a real problem, or are we just playing?
I had a conversation with a prospective client about fourteen months ago. They'd launched a new version of their product and it wasn't selling well. They thought it was a messaging problem. On the first call, I asked what the number one piece of customer feedback was on the new product. The answer was: 'How do we turn the AI off?' So I asked why users couldn't turn it off. Answer: they can't. I said: your problem isn't sales and marketing. Your problem is that you built something your customers don't want.
Hannah Foxwell: And there are lessons we should be drawing from other technology shifts — like how social media has changed the fabric of society. We can't just say 'it wasn't my decision, I just write the code.' All of us need to be part of that conversation because it affects us all, directly or indirectly.
Charles Humble: That's one of the things I was so excited about when I got involved with AI for the Rest of Us — it's about broadening that conversation to people who aren't necessarily developers or at the technical forefront. This is genuinely important. As a society, the implications of what we're building are profound and deeply unpredictable.
I want to be clear that I find large language models genuinely interesting and I use them for coding exercises and other things. But we do need to be honest about the ethical and moral questions. There is a lot that's been done in training these models that is indefensible. As an industry, we have effectively used the intellectual property of every creator on the planet without permission or compensation, and built commercial businesses on it. That's not okay. We could have licensed it. We could have had the conversation about fair compensation. We didn't. And I think there will be a reckoning — not least because as more people understand what's happened, we are not going to be a particularly popular industry.
What worries me is that we might end up with technology that's genuinely useful and could save lives and do extraordinary things, but find ourselves so regulated — because we were so irresponsible — that we can't use it. I feel like we as technologists need to think carefully about what we're building and why, and take real responsibility for it.
Hannah Foxwell: And we do have examples of what happens when we don't — social media being the obvious one. I think all of us in tech need to own a share of that responsibility, regardless of whether we built the specific thing.
Charles Humble: Absolutely. And understanding how the technology actually works is part of that. When you understand how large language models work — that it's very sophisticated floating-point arithmetic, not some emerging sentient intelligence — the magic of the parlour trick fades a little, but you gain something more important: the ability to reason clearly about what it can and can't do, and what it should and shouldn't be used for. Understanding how a magic trick is done doesn't stop you appreciating the skill involved. It might actually make you more thoughtful about it.
Hannah Foxwell: A huge thank you, Charles, for sharing your wisdom with us today. Is there anything you'd like to leave people with, and where can they find you?
Charles Humble: The message I'd like to leave people with is this: I genuinely believe this is the most important — and possibly the most exciting — moment in our industry that I can remember. And I've been around the block for a long time. This is a really, really big moment, and all of us need to get involved. Think about how you want this to play out — ethically, morally, environmentally. Everyone who works in tech has influence over how that happens. Think about whether there's a way you can use this technology for good, whether that's in healthcare, farming, or whatever matters to you personally.
In terms of finding me: my company is called Conissaunce. You can find the website at conissaunce.com. I'm on LinkedIn and increasingly also on Bluesky, both as Charles Humble. If anyone wants to talk about technology, has an interesting story to share, or wants to discuss sustainability or any of the themes we've covered today — please do reach out. I'm always happy to chat.
Hannah Foxwell: What a wonderful way to wrap up. Thank you again, Charles. It's been fascinating as always.
Charles Humble: Thanks, Hannah. Always a pleasure.
About the speakers
Charles Humble (author)
Freelance Techie, Podcaster, Editor, Author & Consultant
Hannah Foxwell (interviewer)