Decoding Modern Tech: Cloud, APIs, Wasm, Security, & More

Updated on January 22, 2024
Daniel Bryant (expert)

Independent Technical Consultant

Matt Turner (expert)

DevOps Leader and Software Engineer at Tetrate

52 min read

Join two cloud native experts and passionate adopters of modern tech as they explore the shifting role and impact of APIs. They go beyond the usual tech stack to touch on key aspects of the modern infrastructure and software development space like platform engineering, mechanical sympathy, and the role that Wasm could play in this. Daniel Bryant and Matt Turner will share some of the important but not-so-well-known best practices and questions that one might ask to make sure they are building the right thing with the right tools.

Intro

Matt Turner: Hello. So we're here at GOTO Amsterdam 2023 in a sunny Amsterdam, Netherlands. And I'm here to do a "GOTO Unscripted." So I'm Matt Turner from Tetrate, and I'm going to be talking to Daniel Bryant. Welcome, Daniel.

Daniel Bryant: Thanks, Matt. Hi, everyone. I'm Daniel Bryant. I'm an independent consultant at the moment, having worked with Ambassador Labs for a number of years before, and looking forward to chatting with my buddy, Matt.

Matt Turner: Thanks, Dan. Yeah. So I'm a software engineer at Tetrate. I guess this is a broad software engineering conference, so just to step back for the folks who don't know: both of our backgrounds are sort of cloud-native stuff, the more cloudy and modern side of tech, and we will probably be talking mostly about infrastructure kinds of things, I imagine. Just to level set for folks, this isn't going to be, like, a React chat, I guess. It is unscripted. We really haven't scripted it.

Daniel Bryant: Yes.

Matt Turner: We really haven't scripted it at all.

Daniel Bryant: We have got interesting conversations all the time, Matt, so I'm just thinking...

Matt Turner: This time we're sober, so this might even make sense. No, I mean, we literally just came out of the back of your talk out here, which obviously folks can see on the website when it goes live, which, yeah, I thought was really good, really interesting.

Daniel Bryant: Appreciate it.

Recommended talk: The Busy Platform Engineer's Guide to API Gateways • Daniel Bryant • GOTO 2023

Matt Turner: I like the way you, you know, set the stage and sort of stepped back and said, you know, "Why are we talking about all of this technology? What is it? Where does it fit into your organization? What's your organization going to do? What's it trying to achieve?" 

Exploring API Gateway

Matt Turner: This is, I know it's unscripted, I am technically meant to be interviewing you, which means you do more talking than me, which is good. I don't know, can we kind of stitch you up right at the start and say, what is an API gateway? Let's talk about that topic for a bit.

Daniel Bryant: It's a great question. Because someone actually at the end of the talk asked, like, "Is an NGINX in front of my microservice an API gateway?" And I was like, "Yes, it's a good question." I wanted to define it a bit more clearly, actually, in the talk, because I think, like, both you and I have worked in the space for a long time, Matt, right? And an API gateway means many things to many people. It's where you're doing your traffic management, particularly your north-south traffic management, your ingress traffic management. And I think, over the years, an API gateway used to be very focused on APIs, sounds obvious, but now I definitely think folks in some ways are, like, "I'm just mapping routes." Or, you know, just exposing ports and, like, whatever protocol, right? But I think the history is around actually the API management space. So people would actually create, say, REST APIs in particular. We've been seeing some Protocol Buffers and that kind of stuff. But mainly REST APIs back in the day, like, sort of the SOAP era, people were like, "This actually is really critical to the business, and therefore I need to manage this thing." And it was, like, "Where do we manage it?" Well, where the traffic's flowing in, and that's, like, the API management. And the API gateway was kind of born.

But I think these days, like, we see in Kubernetes land, like, people even call NGINX an API gateway. And, like, initially, I resisted that because I was, like, NGINX is a proxy, right? Like, you know, to manage traffic, right? But now I kind of get it, like, you are putting APIs at the edge of your system, and you are, like, controlling the access with NGINX, HAProxy, Istio. These days, like, take your pick, right? And, like, even Istio as a service mesh has an API gateway, sort of a gateway, at least, component, right? So I think it means many things to many folks. And this is one of the times, I should say, where I'm not too hung up on the terminology. You know, you and I have talked about this before, that sometimes terminology and words matter a lot, right? And I think they still do, but I wouldn't call something that's, like, obviously not an API gateway an API gateway. But I was more concerned today with saying to folks, if you're managing cloud-native traffic, like, you're gonna be using something that is an API gateway to get user traffic into your backend systems. There'll be other things involved, service mesh, which is very much your expertise, Matt, CNI, SDN, that kind of stuff. But the API gateway is where the rubber meets the road, where the users hit the backend systems, right?
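As a concrete illustration of the "just mapping routes" end of the spectrum Daniel describes, here is a minimal Kubernetes Gateway API `HTTPRoute` sketch: one path prefix at the edge mapped to one backend service. All names (`edge-gateway`, `orders`) are hypothetical, not anything from the conversation:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: orders-route          # hypothetical route name
spec:
  parentRefs:
    - name: edge-gateway      # hypothetical Gateway at the edge of the cluster
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /orders    # expose this path prefix...
      backendRefs:
        - name: orders        # ...and send it to this backend Service
          port: 8080
```

Everything beyond this routing, such as authentication, rate limiting, or schema validation, is what pushes a plain proxy toward being an "API gateway" in the sense discussed here.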

API Gateway Functionality

Matt Turner: I think, for me, the word management does a lot of lifting there, right? Because why do you even need to manage it? And I think, to your point that you made in your talk, people push business logic into that, just like they did with ESBs, just like they did with aspects in aspect-oriented programming, just like they did with utility classes, you know. We've seen this before, and we've all done it, now pushing logic into the API gateway as well. What do you even need to manage? If the system serving your API is perfect, then maybe it could just be a router, maybe it is just a layer four device, right? That it's just a tunnel. But, realistically, you probably want some extra security like a WAF, you want bot blocking, you want load shedding. If you've got a fat gRPC server and a service mesh in front of that and you can do all of these things, you know, a proxy, a sidecar proxy that can do OIDC challenges, then maybe you don't need it. But I think that's what we often tend to mean by management.

And then you do get more into the, I want to publish an API to this thing, and I'm gonna give you a schema, a versioned schema, hopefully, with a, you know, schema for the body. Please fail fast: reject requests that come in too quickly, reject requests that don't match my body schema.

Daniel Bryant: Yes, totally.

Matt Turner: Or please transform body version one to body version two. Oh, okay, fine. Please patch this error in the program because it was quicker to reconfigure the API gateway than to reconfigure the server.

Daniel Bryant: So that would be Log4j on that cluster, right?

Matt Turner: Well, I was about to say, I think that's where we hit an anti-pattern, right? Because that's business logic. But, actually, to do a plug, we at Tetrate helped a lot of folks to mitigate Log4j, because you can match the headers and the bodies, and it was really quick to... If you have sidecars, then you can do it there as well. So we did both, but why not drop it at the edge too? And I think that's where the Gateway API comes in, because you've got a unified management, you know, API, a unified control plane for both components. But, yeah, management does, you know, a lot of heavy lifting as a word. But I think I agree. To me, it's a proxy with features.
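The header-matching mitigation Matt mentions can be sketched with an Istio `VirtualService` that rejects requests whose headers carry a JNDI lookup string before they reach the workload. This is a simplified illustration, not Tetrate's actual mitigation; the gateway and backend names are hypothetical, and `directResponse` needs a reasonably recent Istio (1.15+):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: block-jndi            # hypothetical name
spec:
  hosts:
    - "*"
  gateways:
    - edge-gateway            # hypothetical ingress gateway
  http:
    # Reject anything whose User-Agent contains a ${jndi: lookup.
    - match:
        - headers:
            user-agent:
              regex: '.*\$\{jndi:.*'
      directResponse:
        status: 403
    # Everything else flows through to the backend as normal.
    - route:
        - destination:
            host: backend     # hypothetical backend service
```

A real mitigation would match more headers than just `User-Agent`, and inspecting request bodies, as discussed above, goes beyond `VirtualService` matching into an Envoy filter (Lua or Wasm).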

Daniel Bryant: I like that, yes. Because in one talk I did, I actually had, like, almost like a spectrum, right? And I had, like, the sort of classic, you know, NGINX, OpenResty, HAProxy, Envoy proxy, all the way through to Apigee and 3scale, like, heavyweight kind of business stuff, right? And then you can pick somewhere on that spectrum where your API gateway is, right?

Matt Turner: Well, it's like do I have no features? Did I just download, say, Envoy? You know, do I have the open-source features in terms of a bit of rate limiting or something?

Daniel Bryant: That's right.

Matt Turner: Or do I have the paid-for features that folks are starting to add, like advanced bot blocking or schema validation?

Daniel Bryant: Security, observability.

Matt Turner: Security, like, mass API management. What's critical?

Daniel Bryant: Pay money for that, right?

Matt Turner: Well, it's critical. It's what you need if you're regulated. So, to your capitalism point, folks do take the opportunity to make money off that. But also it's the high-value stuff. It's the stuff that takes a lot of engineering work for the companies to build. So I think, you know, it's the kind of thing where you would make a buy decision, even if there was a dollar cost attached, rather than a build.

Daniel Bryant: It's actually directly translating to the value you would offer as a business, right? Or in costs you would incur.

Matt Turner: Yes, higher costs you would incur because it's a difficult thing to engineer. Yes, interesting, interesting.

Observability and Security

Daniel Bryant: Something you touched on, Matt, that I thought was curious there is, like, we talk a lot about ingress with API gateways, right? But, like, you mentioned the example with the Log4Shell, Log4j issue. That was a lot about egress, right? In terms of, like, scanning stuff as it was leaving the environment. So I think that's something interesting with API gateways. A lot of folks focus on the ingress, but you can also look at sort of what's egressing from your systems there as well, right? I don't know if you've got experience around, like, observability in that regard, or even things like security. I know a bunch of folks have been chatting online around, in particular, like, data exfiltration, things like that, right? I guess you can do it at the service mesh kind of level or the sidecar level, but you can also do it at the API gateway level. I mean, did you see that with the whole Log4Shell stuff, where you're deliberately patching and looking at payloads coming back through the gateways, or seeing it through the sidecars?

Matt Turner: We saw folks who wanted to... it's difficult, right? I think there's a lot going on there. Okay, so a naïve network doesn't have any kind of proxy, right? It's layer three. Okay, let's say I've got one VM offering a service, one container offering a service. I give it a publicly routable IP. It's not on the internet, you know. Requests come in, requests go out. Probably not great. So the first thing I do, I mean, zero trust is a whole other thing, but the first thing I do is isolate it in some way. So maybe it still has a publicly routable IP, and I've got a sidecar that's doing mTLS, and it's doing the zero trust thing. Or, more traditionally, I would put it in an isolated network. I would, you know, put it on a VPC, a subnet that's unroutable, and I put a proxy in the way, and that's the way to get to it. So I think you do extra, you know, checks. They're kind of the same if you squint at them in the right way, right? So then I have that ingress, and then that ingress I upgrade to an API gateway by adding these kinds of features, like, oh, I want to filter headers for the... or bodies for the JNDI whatever string. But I think another level of sophistication, yeah, is egress management. I mean, go to any big enterprise, anybody that's regulated, and you'll see there'll be DLP, right? Data loss prevention. What's that other than an appliance, a pizza box that you rack and stack, and all egress traffic has to go through it? And it tries to look for documents that say, you know, company confidential, or they do more than that, but, you know, you get the idea. So we do see folks doing this, you know, not going to that level, but doing this with Istio. You know, the obvious thing is that I just wanna firewall everything, right? So I wanna allowlist, you know, no egress is allowed.

Daniel Bryant: Oh, interesting. So lock down by default.

Matt Turner: So lock down by default, and then I want to allowlist. Oh, that's, you know, we use weather.com to provide whatever service, you know, we use...because a lot of stuff is now third-party online APIs, right? So we use weather.com and some mapping service and some postcode, you know, zip code to address.

Daniel Bryant: PayPal API, right?

Matt Turner: The PayPal API is probably a better example. So you can allowlist those one by one. And then with something like a service mesh, you can actually get more sophisticated and allowlist them service by service. So this service, with this identity, gets to talk. And the reason I would say a service mesh is because you can't really do that by source IP, but once you've got a strong identity for every service... The way you do it with Istio is you set up an egress gateway and force traffic through it. So if anyone in my business has bypassed their sidecar or broken out of the CNI or something, you know, there's no egress for them. By security group, by really low-level construct, there's no egress from this network other than through the egress gateway. And then you can start allowlisting things like that and say, hey, the order service gets to talk to the PayPal API, the Stripe API, the stock prediction service talks to weather.com or whatever it is. Did we see anybody trying to block, like, egress for Log4Shell? Not that I can remember, but if you have that allowlisting, you know, denied-by-default approach, then I guess you kind of do it. I don't remember anybody trying to match, yeah.
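The deny-by-default posture described here can be sketched in Istio by setting the mesh's outbound traffic policy to `REGISTRY_ONLY` (egress is blocked unless a destination is in the service registry) and then allowlisting external hosts one at a time with `ServiceEntry` resources. A minimal sketch, with the external host chosen to echo the conversation's example:

```yaml
# Mesh-wide setting (IstioOperator excerpt): block egress to anything
# not explicitly registered with the mesh.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY
---
# Allowlist one external API so in-mesh services can reach it again.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: paypal-api
spec:
  hosts:
    - api.paypal.com
  ports:
    - number: 443
      name: tls
      protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
```

Restricting this further, per service rather than mesh-wide, is where the egress gateway plus per-identity policy that Matt describes comes in.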

Daniel Bryant: But I think, listening on the internet, people were sort of saying this is a quick filter for, you know, these kinds of things.

Recommended talk: Observability Engineering • Charity Majors, Liz Fong-Jones & George Miranda • GOTO 2022

Matt Turner: Absolutely. When I did a KubeCon talk with a good friend of mine, Francesco Beltramini from Control Plane, we did that in Amsterdam, we were talking about incident response, security incident response, and how it varies in a cloud-native environment. And one of the things we said was you've got a lot more tech now. You've got a lot more options. Because you can't prevent every attack, and actually you often want to know quite a lot about it if you are under attack. Especially if it's targeted at you, you wanna be able to learn quite a lot about it. So what we were saying is, if you think you're under attack, or if you've got a part of a workload that you think is compromised, because, you know, Falco is saying it's making weird syscalls or something, then actually you may wanna let that attack continue, but you wanna watch the egress. So you wanna see where it's connecting to, you know, what DNS requests it's doing, what IPs it's connecting to, can you find its C2 network, basically? And then, what is going out? Does this look like IRC traffic? Does it look like some kind of C2, command and control, thing? Does it look like your data being exfilled, in which case you probably wanna put a stop to it pretty quickly, right?

You see it as an interesting observability point and, as you say, an interesting place to do defense in depth. It's difficult. Like, if you ever get your home router and just deny all, you'd be, like, "It's a good security stance to have." And then I'll just open... I won't say the names because people's devices will go off, but I know this smart speaker is probably gonna talk to its services, and I'll open things one by one. And then if you've got some kind of way of doing identity: my phone, my laptop should be allowed to browse the internet, but maybe I carve out, if you've got a sophisticated home router, a separate subnet, a separate SSID for all the smart home things, all the bulbs you buy off eBay. And you say, "I'm going to block all." And they've all got web servers in them, right? And not only do I want to block ingress, because they've all got web servers in and that's a massive security hole, I wanna block egress, because what are they doing, why are they dialing home? They just don't need to. You'd be surprised how much breaks and how much you have to open to get, like, functionality back for these devices. It's kind of scary.

Daniel Bryant: I mean, that's like we were saying earlier on, like, just in general, the plethora of technology that's emerged is amazing. You and I make our living out of it, right? But at the same time, the complexity has shot through the roof. I was sort of trying to hint at that in some of my talk today, and that's a great example. I think most of us at home are just like, "Whatever, allow all," right? But in a business, you can't do that, right?

Matt Turner: Right. And that gave me a real appreciation for the sort of security compliance teams out there. I've worked in a couple of regulated places, and you just think, like, they really do have to allowlist everything.

Daniel Bryant: That's the fun, that's why you and I build stuff rather than do security, right?

Matt Turner: Well, it's interesting you think.

Daniel Bryant: It's a super important job, but I'm, like, oh, it'd drive me nuts.

Platform Engineering and Self-Service

Matt Turner: Well, and that's where you need to get the dev experience right, or whatever, and maybe we'll come on to that. But it's interesting, you talk about platform engineering. Well, you talk about things getting super complicated and trying to answer that, and I think this is my point of view on platform engineering: you talked a lot about cognitive load, and the cognitive load on teams, app development teams, just gets so high. And I think we saw it get high trying to write everything in C and having to write your own linked list, right? So we invented Java. And then it got high again when folks had to do their own deployment. You build it, you run it, but deployments, especially canary deployments, progressive delivery, again, something we might come on to, got super complicated, so we built them platforms, API gateways, and all these kinds of things. And I think now that cognitive load on just infrastructure, I just wanna operate this thing, debug it, it needs a bit of storage, it needs some DNS records, it needs traffic management, you know, east-west, and I need observability, that all gets so complicated, and I think that's where we're seeing platform engineering really coming into it. So if you subscribe to the Team Topologies model, right? You have a platform team that's enabling. You know, I've put a bunch of my own words into it. I mean, what's your opinion on platform engineering, on the self-service thing? Where does "you build it, you run it" fit? Because we say you build it, you run it, and then we say, actually, you should have a platform engineering team. Like, how does that all fall out, in your opinion?

Daniel Bryant: Yeah, I think, like, someone asked me about the difference between, like, platform engineering and DevOps and things, and I think a lot of us have been doing platform engineering before we had the word for it, or phrase for it. Do you know what I mean? Like, you know, I've probably been building platforms... I used to rack and stack back in the day, and that was kind of platform engineering, right? But these days, there are many more components within a typical platform. Back in the day, it was rack and stack a server, you put a Java app server on it, maybe front it with a web server or something, and that's kind of happy days, right? And the attack vector, attack surface, was pretty small because you kind of knew those bits of kit and whatever, whereas these days we're spinning up virtual environments, and, you know, the notion of self-service now is no longer that racking and stacking, it is calling APIs, calling SDKs. And I think the modern definition of platform engineering is around how we build the ultimate platform that developers and operators use to get their day job done, how we build that platform so they can self-serve on things like being able to spin up a new service, being able to deploy a service, as you mentioned, being able to monitor that service. So all the kind of machinery around that process, for me, is the sort of discipline of platform engineering. Do you know what I mean? And now, like, just with the sheer complexity out there, it is a really valuable discipline, particularly in big companies. Like, I think, you know, some ops teams, some infrastructure teams are somewhat being rebranded as platform engineering teams, but I think they are also bringing in new skills. Like, you came up to me at the end of my talk, like, "Ah, Team Topologies." And I was like, "Oh, I didn't mention it explicitly, but totally right, Matt, right?"

Recommended talk: Platform Engineering on Kubernetes • Mauricio Salatino & Thomas Vitale • GOTO 2023

Matt Turner: See the thread of it, yeah.

Daniel Bryant: And a lot of folks I'm chatting to, and granted it is somewhat biased with, like, either customers or folks at conferences, but they have read that book. Like, "Team Topologies" blew the kind of doors off the way people build products, like, build platforms. So people are really now thinking about that cognitive load and almost chasing the holy grail of, like, the Heroku-like experience, the Cloud Foundry experience. That was kind of the pinnacle, right? Like, even more so than racking and stacking and putting an app server on something. Heroku for me was as close as we can get to perfection, for Ruby on Rails apps in particular. You know what I mean? Standard form factor, 12-factor apps. They were sort of stateless to some degree as well. But, like, it was Nirvana for a while, right? I was loving it.

Matt Turner: I mean, the amount of Kubernetes teams, DevOps teams using Kubernetes, that you just see trying to rebuild Heroku. All the folks going on AWS or whatever, just trying to rebuild Heroku. And that's not a bad thing. There's a reason for that. It's, yeah, interesting you talk about standardization of all the apps, looking the same from the outside. When it was Ruby, you had Rails, so you had the sort of standard. Everything was the same, could use the same framework.

Daniel Bryant: That's it.

Matt Turner: Everything kind of cracked. Everything was operationalized the same. That's not great grammar, but you know what I mean? Everything ran the same because of the 12-factor thing. I think containers let us do that across different languages, right? Because everything now looks the same from the outside. Everything runs the same. Everything fits into the same, you know, hole where we deploy it. But we still need the rest of the Heroku experience.

Daniel Bryant: That's it.

Matt Turner: Built on top of it. I think it's, yeah, it's interesting, because we talk about self-serve. To me, I think there's two levels, two things that platform teams do. One is to chop off layers, right? Because there's so many layers. Now, I'm talking literally, you know, power, physical security, data centers, all the way out through clusters and networking and stuff. And for a long time, you know, I've talked to CTOs who are like, "My job is to chop layers off the stack."

Daniel Bryant: Yeah, that's good magic.

Matt Turner: You know, so now maybe AWS takes a lot of that, you know, deals with the power bill and whatever, a managed cluster even, and then you have a platform team that abstracts more layers. But then the self-service, to me, is there's always gonna be one layer where we have to meet. Because you build it, you run it, it's like, you run it at this layer of abstraction. So maybe we do give you kubectl access and maybe you do write CRDs, or maybe you run it within the abstraction that we've built. So one of those layers is gonna be, like... I think the app teams should only ever be interfacing with the top layer; ideally they shouldn't have to care about lower layers.

Daniel Bryant: Makes sense.

Understanding Mechanical Sympathy

Matt Turner: You've probably not built an abstraction, right? And there's always gonna be collaboration, to me, about that. I think that's what we mean by self-service, actually: maybe they can have that layer all to themselves, but realistically they probably can't. So there's gonna be two sets of folks trying to work there.

Daniel Bryant: I really like that kind of chopping of the stack, because I've seen lots of diagrams of, like, containers, and then Amazon managed services, that kind of thing. Something you just mentioned there, I think the notion of mechanical sympathy is really important, and it kind of nicely shows what you're saying, in that, as an app developer, I actually just wanna put something in a container and run it, and I wanna be able to monitor things. But if I understand one level down, mechanical sympathy, I know it's gonna be running in a container, I know it's gonna be ultimately on Kubernetes, and I sort of have an idea of the pod abstractions, or maybe even how the VMs work and the networks work at a very basic level, I can build more effective systems. So I think that's why I've seen some struggles recently of, like... to your point, every abstraction leaks, right? It's just the way it is, and people are like, "Hey, I'm deploying on Kubernetes here," even though the platform team's tried to hide it. They know it. And, actually, that's not a bad thing, because, like, if you completely abstracted Kubernetes away, some of the things that they see happening would, like, not make as much sense as when they know it's actually Kubernetes. So it's a fine line, right?

Matt Turner: That's interesting. It would be so much work for the app team to learn the whole stack.

Daniel Bryant: Yes, exactly.

Matt Turner: The cognitive load, it can't be done. But it's also so much work for the platform team to hide all of those things, to make a perfect Heroku. So where do... I think you have to choose which abstractions you expose, which ones you allow to leak. But I think the mechanical sympathy thing is really interesting, actually. Maybe there's a layer in our diagram where the app team gets to play: either they own it completely, ideally, or they interact with it by understanding the one below them. Because a lot of the talks... a lot of my talks that are successful, some folks do big-ideas talks, right? The kind of things that I often do are just, like, explainers.

Daniel Bryant: Yours is the one that always stood out to me when I was just getting into Istio. I went to CodeNode in the UK, in London, and you were like, "This is how a packet flows through Istio." And I was like, "I get it."

Matt Turner: Right. And should you need to know that as a user of Istio?

Daniel Bryant: Yeah, good point.

Matt Turner: And it's interesting. This has crystallized in my mind. You always get imposter syndrome, right? I certainly always get imposter syndrome. And I would stand up on stage and try to justify the talk, especially when I'd been invited to the conference. That talk did fairly well; it became fairly famous. And I'd get invited to these conferences that weren't even infrastructure engineering conferences. They'd be, like, a software conference or something. It'd be a bunch of Java devs. And I would feel the need to explain myself and say why I'm here. Okay, well, you need to know this, because if it breaks, you're gonna have to debug it. So you're going to need to understand how it works. But actually, no, I think you're right. I think what we're teaching is... a lot of what I've found myself doing is teaching folks the layer below.

Daniel Bryant: That's it.

Matt Turner: So that's really interesting. Even on the good days, so they understand it, they have a mental model for it. And it's not just the debugging. Like, you drive a car, right? You just drive a car. It breaks, you call someone. I'm sure there's a great analogy in here. You could just have breakdown cover. Anything goes wrong...

Daniel Bryant: That's the SRE team.

Matt Turner: That's the SRE team. Very expensive. Or you could learn how the car works. And, A, you can have a go at fixing it yourself. B, it's that mechanical sympathy: I will drive it in a way such that maybe it won't break, totally. And if I hear a noise, then I know that, like, that's a bit of stress on this component; I'm gonna change the way I do things. That's a very interesting way.

Recommended talk: How Google SRE and Developers Work Together • Christof Leng • GOTO 2021

Daniel Bryant: I like that. So that's it. Martin Thompson, who's spoken at GOTO a number of times, sort of crystallized this for me. He was doing a lot of talks around mechanical sympathy. And he goes properly down to the metal, right? He was using Java in ways it shouldn't be used; it's fantastic, right? And, you know, you and I are kind of, like, the 1% in that we always want to know everything, pretty much. That's who we are. And that's why we go to conferences. But not everyone wants to know that. Not everyone needs to know that. And that's totally fine. I'm not trying to use 1% as, like, some elitist term. But I think you and I are always poking into this stuff. So when Martin Thompson started saying, "Here's what mechanical sympathy is," I would never be building the systems that he was gonna build. But I was fascinated by the way he'd use some of the unsafe Java stuff. It made me understand the Java memory model better. And I could write Java code better knowing the memory model, right? And that's that one level down. The little cracks sort of show through in the abstraction. And I was never gonna use this stuff. I wouldn't trust myself with it, right? But I was, like, "Wow, I get how the heap works better. Oh, and that's why you've got to be careful with the stack." And that kind of stuff. So, just like your analogy, it's perfect in terms of peeking down into the containers, or, with the car, knowing when to change gear most perfectly, knowing the power bands, not revving to the red line every time, because you're gonna break your engine.

Matt Turner: Exactly. You've got it when you need it. And I always find those talks super interesting as well. I've never really done much Java. I was initially an embedded C programmer. You had to kind of know your architecture to get acceptable performance. And now I love Rust. And you do see these talks, you're, like, hey, in Rust, we're just gonna turn the safeties off. We can turn the traction control off. You can do that in a car as long as you actually know how to drive at the next level. If it starts to slide, there's no system. It's only safe to do that if I know what to do. And I think it's the same: I can turn the safeties off in Rust, and if I actually understand my processor and its memory pipeline and its cache invalidation and all of this stuff, it's safe to do that. And you always see, at every conference, there'll be some high-frequency trading house who'll do a fascinating talk, right? They'll be like, oh, yeah, we upped our performance by just, you know... We know how the pipeline works. We just tell the thing not to flush it, because we know we're being safe. And that just is always fascinating.

Daniel Bryant: That's Formula One racing, right?

Matt Turner: Yeah, exactly.

Daniel Bryant: Most of us are just driving to the shops in our little SEAT or whatever, and we don't need traction control.

Matt Turner: It is fascinating to hear about. But while we don't all need that every day, I think we all probably do need to know at least how a container works.

Daniel Bryant: I think that's what I tried to hint at today. And you and I have had many talks and presentations over the years around this. I don't go as deep as you, but in my talks I do like to get people thinking, just asking the right questions, right? Like, the sidecar model was quite revolutionary for a lot of folks, even though similar patterns have been around for quite some time. Your talk on how a packet flows through Istio was like that: people were like, "Sidecar, oh, I see. This is why I get certain security guarantees, because I'm on a local network versus, you know, a global network." And I think that's something I've seen a lot of good conference speakers do: just make people think a little bit. Because I like to be pushed sometimes. To your point, I've never built Rust stuff, but I find it fascinating. And often a lot of what you learn is transferable somewhere else. A lot of what I learned in C was transferable to Java, and from Java even to the ops world. We talked today in the talk about coupling and cohesion: that applies all up and down the stack, right? High coupling can be at the infrastructure level, can be at the application level, all these things. So my advice to folks is: learn as much as you can, and you'll see the patterns emerge.

Matt Turner: And the big patterns. I mean, learn as much as you can in adjacent areas, but also just step right back. Every time I saw Kubernetes, especially in the early days, I was like, it's the actor model, I've done this before. And it's great, sure, because now we can do actors with languages that don't have actor frameworks, right? I think a lot of those analogies really work. You talked about developer experience, and there was actually a really good audience question, which was: is that not just user experience? Developers are users too, and you were saying you should have a product mindset and a product manager and so on. Do you think developer experience is a solved problem now? To me, there was a step change with Docker. The technology is great, but one of the main reasons containers took off, I think, is that Docker was just so much easier to use than LXC or whatever came before. And we now have all these TUI frameworks, and you can get super shiny, colorful terminal stuff; I'm a terminal person, right? Also, to your point, it's not stupid if it works: don't look down on people who use UIs and all of that. Do you think enough effort has been put in, such that we now can't make things even easier to use, or are we still on that journey?

Daniel Bryant: Oh, great framing. I'd say we're definitely paying a lot more attention to developer experience than we were 5 or 10 years ago. The Docker example, which I use all the time, is a perfect one: Docker gave us a good developer experience and a centralized hub to store stuff. That was game-changing at the time. I think a lot of the products we use day in, day out do have developer experience at the forefront. You could even argue some of the Kubernetes stuff like kubectl is pretty well-engineered, right?

Matt Turner: I was gonna say, that was one of my examples: kubectl explain. You don't get that in...well, okay, actually, you do get it in Vim. That's not really a good analogy. But yeah.

Daniel Bryant: But some of that, I remember... and again, I'm picking on Amazon mainly because I've used them the most. But I remember being lost in the early days of the AWS CLI, right? Because Amazon famously build all their technology separately, and once you integrate via some API it's fine, but I'd learn the commands one way in one part of AWS, then go to a different part and it'd be all different. Once I've learned kubectl, you know, get this resource, get that resource, you're good to go, right? So a lot of the things we use day in, day out have been really well thought about. But the pitch I made today is that the tools I see people building internally do not have that care and attention applied. And I can speak from personal experience. When I used to build tools, I'd be like, I know it's a bit wonky around how you configure this stuff, or I know you've got to spin up a local binary and make sure you've got the latest JDK installed, whatever, right? Do you know what I mean? But that actually contributes to bad developer experience. Because I know Java, I'm happy installing a JDK or a JRE on my machine. And then the Ruby person next to me is like, "Do I really want to install a JRE?" And I'm like, "Tough luck." But that's not a good developer experience, right? So this thinking has permeated the open-source space. The CNCF have done a fantastic job with that, to a large degree, and Docker led the way, along with other folks. But I don't think it's fully permeated the rest of the world. Do you know what I mean?

Matt Turner: No, that's a good point, once you step out of our cloud-native bubble, certainly. Again, we should probably be mindful of the audience here. Within cloud-native, I think Docker was the step change, and now kubectl and the like are super good. But step outside of that, and... I think there were some technology changes as well. The ability to package something as a container, and, of course, the ability in Go and Rust to make a statically compiled binary. It sounds silly, but you don't have to install a JVM, you don't have to use pyenv or RVM or whatever it's called, none of that messing around. You can just get a statically compiled binary and run it. You can obviously wrap Java up as a JAR; I think you can get self-executing ones.

Daniel Bryant: You can actually build, like, native images.

Recommended talk: Unleashing Native Imaging Power in GraalVM • Alina Yurenko & Bert Jan Schrijver • GOTO 2023

Matt Turner: You always could get self-executing JARs, WARs.

Daniel Bryant: That's extremely popular, right?

Matt Turner: But they were huge. Whereas with Go, if you don't have much business logic, it's five or six meg.

Can Wasm Make a Difference?

Daniel Bryant: I know, I couldn't believe it when I started playing around with Go. I love Spring Boot, I love the Java ecosystem, but that's, like, a 200-meg minimum kind of thing, whereas the Go binary is tiny. It's interesting, isn't it? I'd like to get your take, Matt: have you played with Wasm very much? Coming from the old-school Java days, write once, run anywhere was a thing, right? And the joke was "write once, debug everywhere" because it never quite worked.

Matt Turner: Leaky abstractions.

Daniel Bryant: Yeah, that's it. The abstractions weren't implemented consistently across the frameworks and across the platforms. But I bumped into Wasm a lot with the Envoy stuff, with extensions, things like that. And I saw a couple of great talks at QCon New York a couple of weeks ago about Wasm being maybe the ultimate format for building that binary. There were great talks from the Cosmonic folks and the Fermyon folks, and they were talking a lot about WebAssembly components, how you can, say, reuse a Rust library in a Go app, something like that. And it got me thinking: to your point, Go was a game changer. A scratch container plus a Go binary revolutionized the attack surface, the runtime requirements, many things, right? So what's your take on Wasm? Do you think that's the next evolution of where we're going? Because we can compile many languages to a Wasm target, right? I'd love your take on that.

Matt Turner: It's a really good question. I really like it. I wish it the best, and I hope it works. This has always been a great idea; there's just never been an implementation, as you say, that's been quite right. I'm gonna be that old man again: I've seen this before. It's LLVM. Actually, there's a really great blog post out there, I'd have to dig out the link, explaining what LLVM is. It wasn't just trying to be another C compiler. Obviously GCC had its issues, but LLVM was trying to be so much more than that. The front end parsed everything into an abstract syntax tree and transformed that into an intermediate representation, a bytecode, essentially, and then it would compile the bytecode to machine code. That was the architecture of the compiler, and it let them do compilation of C and C++ better than GCC ever could, because GCC had just sort of amalgamated over time.

But they had all the tooling. You could take that bytecode, and you didn't have to compile it to machine code in the compile phase and then ship the machine code. You could ship the bytecode anywhere, and it would be ahead-of-time compiled on the target machine. You would just run it in a little runtime, except it wasn't really a runtime, because it would ahead-of-time compile it. It was like, "I don't know where I'm going to deploy this. Here's your LLVM bytecode. Oh, it's this machine." This is the whole Gentoo Linux philosophy, right? I know it's going to be an AMD64 laptop; oh, but it's this specific Intel model, so I can actually turn on all the processor features and get the optimizations. They were C folks, so that was the lens they were looking at it with, I think. And there was also a just-in-time compiler, though I don't know if it ever got out of the experimental stage. So anyway, LLVM bytecode tried to be this universal format for packaging and shipping software, and for interoperability between languages, because you get a bunch of front ends. It's like the JVM: I can have a Scala class, and a Kotlin class, and a Java class, and it does work pretty well, apart from things like trying to call the Scala from the Java because of all the symbol mangling. But it just about works. So LLVM tried to do that for everything; it just never took off. I really hope Wasm will be that thing. With things like Wasmtime and Wasmer, there's a lightweight runtime, so I can get a bit of Wasm bytecode as a command-line binary and essentially ./run it. I can run it server-side. So Fermyon is essentially...

Daniel Bryant: They've been doing the PaaS for Wasm for a while, haven't they?

Matt Turner: Maybe, yes, I'm thinking of a particular tech there. I think Docker can do it natively now. So instead of giving it an OCI container containing machine code, I can give it a Wasm bundle and run that as a container. I get the same setup: we still set up namespaces and cgroups, but instead of forking a process, it just runs a Wasm runtime. So I can use Wasm as a packaging format like that, and I can also use it to embed code in other code for extensions, like the gateway extensions you were talking about.

Daniel Bryant: Totally.

Matt Turner: Envoy, and by extension Istio and Envoy Gateway, are totally extensible with Wasm. If you're in Golang, which is the cloud-native language of choice, I can compile Golang to Wasm. It works; it's not perfect. But then I can write Go and embed it in a C++ program or anything else that hosts Wasm. And conversely, if I've got a Go program, I can use the wazero library by Tetrate. Open source, full disclosure; well, more than full disclosure, it's open source. And then I can write TypeScript or whatever, compile that to Wasm, and embed it. So you've got that interoperability in every direction. I always do a demo with Istio showing how extensible it is, and a lot of folks do this. One of my colleagues does it with a screen's worth of TypeScript: compile it, and you've added a feature to Envoy. I do exactly the same thing with a screen's worth of Rust: compile it, and it just works. So it ticks all the boxes for me. It has the performance characteristics, and folks have thought about the developer experience, the packaging, how we run it, how we operationalize it: proper server-side code, including running natively and running well in an orchestration environment without technically even being containerized; embedding in other processes; and extending things like a Spring Boot module framework or an ESB, but hopefully done better.

And I can also make command-line programs as well. So how long before we get a Wasm co-processor, native Wasm instructions? We're going completely off topic here, this is just historical apocrypha, but remember the Jazelle extensions to ARM? Like AVX or SSE, ARM processors had an extension, a set of instructions that was basically a hardware implementation of the JVM, so they could run Java in an accelerated way. Basically saying: this is so ubiquitous, you know what, we're gonna give you hardware acceleration for it. Interesting. Now, it wasn't all of the bytecode, because JVM bytecode is a very high-level thing. It assumes it's memory-managed; there's a single op for, like, virtual dispatch, right? It's like a CISC architecture. Wasm, by contrast, is very low-level. It doesn't model things like virtual dispatch, but I think as a result it's probably much more amenable to being implemented in hardware. I'm making this up, I don't actually...

Daniel Bryant: No, it's interesting.

Matt Turner: I think if we ever see that, that's probably a...

Daniel Bryant: We are seeing that drive toward specialization, right? With, like, ASICs, that kind of stuff, right?

Matt Turner: Yes, or if I could get an FPGA. But, yes, sorry, very long-winded answer. I'm in favor of Wasm. We've seen the pattern before, but I think the developer experience and the operationalization of it are coming together in a much better way this time.

Daniel Bryant: I suppose we were gonna go off-topic, but it's super interesting. I'd almost forgotten about the LLVM stuff, but I remember we covered it a lot at InfoQ and QCon; it was a big deal for a while, but I never saw a community develop around it. And that's one thing I am seeing a little bit around Wasm: the community, right? You and I are sort of of the cloud-native era, and I've got to give the CNCF another shout-out for the community built around it. As we're building platforms, there is a community around these platforms. I wonder if there are going to be sub-communities around Wasm, and if there are, is that the driver that's finally needed? Because I do think, dialing back to API gateways: if we can write extension modules in whatever language we're comfortable with and compile them down to Wasm, it will get rid of that kind of dodgy Lua-script-type stuff that I'm sure we've all read, right? Do you know what I mean? So my loaded question, I guess, is: is the community the thing that makes or breaks this?

Matt Turner: I feel like it probably is. I think you need a critical mass, and if you have a critical mass, a community probably emerges. It's probably always going to emerge, unless your critical mass is defense or research institutions or one of those places, where you see these techs and they don't... You do have communities around Slurm or some HPC orchestrator within, you know, defense and academia and CERN, but I feel like it never really breaks out. But, yeah.

Daniel Bryant: It's a very tight community or a very well-funded community.

Matt Turner: Yeah, exactly. Well-funded, maybe, but with no incentive to go out and seek help from other folks. But I think we are seeing a community, insofar as there are multiple compiler projects and multiple runtimes on the back end. And there is now the CNCF, again, shout-out to them, to hopefully help bring all these folks together, run conferences, host projects, and provide test infrastructure. So, yeah, I would hope so. Maybe that is the difference. I don't know why the LLVM stuff never got a critical mass.

Daniel Bryant: No, you've definitely sparked the memory. I remember having many discussions around that exact time frame, but then it disappeared, right? And then Wasm popped up on my radar, particularly around some of the Envoy Wasm stuff I was looking at at the time, and I was like, "Oh." The dream of being able to write in any language and compile that extension; I was like, "Yes, this sounds very interesting." And now I'm seeing more of this community emerge, but to your point, time will tell, right?

Matt Turner: And that's why I do my examples in Rust: it's the anti-Lua. You probably wanna write TypeScript, or Zig, or something like that, realistically, but I'm gonna show you it can be done in a hardcore language.

Recommended talk: Intro to the Zig Programming Language • Andrew Kelley • GOTO 2022

Daniel Bryant: Interesting.

Matt Turner: And if you do run this in a tight loop, you are gonna get good performance, because it's just so far removed from Lua, which I'm not a fan of.

Daniel Bryant: I've had to learn Lua several times for various extensions, but, like, yeah, not a fan.

Mastering API Architectures

Matt Turner: I should give you the opportunity to talk about your book, which is just...I have to confess I haven't read it.

Daniel Bryant: I'll get you a copy, we're good.

Matt Turner: We'll do this again and I'll come better informed. Do you wanna tell me, and by proxy everybody else, what is the thesis of this book?

Daniel Bryant: Yeah, so we started just as the pandemic emerged, so it's been quite a therapeutic project for myself, James Gough, and Matt Auburn; the three of us came together. It's focused on mastering API architecture; that's the title, right? It was actually born in New York in February 2020, just as the pandemic was unfortunately emerging. Myself, Jim, and Matt Auburn had been to the O'Reilly Software Architecture conference, where they were talking about API gateways, and so was I. And we realized that the knowledge we thought was obvious was not obvious at all. Common sense is anything but common, right? Do you know what I mean? We were saying, "Hey, you clearly need to think about extensibility of gateways, or think about security posture," and a lot of people were saying, "This is great knowledge. Could you go a bit deeper?" We were like, "Oh, we've struck a bit of a seam of gold as we're chipping away here, right?"

We reached out to O'Reilly and pitched them, and they said, "Oh, API architecture is a hot topic at the moment." And cloud native and Kubernetes were super popular too, and we were like, "Oh, we can cover all those things," right? So the title is Mastering API Architecture, but it turned into a kind of cloud-native communication book, if you like. We talk a lot about how you would migrate from the traditional world into the cloud, using the API as the lens for how to do that. Do you know what I mean? Because we see a lot of folks doing digital transformations, and that word is super overloaded, to be honest, but all those words are super overloaded. Oftentimes they're being API-driven: their business is moving to offering some of its services via an API, that kind of thing. And they're often moving to the cloud and embracing DevOps principles. So all those things are often tied into how you manage your APIs and your cloud-native communications.

So the book covers it soup to nuts, all the way through. We talk about some of the high-level concepts around designing APIs, thinking about REST versus Protobuf, or gRPC versus GraphQL; I'm sort of mixing my metaphors there, mashing up the different levels, but we get people thinking: these are your options, these are the trade-offs you're gonna be making. Then we talk about testing. I wrote the chapters around the API gateway, naturally, and the service mesh, naturally, getting people thinking about how to go about evaluating the best technology solutions and choices. Towards the end of the book, we talk a lot about migration: migrating towards an API architecture, or something like microservices. We tried to avoid that word quite a lot in the book, because Sam Newman has got that one covered; he's done a fantastic job, with two or three amazing books, and we didn't wanna write Sam's book again. So we say service-oriented, with an API, commonly called microservices these days. Then we talked about the migration towards that, the migration towards cloud tech. And we did try to get a couple of chapters in there around security, because you and I are constantly talking about the importance of security, right? I'll be honest and say that as a developer, when I first started out, I didn't think much about security or performance, and as my career progressed, I totally did. So we're trying to put that on people's radar: think about the -ilities, the security, the observability, the performance, that kind of stuff. And, yeah, good feedback so far. We launched it in, I think, November last year, and got the hard copies around then. And the journey has just begun: once you put a book out there, you get all the feedback. You're an idiot. You're brilliant.

Matt Turner: V2 will be perfect.

Daniel Bryant: That's it.

Matt Turner: Like every piece of software I write. And version two will be absolutely flawless.

Daniel Bryant: We're looking at that, actually. We're thinking of doing some training courses and things, because, obviously, the world is now opening back up. With the first book I wrote, with my buddy Abraham Marin-Perez, we created a whole bunch of other material, promoted it, and shared more knowledge; whereas the state of the world around the whole release of "Mastering API Architecture" has meant it's been a bit more challenging getting the word out there. So thank you, I appreciate you asking me about it, because I'm obviously keen to get it out there, right?

Matt Turner: Well, I mean, of course, and I'm happy to, but, yeah, I think it's a great topic, and I actually wanted to learn a little about it as well. I've given a talk on this, though not at GOTO; why haven't I given it at GOTO yet? I think I've only just written it. Anyway, it's essentially about how, as you say, you think this knowledge is common, and I find it isn't. I joined a company once, a really good one actually; everybody was super high quality, but they knew their own areas. And I kind of turned up and built them this whole thing. I used the Buf tooling, which I really like. I won't put you on the spot for your opinion, because I know you're in this area and you've still got to play in that sandbox. But I personally really like the Buf tooling, and I built this whole flow where folks would do interface-driven design, like we used to in a good monolith, or contract-driven design or whatever you want to call it. You'd say, "I want to build this microservice." Okay, show me the API design. We used proto or OpenAPI, but you just had to give some kind of IDL spec for it. That would go past all the principal engineers or whatever and get signed off. Then you'd check it in, and a whole lot of machinery would kick in, mostly using the Buf stuff. You'd end up in a schema registry so that other folks could find it and say, "Oh, there's that service being offered, and these are the methods on it."

Here's how I call it. Some of them were REST, some of them were RPC; again, we were agnostic to that, but here's how I interact with it. And then the machinery would just go and build you a bunch of client libraries, so that if you wanted to consume the service, whatever language you were in, you would just go get, or pip install, or whatever, the client stub for this service. That client code was quite thick in the first instance, because the company didn't have a service mesh, and the idea was that we could later shave it down and make it thin, and nobody would notice when the service mesh started doing the retries and the timeouts instead. And the other really nice thing that came off the back of it, which I hadn't heard of anybody doing before, was around breaking changes. We did SemVer on the APIs, so when you wanted to make a breaking change, you had to call your API V2, and your package would then get a major version number bump.

Daniel Bryant: Oh, interesting.

Matt Turner: And the consumers would have to take that major version bump; Dependabot would PR it for them. So we could then use the security apparatus we had in place, the dependency-scanning stuff, to say, "I wanna deprecate V1 of my API. Who's using it?" And you don't have to sniff network traffic, because we had to...

Daniel Bryant: Because you could see who's bringing it in.

Matt Turner: This was a financial institution, right? So a lot of stuff would only run once a year, for, like, end-of-year reporting. It wasn't safe to assume version one was no longer being called until you'd given it 12 months of literally sniffing all the packets. Whereas now: does anybody still consume V1 of this client stub package? Okay, I'll go talk to them. So I built all that, and again, I thought this was common knowledge. I thought, oh, they haven't done it yet because they just had other engineering priorities. Then I sat down with one of the engineering directors one day and explained it, and they were like, "That would be such a big lever to pull." So, you know, I love that kind of stuff. I think it's such a great developer experience. I think that is platform engineering, even if it's not a platform as such; it's dev enablement or whatever you wanna call it.
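The versioning scheme Matt describes, where the major version lives in the API package itself so a breaking change mints a new package rather than mutating the old one, can be sketched in Protobuf. All names here (`payments`, `Charge`) are hypothetical; Buf's `buf breaking` check is one way to enforce that a published `v1` stays frozen:

```protobuf
// payments/v1/payments.proto (hypothetical service).
// The major version is part of the package name, so a breaking
// change means publishing payments.v2 alongside this, and the
// generated client stubs get a new major version for Dependabot
// (or similar tooling) to surface to every consumer.
syntax = "proto3";

package payments.v1;

service Payments {
  rpc Charge(ChargeRequest) returns (ChargeResponse);
}

message ChargeRequest {
  string account_id = 1;
  int64 amount_cents = 2;
}

message ChargeResponse {
  string transaction_id = 1;
}
```

With the version in the package path, "who still uses V1?" becomes a dependency query over consumers' manifests instead of twelve months of packet sniffing.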

Daniel Bryant: I do like "dev enablement," because I think people sort of mash the two topics together, developer experience and dev enablement. And to the point we made earlier, people have been doing these things for a long time, but now we're putting labels on them. What you described there is solid architecture principles and good team enablement, but people don't always take that bigger picture. They think, "My team is doing okay," whether that's an operational team or a service team; they're thinking locally. But what you did there, Matt, is take the global perspective, right? Consumer-driven contracts, or contract-based testing: you've taken that step back. And unless you've been there before, or worked as a contractor or consultant where you've pattern-matched, you don't always see the need to do that. To that contractor or consultant, it all seems obvious, common knowledge. But when you're in your own little world, and you've worked there for 10 years, and you haven't poked your head up because it's been so busy, then someone else coming in with these ideas is like, "Oh, wow, that's revolutionary, right?"

Matt Turner: But it's super... Again, we don't want to make the 1% sound like an elitist thing. So how do folks keep up with this? Because you can't know everything; I certainly don't know everything. How do folks get themselves armed with these good ideas?

Daniel Bryant: Buy the book, obviously, Matt.

Matt Turner: I was gonna say, yeah, buy the book. Read the good books. Again, I won't offer opinions, but there are certain publishers who have a fairly high barrier to entry. Go to conferences, or watch the videos.

Daniel Bryant: That's it.

Matt Turner: What would you add? I've done the easy ones. What would you... InfoQ.

Daniel Bryant: I think it's about carving out time to learn, isn't it? Because initially, with the second book, I was thinking: do we write a book? Do people even read books these days? Because that's how I learned Java, from a book, back 20 years ago. And now some of the folks I'm mentoring, the younger generation, are like, I watch videos on YouTube or Pluralsight or, you know, take your pick. So people still read books, but a good chunk of folks, particularly what I call the 99% folks, the people getting stuff done, are, to your point, almost too busy to go to conferences sometimes, right? So jump on YouTube, and GOTO has a fantastic channel, as an example, with short-form and long-form content. Carve out a lunch-and-learn once a week, or put some time aside to watch these things and read the books, because, to your point, they're generally higher-quality, editorial content. You just have to make time, right? And then InfoQ is a good place if you want signposts, jumping-off points; The New Stack is pretty good, and there are some other sites as well. It just keeps you current: you keep seeing a word all the time, consumer-driven contracts or Wasm, and you don't have to be an expert, but just have a little look at it. Like, oh, actually, this consumer-driven contract thing, we've been trying to do this, but we didn't know it was a thing.

Matt Turner: That's the name for it.

Daniel Bryant: Exactly.

Matt Turner: And now I know what to punch into Google to find the best...the state of the art.

Daniel Bryant: That's it.

Matt Turner: And actually, I think the ThoughtWorks Tech Radar takes a lot of that effort out of it for you. Have a look at the terms.

Daniel Bryant: That's a great shout-out.

Matt Turner: If you don't recognize one. I think it's about being T-shaped, right? How do I know a little bit about everything, so I know whether something is maybe the right solution to look at? The conference talk videos, shorts, YouTube channels; I'll shout out my friend Dave Flanagan, his YouTube channel is fantastic stuff, and there's loads of it. And then if something does become your day job, if you do need to dive in... I feel like we've plugged ourselves so much, right? I've done some LinkedIn Learning courses, you know, Udemy and whatever, or a book. I think you need to know a broad set of things.

Daniel Bryant: I agree.

Matt Turner: As you say. And with time, you've seen them all before and you go, "Oh, Kubernetes, that's just Erlang." But you have to know the patterns and be able to match them, and you can't do that without being a consultant and seeing a thousand businesses over 10 years. And then you can dive in, I think,

Daniel Bryant: That's it.

Matt Turner: Through going to a particular conference on a particular subject. You know, my first KubeCon was literally, "We've heard of this Kubernetes thing. Hey Matt, work out whether this is the right thing to use at this company." And I just went to the first one.

Daniel Bryant: So many folks did that.

Matt Turner: It was so small, it just felt like a meetup. I ended up going up to Tim Hockin, like, "Excuse me, can you explain this part of the system so I know where I should use it?" And then the rest is history with my career, because I was like, "Actually, I like this so much, I want this to be my full-time gig, not my side gig. I'm gonna leave my software engineering job and come do this full-time." But for most folks, it's a tool. You understand it, you get it done, you move on.

Daniel Bryant: But the opportunities come to pass sometimes, don't they?

Matt Turner: Sometimes, yeah. Sometimes the opportunities present themselves. I mean, that's just an anecdote from me; I don't want folks to overfit to that kind of story. But if you do want to dive into something, the materials are available.

Daniel Bryant: Fantastic.

Matt Turner: Cool, so I think we've been told to move on. I just heard a round of applause from the talk in the next room. They might try to put that up as well. But, now, that was fascinating, Daniel, thanks very much.

Daniel Bryant: Thanks for the chat, Matt. It's always good to chat to you. Now, we've got it on camera. We shared some of our knowledge as well.

Matt Turner: Exactly, yeah, I learned a bunch. I'm gonna go and re-watch this, honestly. Re-listen to all the things you said, because I was busy trying to think of what I was gonna say next, so I'm gonna re-listen to all the stuff you were trying to teach me. No, thank you very much. It was a pleasure.

Daniel Bryant: Appreciate it.