Continuous Delivery, Microservices & Serverless in 10 Minutes

Updated on May 25, 2021
8 min read

Continuous delivery has been around for more than 15 years, but it’s only gained wider adoption in recent years. In this Unscripted interview, Nicki Watt and Ken Mugrage chat with Preben Thorø about the evolution of CD and how it ties in with recent developments in the software architecture space. Find out when you should use CD along with its connection to microservices, serverless, machine learning and graph theory.

Listen to this episode on:

Apple Podcasts | Google Podcasts | Spotify | Overcast | Pocket Casts


Intro

Preben Thorø: My name is Preben Thorø. I'm sitting here at GOTO Copenhagen, and I managed to bring together Nicki Watt and Ken Mugrage. We're here to talk a little bit about software improvement. Before we kick off, I would just ask the two of you if you could just quickly introduce yourself.

Nicki Watt: Sure. So my name is Nicki Watt. I'm the CTO at a company called OpenCredo. We help companies adapt and adopt emerging technologies to solve their business problems, working quite a bit in the distributed systems space. That's me.

Ken Mugrage: I'm Ken Mugrage. I'm a principal technologist with a company called ThoughtWorks, mostly working in continuous delivery and stuff for about the past decade.

Nicki Watt, Ken Mugrage & Preben Thorø interview

The evolution of Continuous Delivery

Preben Thorø: So, continuous delivery: I first heard about it 15 years ago, but I have a feeling it's only really taking off now. Why is that?

Ken Mugrage: A lot of it is the availability of infrastructure. Even though there was a white paper called "The Software Production Line" back at Agile 2006, we were still mostly deploying onto physical hardware or very expensive data centers. It wasn't easy to get that end-to-end flow, and there were still a lot of silos in organizations: "We own the hardware and you can't have it. You have to fill out a report that says why you need a virtual machine, then we'll set one up, and then you can continuously deploy to it." Whereas today, and for several years now, it's a lot easier to provision that hardware in the sky. I fly a lot and I've never seen anything in the cloud, but they tell me it's the cloud. A big part of it is simply that it's easier to get your hands on the equipment.

Recommended talk: GOTO 2017 • It’s Not Continuous Delivery If You Can’t Deploy Right Now • Ken Mugrage

Preben Thorø: So it required a change in the organization rather than in the technology.

Ken Mugrage: Oh, yes.

When should one consider continuous delivery?

Preben Thorø: That's interesting. Still, to apply this to a project, there must be some kind of overhead. It's work that you need to do. So how big should the project be in order for this to make sense?

Ken Mugrage: Yes, I actually contend that there is no lower bar: it makes sense as soon as you're building software, because continuous integration and continuous delivery are practices. They're not technologies that you add. So if you're creating new software and you start writing tests at the very beginning, that's just a good thing, because now you have tests.

If those tests are running and you know you're going to be deploying to a public cloud, for example, then you should be deploying to that public cloud as often as possible to make sure that you didn't break that process. So even the very smallest applications benefit. I do continuous delivery on my website: when I update a blog entry, it goes through a pipeline that deploys it.
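To give a sense of how small that can be, here is a minimal sketch of what such a pipeline might do, written as a plain Python script. It assumes a static site built with MkDocs and deployed to an S3 bucket; the commands, paths and bucket name are placeholders rather than Ken's actual setup.

```python
# Minimal, hypothetical continuous delivery script for a static site:
# run the tests, build the site, deploy it, and stop at the first failure.
# The commands and bucket name are placeholders, not a real configuration.
import subprocess
import sys

STAGES = [
    ["python", "-m", "pytest", "tests/"],                                    # fail fast if a test breaks
    ["mkdocs", "build"],                                                     # build the static site into ./site
    ["aws", "s3", "sync", "site/", "s3://example-blog-bucket", "--delete"],  # deploy
]

def run_pipeline() -> int:
    for stage in STAGES:
        print(f"--- running: {' '.join(stage)}")
        if subprocess.run(stage).returncode != 0:
            print(f"stage failed, stopping pipeline: {' '.join(stage)}")
            return 1
    print("deployed successfully")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

The point is not the specific tools but the shape: every change runs the same ordered checks, and a failure anywhere stops the deployment.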

How does Graph Theory fit into Continuous Delivery?

Preben Thorø: Nicki, you're working a lot with graph theory, graph databases. How does that fit into continuous delivery?

Recommended talk: GOTO 2020 • Discover the Power of Graph Databases • Jim Webber & Nicki Watt

Nicki Watt: One of the areas that we're looking at at the moment is how you can use graph theory in order to get some interesting insight into your microservice architecture. So we've worked on a few business-related projects, things like social networks and things like infrastructure networks. But you can actually take some of that insight and apply that to microservice architectures. 

One of the things we've done is to hook into some of the observability tools, things like Zipkin, and pull out information about which microservices exist within the system and who they're calling. Then we apply things like community detection algorithms to work out what your architecture actually looks like, which can give you insight that helps you improve what you have.
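As a rough illustration of that idea, the sketch below builds a call graph from caller/callee pairs, the kind of edges you could reduce Zipkin traces down to, and runs a community detection algorithm over it with networkx. The service names and edges are invented; this shows the shape of the technique, not OpenCredo's actual tooling.

```python
# Sketch: derive service "communities" from call edges that could come from
# tracing data (e.g. Zipkin spans reduced to caller -> callee pairs).
# The edges below are invented for illustration.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

call_edges = [
    ("frontend", "orders"), ("frontend", "catalogue"),
    ("orders", "payments"), ("orders", "inventory"),
    ("payments", "fraud-check"),
    ("catalogue", "search"), ("search", "catalogue-db"),
]

graph = nx.Graph()
graph.add_edges_from(call_edges)

# Community detection groups services that talk to each other more than they
# talk to the rest of the system, a hint at where the real seams are.
for i, community in enumerate(greedy_modularity_communities(graph)):
    print(f"community {i}: {sorted(community)}")
```

Clusters that cut across team boundaries, or single services that sit in the middle of every community, are the kind of signal Nicki describes feeding back into architectural decisions.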

Microservices & continuous delivery

Preben Thorø: We keep referring to microservices and microservice architecture. Have we been waiting for microservices to be the new thing before this made sense?

Nicki Watt: I think, in general, microservices are a complicated thing. It's very complicated, and it's getting more complicated as people actually build more services. People are struggling to understand how these things fit together, where some of the bottlenecks are, and things like that.

Recommended talk: GOTO 2019 • Explore your Microservices Architecture with Graph Theory & Network Science • Nicki Watt

I think tooling and different techniques are coming out more and more to try to help people do that, and this is one area to explore. It was probably possible before, but now that so many more people are doing it, the problem is more obvious, and we need to look at different ways of getting more insight.

Preben Thorø: So continuous delivery didn't wait for microservices?

Ken Mugrage: Not really. One of the things I was talking about today is that when we first started doing continuous delivery, it was still large monolithic systems. I don't want to suggest that monoliths are bad and microservices are good either, by the way. If you're starting that greenfield, brand-new project we talked about a moment ago, starting with a microservice-type architecture when you don't know the business context yet probably isn't the right thing to do. It's probably right to create a monolith first, figure out where your business lines are, and let that define your services.

So we've been doing continuous delivery of monoliths for quite some time. But there are projects of a certain size where, if the system itself takes an hour and a half to deploy, then that stage of your pipeline takes an hour and a half to run, and there's no way around that.

So getting fast feedback on whether a change broke the deployment became much more challenging. It also comes back to organizational structure: when people want smaller pieces of ownership, a monolith makes that harder. There are certainly ways to address it, and microservices are not the only one; there are also component-based architectures and others. Breaking things down can make them more complicated, but it doesn't have to, if that makes sense.

What comes after microservices & continuous delivery?

Preben Thorø: Yes, exactly. I have a feeling that microservices are, in many ways, just complicated, as with the things we're talking about here. If that is true, then I guess microservices are just one stop on a longer journey. So what comes after microservices?

Nicki Watt: That's a big question. I think at the moment there's a serverless versus microservices camp, a fight, well, not a fight, but a difference of opinion. I actually don't see it that way. I don't see it as either microservices or serverless. I do think that moving towards architectures where you use more managed services is the way to go. But serverless architectures, truly FaaS-based architectures, and microservices are different architectures, and they offer different ways to do things. So, in general, there is a move towards a more serverless, managed-service style of building components.

I don't think microservices are going to die completely. However, I think what people perceive a microservice to be is going to change. People have different ideas about size and all those types of things. As more of the problems surface, I think it will merge back into slightly bigger, but more sensible, entities, combined with serverless architectures as well.

Recommended talk: GOTO 2019 • Journeys To Cloud Native Architecture: Sun, Sea & Emergencies • Nicki Watt

Preben Thorø: So what comes after continuous delivery though?

Ken Mugrage: It’s funny because continuous delivery started "easy." You had a single thing that was going through a pipeline, and you might want to do some tests in parallel, but for the most part, it was a pretty straight path. Then we started adding more distributed-type systems where we didn't really know which components were talking to each other. So the pipelines actually got a lot more complex. 

What kind of testing are we going to do to make sure we can hit a service API that isn't even available during our testing? The pipelines got really, really complex. What's interesting, though, is that things are solidifying as we get a better understanding of what a microservice is. I heard an argument the other day that serverless is just a microservice with a different implementation: it's still a small thing that does one thing. I don't know.
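A common answer to that testing question is to stand in a stub for the dependency while the pipeline runs. Here is a minimal sketch using only Python's standard library; the endpoint, payload and client function are invented for illustration and don't refer to any specific tool mentioned in the conversation.

```python
# Sketch: test a client against a stub of a service API that isn't reachable
# from the pipeline. The endpoint and response are invented placeholders.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubPriceService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"sku": "abc-123", "price_cents": 4200}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep test output quiet.
        pass

def fetch_price(base_url: str, sku: str) -> int:
    # The "client under test"; in a real system this would live in your service.
    with urllib.request.urlopen(f"{base_url}/prices/{sku}") as resp:
        return json.loads(resp.read())["price_cents"]

def test_fetch_price_against_stub():
    server = HTTPServer(("127.0.0.1", 0), StubPriceService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        port = server.server_address[1]
        assert fetch_price(f"http://127.0.0.1:{port}", "abc-123") == 4200
    finally:
        server.shutdown()

if __name__ == "__main__":
    test_fetch_price_against_stub()
    print("stub-backed test passed")
```

Contract-testing tools take the same idea further by checking that the stub stays faithful to the real provider.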

But as these things get smaller and more self-contained, continuous delivery is actually getting easier. So we see the rise of software as a service for CI and CD, where we're back to simple, linear pipelines, because the small piece of code we're going to put onto serverless can be deployed in minutes or seconds, depending on how much testing you're doing.

The pipelines have actually gotten simpler, instead of the other way around, and I see that continuing. What I hope to see next, though, is pipelines learning from the kind of work Nicki was talking about. Once something gets to production and we have true observability, we learn things like, "Hey, we found we're doing something here that means we should be doing more compliance checks, or different checks, in the pipeline," or, "Hey, we don't need those checks, but we do need to..." and so on. So the pipeline gets treated as part of the system, and it evolves as we learn more about the system as a whole.
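To make that idea slightly more concrete, here is a deliberately simplified sketch of a pipeline plan that adds checks based on findings surfaced from production. The stage names and findings are invented, and no existing CI tool is implied; it only illustrates the feedback loop Ken describes.

```python
# Sketch: let findings from production observability influence which checks
# the pipeline runs. Stage names and finding names are invented placeholders.
BASE_STAGES = ["unit-tests", "build", "deploy-to-staging", "deploy-to-prod"]

# Hypothetical mapping from a production finding to an extra pipeline check.
EXTRA_CHECKS = {
    "handles-payment-data": "compliance-checks",
    "new-external-dependency": "contract-tests",
    "latency-regression-seen": "performance-tests",
}

def plan_pipeline(findings):
    """Insert extra checks before the production deploy, based on findings."""
    stages = list(BASE_STAGES)
    deploy_index = stages.index("deploy-to-prod")
    for finding in sorted(findings):
        check = EXTRA_CHECKS.get(finding)
        if check and check not in stages:
            stages.insert(deploy_index, check)
            deploy_index += 1
    return stages

if __name__ == "__main__":
    print(plan_pipeline({"handles-payment-data", "latency-regression-seen"}))
    # ['unit-tests', 'build', 'deploy-to-staging',
    #  'compliance-checks', 'performance-tests', 'deploy-to-prod']
```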

Preben Thorø: Will machine learning be part of that?

Recommended talk: GOTO 2019 • Modern Continuous Delivery • Ken Mugrage

Ken Mugrage: I think it could be.

Nicki Watt: Yes, it certainly could be. In terms of the talk I gave on graph theory and using it to analyze your architecture, there are certainly elements of machine learning that could do interesting things, in areas like grouping services and, based on patterns of how services evolve over time, understanding which areas might be interesting to look at. So in time, I think, quite probably, yes.

Preben Thorø: I'm looking forward to chapter two of this in one year. Thanks a lot for joining us today.

Ken Mugrage: Thank you.

Nicki Watt: Thanks very much. Thank you.


Related content

Temporal Modelling • GOTO Amsterdam 2019
Farley's Laws • GOTO Chicago 2018
Pragmatic Microservices for Organisational Scalability • GOTO Amsterdam 2017