What should the modern software engineer know in order to be the best at their job? Dave Farley and Steve Smith explore the books that can help engineers succeed and why iteration and experiments are crucial. The discussion is centered around Dave’s latest book “Modern Software Engineering.”
This episode is sponsored by Harness
Harness is the only end-to-end platform for complete software delivery. It provides a simple, safe and secure way for engineering and DevOps teams to release applications into production. Try Harness for free.
Steve Smith: Hi, I'm Steve Smith and I'm going to interview Dave Farley for The GOTO Book Club, about his book, "The Fundamentals of Software Engineering." I was fortunate to work with Dave between 2000 and 2010. Dave won't like me saying this, but it was really kind of transformative for me, not just learning how software engineering works, but also how to manage people, how to be a parent, all kinds of things, and I'm always really grateful to Dave for what he's done to help me in my career. So, when the GOTO folks said we'd like to talk to Dave about his book, I kind of jumped at the chance.
We'll get into why I think it's such an important book later on, but for the moment, Dave, do you want to say who you are, in case someone's been under a rock and doesn't know who you are?
Dave Farley: Hi, everyone. My name is Dave Farley. I'm a software developer and consultant. As Steve said, we worked together on a fantastic project, building one of the world's highest-performance financial exchanges, which was a lot of fun. I also run a YouTube channel, which is surprisingly, to me at least, successful, and I write books. Sorry to correct you, Steve, you misspoke the title. It's "Modern Software Engineering."
Steve Smith: Oh, I'm sorry. Not off to a very good start. So, you actually still call yourself, "I'm a software developer," heh? Because I stopped calling myself that a while ago. I felt like a fraud.
Dave Farley: Well, I might be a fraud, but I do enough of it. I was writing code yesterday, so I think it counts.
Software engineering books that shook our foundation
Steve Smith: I can't remember... I'm waiting for my kids to get into coding, and then I think I can call myself a developer again. The thing that really strikes me about the book, ever since I saw the early drafts of it, is we don't get books like this anymore. I remember when I started out in the early noughties, when we first met in 2000 and started together at LMAX, then called Tradefair, there were loads of books like this around. There were Kent Beck's XP books, which were getting on a little bit by 2008 but were still really popular, there were the Michael Feathers books, the Alistair Cockburn books, the Mary Poppendieck books, loads of books looking at a really high level at what the nature of engineering is, what the nature of collaboration is. I actually met Alistair Cockburn, I think for the first time, a year and a half ago, and the first thing I said was thanks very much for the tiger book, and he was delighted somebody remembered it. He just went, "I'm amazed somebody remembers that," and I was like, but that was such a big deal. That book steered my thinking so much about how people work together.
I think this book is the same: it will steer people's direction and how they approach engineering. So, it just feels strange. There's this book, and there's the Ron Jeffries Nature book in the last five years, ten years... I mean, the Ron book was 2014. But do you have any idea... what are your thoughts on why these books aren't as common as they used to be?
Dave Farley: I don't know the answer to why they're not as common. I would agree with you that I don't see them as being as common. And I think they're important. It's an important style of book, I think. I recently did... sorry, I'm advertising my YouTube channel again. But I recently did a video on my YouTube channel about books, before Christmas, and I listed my top five and talked about a bunch of other books. One of the things that I noticed as I was going through was that all of the books I was recommending were technology-specific. I think that technology-specific books have a relatively short utility; they have a short time horizon. If I'm honest, they're not the things that interest me very much, you know, how to call this API, or use this particular framework, or whatever else. That's the sort of stuff I'm going to learn online rather than buy a book for, on the whole, these days, or, you know, play with the tech. I think books are about ideas, and so those are the kinds of books that appeal to me.
One of my favorite books, my top one in my list, was "Domain-Driven Design," which is a book that talks about design in the abstract, and doesn't talk about it in the context of a particular technology. I think one of the missteps that we as an industry have taken, in misunderstanding what software development is about, is to focus too heavily on the tools. Carpenters, yes, they can wield their tools with skill, but their job isn't the wielding of tools, their job is the production of chairs or doors or windows or whatever else it is. Our job is not programming, or programming tools, or programming frameworks. I am not a .NET programmer, I am not a Java programmer, I am not a C++ programmer, I'm a software developer, and my job is to achieve some outcome with software, and you choose the tool to meet the problem, rather than choose the problem to meet the tool. And so, the books that, to me, go deeper are the books that talk about that part of the problem: the problem solving, how we organize ourselves, how we structure our work in ways that are going to allow us to do a better job into the future. Those are the things that interest me, and certainly, that's what I tried to talk about in my book.
Steve Smith: See, it's funny you mentioned The Blue Book by Eric Evans, because I remember reading that and loving it. I remember at LMAX we were given a copy and told, you really need to understand this book. And yet, I still see so many companies where there isn't domain-driven design, not just in technology, but also in how they organize teams. It's almost like that book needs a new version. Maybe we should just phone Eric and say give it another try.
Dave Farley: Yes.
Steve Smith: Actually, I might do that. I know that when I wrote my last book, "Measuring Continuous Delivery," I chose an abstract idea and went very narrow, much narrower than you have. But a few of the reviewers and some readers came back and said, can you tell us how to implement this? Like, this isn't technical enough, almost. One person actually wrote to me and said, "Can you just tell me about the product I need to do this," and I was like, there is no... there is no product.
Dave Farley: Yes.
Modern software engineering in practice
Steve Smith: I know, but I had to add a bit at the end to actually put in some buzzwords because people were moaning. So, have you had any feedback from people saying enough fluff, how does this actually work for me when I'm trying to sort out my NPM dependency horror?
Dave Farley: Not quite, no. So, I haven't really had feedback like that. The only negative feedback that I've had so far in reviews and stuff is that it's a bit iterative. It cycles around, because all of the ideas at the heart of the book are deeply interlinked, so it's difficult to present them in a straight line. My book divides the problem up into two primary pieces. I think that to do a great job of software engineering, we as a profession need to become experts at learning, and so we need the tools of learning at our fingertips. And we also need to be experts at managing the complexities of the systems that we build, because the systems that we build these days are too big to hold in their entirety in our heads. We need to be good at both of those things. And the trouble is that those tools are pretty fundamental. They're things like iteration, incrementalism, feedback, modularity, cohesion, coupling, and those sorts of things, and these are all deeply interlinked ideas. So, the book cycles around and talks about the ways in which these interact with one another, and the dimensions of how we can apply them, both to the organizational structures that we create to allow us to develop software, but principally to the systems that we build to implement the features. But I think that's the thing.
But it is a book of ideas rather than technology. You're not going to be able, at the end of this, to use React better than you could before. Well, actually, I think you could. I think you would be able to use any technology better after you read this book, if you understand what it is that I'm talking about. I use the principles in this book to influence the way that I write CSS, you know? I mean, this is pervasive across nearly everything that I do in terms of managing information, I think.
Steve Smith: Well, I use these when I'm writing, like, a marketing guide.
Dave Farley: Yes
Steve Smith: Or when I'm writing a book or a blog or whatever, or when I'm just, I don't know, doing home stuff? Optimize for learning and manage complexity: those are two pretty effective goals, I think. The optimize-for-learning thing, I've spoken to so many companies about it so many times, and it's really hard, because what you often see, in the last two years less so, is a company will go and buy some books, including this one, probably, put them on a shelf in the office, and go, there you go, go learn, you know? And then there's no time invested in bringing people together. The classic one that John Osborn talks about is after there's a production incident in a company, people should get together, talk about it, ask questions, and learn from it.
Dave Farley: Yes.
Steve Smith: That doesn't happen. It gets written in a document, nobody reads the document, and no one learns anything from what could be, you know, a huge amount of information to decode, if you put the time in. But also just working iteratively, it's one of those things where if you don't do it, it's hard to understand, and once you do it, it's hard to imagine not doing it anymore. So, how do you find explaining something that you've done... that I know you've done for so many years? Because I find it quite hard, like saying to people, have you tried breathing, you know? Working iteratively, to me, is so natural, it's hard for me to explain things that are so natural.
Dave Farley: I think there are lots of different dimensions to all of these ideas, but if you think about iteration, the different definitions of iteration talk about repeating a set of steps to steer you towards some specific goal. I think there are two aspects to iteration. If you want to get to some kind of target, some goal, whatever that might be, it could be a commercial goal in terms of the value that your software brings, it could be the number of users that you have, or it could be some kind of technical goal, you want it to be this fast, or this easy to work on, whatever it is, you're not going to achieve any of those sorts of things in one great step unless you're incredibly lucky, you know? That's just ridiculous, really, to imagine that. So you're going to carry out a series of steps, you're going to navigate towards it.
To be able to use iteration, you want to get really, really good at those repeating parts. In software, we're using tests to get feedback so that we can try stuff, and we can change stuff quickly and efficiently, and use continuous delivery deployment pipelines to get that feedback so that we can understand where we are at any point, those sorts of things.
But then what you need is you need a fitness function. How do I measure whether I'm closer or further from my goal? If you have those two things, if you have the ability to iterate, and a fitness function that tells you whether after each small step you're closer or further from your goal, you can hit the target. Because even if you just start off in a random direction, you don't have to have perfect knowledge, you don't have to understand how you get to the destination. You try something out, you say does that get me closer or further away from my goal, and you discard the steps that get you further away, and you keep the steps that get you closer, and ultimately, you'll end up, you know, at the destination. That's how evolution works. That's how science works. So, there are some deep reasons why these sorts of things matter. You can't discard the idea of iteration, and I don't think that you can achieve anything complex in its absence.
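The iterate-plus-fitness-function idea Dave describes can be sketched in a few lines of code. This is a toy illustration, not anything from the book: the numeric "goal" and the function names are invented, and the fitness function here is simply distance from the target.

```python
import random

random.seed(7)  # fixed seed so the demonstration is repeatable

def iterate_towards(goal, start, fitness, steps=1000):
    """Naive iterative search: try a small random step, and keep it
    only if the fitness function says it moved us closer to the goal."""
    current = start
    for _ in range(steps):
        candidate = current + random.uniform(-1.0, 1.0)  # try something
        if fitness(candidate, goal) < fitness(current, goal):
            current = candidate  # keep steps that get us closer
        # steps that move us further away are simply discarded
    return current

# The fitness function: distance from the goal (smaller is better).
def distance(value, goal):
    return abs(goal - value)

result = iterate_towards(goal=42.0, start=0.0, fitness=distance)
```

Even starting with no plan and taking random steps, discarding the ones that score worse is enough to converge on the target, which is the point Dave makes about evolution and science.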
The other thing that I would say, sort of to counter that: what's the opposite, what's the alternative to iteration? The alternative is what we tried for way too long, and are still not able to successfully discard, which is a Waterfall approach.
A Waterfall approach starts off with the assumption that you can understand everything in sufficient detail to form a plan to get you to a destination, to a target. That means that, at some level, it puts a limit on the complexity of the system that you can build, because you've got to understand it in sufficient detail at the start to know whether the plan is going to get you to the target. An iterative approach doesn't give you that constraint. Elon Musk is currently building spaceships in Texas to take us to Mars, and they don't know how to do that. They still don't know how to do all of the things that they need to do to achieve that. He starts off with the vaguest of goals, and after each change, he asks, does this get me closer to being able to get people to live on Mars or not? Yes, fundamentally.
Steve Smith: I was thinking about what you said about Waterfall there. I think it's fun to dump on Waterfall, but I guess the opposite of iteration, for me, it's really like that foolish pursuit of perfection.
Dave Farley: Yes.
Steve Smith: It's that idiotic idea that we can be finished? I saw, again, the bridge analogy today in a government report that said, you know, we're not building bridges, we're building technology services, and I find it hard to believe in 2021 that I'm... 2022, whoops. I find it hard to believe that we're still having that ridiculous comparison. The idea that software is ever feature-complete is nonsense. Either you're trying to learn, or... and eventually, you might retire that service, and then you can truly consider it done, or you're dying as a business because you aren't finding out what your users want. You're just assuming, you know, we can be feature-complete, and now I'll hand it over to ops forever to run it into the ground because they've got no time or money for it to actually do anything with it.
Dave Farley: Yes.
Steve Smith: But one thing I see that's really painful for delivery teams, when they're asked to hand something over to another team so they can work on something else, is they've been iterating on it for so long, and then they're guessing: this is where the iteration ends, this is where the learning ends. We're assuming there's no more learning to be had. We've built it perfectly, it's all defect-free, and of course it's what users want. We haven't asked them, but I'm sure they'll like it.
Dave Farley: Yes. That's part of the need for iteration: you need to iterate on your product direction and so on as well. Part of the reason why technical practices like continuous delivery and deployment pipelines are so valuable is because of what they give to the business. They give the business the ability to be a continuous delivery business, and so to experiment with business ideas and product ideas. That's the real value that we are bringing. If we can work so our software is always in a releasable state, that allows us to make a change, and change direction, and say, oh, that was a bad idea, we're going to stop doing that, or, this unexpectedly looks like a really great idea, so let's do more of that. We can observe those kinds of things, we can take those learning opportunities, and structure our work to achieve them. Which is kind of what I mean by working experimentally. There are a number of different aspects to working experimentally. If you want to work experimentally, you've got to say what your hypothesis is, you've got to make a prediction from your hypothesis, you've got to figure out what feedback to gather, you know, to measure, to understand. You've got to control the variables so that you can understand the results of your experiments.
So, we can apply those sorts of ideas to product design, as well as to the software design that we're trying to achieve, and we can learn from those things too. These sorts of ideas become deeply ingrained, but at their roots they are, I think, deeply profound in terms of their meaning. The stuff I was saying about starting out without really knowing the answer or the destination is really the difference. It seems to me, without sounding too grandiose, kind of the fire that lit the Enlightenment: the idea of stopping relying on authority and expertise, on rules handed down from on high, and starting out by assuming that you don't know the answer, being suspicious of your own ideas, and testing them, validating them, and learning from them.
Steve Smith: Well, I'm up for half an hour talking about the Enlightenment. That'd be great. I think it really comes back, though, to being humble, and learning, and thinking, actually, maybe we don't know everything. Maybe the person telling me all this stuff doesn't know everything. I mean, when you talk about having a hypothesis before making a change, it's really profound. For so many people, it's just, I'm going to blaze in, I know what to do...
You really do have that career thing where you start out and you're like I know nothing. Then halfway through your career as a software developer, at least to me, I was like, I know everything, and then by the time I went to LMAX, I was like, I know absolutely nothing. Then when I left LMAX, I was like, okay, I know some stuff now, and what I don't know, I know how to go and find out.
Dave Farley: Yes.
Steve Smith: That's the thing that when I'm interviewing people, for example, if they don't say, "I don't know," at any point, I get a bit worried. If they say, "I don't know, and I know how to find out," then I'm really excited, and I think, oh, that's exactly what I wanted to hear.
Dave Farley: Yes.
The experimental software engineer
Steve Smith: I know you talk a lot about TDD as an example of the scientific method, and, you know, obviously it's really effective, it works really well. One thing that I don't think has ever taken off as much, which is a real shame, is acceptance test-driven development: the idea of a developer or a business analyst sitting down and writing a functional test upfront that fails because you haven't built the thing yet. Then for two weeks, that test is red or ignored, and once you've finished the feature, it goes green. That was such a powerful tool we used at LMAX, so I used it at other companies afterward, like Sky, and HMRC, I think we used it there. It just amazes me that it never quite took off. BDD and Specification by Example tried, I suppose. But the idea of, I'm going to start out this new feature work for the next week or so by writing down the change it will have for our business, the new functionality that will be delivered, and then I'm going to gradually break it down with TDD, work on it iteratively, keep building it, keep releasing it... it's just a shame that that never took off in the way TDD did. I don't know why it didn't, really. It's a shame.
Dave Farley: Yes. I think this fits into this kind of engineering practice. If you want to work experimentally, then, as we've said, you want to be able to form a hypothesis. You want to build your model for what it is that you believe, so you're going to have some kind of idea of the next thing that you want to do, whatever that might be, the next step that you want to take in your software or your product. You're going to come up with an experiment. You're going to make a prediction: if I make this change, then I will see these results. If I'm writing software that allows me to buy books on Amazon, then I should be able to find a book, put it in my shopping cart, go to the checkout, and pay for the book. So, I could write a specification in those terms that defines the desired outcome that I want to achieve, and then I can do the work underneath until that specification is met, that outcome is achieved, and we've got our results. We've passed, our experiment has been successful, and we've got acceptance test-driven development in the way that you just described. It's one of the things that lands most profoundly with my client base too, in terms of coaching people to do a better job. I've been doing this in complex environments for a while now, and it just works. It just works more effectively than anything else.
I think if you get to the philosophy of what's going on here, the reason why it works is because what we're doing is that we've just been a little bit more cautious, and we're thinking about this as an experiment. Each change to our product, each change to our technology, each change to our organization or our culture we're going to carry out as a little experiment, and we're going to think, well, what does this mean? Where would we like to get to, what's the step that we'd like to take? How would we understand that step, how would we control the variables to see whether we've made that step or not, and so on?
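Dave's Amazon example, write the outcome first, then do the work underneath until it is met, might look something like the sketch below. The `Bookstore` class is a made-up stand-in so the example runs; in real acceptance test-driven development the test would be written before any of it exists, and would stay red until the feature is built.

```python
# Acceptance test written first, in the language of the desired outcome:
# "I can find a book, put it in my cart, check out, and pay."

class Bookstore:
    """Hypothetical implementation built *after* the test below."""
    def __init__(self, catalogue):
        self.catalogue = catalogue   # maps title -> price
        self.cart = []

    def find_book(self, title):
        return title if title in self.catalogue else None

    def add_to_cart(self, title):
        self.cart.append(title)

    def checkout(self):
        return sum(self.catalogue[t] for t in self.cart)  # total due


def test_customer_can_buy_a_book():
    store = Bookstore({"Modern Software Engineering": 30.0})
    book = store.find_book("Modern Software Engineering")
    assert book is not None          # I can find the book
    store.add_to_cart(book)
    assert store.checkout() == 30.0  # and pay the right total

test_customer_can_buy_a_book()
```

The test never mentions databases, HTTP, or UI widgets; it states the business outcome, which is what lets it stay stable while the implementation underneath churns.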
Steve Smith: Yes. No, that makes sense to me. I think, also, the size of the feedback loop that you can create is really key. With TDD and with ATDD, you can really drive down that feedback loop. With TDD you can get feedback in seconds; with ATDD, maybe a minute or less. But there are some cases where you can't achieve, you know, as small a feedback loop as you would like.
Let's take the GOTO Book Club as an example. I love getting to go to conferences and speak there, but I know from experience running conferences myself that you only get feedback once a year, so it's really hard to figure out what people actually want. I'd be interested in what ways we can try to get around feedback loops that can't be made smaller.
It's a bit like chaos days, another attempt to drive down that feedback loop between incidents, right? You just simulate an incident to speed up the feedback. Simulating an entire conference seems like a bit much to do...
One thing I actually do with that large annual feedback loop myself is I cheat. I only listen to feedback from people who know conferences well enough. They understand the constraint of a year; they understand you can only do so much in a year. Like, what techniques have you used to try to drive down feedback when there's a hard constraint preventing you from getting feedback in seconds or minutes?
Dave Farley: I think this is one of the many places where creativity matters. One of the conversations that I get a bit frustrated with sometimes, when talking about these sorts of ideas, is that I start talking about engineering, and people start to imagine bureaucracy. You were talking about bridge-building earlier on; people imagine that bridge-building is some kind of cookie-cutter, Gantt-chart-driven, you know, bureaucratic process. It couldn't be further from the truth. Particularly if you're building the first-ever bridge of a kind, it's going to be iterative and experimental, it's going to be driven by feedback, and you're going to be learning while you're doing it. I would argue quite the converse: that one of the most creative acts that human beings undertake is engineering. If you think about engineering in the context, I don't know, of building the Curiosity rover that landed on Mars, or Elon Musk with his spaceship, or Tesla with their cars, or Amazon, you know, building the public cloud, these are creative feats that are pretty much unprecedented.
They work within the boundaries and constraints of the technical practicalities of physics or computer science to make these things work, but there's certainly an act of creation. So, if you want to approach these things from a creative point of view, then you want that fast feedback.
If I value the speed of feedback highly enough, what am I prepared to do to get it? What am I prepared to sacrifice to get it? That's part of what I'm talking about in my book: I think we should be valuing these ideas so highly that they drive our decision-making. So, as you say, feedback is vitally important. Let's imagine the difficult circumstances. One of the areas where I'm doing a lot of work at the moment is with organizations that are building medical devices, and those organizations are constrained by a regulatory regime. If they're the kind of devices that can kill people, the regulatory regime doesn't allow them to release a change unless it's been evaluated by an external third party for six months. That doesn't sound very conducive to continuous delivery. So, how do you cope with that? Well, continuous delivery is actually not about releasing often, it's about being releasable all of the time. So, you optimize so that your software is releasable all the time, and then you cheat, and you find other ways in which you're able to release. You might be able to release into a non-clinical setting, into a university that's training people, you know, on dummies or something like that, where you can try out these ideas.
In the example of the conference that you were talking about, you try to find different ways of, you know, faking not another whole conference, but a part of it. You could maybe gather feedback on individual talks or speakers; you could get different kinds of feedback at different levels of granularity. If you're building a car, or a truck, or any hardware-driven device, you probably want to do a lot of testing and simulation, because that's going to give you much more opportunity to cycle around, iterate more quickly, and get feedback more quickly. One of the innovations at the heart of the first Mac, when Apple was building it, was one of the first large-scale commercial uses of application-specific integrated circuits, ASICs. They did that as a conscious design decision, because it meant that they could iterate faster on the firmware for the first Apple Mac. So, these things are important to engineering, and then it's just a matter of creativity about the experiments that you can come up with to get that fast feedback, to iterate more quickly. And then, you know, you start being innovative about the way in which you deal with the realities of the circumstance that you're in.
Steve Smith: I was just thinking about, years ago, I spoke to somebody from Comic Relief in the UK, a good cause where once a year they have the big TV event to raise money for charity, so their website traffic is pretty much flat and then has a short, sharp peak. The guy I spoke to was saying, how do we do continuous delivery here, and I was saying to him, why don't you run Comic Relief internally once a month, so that the once-a-year event is the real deal? Get people into the mindset of, we've got the event coming up, we've got to have the monitoring in place. I don't know if they actually did it or not. I thought it was a good idea.
Dave Farley: If you remember, when we built our exchange in the early days, you're not allowed to... the financial regulators don't allow you to release half an exchange. They frown on that kind of thing, because it's other people's money. So when we built ours, if you remember, what did we do every Friday afternoon? We took the afternoon off, and everybody in the company played at trading on the latest version of the system, with fake money.
Steve Smith: I remember that.
Dave Farley: We learned loads from those little experiments.
Steve Smith: I remember the tester cleaned up. He made all the money every time, and he had a really good understanding of the business and the technology. I had forgotten about that.
Dave Farley: Yes.
Managing complexity in the cloud age
Steve Smith: Man, he wiped me out every time. All right, let's talk a bit more about managing complexity, if we can, because one thing that's happened with the cloud that's really good is that a lot of the complexity in the tools we use has been taken away from us, which is wonderful, and we can focus more on higher-order functions. Yet what I see time and again now is almost an offset, where people are building and championing ever more complicated applications that turn out to be more complex than anyone expected.
The easy example of that is Kubernetes. In 2017, when it came out, I remember saying, this thing's great. It's way better than the competitors, and it solves a load of problems with us having to hand-crank container orchestration. Then, after a year or two, my colleagues and I at Equal Experts were like, wow, this thing is taking a lot of effort to operate in the wild, at scale. And then it obviously moved into AWS, and we thought, this is great, now our problems are solved. After a few months, there was still a lot of unintentional complexity creeping out. When you're at scale, with, like, 40 teams and everyone messing with YAML, the amount of BAU work people are doing seems to be increasing compared to the old days, which is astonishing to me. But what are your thoughts on using the principles you've described around managing complexity in the cloud age, where we're running ever more complicated stuff, and someone's just running it for us, but the details of it continue to leak into the business logic we're actually trying to implement?
Dave Farley: I don't think that's just a problem of the cloud age. I think that's a function of a problem that we've had in our industry for probably longer than that, which is... I think we've already talked about it, but we get over-fixated on the technology and the tools, rather than the outcomes. I was chatting to a mutual friend of ours recently, Martin Thompson, and he said one of the games he plays with his clients, when they're looking at improving the performance of a high-performance system, is to try to spot the business problem in the profile. Because nearly all of the time spent by the software is in the accidental complexity of the gubbins that surrounds the problem we're trying to solve.
Steve Smith: That's really interesting... sorry. That reminds me of something you taught me: inside every technical story, there's a business story trying to get out. Something that comes back time and time again.
Dave Farley: Yes.
Steve Smith: In the performance profile, there's a business problem.
Dave Farley: Usually, in performance terms, the time spent actually performing business transactions is so tiny that you can't see it in a profile; it's dwarfed by the time spent logging or, you know, persisting stuff, or whatever else it is.
There are other ways of organizing software where you can avoid some of these problems, and the cloud starts to help us do some of those things. So, the tools that I talk about in my book for managing complexity are modularity, cohesion, separation of concerns, abstraction, and coupling. With modularity, we want to be able to divide the system up into pieces that we can deal with more independently of one another. Fundamentally, all of those things are about being able to change one part of the system without it impacting other parts. I would argue that that's a reasonable definition of what design is for, you know? The design is not for much else than that. It's to allow us to continue to be able to make progress, to work incrementally over time, and so on.
So if we're talking about modularity, one of the greatest tools, in my opinion, is the separation of concerns. You want to try and make sure that each part of your system is focused on doing one thing. If you're writing some code that allows you to buy a book and also stores the book in the database, that's wrong. That's two things. You want to separate those two things out and deal with them more independently. When you start to drive your system to make it testable, deployable, and so on, I think it starts to push you to create code that is more modular, better composed. I had a funny experience while writing the book. There are code examples that demonstrate the ideas of managing complexity throughout the book, and at one point, I wanted to consciously write some bad code, as a demonstrator, so I could talk about it and point out why it was bad. So, I started off writing the code the way that I always start writing code, with test-driven development, and I couldn't write code that was bad enough to make the points that I wanted to make doing test-driven development. It was impossible. If it was testable, I'd already fixed some of the problems that I was trying to demonstrate. I had to stop working the way that I was and work instead in a different way.
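The "buy a book" versus "store it" split can be sketched in a few lines. This is an illustrative Python sketch, not code from Dave's book; the names (`purchase_book`, `InMemoryStore`) are made up. The purchase decision is one concern, persistence is another, and because the business logic only sees an abstract store, it can be tested without a database:

```python
class InMemoryStore:
    """Storage concern: it persists purchase records, and nothing else."""

    def __init__(self):
        self.records = []

    def save(self, record):
        self.records.append(record)

def purchase_book(title, price, funds, store):
    """Business concern: decide whether the purchase can happen.

    Persistence is delegated to whatever `store` is passed in, so this
    logic can be tested with an in-memory fake and no database at all."""
    if funds < price:
        return False
    store.save({"title": title, "price": price})
    return True

store = InMemoryStore()
assert purchase_book("Modern Software Engineering", 30, 50, store) is True
assert purchase_book("Continuous Delivery", 40, 10, store) is False
assert store.records == [{"title": "Modern Software Engineering", "price": 30}]
```

Swapping `InMemoryStore` for a real database adapter changes nothing in `purchase_book`, which is exactly the independence the separation buys you.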
I think that you're right, the accidental complexity intrudes. These days, I want to try and push the accidental complexity to the edges of my system, and I'm going to try to architect and design the system to do that. Using automated testing to drive the design of my system, at both the big scale and the small scale, tends to force me to do that, because those are the tricky parts to test as well. I mean, the actual core of the logic is easy to test; how it interacts with something like Kubernetes is a bit trickier to test.
Steve Smith: Yeah, no, I'm just thinking of adapters as an architecture, and I'm also thinking about how LMAX, we had a really, really hard divide between the business logic and messaging.
Dave Farley: Yes.
Steve Smith: You couldn't import messaging code into the domain package, because that was kept so pure. That was really true DDD. If you wanted to use messaging, you had to go through a translation somewhere else. That kind of enabling constraint forced you to think about how you decouple, how you separate the business logic from the messaging logic. It was really powerful. The book I consistently recommend to everyone in the street, whether they work in IT or not, is Dr. Nicole Forsgren's book "Accelerate." Knowing continuous delivery pretty well, I thought, but there was one surprise in Nicole's book for me, which was that the single biggest predictor of continuous delivery success was loosely coupled teams and loosely coupled services. Given what we know, it seems ludicrous that it surprised me, but I thought that the single biggest predictor would be continuous integration, everyone checking into mainline at least once a day. I thought that would be the thing that everyone hones in on. But the fact that it's the way that teams are set up, the way services are set up, I think is a testament to the power of separation of concerns.
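That kind of enabling constraint can even be enforced mechanically. Below is a hypothetical Python sketch, not the actual LMAX tooling (which was Java, with custom build rules and Checkstyle): a check that fails if any file under a domain directory imports from a forbidden infrastructure package. The package names are made up for illustration:

```python
import ast
import pathlib

# The package that domain code must never import; made up for this sketch.
FORBIDDEN_PREFIX = "messaging"

def imports_of(source: str):
    """Return every module name imported by a piece of Python source."""
    names = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.append(node.module)
    return names

def check_domain_purity(domain_dir):
    """List (file, import) pairs where domain code reaches into messaging."""
    violations = []
    for path in pathlib.Path(domain_dir).rglob("*.py"):
        for name in imports_of(path.read_text()):
            if name.split(".")[0] == FORBIDDEN_PREFIX:
                violations.append((str(path), name))
    return violations

assert imports_of("import messaging.bus\nimport os") == ["messaging.bus", "os"]
```

Run as part of the build, a check like this turns an architectural convention into a constraint that can't silently erode.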
One thing I consistently see at companies is that when they want to introduce a new feature into a product, it has to go through multiple teams, you know? And people will say to me, oh, well, there are dependency problems, how do we get these things to work better together? And they don't like my answer, which is: have you considered doing some DDD, and deciding that's actually one team over there, they will own this one thing within a boundary, and this other team does something entirely different?
Dave Farley: I think that touches on several problems. One of the reasons that I wrote this book is that I think engineering is an important idea. We've grown this odd relationship with engineering in our discipline, and we've talked a little bit about it. It seems to me that, on one hand, we assume that engineering means a kind of bureaucracy and heavyweight processes, and on the other, we assume that engineering just means writing code. Neither of those things is true. Engineering in other disciplines is basically just the stuff that works. If you're not able to produce better software faster, whatever you're doing isn't engineering. The stuff that I'm talking about is the stuff that allows us to build better software faster.
Now, one of the things that you just alluded to from the "Accelerate" book, that Forsgren and colleagues talk about: one of the other measures of success is the independence of those teams.
Steve Smith: Yes.
Dave Farley: The ability of those teams to make progress without depending on other parts of the organization, so autonomy of the teams. What we're talking about here is building a development organization that is modular, that's cohesive, where the bits that are close together are encapsulated within a team. That means you need to apply the thinking of separation of concerns, so you design the team to be an information-processing unit that's more independent of other information-processing units. And so, the architecture of our system is mirrored in the organization in a more profound, deeper way than I think we sometimes realize, deeper even than Conway's Law.
Steve Smith: Yes.
Dave Farley: At the root, there are some really, I think, profound ideas. You know, when we talked about optimizing feedback earlier on, feedback is vitally important. The speed and efficiency of feedback is the way that space rockets balance on the thrust of their engines; it's like balancing a broom, you know? That's a feedback-driven approach. The modularity of our teams is one of the tools that we can use to make that progress. So, these are the tools that we can use to build better software faster. And if we use them as guidelines in whatever it is that we do, whether we're designing our teams or defining what the boundaries for a team are, you know, what piece of work they're focused on and so on, these are the tools that allow us to do those things, I think.
How to apply separation of concerns to a team
Steve Smith: Yes. When I look at modularity, cohesion, and separation of concerns, I'm pretty sure I can apply that to team design, as well as service design.
Dave Farley: Yes.
Steve Smith: One of the other signals I look for in an organization is whether their teams have really stupid names, like we're the Jupiter team, or we're the Bananas team, or... what did I see the other day? Something really dumb. Teams named after cheeses, you know? That suggests to me straight away that your team does not own a service, it is not modular, it does not have sole ownership of a thing that is decoupled from the wider world. As a result, you have, I don't know, four or ten teams all owning chunks of the same thing. You have to give them ridiculous names because they have no identity. You can't say, well, this business function, you own this, you're empowered to deal with this, you're accountable for this.
As a result, companies will say to me, "It's really hard for us to scale up to, like, twenty teams, Steve," and I'm sat there thinking, could you get two teams done right first? Can you just go down to one, and then we'll start again, and gradually move all the other teams over? I mean, how often do you see it where there are many teams in the company, and the company is like, "Dave, we've got to have more teams, we've got to go faster," and you're like, but you can't, because you're all so tightly coupled? Adding more teams, things are just going to break louder and quicker.
Dave Farley: Yes. This is a systems problem. One of the many reasons why I think engineering is so important is that the stuff that we do is bloody difficult. Writing software at scale, particularly with a big team, is an incredibly difficult thing to do and to organize. We've got to marshal these armies of intelligent people, solving complex problems at scale, and all of the pieces need to work together. It's an incredibly difficult problem, sometimes at the limits of human ingenuity and creativity, I think. So, we need to use the tools that work. We need to be able to...
Steve Smith: It is very difficult, but if there's one thing that everyone who reads the book takes away, hopefully, it's that we need to stop shooting ourselves in the foot. Because you're shooting yourself in the foot straightaway by saying, I don't know, we're going to run everything in VMs on EC2 so we stay cloud-neutral, so we're not locked into a cloud provider. Then you've taken on so much extra complexity, you'll be spending weeks on BAU just making all this bloody stuff work. When you could just take any provider, pick one of the big three, I don't care which one, and say, you just do all of it for me, and I'm going to think really hard about my business, and how to set up my teams and services for success. Teams don't even get going because they've already... What was that phrase in British politics years ago? "Our bat was broken before we went out to bat."
Dave Farley: Yes. I think that's incredibly common. And part of that is just falling back onto old patterns of doing things. I'll quote a famous article that Fred Brooks, the author of "The Mythical Man-Month," wrote a long time ago, in 1986, called "No Silver Bullet," in which he made the claim that there are no 10X improvements in software development that you could make in process or technology. There's nothing that's going to give you a tenfold improvement in productivity or quality. I think it's fair that there might not be any silver bullets, but I think that there are mud bullets. There's stuff that we know doesn't work that we just keep rolling out. We know that Waterfall doesn't work, we know that Gantt charts don't work, we know that building teams with hundreds and hundreds of people all working on exactly the same thing doesn't work. We know these things. We know that automated testing works better. There are things that we know. So if you are world-class at software development, there's no 10X for you. There is no step that's going to give you a 10-times improvement. But there are loads of things that most teams do all of the time where I could give them 10X like that. If they follow my advice, there are loads of things where you can get more than a 10-times improvement.
Steve Smith: Oh, sure, but there's no one thing. Brooks continues to be right all these years later, right? There was no and is no one thing. You wouldn't recommend one thing to them, right, you'd recommend a whole bunch of things to them.
Dave Farley: I would recommend a whole bunch of things, but for many teams, if they're doing badly, I could give them a very small number of things that would give them a 10X improvement. If you're not using version control, you're going to get a 10-times improvement by adding version control.
Steve Smith: Yes, that is the classic that Dr. Forsgren has talked about for so many years, which is great. She consistently finds companies that aren't storing code in version control, let alone config. It's a great reminder that the people you meet and talk to in IT aren't the people in the worst situations, because the people in the worst situations are in basements somewhere, with no sun, with someone hollering and yelling at them, and they're like, can we please have Subversion? Can we please? No, you can't.
Dave Farley: Yes. Yes. Very occasionally, I still meet organizations or at least teams like that. So, there's a lot of stuff that we know... So, one of the things I talk about in the book is I don't think as an industry we've been very good at discarding the bad ideas, we just accrete new ideas. We just kind of build this ever-expanding collection of ideas, and we don't throw away the crap ones. That's partly because we don't have a model for what it is that we're doing.
Rolling the wheel of time
Steve Smith: Yes, and there's also the ongoing joke about relearning. I'm old enough now to see people doing stuff, and I'm like, I was doing that in 2004.
Dave Farley: Yes.
Steve Smith: Definitely. When I see the fixation on Git as the version control tool, and I say to people there'll be a thing after it that we'll all move to, people look at me as if I'm cuckoo. They're like, no, no, no, this is it. This is it. "You said that about the last one." "We said Subversion was it." "Oh, but we wanted it to be decentralized, Steve." I'm like, but we've all centralized on GitHub anyway, so what was the point in that? All we've got now is cheaper branching, which was a crap idea anyway.
Dave Farley: I saw a funny video on YouTube yesterday, as it happens, which was lauding the benefits of monorepos. Which I think is fair enough. I think there are benefits to monorepos, you choose those. But it just amused me, though that's another one of those cycles that we've gone around.
Steve Smith: Oh, yes.
Dave Farley: So, I'm definitely starting to sound like a grumpy old minger, we should...
Steve Smith: No, I mean, we did monorepo at LMAX...
Dave Farley: Yes.
Steve Smith: It worked well. But it was me and a Canadian guy, Derrick Lewis, working on some really clever custom build stuff, lots of clever package import rules, and Checkstyle, I think, or whatever replaced it, I can't remember. It was a long time ago now. We did do a lot of custom stuff around it. We only had two teams, three at one point. If we were going to scale up, it would have broken. The only companies that have monorepos working well, like Google, have invested so much time and money in that custom tooling to make it work that even when they try to make the tools open source, they don't work. Of course they don't, because there's so much context wrapped up in them, and years of organizational toil tied into them. I can't think of any situation where I'd recommend a monorepo to anyone now, because I wouldn't trust them to do it well, and also because of the time you have to put in to make it work. Plus, multirepo does force separation of concerns. With a monorepo, it's so much easier to get separation of concerns wrong.
Dave Farley: Yes. I will have to convince you over a beer sometime because I don't agree with you on that.
Steve Smith: You can try. You won't persuade me of that. That ain't happening. "But Steve, you know, it will be really good..." No, no, it won't. You're going to cock it up. It's too easy to cock up.
If I put a gun to your head and said there's one tool you'd recommend to people, and it's not a version control tool, what would you actually recommend now? Because people are going to want to hear something, Dave. They're going to want to hear Kotlin or something come out. You're not going to say Java, I know that, but you could recommend Trisha's book. No, what's the one thing... And don't say IntelliJ.
Dave Farley: I do still use IntelliJ, but I wouldn't say one tool. I can't say one tool, because I don't believe this is primarily about tools. I've been privileged to work with some genuinely great software developers during the course of my career, some of them famous, some of them not. My observation is that if you gave one of those people a task to do in a language or a toolset that they'd never seen before, they'd still do a good job. It's not about the tools. The tools are important, but you can do a good job with any tools, and you can do a bad job with any tools. The tools don't define success; it's the way in which you wield them, and the way in which you ultimately understand and solve the problems that we're trying to solve. Wielding tools better is not, in itself, the answer. I think that we can make progress with the tools, and we have made progress with the tools. You mentioned IntelliJ. I like IntelliJ because it was the first IDE that really started supporting refactoring. It did that in a fine-grained, simple way that allowed refactoring to become completely pervasive in the way that I work.
One of the attributes of the way that I work, and I talk about it in the book, is the importance of making progress in small steps. That's part of this iteration and feedback thing: you must make progress in small steps to increase the rate of feedback and iteration. Tools that allow you to do that kind of thing are important. There are bad tools, for sure, but there isn't some tool that's going to make you a better software developer.
Steve Smith: I know. I know. I wanted to wave the red rag because it's fun. But I agree. I do think that in the last 10 years or so, people have come chasing me for a magic word, and when I tell them there isn't one, either they feel like I've let them down, or they think I'm keeping a secret from them. I'm like, there's no secret handshake... you know? You're right, you got me, we're keeping it from you. I don't know the first thing about .NET, but given enough time, I'd do a pretty good job of it, because I know how to test, I know how to think, I know how to experiment. I wouldn't be the person to contact if you want it done in a day, but if you give me a few months, it will be kind of okay.
I guess when I think about a tool, I think about the thing that came before it. So with Git, it did fix some things that Subversion was a bit of a pain with. But then, two years ago, I went to meet Google about a big thing. We were talking about Kubernetes, and they started talking about GKE. I think they expected me to get down on my knees and say, thank you, Google, for GKE, and I was kind of like, thanks for Google Docs, it's amazing. The thing before Google Docs was terrible. They seemed almost alarmed that a practitioner, a developer... maybe still a developer, was thinking G Suite was better than GKE. It's the one thing I recommend to everyone, because the tool before it sucked so much. I think tools just kind of disappoint me now. Something like IntelliJ is still really good, but even something like Kotlin has its flaws. I wanted to love Kotlin, and it didn't quite happen. Never mind. There'll be another language tomorrow, Dave. They keep coming.
Dave Farley: Yes. It'll be the same as the languages that were invented in the '50s because all of them work, pretty much.
What’s next after Modern Software Engineering?
Steve Smith: I remember a couple of years ago, you were talking about wanting to do a book on this, and I'm really excited it happened. What happens next? Now you've done your big, floaty, ooh-ahh kind of book, the one where you step back, look at everything, and show the fundamentals behind it, what comes after this? Because you've done the big thing, right? You've done the really big book of the kind we used to get in the noughties. What are you going to build on top of this?
Dave Farley: The reason that I wrote the book is that I hope the ideas in it help people to structure their learning in other parts of the discipline, and help them to build better systems. I genuinely believe that if you adopt the kind of approach that I describe in the book, you end up with better software, and you deal with it more efficiently. I think that's true. So, as you say, I am increasingly moving away from writing software, because I don't write software professionally anymore, really. When I do write software, I'm doing it to demonstrate ideas or to communicate. I'm starting to think of myself in a similar light to science communicators. I'm not practicing science anymore; I'm talking about it, describing it...
Steve Smith: Which is equally important.
Dave Farley: I think it's an important thing that we need. We need those sorts of people, too, so I'm doing that. I want to try and help people to understand some of these ideas in a way that is genuinely helpful to them. But the windmill that I'm tilting at is this: people talk about post-agile all of the time, as though agile was a busted flush, and I think that's a mistake. I think that agile was a necessary and essential step away from the kind of plan-driven approaches that we were misguidedly trying to apply. But it's not enough. It's rather like comparing Newtonian mechanics with general relativity. General relativity explains everything that Newtonian mechanics was describing; Newtonian mechanics was correct within its frame of reference. I would say the same about agile, but what comes after it? I think that engineering is the next step. I'm not a big fan of the craft idea. I think it gave us some important steps forward. As you've said, looking back at what came before, the Software Craftsmanship idea is partially correct, but not correct enough. The reason it's partially correct, I think, is that underneath there's this practical, pragmatic, informal application of some scientific ideas. If we strengthen that scientific rationalism a little bit, and focus on it a little more clearly, I think we get more, and we get better.
I want to try and communicate that. So, the windmill that I'm tilting at is that, ideally, I'd like people in universities to start structuring their courses around the sort of stuff that I'm talking about. Because when I was an employer of software developers, if I'm honest, I couldn't see the difference between somebody that studied computer science or software engineering and somebody that studied physics or chemistry, in terms of their effectiveness as a software developer. My impression at the time was that I would probably be happier taking a physicist, because they're probably better at problem-solving than somebody that did a CS degree. Either way, you'd have to brainwash them to work in the ways that you wanted them to work in your organization. At the moment, our educational establishments are not turning out people that are effectively trained to do great jobs. I'm not saying that the people are bad, I'm not saying anything like that, but we're teaching them the wrong things, because we are too focused on tools and technologies and so on, and not enough on problem-solving and those kinds of ideas, I think.
So, those are the ideas that interest me. I don't know whether I've got another book in me. I'm in that post-book period at the moment, where I'm thinking...
Steve Smith: And that stuff.
Dave Farley: Yes, I'm pleased to have written it, and I'm not ready to start anything new at the moment. But one day I might.
Steve Smith: I can't remember who said it: "I hate writing, I enjoy having written," which is definitely true. It's phenomenally hard to keep this many big ideas in your head in one book. I think it's astonishing. It is a lot of work. I think people don't realize, you know? But I'm going to resist the Craftsmanship red rag you deliberately shoved in front of me, because there are people in the GOTO Book Club that like Craftsmanship, and...
Dave Farley: I'm sure there are.
Steve Smith: I remember Dan North saying craftsmen are romantics with big egos, which I completely agree with. Like, craftsmanship: please, let's make the code even prettier. I'm stopping... no, no, no, we're not going to do that. No. All right, what time are we up to? Let's maybe wrap up with something before somebody waves at me. I'm not going to mention DevOps, because that would be too easy. That'd be another one where people would be throwing stones at me next time I go to a GOTO conference. Okay, so how about this: if we just think about the time between the "Continuous Delivery" book and this book, because they're the two big books of yours, though you did do a deployment pipelines book on Leanpub, which I thought was awesome. Aside from something easy like the cloud era beginning, what big new ideas came out between the two books that really surprised you? I'll give an example. Somebody at work said to me two years ago, "Who would have known that You Build It You Run It was the hill you would die on, Steve?" And now I'm so far into it that I find it's almost a super weapon that solves a lot of the problems I see around engineering and delivery of software. I would never have imagined that that was the thing I would really home in on, the way John Allspaw has homed in on incidents as a tool for learning. What thing for you has emerged between the two books in our industry where you think, that's way more important than I thought it would be, and it does an enormous amount of good? Don't say nothing, Dave.
Dave Farley: No, I wouldn't say nothing. I think we do make progress, though I don't think we make as much progress as we often think we do. It's usually smaller steps, and occasionally there's a big step. The one that you called out is the obvious one, which is the cloud, and the principal idea in the cloud that I think is important, and there's more to do here, it's not done enough yet, is to separate accidental and essential complexity. There are some interesting things on the horizon moving forwards. I think some of the big ideas have been around for a long time and have not landed yet. One of those, to me, is actor-based systems. LMAX, the exchange where we met and worked together, was a form of actor-based system. And, you know, telecoms exchanges are built with actors, and they are incredibly durable, and they work. It's been kind of a minority sport for a long time, but I think that its time is coming. I'm talking about next steps rather than what's happening now. I think that might be a big step, because more than anything else that I know, it gives you the capability of separating the accidental complexity from the essential complexity, so it's a natural fit for the cloud, in my view. I think that's coming.
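For readers who haven't met the idea, here is a toy Python sketch of the actor model, nothing like a production runtime such as Erlang/OTP or Akka, which add supervision, distribution, and durability. Each actor owns its state privately and is driven solely by messages from a mailbox, so no locks are needed around the state itself:

```python
import queue
import threading

class Counter:
    """A toy actor: private state, mutated only by processing its mailbox."""

    def __init__(self):
        self.mailbox = queue.Queue()   # messages in, strictly FIFO
        self.results = queue.Queue()   # replies out
        self.count = 0                 # state no other thread touches
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # Single consumer thread: messages are processed one at a time,
        # so self.count needs no locking.
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                break
            if msg == "inc":
                self.count += 1
            elif msg == "get":
                self.results.put(self.count)

    def send(self, msg):
        self.mailbox.put(msg)

actor = Counter()
for _ in range(5):
    actor.send("inc")
actor.send("get")
assert actor.results.get() == 5  # "get" is processed after all the "inc"s
actor.send("stop")
```

Because the mailbox is a FIFO queue consumed by a single thread, the "get" message is guaranteed to be processed after all five "inc" messages, which is the kind of ordering property LMAX-style systems exploit.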
In terms of looking back, it's hard for me to pick out the things that mattered, because I suppose I'm unusual in that I'm not a big adopter of lots of different technologies. I tend to use a relatively small technology set, and do quite a lot of work myself on some things. I was recording a video for my YouTube channel about an open source developer who sabotaged his own work. I would never be exposed to that kind of change, because I probably wouldn't be using that much small open source stuff, you know? The stuff I was using would be bigger, and would be more tested, and all those kinds of things. So, what are the big steps? I don't really care very much about the gradual evolution; the syntactic evolution of languages is not a huge deal. I think functional programming coming more to the fore, and minimizing side effects, is probably one of the big ideas. But it's not really new in that time frame; it's been around for longer than that.
Steve Smith: Well, we can't go into containers, because Jez Humble famously said to me...
Dave Farley: Yeah. I mean, I think that the tooling around continuous delivery, deployment, and infrastructure as code has dramatically improved. And maybe the biggest change, the thing that I think is important, is that we now assume, on the whole, for most kinds of tech, that things like version control, and the ability to automate those things, are important. That was something that Jez and I certainly complained about in the "Continuous Delivery" book; we thought vendors didn't do a good enough job on those things. Some still don't, but most are doing better there.
Steve Smith: Yeah, and infrastructure as code has come a huge way, and I think that came out of developers getting more access to operational concerns.
Dave Farley: Yeah.
Steve Smith: Okay, shall we wrap up? What's the... what's kind of the one message you'd like people to take from this book, Dave? I think I know what it is, but it's better if you say it than me.
Dave Farley: The one message that I'd like people to take away from this book is that what we do is a creative act, and that engineering is the best support for that creativity. I don't think we think of it like that very often, but I think it's true. If you wanted one practice that people should take from this, it's working in small steps.
Steve Smith: Yeah.
Dave Farley: You know, make progress in small steps, and optimize everything to be able to sustain and support your ability to do that.
Steve Smith: So, engineering as an engine for creativity, I like that.
Dave Farley: Yes.
Steve Smith: That's nice, yeah, thinking about the creativity engine. All right, cool. Okay, well, it's goodbye from me, and it's... this sounds like I'm on that British TV show. Goodbye from me.
Dave Farley: Thank you very much.
Steve Smith: Thanks.