Working Effectively with Legacy Code

Michael Feathers | Gotopia Bookclub Episode • April 2023

Legacy code has been one of the problems that developers worldwide have been trying to tackle for a long time. But what is legacy code and how can you learn from writing tests that give you more insights into the system and the code? Christian Clausen, author of “Five Lines of Code”, talks to Michael Feathers, author of “Working Effectively With Legacy Code”, about their shared passion for testing, refactoring, and solving real-life problems with the help of clean code.

Transcript

Get more out of your legacy systems: more performance, functionality, reliability, and manageability

Is your code easy to change? Can you get nearly instantaneous feedback when you do change it? Do you understand it? If the answer to any of these questions is no, you have legacy code, and it is draining time and money away from your development efforts.

In this book, Michael Feathers offers start-to-finish strategies for working more effectively with large, untested legacy code bases. This book draws on material Michael created for his renowned Object Mentor seminars: techniques Michael has used in mentoring to help hundreds of developers, technical managers, and testers bring their legacy systems under control. Enjoy an interview with him and Christian Clausen.

The topics covered include:

  • Understanding the mechanics of software change: adding features, fixing bugs, improving design, optimizing performance
  • Getting legacy code into a test harness
  • Writing tests that protect you against introducing new problems
  • Techniques that can be used with any language or platform—with examples in Java, C++, C, and C#
  • Accurately identifying where code changes need to be made
  • Coping with legacy systems that aren't object-oriented
  • Handling applications that don't seem to have any structure

This book also includes a catalog of twenty-four dependency-breaking techniques that help you work with program elements in isolation and make safer changes.

© Copyright Pearson Education


Intro

Christian Clausen: Hello, my name is Christian Clausen. I'm a technical agile coach and author of the refactoring book, "Five Lines of Code." I'm sitting here with Michael Feathers, the author of the book, "Working Effectively With Legacy Code," which has helped thousands of developers since it was published in 2004. Michael, what are you up to these days?

Michael Feathers: What am I up to these days? Lots of things. I'm a chief architect at Globant, but I also do training and consulting independently as well. Yeah, I think I've kinda, like, broadened my focus beyond legacy code in the past five or six years or so, but it's just been very fun. So I like to think about things at a deep level, and there are a lot of interesting things on the horizon now to play with. So that's pretty much what I'm up to.

Christian Clausen: Cool. You're not looking at legacy code anymore. Is that because you've fixed all of the legacy code in your vicinity?

Michael Feathers: No, no, no. I mean, people keep writing it all the time, right? So it's like, that's impossible. But, no, it's not that I'm not dealing with it. It's just, basically, I think my scope has expanded. I'm just looking at some other things also.

Christian Clausen: Yes.

Michael Feathers: I guess that's the best way to put it.

Christian Clausen: Okay. Well, that leads naturally to my first sort of question. If you were to write the book like today, is there anything about it you'd change?

Michael Feathers: Well, it's an interesting thing because I almost feel like the core of the book, the core ideas, are kind of like a little bit timeless in a way, and that's why I guess it still continues to sell very well now. But I think it would make more nods to current technology, sort of like it reaches the reader and says, "Hey, by the way, I know about this thing that you're dealing with right now," but still it's this kind of problem of refactoring and testing, that kind of thing. So it's more like, how do you approach the reader? That kind of thing.

There's a lot more now that I would say about how legacy code happens because that's been almost like the sideline pursuit I've had over the past 10, 15 years. It's kind of like you go around and you're helping people with all these problems and you're kind of like, okay, well, how do we avoid all these things, right? And Agile had some answers, as have several other things, but understanding the real mechanics of how it happens, and how you can really avoid it at an organizational and individual level, has been a lot of what I've been thinking about, and I think I would add that to the book. That and functional programming, because there wasn't very much at that point.

Will AI make refactoring obsolete?

Christian Clausen: Well, that seems interesting because, to me, it feels like legacy code is inevitable. It's just, when code ages, it becomes legacy. I think also we should probably deal with a question that is very popular at this time, very much in the media, and it's AI, and machine learning, and what effect that will have on code quality and legacy code in particular. Is refactoring gonna be outdated?

Michael Feathers: Well, it's an interesting thing about that. So I've been playing around with it a lot recently, and I don't know. I sense that we're safe for a little while in a way. I think there are a couple of different things that kind of go together with this. One is that we've had code generators for ages within software development, and there have always been people who say at some point, we'll just be able to generate the applications and we'll be set, right? But there are always these interesting things like, you know, what does the thing do well? How much effort is it to go and actually sort of hone in on the thing that you need?

And I guess, the other big thing with this too, is that code is still the representation language, right? We are generating code. We need to understand code well enough to be able to go and work with it and sort of like gauge the correctness of solutions because it's still... There's, like, the hit-and-miss thing with AI right now. It'll give you something that's good sometimes and other times doesn't compile, or it's kind of like off track because it's dealing with the ambiguity of your natural language, right? So I think we've got space in this.

But I think, in a way, prompting is just gonna become another form of programming, right? And then we have to kind of figure out what do we do to go and sort of like make the stable pieces we can generate and then the ones that require more attention. So we'll still need modularity at a very coarse-grained level in some way. Those are some thoughts about that, I guess, right?

Christian Clausen: Yes. Sort of like, it'll be a new type of REPL, like ChatGPT.

Michael Feathers: Yes.

Christian Clausen: Yes.

Michael Feathers: Well, you have to coax it and prompt it, right?

Christian Clausen: Yes. And you have to guide it. Like you have to learn to guide it and say, oh, I know it's gonna trip out over these things. And, like...

Michael Feathers: Right. So it becomes a programming language, which is a little bit more frustrating, in some sense, but can do more for you, I'm thinking. Which is kind of hard to imagine, right?

Christian Clausen: Yes. Because programming isn't frustrating at all. I haven't worked much with Copilot or anything like that, but it seems like people were also very into that when it came out. It was like, oh, now we won't have to do these simple things anymore, we can just ask Copilot to write them for us. And I mean, especially if you're working a lot with tests that are fairly simple or, at least, follow a similar structure to how they usually would, then Copilot would be more helpful. I don't know, have you tried Copilot?

Recommended talk: Learning Test-Driven Development • Saleem Siddiqui & Dave Farley • GOTO 2022

Michael Feathers: A little bit. Not as much as some of the more recent things. I think the thing is that everything with AI now seems to...well, not everything, but a lot of it right now just turns into this thing of kinda like generate and test in a way. So you are given things, and then you have to basically...it's really on your back to go and figure out whether it is the right thing or not, right? So there's this degree, there's like this sweet spot, I think, between, yay, I can generate a lot of things. It's kind of like at what scale do I generate things to go and see that they are on target for what I need to do?

I've used it to generate test cases. I've just been kind of happy with what it's produced in many cases. I think if nothing else, you get, like, enhanced ideation. There are certain things that you might not think of and you sort of become better yourself in using it because it sort of, like, introduces you to more possibilities, and that's nice.

Christian Clausen: That's pretty interesting. I've also seen people using, I think it's called procedural testing or something, more and more where they generate a lot of sample input or whatever, and then they verify against that, I think.

Michael Feathers: Like fuzzing in a sense.

Christian Clausen: Something like that. 

Michael Feathers: It's great for that sort of thing.

Christian Clausen: Yeah.

Michael Feathers: Cool things. Guess we'll see where it goes, right? And it's kind of funny, I'm supposing that years from now, you or I can take a look back at this and sort of say, "Ah, those guys, they didn't quite know," you know. It's hard to predict the future.

Christian Clausen: When Copilot takes over the world tomorrow or in a month.

Michael Feathers: Date stamp here? What's today's date? Today's the 20th or 21st?

Christian Clausen: 20th of March.

Michael Feathers: Date stamps for our options here.

What is legacy code?

Christian Clausen: Talking about testing, your book talks a lot about testing, and it follows this sort of Agile Alliance emphasis on TDD, stuff like that. So before we jump straight into testing, how do you characterize a legacy code base? How do you recognize if you're sitting with a legacy code base?

Michael Feathers: Well, I don't think there's any hard line with it. I think legacy is a subjective judgment that we make quite often based on the difficulty of understanding something that we're working with. A traditional definition is that it's code you got from somebody else, right? And at one point, I started throwing around the idea, it's like, well, maybe it's code without tests, because the way that you work in code without tests is qualitatively different from when you have the tests to kind of like serve as, like, a safety net for what you're doing, right?

It's really a subjective judgment. I think the main thing I keep coming back to more and more is to what degree you actually understand what you're working on, right? And if you have trouble understanding it and understanding what its behavior is, then you're really in trouble, right? So everything is about getting that understanding either through writing tests or reading all those different things. There's really a slippery slope on that. Things can kind of fall apart.

Christian Clausen: That's an interesting take because I would normally say that it's the level of confidence that you feel, and it's very close to what you're describing.

Michael Feathers: Yes.

Christian Clausen: The difference, I think, from my perspective, would be that I try hard not to have people read code because humans read code very slowly. And so the more of that I can skip, the better. And good method naming, for instance, and, like, having a good hierarchy in your code is a way to sort of eliminate a lot of the branches, hopefully eliminating a lot of the code. And then confidence actually comes from something other than understanding it. Like, do I trust that this works, even if I haven't looked at it?
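A minimal sketch of the kind of structure Christian is describing, with hypothetical names; the idea is that the call site reads on its own, so you can trust each step without descending into the bodies:

    import java.util.ArrayList;
    import java.util.List;

    class Cart {
        private final List<Integer> pricesInCents = new ArrayList<>();

        void add(int priceInCents) {
            pricesInCents.add(priceInCents);
        }

        int totalWithDiscount() {
            return applyDiscount(subtotal()); // readable at the call site alone
        }

        private int subtotal() {
            return pricesInCents.stream().mapToInt(Integer::intValue).sum();
        }

        private int applyDiscount(int cents) {
            return cents >= 10_000 ? cents * 90 / 100 : cents; // 10% off larger carts
        }
    }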

Michael Feathers: Fair enough. I think in the book, I kind of like nodded to that a bit because I was talking about this thing of, like, to what degree you can be surprised by what you find, or what the system does, right? And I think this is a general thing. It's kind of like when we make systems, it's kinda like there's the general thing that it does and it's like it should be pretty much unsurprising, right? So if you find something completely counterintuitive in the code, either behaviorally through a test or just through reading things, you might be thinking, "Well, what else is here," right? It's like are things really to the point where they're so irregular that I feel like I'm lost or that I could be kind of tripped up by anything that happens? So, fair. I think it is the quality of the code base rather than our understanding. So I agree.

Christian Clausen: Yes. A test is one way to sort of gain some of that confidence. You also mention something in the book which I absolutely love, because I also spend a whole chapter on it in my own book, and that is leaning on the compiler, as you call it. Like, the compiler is so powerful. Type systems have gotten so good, you can gain a lot of confidence if you know how to work with them correctly.

Michael Feathers: Yes, definitely. And I think it's funny with that too because, besides what I'm describing with the compiler, the other thing that I talk about a bit is deliberately introducing errors to find out more about your code, right? Which I basically see as another form of testing, it's just never really quite seen that way. But it's another level for gaining an understanding of what's going on in the system behaviorally. So, it's good stuff.
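A rough illustration of both ideas, with a hypothetical Invoice class: temporarily rename or delete a declaration, and the compile errors enumerate every dependent line, telling you about the structure before you change anything for real:

    class Invoice {
        private int totalInCents; // step 1: rename this (e.g., to totalInCentsX)...

        int withTax() {
            return totalInCents * 121 / 100; // step 2: ...and the compiler flags this
        }                                    // line, revealing the dependency

        int withoutTax() {
            return totalInCents;             // ...and this one
        }
    }

You then undo the deliberate error; the point is the information the compiler hands back, not the change itself.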

Christian Clausen: Yes. Definitely. And yours is the first formulation of TDD I've seen that includes making it compile. Because it seems to trip up a lot of people that red could also mean a compile error in the red-green-refactor.

Michael Feathers: No. And it's funny with that too because I think the very initial steps in TDD kind of happened in dynamically typed languages, but definitely, Kent Beck and people around him were working in Java at least at the time that he wrote the first book on this. But yeah, it's a piece of it, particularly for compiled languages.

What's kind of funny for me is that quite often I'm working with people on dynamically typed languages. I actually prefer dynamically typed languages in many circumstances, but there's also this thing of, like, you have way more affordances with compiled languages, different ways of interrogating the code base. So it's a trade-off, you know?

Christian Clausen: I'm a little bit unsurprised that you work well in dynamic languages because it seems like you have a lot of discipline in writing tests and stuff like that. Whereas I find my huge challenge when I'm working with dynamic languages is that I don't have a lot of testing discipline, I have a lot of typing discipline. And so, I'm very challenged in my confidence level when I'm working in dynamic languages.

Michael Feathers: Well, can I tell you a little story about this? Because it's a little story I like to repeat.

Christian Clausen: Of course.

Michael Feathers: I think I put it in a blog a long time ago, but back when I went to college, we had a computer lab, and you'd sit in there, and you'd have your terminal and you'd work on things. And we were programming in Pascal, right? A very early language that way. And I was sitting there and doing my assignment and everything was going fine, but I looked at the person next to me, and on her terminal, I saw an array subscript out of bounds error or something like that. And I thought, oh my God, I've never seen that before, right?

And I'd, like, gone through almost all of this introductory course and stuff like that, and then I thought about why I hadn't seen that before. And the thing is because I taught myself programming in C from the beginning and then I went to college and started learning Pascal, which had a stronger type system. And I just realized that essentially in C, if you mess up, you're in real serious trouble because it's hard to know when you messed up, and then you basically just get these random crazy errors because you're overwriting memory and stuff like that.

So I think there's this interesting thing about starting with unforgiving tools to go and sort of build your discipline in a way, right? Now, it's a terrible thing to advocate because it's almost like the thing of going in and saying, "Go out there and march in the woods barefoot for 12 hours and you'll be a better person," right? But I think there is a piece to that as well that the things which cause us to go and sort of develop discipline sometimes are very useful for us, at least very early in our careers. So, just a thought.

Christian Clausen: I agree. And, from my perspective, I started coding in just straight Notepad, no Notepad++, no fancy thing. So, I put in my closing parenthesis manually every time I put in the open one. I've never had a mismatched parenthesis, right? Because that wouldn't have worked back then. I couldn't have sat there counting them.

Michael Feathers: That's fair. But it's a thing. I think those are things where you need to go and, sort of, develop the discipline at some point in your career, and then you can rely on the other things. And it's like once you get that mindset of complete attention and discipline, it's cool because you're able to go and dip into it again on a case-by-case basis when you need it. So it's just another tool that you can have, I think. That's just my sense of it.

Code and debuggers

Christian Clausen: Just a side note, a side question, I've noticed the same thing with debuggers. I don't use them because back when I started coding, I didn't have one. And so, I do printf debugging always. That's the only thing I do, and it always works pretty much. Like, do you have that same experience? Do you use debuggers?

Michael Feathers: I did use debuggers before hearing about TDD and learning it, then I got out of the habit of it. Occasionally, I would fall into printf, but, for the most part, it is this thing of going in and doing it with tests, for the most part. The thing I always ask myself as kind of like a design thing is, how easy is it for me to go and figure out the thing I wanna understand? And if it isn't very easy, well there's something kind of messed up about the design. I need to do something to go and at least get things at the grain where I have a good testing affordance to go and get the answers I need. So, things like that. 

Christian Clausen: But then, I would expect that the code base is at least as complex as the domain you're modeling. That's my general sort of rule of thumb. So then, some things will be hard questions, right?

Michael Feathers: I think they will be hard questions. I think it's not like we have layers, like, traditional layers in software development as much anymore or that we advocate having them, but it seems to me that there are different, like, query layers within an application, right? So if you have a good domain model, in a sense, then you can think in terms of the domain and ask questions in terms of the domain. You might be dealing with an area of four or five classes, for instance, but you know that you can ask the question at the appropriate level for what you need.

Maybe you need to go and jump out to an end-to-end thing to go and ask something which encompasses more of the domain and more of what's going on, but you get this ability to query at different levels. And the design of the system should support that, right? I mean, at its base level, if we have something which is basically like doing accounting, we're gonna have like fundamental computations. They'll probably be held in maybe two or three methods that we should be able to understand well enough to go and query those in the test, even if we're not asking the big questions about how all the accounts interact and stuff like that.

So it's this thing of like developing almost language at all these different levels. And it's not like they're specific layers, it's just the language is present all through the system. I think that's the thing I would say.

Christian Clausen: I think I get that. But it feels like some things are just inherently very difficult to sort of tease out of the system. Some information is sort of embedded deep within it, and it depends on all these sorts of other things. And, at some point, the complexity is there and you have to deal with it.

Recommended talk: Expert Talk: Scaling Down Complexity in Software • James Lewis & Kevlin Henney • GOTO 2022

Michael Feathers: I just keep going back to that thing of, like, that's a lesson for us. When the complexity is there and it's not easily approachable through an API of some sort, then I'm like, okay, what's wrong here, right? Even if I can't fix it right away, I just basically sort of always take that as a cue for design.

One of my favorite talks I ever gave was, like, in 2007 or 2009, I guess it was. It was called "The Deep Synergy Between Testability and Good Design." And I think about writing more about this sort of thing, but it comes down to that thing that almost everything we can imagine is painful about testing is an indication of some kind of a design problem. If you address the design problem, then you get a better design but also better testing affordances. To me, that's a beautiful thing because it helps us get better at what we do. You're able to listen to the system and learn more about things through your experience of it, you know?

Christian Clausen: I would have the same view of it just with testing instead of...oh, sorry, with typing instead of testing. If I can't type this correctly, if I need reflection, if I need casts, stuff like that, I haven't designed it correctly.

Michael Feathers: Totally. I do like to just kind of sidestep the whole thing off, like, is dynamic better or static better? I like 'em both for different reasons. And I think that it just comes down to context, what you're working on, and what you need. That whole area is just kind of, like, it's cool, I'm not gonna come down on one side or another on that. It's just what you need.

Christian Clausen: You'll also have an easier time on the internet if you don't sort of piss off either camp, so to speak.

Michael Feathers: Fair enough. Well, it'd be fun to piss off both sides.

Christian Clausen: Yes. Sure, sure. 

Michael Feathers: It's fun. That's how you get your fun, though it's not recommended.

Advocating for testing in organizations

Christian Clausen: I've been through a lot of organizations, and now that we're talking about testing, I see a lot of them are struggling with testing. Like, it isn't as prevalent in the industry as I would sort of want it to be. So if a person is in one of those organizations, like, how do they get started? How do we move out of this rut or whatever we call it?

Michael Feathers: Well, I kind of like the silent approach a little bit with this too. But I think that's just from having a background as a consultant. It's like if you go in and you say to people, "Hey, you gotta do things this way," they're gonna rebel, or they're gonna be kind of like, "Who is this person who's telling me these things," and, "Sure, I feel like what I was doing was good before, so prove it to me," that kind of thing, right?

But I think an interesting thing is, like, as an employee of an organization, you just have to do your work and you have to do it very well. And the thing is, if you discover that these things help you, and chances are they will because that's what they do, these techniques, then it isn't very long before people start to recognize, wow, this person's having less trouble doing things, what's going on? The people that are interested in getting better will gravitate and learn, right? And it's very much like sometimes leading is not a very overt act. It's just by going and sort of doing something different and getting people curious enough to go and try things out.

The thing that I find problematic quite often is the people that are like, "I've found a better way of doing things in the organization," and they try to go and lead and say, "We have to do it this way, we have to do it this way." They just create enemies, right? You have to keep your passion intact a little bit, and if you enjoy what you're doing and you're making things better, quite often, it's gonna have some kind of galvanizing effect inside the organization. For a consultant, things are a little bit different because quite often, you're on the hook to go and change things, and then you have to go and basically sort of, like, sway people to some degree.

I think a little thing that I used to do with this is trying to find the people who are really curious about things and also the people that other people would listen to and convince them, then it becomes kind of viral, in a sense. And it's great when you find a person who's in both of those camps. You know, it kind of helps to some degree. But I think the other thing to recognize too is you're not gonna get 100% in many organizations, right? That's just the way things are. People are different. They have different views about how they're gonna go and handle things, and that's just life, you know?

Christian Clausen: So, I'm assuming, as a chief architect, you have teams working sort of under you or?

Michael Feathers: No, I think the chief architect is kind of like a bit of a moniker in a way. I think I kind of chose that title within the organization in Globant just because I wanted to highlight to my friends outside of the industry...or, excuse me, outside the organization that architecture is important. I think after many, many years of Agile, there's been a thing of like architecture just kind of emerges. And I think that the kind of thinking that we do about the macro level of systems is extremely important and that kind of thing.

So I do work with different teams across the organization and things like this, various client accounts, and stuff like that. But it's not a direct architect role in that way. I think it was more of a signaling thing on my part in a way.

Christian Clausen: No, I understand. So, your teams or the teams you work with, how do they work? Like, do they use TDD and stuff like that?

Michael Feathers: Yes, some do, some don't. The thing is, it comes down to the type of engagement that people are being called in for. It's funny because there are many different scenarios for going in and, like, intervening in an organization and producing value. So quite often, it's definitely on the palette of things that we do.

Christian Clausen: All right. Well, it's super interesting what you were saying before, that if you're in an organization as a developer and you start using some tool that helps you, right, you're gonna keep using it, and somebody else is gonna see it. I've met people in the industry who had the opposite experience with testing, where I would come in because they were on their way away from testing. I was like, "What? Why are you going...like, why are you doing that?" It's because tests can sometimes hinder sort of this refactoring thing, particularly if they're very structurally dependent.

Michael Feathers: I think the interesting thing to me is, sometimes, I think I feel much more comfortable than many people putting some of the tests in the parking lot for a minute and saying, "Okay, I'm gonna go and change the system." I'm going to go ahead and do a structural refactoring, but I know the tests aren't gonna cover it completely. If I can develop confidence in another way to do that refactoring, then I'll run the tests and find out, well, they aren't working against the methods they need to work against. And I'll rewrite tests that will cover the new structure that I have.

I think that we can overly valorize the tests sometimes and think, "Oh my God, we can't get rid of any tests at all." Then you're in a situation where you're just so scared that you can't change anything, right? And I guess the other thing you're probably getting at is kind of like the level of testing and how do you develop in a way where you aren't sort of, like, preventing yourself from refactoring, right? That's an interesting question for these things.

Christian Clausen: And I've found that a lot of... So a lot of the struggles that I see, particularly with testing, would be people who aren't experienced with it when they come out of school, and then they don't have time to learn it. And it's not an easy skill, especially because when you talk about testing, most people focus on the red-green-refactor or, like, sort of the processes around it. Whereas I find that the most painful thing is learning how to stub things correctly. Because if you learn how to do test stubs, then everything just becomes so much easier.
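A minimal, hand-rolled stub of the sort Christian means, with hypothetical names throughout; the dependency arrives through the constructor, so the test never touches a real payment service:

    interface PaymentGateway {
        boolean charge(int cents);
    }

    class Checkout {
        private final PaymentGateway gateway;

        Checkout(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        String pay(int cents) {
            return gateway.charge(cents) ? "PAID" : "DECLINED";
        }
    }

    class CheckoutTest {
        public static void main(String[] args) {
            PaymentGateway alwaysApproves = cents -> true; // the stub
            Checkout checkout = new Checkout(alwaysApproves);
            System.out.println(checkout.pay(500)); // prints PAID, no network involved
        }
    }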

Recommended talk: When To Use Microservices (And When Not To!) • Sam Newman & Martin Fowler • GOTO 2020

Michael Feathers: I think there's that. I think the other thing that's wild for me to go and kind of notice is that in the way TDD kind of spread across the industry, it's like some people just basically took it to be, like, okay, you write one unit test harness for each class of your system and you're good, right? But then, like, BDD came along, and it's kind of like then you're covering a bigger area.

Particularly when you look at what Kent has done with this and other people, like, I like Ian Cooper's take on this as well. You start out growing tests from a particular point, and then you're refactoring outward. You're decomposing things as they get bigger. So your tests are over here, but they cover a larger space. I think that's the key message that needs to get across to people rather than this one-to-one mapping of test classes to production classes. I think that's a way where people kind of paint themselves into a corner to some degree.

Christian Clausen: Especially when you start testing, like, private methods and stuff like that. If you don't stick to the public interface, then, like, you are just in for a headache when you want to restructure that code.

Michael Feathers: What's wild about this too is that, essentially, I think Martin Fowler talked about this years ago. It's like we don't have a good thing now to go and mark things as published rather than public, right? So public is a code-level thing at the class level. A method can be public or not, but you need something which is kind of, like, this is an interface that we see from the outside world but not necessarily within this particular area and unit. Of course, it kind of, like, differs across programming languages. But, you know, that kind of separation is something that's not built into languages. So, it's not overt, and we have to make our own sense of: these right here are the public methods that we are holding invariants on, as opposed to these other ones, right? So, pretty tough.
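One hedged way to picture that gap in Java, with illustrative names: visibility modifiers are the only tool the language offers, so "published" ends up being a convention layered on top of them:

    public class TaxCalculator {
        // Published: outside code relies on this, so we hold invariants on it.
        public int taxFor(int cents) {
            return roundedDivide(cents * 21, 100);
        }

        // Package-private: visible to tests in the same package, but by
        // convention not part of the published interface.
        static int roundedDivide(int numerator, int denominator) {
            return (numerator + denominator / 2) / denominator;
        }
    }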

Christian Clausen: I remember having trouble also when I wanted to write testing code that tested methods that weren't quite public in the API sense, but were public methods. And then the testing classes would have to be in the same package because otherwise, they weren't visible. But I didn't wanna compile them in the same unit, right, because I didn't want to ship my test code. So it was just a whole... Trying to design around that took, like, days to figure out the beautiful, sort of...
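For what it's worth, the conventional escape hatch in the Java world is the standard Maven/Gradle source layout: test classes sit in the same package but under a separate source tree (src/test/java), so they can see package-private members while staying out of the shipped artifact.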

Michael Feathers: I tend to try to convince people to go and ship their test code in with it, just because, well, particularly, I guess, for me, I get called in to look at these really horrible situations. You don't have many other options in terms of actually easing the entry into going in and starting to get control of the code base, short of actually going in and shipping them in parallel that way. There are some valid reasons not to do that, but I think it's still something where, if you can have that discussion and move to do that, sometimes that kind of eases the transition into building an easier way of working with a legacy code base.

Breaking boundaries while refactoring

Christian Clausen: It certainly changes sort of the whole view of "I want to ship the thing that I've tested, but I don't want the tests in the thing that I'm gonna ship." So it's sort of like a constant struggle.

Michael Feathers: Sometimes, that's kind of like a sense of...it's like an aesthetic thing. It's kind of like, oh, well, these things are different. They shouldn't be in the same place, right? I don't know if I mentioned this in the book, but I do mention it in training quite often. I think it was Voltaire who had the saying, which, when you translate it from French to English, is "best is the enemy of good," in a way, right? So you're sitting there and you're working with somebody and they're saying, "Wow, you're breaking dependencies using this technique and it makes the code look ugly." And you have to be kind of like, "Have you looked at the code?" right? I mean, it is kind of ugly, right? That kind of thing. Some of the things that you do to start to break dependencies and get tests in place are going to violate some preconceptions you might have about good design, but they are there to facilitate doing the refactoring to make the design better. So, it's cracking the eggs to make the omelet a little bit.

Christian Clausen: I mean, when you say that, we have to talk about encapsulation.

Michael Feathers: Yes.

Christian Clausen: You also mentioned in the book that you don't mind breaking encapsulation if it makes testing easier.

Michael Feathers: Well, the thing is that when you say it that way, and I'm not sure exactly how I said it, that simplification is good, but it's not saying, "Hey, I'm a fan of breaking encapsulation." It's breaking it selectively, in particular places, to give you the affordance to go and test things, right? And you do it reluctantly, but you're doing it in such a way that when you're encapsulating, you should be thinking about what it is you want to encapsulate, right? So to go and break encapsulation at the edge of a new sphere that you want to go and hold as your place of encapsulation is okay, because you're creating a new boundary around something of value.

So it's a matter of finding the things of value that are there and saying, "Where do I want to basically sort of like, be outside of that to test?" And you might have to break at those edges to go and start to form this thing, form this layer around the value that you're trying to go and sort of preserve through tests. I tend to think rather visually, so of course, I'm using my hands doing this kind of thing. But when you think about it, a lot of systems, it's like they're broken down in these pieces in particular ways, in particular shapes.

It's like sometimes you're looking at them and you're like, well, these three things can be part of a bigger thing. So it's up to me to go and build that bigger thing around those things, right? And other times, you have a thing which has several responsibilities, and when you break it apart, you have a different task, which is gonna be, how do I go and support this from the outside with a test to be able to break it apart, right?
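A sketch of one of the book's dependency-breaking techniques, Subclass and Override Method, with hypothetical names: encapsulation is loosened just enough (a protected factory method) to give a test control over an awkward dependency:

    class MailSender {
        void send(String to, String body) {
            // imagine this talks to a real SMTP server
        }
    }

    class OrderProcessor {
        void process(String customer) {
            // ...domain logic we want under test...
            mailer().send(customer, "Your order shipped");
        }

        // The seam: protected so a testing subclass can override it.
        protected MailSender mailer() {
            return new MailSender();
        }
    }

    class TestingOrderProcessor extends OrderProcessor {
        boolean mailSent = false;

        @Override
        protected MailSender mailer() {
            return new MailSender() {
                @Override
                void send(String to, String body) {
                    mailSent = true; // record the call instead of sending mail
                }
            };
        }
    }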

Christian Clausen: If I'm working with something that's, like, legacy code, I would sort of attempt to limit the tendency of local changes to have global effects, right? That's sort of what I'm scared of. That's the issue with legacy code: the effects propagate in a non-local way.

Michael Feathers: Yes.

Christian Clausen: And humans can't seem to get that into their heads. We are used to local, local. So, often, I would try to encapsulate things harder because that helps to sort of limit all of these effects, right? They can't escape if you encapsulate them in a very hardcore way, so to speak. And I noticed that you also mention the three ways that effects can propagate. And it's, like, exposing data through return values, obviously, mutating arguments passed in through constructors and methods, or mutating global data directly.

Michael Feathers: Yes.

Christian Clausen: So then, my approach would be or my intuition would be to sort of limit the effect of the things. But then, you're saying that you would have the test to sort of alert you, at least, if something changes non-locally, right? That's what the test should be there to do.

Michael Feathers: Well, I always look at the tests as, basically, a way of understanding the thing, right? If the thing changes in a way that you didn't anticipate, then basically, the test is going to go and break. So for me, the work leading up to writing the test is about going and building the isolation so that the effects don't propagate necessarily, right? Or at least you have a barrier to go and make sure that they don't propagate, right?

If you are accessing a singleton inside of a class, it's like, yeah, I wanna find a way to go and inject that value through the constructor rather than basically going and mutating the singleton directly within the class, right? You slowly go and start to build these firewalls against effects propagating when you're doing this. But a lot of these tend to be bigger issues in the architecture. So it's kind of like it takes a while to go and start to root those things out, and you have to go and sort of assess the value of doing that when you have the tests there to tell you whether these side effects occur in a bad way.
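A rough sketch of that move, close to the book's Parameterize Constructor technique; the Clock singleton and all names here are hypothetical. The no-argument constructor keeps existing callers compiling while tests inject a controllable value:

    class Clock {
        private static final Clock INSTANCE = new Clock();

        static Clock instance() {
            return INSTANCE;
        }

        long now() {
            return System.currentTimeMillis();
        }
    }

    class InvoiceStamper {
        private final Clock clock;

        // Existing production callers keep working...
        InvoiceStamper() {
            this(Clock.instance());
        }

        // ...while a test injects a fixed or fake clock here.
        InvoiceStamper(Clock clock) {
            this.clock = clock;
        }

        long stampTime() {
            return clock.now();
        }
    }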

I think it's a very problematic thing because one of the things in legacy code that's tough is that code grows in a particular way. And it's like if you have lots of globals all across the system, you're not gonna be able to fix that in your lifetime necessarily, right? But you should be able to go and at least build in the sensing through tests to go and understand where the trip wires are. If something goes wrong, you wanna know about it immediately, right? So the code is less understandable by reading it, but then, the tests can give you the feedback that you can't get by reading it, because things have gotten gummed up in a particularly bad way. So, there is this question with legacy code bases about how far is reasonable to go in terms of your plans for how good it's gonna get.

Balancing time investment: fixing vs avoiding legacy code

Christian Clausen: What is reasonable? How much time should you invest in fixing legacy code or avoiding it?

Michael Feathers: Well, here's the thing that I think is a very, very critical thing. You know about, like, the 80/20 rule, right? It's like 80% of the work happens in the last 20% of the project and all these different things. It's part of something called the Pareto principle, right? And there are a lot of natural systems that are this way where, essentially, like 20% of the code has a higher value than the other 80%, for instance, stuff like that. If you do a distribution of method sizes in a project, you'll find that a lot of the methods tend to be small, but then you have maybe 20% that tend to be outsized, right? So it's a mathematical thing that happens as a side effect of incremental growth in systems, or incremental value seeking.

The thing I'm always looking at with a system is like, okay, well, what are the high-value areas, right? And my notion of value is a little bit generalized. It's not just what the business thinks is important, but it's also, like, what are you changing frequently right now, for instance? What are the more bug-prone areas? So it's not value so much as kind of like the highly active areas of the system, the areas where basically, you have some criticality, right? You look at those and you're kind of like, okay, what can I do to go and bring these points into stability?

And the interesting thing is that the 80/20 rule applies to commits also, right? There are many, many files in your system that you'll probably never change again, right, and there are others where basically, you have clusters of changes. And that's gonna change over time in the system, but if you're aware of those dynamics, then you know where to concentrate your effort. I think that the thing that's kind of important with this is that you can look at a particular area and recognize it's a very stable area of code. It's kind of messed up in the way it's structured and stuff like that, but you're not gonna get a return benefit for investing in that beyond a certain point.

It's like you wanna stabilize it, understand what's possible within that particular area of code, but you don't wanna kinda like bolster it up in a way that makes it a place which is extremely easy to go and work in because you're not gonna get the value back from this. So I think having a very comprehensive conversation in the team and within the organization about where these hotspots are and where these value centers are within the system, and understanding what you're gonna concentrate on is very important.

A lot of this is outlined by Eric Evans, like in domain-driven design. You have, like, the notion of bounded contexts and these different things. He mentions, like, the anti-corruption layer and all these different patterns. It's really about going in and finding these high-value areas and sort of like working on that. I think that's the thing. It's not an easy thing necessarily, but I think the thing we need to recognize is that value is not uniformly distributed across systems. It simply isn't, and we should behave differently in different areas of the system because of that.

Christian Clausen: I know that hotspot detection is a major part of the code quality tool CodeScene. And do you use tools like that?

Michael Feathers: Well, I've used it a bit. I'm actually on the advisory board of CodeScene, right? So it's, yeah, I know Adam Tornhill from way back when he wrote his first book. Great guy. I like that because I think the interesting thing I remember seeing years ago is kind of like there's a lot of information in our repositories that we just don't think about. And he just sort of launched in there and started going and digging and developing tooling around that sort of thing. So I think that's extremely valuable.

I think it's one of those things where you can either work on things without much awareness or you can build awareness. And a lot of this tooling gives you the ability to go and find hotspots. We also do things like planning, like, oh, if a person's gonna leave, whether it has an impact on the code base, and various things like that that are good for the sustainability of systems. SonarQube is good too. There's lots of tooling out there that basically supports this kind of an effort, and that's good.

Recommended talk: Software Design X-Rays Part 1/2 • Adam Tornhill & Sven Johann • GOTO 2021

Christian Clausen: How should an organization sort of tackle, like, this problem? What should they start... Like, how much time should they invest? Like, what policies should they put in place to get some of their legacy code down?

Michael Feathers: Yeah, that's an interesting thing. I think it comes down to that kind of value analysis I was talking about a little bit earlier, right, and just sort of figuring it out. There are things like application portfolio analysis and all these different things that people tend to do, but you need to figure out what's critical. The term I use quite often is rules of engagement. If you figure out where the very critical areas of the system are, you might say, well, people can't just come and commit against these things. You need pull requests with particular people, then you need to go and review them, and you do this to go and sort of build up stability in these particular areas. In other areas, it might be just, well, anybody can go and commit against this and that's okay because it's low criticality. But getting people to see that value across the organization is the first step, that kind of thing.

The secondary thing is figuring out whether you're organized in a way to go and do these things well, right? A lot of times, historical reasons lead us to having different fault lines in organizations. We're separated into different teams that end up producing strange Conway's Law effects that are kind of like there. I am a fan of the Team Topologies work. I think it's really kind of good. I think if I had any little criticism at all, it's kind of like it arrives at a very normative...it arrives at a very, like, this is the way you should organize. And that's good because it does seem to cover a lot. But I always tend to want people to go and think about what are the forces that lead you into trouble, and how can you actually sort of, like, move them in a way where, basically, the problem disappears in some sense.

So I think those are kind of like macro-level things. And beyond that, it's like building a culture of refactoring. I think it matters to the degree that, basically, refactorings are talked about in retros, that people work on refactorings together, and that people can actually sort of speak up when they think something's bad, right? I think that's the important thing. It just really comes down to, basically, raising awareness within teams.

I think also kind of like connecting with the... Having the developers connect their pain with a solution, right? It's always been troubling to me to find developers that are doing very painful things. They don't think about it as pain, they think about it as normal, right? And then you can show them how they can make their environment better, and then they're kind of like, "Oh wow, I have something I can do here," which is kind of powerful. I know this is a long answer, but the whole 80/20 thing is so valuable there, right? Because essentially, it's like if you make a little area of the system better, and you're there because you were called in to fix something, and that's an area where you get lots and lots of change, you're going to make things a little bit better and then the next person's gonna get that benefit.

The areas where you do this that are high criticality are gonna get a lot of change, and then the things that you do to make things better are gonna give you an almost immediate return. Maybe other areas of the system are just never hit, in the sense that you never have to go and add testing because nobody's ever changing them. But it's not like you look at a 10,000-file code base and say, "Oh my God, we're doomed because we'll never get tests for those 10,000 files." So, there's a certain set of nuggets in the system where, once you start making headway, you start feeling the benefits. And that's a very useful thing to get across.

Scratch refactoring

Christian Clausen: When you have to make a change to some very complex part of the system, people will sit down and try to understand it because it feels suspiciously like work, I think you say in the book.

Which I thought was a very humorous way of saying it. I sort of try to hack into that and give them something else that is valuable, like breaking up methods. And it's easy also, they can do it without their brain, right? Whenever your eyes glaze over, have people do something sort of meaningful with that time, because they can't tackle problems if they're cognitively exhausted, so to speak. Like, have them do something with their hands instead.

Michael Feathers: Totally. Just one little plug for something else. There's a small thing I mentioned in the book that I wish I'd written about more. Every time I talk to people, I offer it to them as a possibility: scratch refactoring, which is kind of like, take the code, throw it into a file, like just a straight text file as opposed to, like, your programming language file, so you don't have all the markup about possible errors and stuff like that. And just start renaming things and moving things around. Don't worry about breaking things because you're never gonna check it in.

That is so counter to our intuition as developers because we're always so cautious about how we change things. But when you know you're not gonna check in things just by going in and being kind of hands-on, you start to go gain much more insight into the thing that you're working on. Even if you can't actually sort of like fix things right away, at least you get a sense of where the danger zones are and stuff like that.

Christian Clausen: I love the scratch refactoring thing also. I noted it in my notes because it's very similar to basically all of the refactorings that I do. Except I also check it in, and I take smaller steps, and a little bit of those extra things. But it's very much the thing that if you try to understand it all first, if you think about it as too big, you won't go anywhere with it. You'll just be stuck. Sort of paralyzed by the opportunity.

Michael Feathers: So that's getting past fears, taking the first step. That's the way it goes.

Christian Clausen: Trying something out and, like, just, yeah, doing it. As you also say, start writing, like, a single test or something. If you have something like that, you can start improving the quality fairly quickly, just a little bit.

Michael Feathers: To me, writing a test is asking a question of the code base. And if you're curious about stuff, then you should be writing those tests, because you're working on it.

Christian Clausen: I think we don't have too much more time. Is there something you wanna sort of plug here before we wrap it up?

Michael Feathers: Not necessarily. I think the thing is, it seems like the AI stuff is gonna change things a lot in the short term, and maybe we'll look back at this and say, oh, well, we thought it would, but it didn't. But I think the bet right now is that it's gonna change a bunch of different things. The key thing is that when you're working in legacy systems, you get a real opportunity to learn more about how design works in a way. And so, there's something exciting about doing that, you get to learn more about design.

I think that a lot of the things that we know about design are things that are still gonna be very important as we move forward in the industry. It's hard for me to imagine a situation where, for all development, we get to the point where we don't have to think about cohesion and coupling and all these different things. We might have to think about them at a different level, but so much of learning about these things is immersing yourself in them, and legacy code is a place to learn. And that's the way I kind of look at it.

Christian Clausen: Spot on. I totally agree.

Michael Feathers: Cool.

Christian Clausen: Thank you very much, Michael, for sitting down with me. It's been super fun.

Michael Feathers: Yeah, it has. It's been fun. Thanks for the interview.

Christian Clausen: Bye.

About the speakers

Michael Feathers
