The Art of Unit Testing
How can you leverage unit testing and test-driven development to create better software? Find out from Roy Osherove, the author of “The Art of Unit Testing,” and Dave Farley, co-author of “Continuous Delivery,” what some of the main considerations are before starting the design, and why they matter.
Transcript
Good tests are crucial to any software project, and a test-driven approach should be part of most projects’ goals nowadays. Yet testing can be difficult, with complexities and challenges of its own. Join a deep dive into how to approach unit testing with the author of “The Art of Unit Testing,” Roy Osherove, and a strong promoter of test-driven design, Dave Farley.
Intro
Dave Farley: Welcome. My name is Dave Farley. I am an independent consultant, and I'm interested in software engineering, continuous delivery, and in the context of this conversation particularly, automated testing. I'm here talking today with Roy Osherove who's written some great books around the topic of automated testing. We're going to talk today a little bit about his view on unit testing and his book on unit testing. So welcome, Roy.
Roy Osherove: Dave Farley, good to meet you and get to be part of this.
What is the focus of “The Art of Unit Testing”?
Dave Farley: So tell us a little bit about your book and "The Art of Unit Testing" and what that's focused on.
Roy Osherove: I'm currently writing the third edition of "The Art of Unit Testing," but the original book was published in 2009, back when I only had two kids and… well, actually, I had zero kids when I started, and two by the time it was finished. Then for the second edition, I already had another child. The third edition, I promise, will not have another child involved.
The first few editions were written in C#. That's the world I felt most comfortable writing in.
People say that you learn the most from your mistakes. So I took all the mistakes that I found myself making across many different projects throughout my career and, kind of, summarized them into a few main parts.
The first one is separating the idea of test-driven development from good unit tests. As far as I'm concerned, these are two different skills, and they can be learned separately. The second one is the idea of categorizing tests by the three pillars of unit tests, which are readability, maintainability, and trust. And the last part of the book is about connecting the company culture to the world of unit testing, because what I found is that after we get through the technical stuff, the question is how you actually get people to start doing unit testing or TDD or anything else. Getting it adopted by the organization is a very, very difficult part. The third edition is in JavaScript because I wanted to learn JavaScript, and the paradigm of functional programming can have a different effect on the design and the way that we do injection, etc. So that's currently in progress, and it should hopefully be done in a few months.
What are exit points?
Dave Farley: I was particularly intrigued by your take on separating unit testing from TDD because that's not a way that I've thought about it. Well, I've seen them as separate things but I haven't thought about the quality of the tests divorced from TDD. One of the things that I like about TDD is the way it gives me feedback on the quality of my design as the design evolves. But I did like your take, reading through the book, on the focus on the separate tests. So one of the things that I really enjoyed early on in the third edition of the book is the idea of the exit points.
I sometimes teach TDD and have some stuff around the different styles of tests that you can write, but I liked the idea of exit points, which landed with me nicely as the focus for the test. I wonder if you could just explain that a little bit.
Roy Osherove: Absolutely. I'm happy you liked it, because I've been trying to find the right words for entry points and exit points for over a decade. And then, you know, when you explain something enough times to enough people, at some point the patterns begin to emerge in your head, and you're like, "Oh, this wording seems to match." So entry points and exit points actually started from the idea of what a unit is, and I wanted to define the unit in the context of a use case: basically, kind of, a black box inside the system, a use case that has entry points and exit points. So an entry point, in unit-testing terms, would be the act part, the thing that you invoke where you first enter the system and start the process, or whatever use case it is that you're doing. And the exit point is the behavior or the result that you get. And the exit points can be divided into three main categories, as far as I can see. The first is that you get back a value: you invoke a function and you get back a value, so the entry point also acts as the exit point.
The second one is that you change the state of the system, so the behavior of the system changes. So you might use a different function to verify that the system behaves differently. For example, if you're adding a user to the system, then you should be able to log in with that new user. And so the unit spans these two functions that are related to each other, the entry point and the exit point, and they're using the same state. The third one is calling a third party: a logger, or sending an email, for example. Anything that we don't have control over that we'd like to fake in our tests. And so these three exit points then map onto a bunch of other things. They fall, kind of, like Legos in my head, because now we can start saying that if each exit point is a different behavior of the system, it can be mapped as a different requirement, a different story, a different test.
Dave Farley: I think that was one of the things that I liked: the focus on… I think you say at one point that you should write a test per exit point. Whatever the behavior, even if one piece of behavior results in multiple exit points, you write a test for each exit point. I think I've been doing that, but I hadn't thought about it that way, and I think that clarifies it really nicely in terms of ways of explaining it. The concept of there being a kind of behavior under test, but you're testing it in different dimensions, is, kind of, an interesting one and a useful one, I think.
Roy Osherove: That's cool that you're connecting with it. The whole point of this is that we should be able to use the same language to explain why we think something is a good idea or not. And whenever I look at someone's test, when I do code reviews, etc., I want to be able to explain to them why I think they should separate it into two different tests. And now it's easier to explain; I can say, "Look, with the same entry point, you're now testing two exit points," which totally makes sense. But each one of them is tested separately, and each one of them can fail separately. So it makes sense. It's not about having multiple asserts in a test, which is what I actually wrote in the first edition of the book, and it fell down because that's not what I actually meant. I meant one concern per test.
Dave Farley: So I've had the same problem, in that I too have used the language of having a single assert in a test. And that's kind of right, but it always felt like it wasn't quite capturing the value, because I've had people say, "But what if I've got one thing with two properties and I want to assert both of them?" I say, "Well, yeah." I treat testing those things together as okay, but this other thing was not. The idea of focusing on the exit point now clarifies, I think, why I was uneasy about some kinds of combinations and not about others. So I did enjoy that.
How can we encourage the adoption of TDD and Unit Testing?
Dave Farley: The other idea that you, kind of, touched on, which I think is important: both of us are true believers in the importance of automated testing, not just as an after-the-fact review of the behavior of our code, but as a way of organizing our development approach, I think. But, certainly, as far as I'm concerned, test-driven development, or even unit testing, hasn't caught on as much as I would have expected. If I'm a bit hardline, I believe that we're not doing a professional job if we're not covering the work that we're doing with automated tests. In your description in the book, you touched a little on the problem of getting developers to adopt these kinds of practices. So help us: what more can we do, do you think?
Roy Osherove: So I think this is the part that, these days, other than, you know, writing a technical book, amazes and intrigues me the most: how humans behave. Because I think that is essentially the biggest challenge that we have as leaders, as coaches, as consultants, whatever it is we are. Each one of us at each company wants to do the best thing, but not only in their own head; they want other people to also do the same thing. So influencing behavior has been one of the areas that I've cared about in the past few years. I've learned a lot about it, and I wrote about it in my book about elastic leadership. I also plan to include parts of it in the third edition, and I started to talk about it in the second edition as well.
There's a book called "How to Change Anything," and maybe I can put a link to it later on. But the idea in the book that translated nicely for me is that there are six influence forces that enable or disable a behavior. We usually, as technical people, just look at the first two, which are the personal ability of someone, whether they know how to do something, and the motivation of a person to do something. We assume that if I just convince the person it's a good idea, and I teach them how to do it, then everything else falls like dominoes. The reality is that a lot of the time we have those conversations, and people agree with us, and they learn, and then the day after, everything goes back exactly to the way it was, and nobody knows why the hell everything got stuck.
To me, what that book presented is the extra categories. One is the social: the people around the person, social motivation and social ability, how people react around the person, who that person follows. Lastly, and I think most important for us, is systemic influence. That's the reward system in the company in which you work. What do you get evaluated on? What are the metrics that people care about? Which behavior will give you a better salary at the end of the year, doing it this way or the other way? So the reward and punishment system at the company, the way people are evaluated, that's one. And then the physical structures around the person. For example, showing up to a meeting very late because you live very far away would be a physical example. Or you want people to do pair programming, but all the desks are, you know, those round desks with only room for one person, so no one can sit next to you. Those are physical things.
To me, using that type of thinking as a checklist to try to understand why people don't actually change their behavior is a huge factor in at least understanding why I'm failing to change behavior, and then having a to-do list: "Okay, so I guess I need to change this, or I'll need to talk to the people in charge of this for it to be able to change." At least I'm not stuck anymore. It's no longer just facing an invisible wall without knowing which windows I need to open. That's the very least. So that's, kind of, the spiel of it.
Recommended talk: GOTO 2020 • The Coaching Leader & Architect • Roy Osherove
In the book, I plan to specifically give tough questions and answers for the people who usually care about them, such as: how long will this take? And what happens if my estimates become too long, will people actually stop listening to me? A lot of things are related to those questions, and I plan to connect all the dots for that. There is no one thing. Each organization is different; each person is different. That's the key thing: we need to have a set of very small decision-making skills for what to do in specific situations when a person has a specific problem. And then we analyze it. I hope that, kind of, answers the question. Maybe I've been too generic. I'm not sure.
Dave Farley: No, no, no, no. I agree with an awful lot of what you said; I don't disagree with any of it. It was making me think about some of the things that I worked on. The one that I nearly always think about when I think about teams doing well is that we spent an awful lot of time on things like rearranging the desks so that everybody who's working closely together can talk to one another, so you can just turn around and pair with somebody over there. Those sorts of things. And those little tiny things, the physical constraints and the social constraints, make a big difference in the way in which teams start to think about themselves and think about what good is, I think.
Roy Osherove: Yes. Those small things are huge. We're all remote right now, but when people still worked in offices, having the dashboards and showing the build statuses physically up top, without you having to log in, made a huge difference. That's the physical environment. It also creates peer pressure. There's a lot that happens there. So people aren't crazy when they say they want people to work close to each other or to do pairing. It creates a different dynamic. Even remotely, I've been able to do a bunch of pairing and remote mob programming with teams, and it was a completely different experience than just having a meeting, setting up for 30 minutes, and then talking about something. Just open mob programming for 9 hours, and then people can come in and out as they like. It kind of worked nicely.
Dave Farley: I think you and I are enjoying this a bit too much. We should probably move back to the unit testing focus of the book.
Roy Osherove: Yes. I wanted to challenge you with a small question on my end.
Dave Farley: Please, go ahead.
How would Dave Farley change “The Art of Unit Testing”? / The importance of test-driven development
Roy Osherove: So far we are agreeing with each other quite a bit. Is there anything that you didn't agree with? Anything that you thought, "You know what, I'm not sure I am crazy about this idea or I'm not convinced about it," or anything that was...? I would love to hear because maybe...
Dave Farley: There wasn't anything in the book that I saw that I disagreed with. I wouldn't have written a book like that because I would have started from test-driven development.
It's not that I disagree with any of the things, it's just that I think that all of the ideas that you said are amplified by test-driven development. I don't think that they are in any way reduced. I prize test-driven development more highly than I prize the tests that I write. The reason for that is because I can think of almost nothing else that gives me the same kind of feedback, early feedback on the quality of the design that I'm producing. Working incrementally and driving my design through tests, allowing it to grow and evolve as my understanding grows and evolves seems to be the real value behind test-driven development.
The tests themselves are incredibly valuable and incredibly important, but they are secondary to that value for me in terms of the impact that they have on my approach to development. So I always think about it and approach it from test-first, test-driven development; I am programmed to think that way. But that doesn't in any way diminish any of the things that you say. As I said, the idea that I'm stealing is this idea of exit points for the tests, because it's just a nicer way of talking about it than any that I've seen before. But there's a lot of value in all sorts of different descriptions in the book, I think. So I can't think of anything that I read where I thought, "No, he's talking rubbish here." I didn't think there was anything like that.
Recommended talk: GOTO 2021 • The Problem with Microservices • Dave Farley
Roy Osherove: I wanted to share with you a bit of my dilemma about why I chose not to write about TDD. I mean, I talk about it in the first chapter and I cover it. The fact that I do it almost daily is part of the way that I work. But there are two reasons that I chose to separate it. The first is that books about TDD already existed when I wrote it, and they were very good, and there was nothing I wanted to add to them that would bring extra value. Just repeating them would have added, what, maybe 2%? Maybe the graphics would be better.
One of the biggest challenges that I think people have in learning TDD is that TDD, to me, involves three different disciplines: writing the test first, writing a good unit test, and good design techniques. So when people try to get TDD, they end up having to learn all three of these things at once, and it's a huge wall to climb, and a lot of them just don't end up climbing it because it's too difficult. I wanted to separate out one of those skills and say, "Okay, look, you can just learn the unit testing and then add on TDD, or you can learn both of them, but realize that the reason you might have an issue is that you're trying to learn all these things at the same time."
No one can learn all these things at the same time and be productive. You also have to have a job at some point. So there are good books about design, and there are good books about TDD. Books just about the act of unit testing didn't exist when I was writing this. I really missed something with just that sole purpose. The point of it was also to say: okay, you can give yourself a break. "The Art of Unit Testing" enables you to break off one small part of TDD and start learning. Then you can jump onto some of the more advanced books, etc., and into TDD. So that was, kind of, the focus.
I think that was one of the main reasons. That's why when I teach TDD… I teach TDD training as well, just like you do, and when I teach TDD, I have to choose what to focus on.
I cannot just focus on all the skills together. So I teach TDD, and I teach some refactoring up to a specific point, such as single responsibility, etc., and I teach good unit testing practices. That will be like the 70/20/10 split, in a way. And then there would be separate training specifically about design and refactoring for people who already have that experience. I find it goes over more easily for a lot of people.
Dave Farley: No, I think that's a very good point. I definitely think that the book has that focus, and I absolutely recognize your point. From my perspective, the interesting part… no, I shouldn't say the interesting part. The difficult part, the challenging part, is the incremental design. When I teach TDD, I spend a significant part of the time teaching design rather than anything else, because that seems to be what the students in my classes need more than anything else. The mechanics of automated testing are relatively straightforward, but the philosophy is deep. And, as I say, I think that giving people different ways to think about it is a good way in. So I enjoyed the book and the ability to separate those things. But I must confess it's not the way that I've thought about it before or come at it, I guess.
Roy Osherove: Well, that's good. It means that it's unique. We all bring something… you're supposed to bring something different to the table, our point of view.
What's your objection to using mock objects?
Dave Farley: One of the things in the book that I wondered about: you advise people to avoid using mock objects where they can. In the draft of the third edition, you said this. What's your objection to using mock objects? You very clearly say that you use them yourself and that you would choose to use them under certain circumstances. But your advice, if I can paraphrase it, is that you would prefer not to use them if you can find a better way. Could you just explain that a little?
Roy Osherove: Sure. I think this relates, again, to the idea of the exit points, where we had the three types: we had return value, we had state-based, and we had the third party. And the third party is where I would exclusively use mock objects; everywhere else I would try not to use them.
There are two main approaches to mock objects. And I come more from, let's call it, the Detroit school, which is the more functional way, and less from the object-oriented design way of creating the interaction. So let me just explain.
The way I've always worked with mock objects is to verify only the absolute end of an interaction, which is the exit point. So just the sending of the email, but not everything up to sending that email: an object that talks to an object that talks to an object. Every time I've tried that type of approach, I ended up with tests that are very, very brittle, because they verify internal implementation that tends to change over time and, in a business sense, is not important. So the way the objects are designed is not driven through mock objects in TDD; they're designed basically from a design sense, not through TDD asserting that this function needs to be called three times.
I try to avoid verifying how many times a function gets called, which functions don't get called, etc., because those types of strict mock objects, to me, create relatively unsustainable test code that people have to fight with over the next 10 years. For example, someone changes some internal implementation, and then you have 100 tests shouting at you, "Ah, you didn't call this function, and you didn't say that you're not going to call it." So you have to go to 100 tests and start changing them. Just the act of maintaining those tests, I think, is unsustainable over time. It removes a lot of the benefit that the approach was trying to create originally, which is test-driven design of object interactions.
So I just limit my mock objects to the end exit point, the end interaction, and everything else I verify as state-based or value-based behavior. To me, that's a more sustainable way of creating tests that are a bit more future-proof, because maintainability of tests is one of the absolutely critical things that break projects, especially when they have hundreds or thousands of tests and people are afraid of changing the code. That's the thing I'm trying to enable: tests that are as future-proof as possible, that still bring value, that break when they should, and that don't break when it's not important.
Dave Farley: I think that's a really good explanation. I would also say that this calls out another one of these design points. I'm sorry to go back to TDD again, but what I'm looking for in TDD is that if I've got that kind of chatty interface that I'm trying to mock, it's telling me that there's coupling between the thing that I'm chatting to and the code that I'm trying to test. So at that point, I'd be looking to refactor the design to improve the separation of concerns between the thing that I'm talking to, that I'm mocking, and the thing that I'm working on.
Recommended talk: GOTO 2019 • Lies, Damned Lies, and Metrics • Roy Osherove
Roy Osherove: And by the way, this relates to learning I've had just in the past few years. I've been examining this assumption of mine that mocks, overall, by default, have a negative balance unless you find the exact right spot. It relates to what you just said, which is: no, the whole point originally was that if it's so chatty, then you create a different design so it's not as chatty, etc., or you end up with an exit point. So it's a way of achieving that. Realistically, though, what I'm still seeing is that a lot of the designs that I end up consulting on already exist. They cannot be changed, and they're already chatty.
So using mock objects automatically creates these huge, unmaintainable tests. The other thing is that most people don't have a good enough design sense to say, "Oh, this is very chatty. I should change the way the design behaves." And because that's missing, the bar for having maintainable tests requires a highly developed sense of design. I think that's a very high bar for a lot of people in organizations; unfortunately, I wish it were better. So I don't think it's realistic to ask a lot of people to do this when, you know, TDD is difficult enough without the design part, as they say. What I wish is that it wasn't just, like, 5% of developers who had that type of sense of, "Oh, I have these books in my head about good design."
When I come across this, I need not only the good sense to realize that I should not have a chatty interface and to change it; I also need to be there at the right time, when it's being created, not after other developers have built it. And I need the sense that I'm allowed to change this type of design. There are so many things that have to go right for that design to change that, by default, I think what ends up happening with mock objects in all the other places is a maintainability problem. That scares me.
Realistically, pragmatically, I'd rather have fewer of them, because the risk is smaller, even though there is less of a design push for internal interface design, if you will. So I've been struggling with it, in a way.
The effects of a bad design
Dave Farley: I think that's a good, kind of, pragmatic point. But I do think that, kind of, surfaces an area where maybe we do slightly disagree, just in nuance. I think I agree with your analysis of the world: most developers are not thinking sufficiently about the quality of their designs, in my estimation at least. My view is that test-driven development gives us a tool that kind of starts to force you to think a little more about the care that you take with your design, because you suffer the consequences, and the consequences are that you have these hard-to-maintain tests. That's because the design is poor; that's because you've got a bad design.
So you can start to use that feedback, and learn from the feedback we gain as we write the code and run our tests, how to start to improve our design. We've also got to, culturally, as a group, start talking more about design. I mean, how many conversations do you get involved in with developers where we're arguing about whether this language is better than that language? I don't care. I care much more about the nature of the designs that we implement, whatever the technology choices that we make are.
Roy Osherove: Given what you've just said, and I agree, I would rather live in that world and push people towards it. Even in the world of hardware, let's say, if you have a chainsaw or you have a box-cutter knife, they do come with pieces of plastic around them that protect you, because people running with knives and scissors can be scary.
So I think that, realistically, we should be pragmatic about what we demand from people and say, "Okay, maybe in three to five years it will be realistic to say it's time to go back to a mock-driven API design."
But realistically, in most projects, there's so much learning to be done anyway that even the wall of what there is to teach at this point is hard. So, ideally, we agree there is something to it. Realistically, you know, even for me it would be difficult at this point to learn that language of interface design through mocks. I think it's more complicated, and maybe that just means I'm not as good a developer. But that's what scares me. I don't want it to be, "Oh, if you can't do that, then don't even bother." I want this to be an open playground where people come in and feel comfortable learning these things and getting better, and at some point maybe jumping into some of those extra designs. But the actual design part is so difficult.
It's difficult for all of us. Like, how many people will understand the difference between mock-based API design and mocking only external dependencies, you know? I've experienced both cases, and there aren't a lot of people who have had both of these experiences. So it will be almost impossible to explain the pain without going through it. I know it's a tough subject.
Recommended talk: GOTO 2019 • Reactive Systems • Dave Farley
Dave Farley: I think that's a fair point. It's probably just me tilting at windmills hoping to improve the quality of design. So I think that's reasonable.
So just to wrap up, I've enjoyed our conversation. We could carry on talking about this stuff and go into more nerdy detail. There's lots of stuff that I was hoping to get into, but we're, kind of, running out of time. So I'm just going to say thank you very much for having this conversation with me. I've enjoyed it.
Roy Osherove: Thank you. I think it's been a pleasure. It's always good to talk to someone with your type of experience and knowledge and gain feedback and have these good conversations. Thank you very much for having me.
Dave Farley: Pleasure. And thank you to GoTo for facilitating the conversation.
Roy Osherove: Thank you guys and girls at GoTo. We appreciate it.
About the speakers
Roy Osherove is the author of The Art of Unit Testing, Elastic Leadership, and the upcoming Enterprise DevOps. He works as an independent consultant, training and consulting on all matters related to testing, engineering practices, tech leadership, continuous delivery, and pipeline-based organizations. Roy has over 20 years of experience in the industry and has been in most types of technical and testing roles; these days he is working as a freelance consultant and trainer on-site for various companies across the world.
He is consulting and training small and huge companies on TDD, engineering practices, pipeline-driven teams, DevOps, agile processes & metrics, scaling up without ruining continuous delivery, and much more.
Dave Farley is a thought leader in the field of continuous delivery, DevOps, and software development in general. He is co-author of the Jolt-award-winning book Continuous Delivery, one of the authors of the Reactive Manifesto, an independent software developer and consultant, and founder and director of Continuous Delivery Ltd.
Dave has been having fun with computers for over 30 years and has worked on most types of software, from firmware, through tinkering with operating systems and device drivers, to writing games and commercial applications of all shapes and sizes. He started working in large-scale distributed systems more than 25 years ago, doing research into the development of loosely coupled, message-based systems — a forerunner of microservice architectures.
He was also an early adopter of agile development techniques, employing iterative development, continuous integration and significant levels of automated testing on commercial projects from the early 1990s.