
Looks Good to Me: Constructive Code Reviews

Paul Slaughter • Adrienne Braganza Tacke | Gotopia Bookclub Episode • October 2024


Paul Slaughter and Adrienne Braganza Tacke delve into the critical role of communication in code reviews, emphasizing how soft skills can significantly enhance the engineering process. Adrienne Braganza Tacke, drawing insights from her upcoming book, explores the expectations for software engineers in code reviews, offers practical tips for improving communication, and shares her unique perspective on the parallels between writing and reviewing code. Their conversation highlights the importance of fostering a positive feedback culture and leading by example to create a collaborative environment within teams.


Transcript

Intro

Paul Slaughter: All right. I'm so excited to be talking to you, Adrienne Braganza Tacke, because I'm so passionate about code review. And then I heard that someone's writing a book on code review. And then I get to read it and talk to you about it. So, this is... I'm jittery. And I've had to hold back so much that I wanna talk about. So, I appreciate you taking time to do this.

Adrienne Braganza Tacke: Yes, thank you. It's really awesome how the internet can connect people in these ways. You know, the fact that I have you interviewing me for this book, and I've written about something you created, so hopefully we can get into more of that in this podcast. But I'm happy to be here as well. Happy to be talking to you about code reviews, as always.

Paul Slaughter: Oh, great. So, a brief introduction. I'm Paul Slaughter. I work at GitLab, a big open-source, monolithic application: a single application for the entire DevOps lifecycle. I've been working there for about seven years. We do a lot of code review because we're an all-remote organization. I stumbled into it, but I was also excited to do it, and a lot of us are promoting our internal code review process. And, yeah, I was so stoked to see someone write a book on code review, because it means I don't have to write the book on it. I'm glad that you did this. You did such a great job on it. So, that's a little bit about me. The bit that you're referencing is that, as part of my work at GitLab, I spun off a commenting system called Conventional Comments, which I wrote up in, like, three hours. I was just trying to put a name to a practice we were doing: because I was interacting with people from Europe and Asia, and everyone had different speaking styles, it was helpful to over-communicate and clarify the intent of the comments we were sending. That's the little bit I have to contribute to the code review community. Could you introduce yourself a little bit, Adrienne?
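(As a quick illustration of the format Paul is describing, with wording invented here purely as an example: a Conventional Comment leads with a label such as praise, nitpick, suggestion, issue, or question, optionally followed by a decoration like "non-blocking," so "suggestion (non-blocking): could we extract this into a helper? That would make the intent clearer" reads very differently from a bare "change this.")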

Adrienne Braganza Tacke: I'm Adrienne Braganza Tacke. I wrote this little book called "Looks Good to Me," which is what we're gonna talk about. I am a software engineer... or, let's start over: accidental software engineer turned accidental developer advocate. I am currently working at Cisco, as the DevNet program community lead and developer advocate. A lot of the steps in my career have been falling into them; as I mentioned, we can get into that a little bit later. But I have always been really passionate about code reviews, ever since the very first one that I did, because it was such a different experience from what I do now that I kind of took that with me to every development job that I had, and seeing how different teams did them helped build this knowledge base of what I thought a code review should be. And so, yeah, all of that's in the book, and all of that we can talk about today. So, I'm very, very happy to talk about code reviews in any sense, and to anybody that will listen, because they're so key, and yet they're kind of the forgotten piece: they're there, but nobody really wants to acknowledge them, or they have a different opinion on them. And so, it's good to talk about the basics. To me, these are very important basics.

Constructive Code Reviews

Paul Slaughter: Could you elaborate on what your first code review experience was like? What was your initial feeling after receiving that batch of comments?

Adrienne Braganza Tacke: So, my first code review experience was a terrible one. I was a junior developer, a .NET developer, in a healthcare company. I don't know if most people felt like this. I was eager. You know, this is my first real job that wasn't an internship. I was working with other professional software developers. I was learning what the ropes were, in terms of what a professional software development environment was. And I sent my code in for the code review. Luckily, we had a code review process at all. That's another thing we can talk about. But we had the very typical experience, which was, I opened a pull request, or a merge request, prepared my changes, and someone would look at the code, and then someone could leave feedback via comments. So, I opened my first pull request to have my code reviewed. And the comments I got back were not very nice. They were along the lines of, "Why would you do this?" Or, "This is a stupid way to implement this." Or, "This is not the way that things are done." Now, granted, yeah, I was a junior, I was new, I didn't know things. But I expected the feedback to be much more objective or nicer, or at least to tell me what I did wrong. But a lot of the things were just very harsh, and very, you know, they put me down, and I was like, I don't think I should even be here. I questioned even being in the role. But luckily, I was able to get over that. I put other comments there, trying to ask, you know, as a junior, like, "Okay, you're probably right, Mr. Senior. Like, what did I do wrong?" And, "Could you explain what could have been better?"

But it was a very defining experience for code review, because I thought this is what all code review is like, and I dreaded it, because if my code wasn't perfect, then I would be subjected to this kind of review. So, it became this: I need to make sure everything is perfect. And it almost became the goal to not get comments, because I thought if I got any comments, it was always gonna be a bad one, and it always meant I did something wrong. And so, that kind of mindset was obviously not what we wanted. We want it to be a constructive, two-way process. But, yeah. That was the first code review process I had. And I took a lot of those things with me as I moved from job to job.

Paul Slaughter: I appreciate you sharing that. There's so much there; I feel like it's interesting to dive into how organizations work from that experience. A bad experience like that very rarely bubbles up to where, like, leadership sees it. And so, what sucks is sometimes engineers have a bad experience, and they just bottle up all those negative feelings. What are some of the consequences if that is never addressed in an organization? Or what are some of the costs that can show up, so that engineering leadership will see that their code review process is problematic?

Adrienne Braganza Tacke: Well, definitely, from my specific situation, you have what I carried with me throughout the rest of that job, which is an unnecessary bar that I set for myself. In one way, it's good because I double, triple, quadruple-checked my code. But it's also bad because it almost facilitates this rubber-stamping, the "looks good to me," that we want to avoid. And the whole process, if we were to look at it equally, is supposed to be a communicative process. It's supposed to be a collaborative process. We're supposed to find things. That's what it's supposed to do, at least if that's the goal of your team. And so, having that experience, and having that across the entire team, I think, makes the process more rubber-stampy, or, you know, a really bad formality, and you don't get the effective use of the code review that you may want.

Another big problem is, obviously, team morale. If everybody kind of dreads the code review process, because this is what it's like, they will likely not engage in it as much. They'll likely not leave real feedback. You know, that could go the other way, too. Let's say, as someone who is putting their code up to be reviewed, I am, I would say, over-dramatic about it, or really attached to my code, and not receptive to good feedback, or any feedback; then the person who's reviewing could also just be very superficial in their review. So, again, the process is not as good as it could be, or as effective as it could be, because the two roles that are there, the reviewer and the author, are not really doing the best or most effective thing that they could be doing, because everyone's trying to keep face or save face or not hurt anybody's feelings, or, the other way around, not trying to induce, like, harsh feelings or anger, or have a bad time.

And then I think the other part of that, if it bubbles up at all, and tech leads or engineering managers actually see it, is that this just continues to be perpetuated. You know, they continue to teach new team members that this is how the process is, and how it always will be. And so, it might never change, because everyone that comes on board is just told "this is the way that it is," and there's no room to refine it or change it or make it better. So, off the top of my head, those are the things that I think are the major consequences.

Paul Slaughter: I think you really highlight this: if it's not good, you nullify the process. You take away all the benefits. And now everyone's engaging in this costly process for no value.

Adrienne Braganza Tacke: Yes.

Paul Slaughter: I think it's also important to highlight that if your engineers aren't happy, and aren't feeling psychologically safe, you're gonna spend a lot of time hiring new engineers. And that's another cost for your organization, and no one wants to spend time doing that. But what you described, I think a lot of people can resonate with. Hopefully they've found organizations that are healthier, but maybe they find themselves in an organization that's behaving that way, and they find themselves doing this ritualistic code review, trying to appease some invisible bar. But I love that your book dives into focusing on the purpose of it. It's so adaptive, too, and isn't prescriptive; it's about finding what's profitable for an organization. If we could, like, go way back in our code review conversation, and if you were writing a, like, overly abstract theoretical textbook, how would you define code review?

Adrienne Braganza Tacke: I always like the example of double-checking your stuff to make sure it's right. So, whether you're checking an email, or you're writing a book and there's someone who's checking the spelling, the grammar, the flow, the copy editing process, or even when you're checking any drafts of things, typically, the things that you produce go through iterations. And that applies to anything, especially if it's something smaller that gets joined into something larger. So, to me, a code review, or a review process, is just, you know, covering your own butt, double-checking that everything is right. And also making sure that that knowledge is spread across a team. So, at a very high level, that's how I would describe it.

Recommended talk: Leveraging Multi-Cloud Clusters in Real-World Scenarios • Adrienne Tacke • GOTO 2021

Convincing Skeptics: Why Code Reviews Matter

Paul Slaughter: If I'm, like, a team lead, a 10X developer, I'm really interested in rearchitecting this thing to maximize performance out of whatever, and that's the kind of material I consume myself with. I'm hyper-fixated on technical details. Can you convince me to care about code review? I hear junior engineers complaining about it, and I tell them to suck it up. Can you... how should I... why should I care about it?

Adrienne Braganza Tacke: I get this question all the time, specifically from tech leads, who are like, "Well, I understand the value. And I do have some team members that just go along with the process, because they know that's what the process is. Or they see the value in it, and that's great. But there's always a couple who are really hard to convince because they're just go, go, go, code really fast, code really quickly, and ship." And to one point, you know, that might be the doing of the environment, the organization, where the message is that it's more valuable to ship, to ship value to customers. But I think what people forget, and what I would say to those tech leads to convince them, is start to measure the cost of all the issues that come out of not having a code review process.

So, for example, this one tech lead I spoke to, they did not have a code review process before, which was great for this one person who was like, "Yeah, I can just merge to production all day. It's great." And they started to record how many times they had production fires, how many times they spent debugging, rolling things back. Those are the things I would point to; those are data-backed reasons that you can show your team: "Look, you can code fast if you want, but make sure we do it in a safe way, and make sure it doesn't degrade the rest of the codebase." So, it's fine if they want to do all of that. But if you show that their current process, either skipping the process, or going around it, or not having a code review at all, seems to result in these types of issues, and you can back that by saying, you know, "This is our change failure rate," or, "Compare how many times you're actually producing value to how many times you might be causing issues or costing the team in a different way."

It's the same for the organization. An organization that doesn't care about code reviews might think the same way. "Well, code reviews are a bottleneck. They slow down the process. They slow down the value to the customer." But, again, if you start to talk to the organization in ways that they understand, "Oh, this last issue that we had cost us X amount of dollars because the site was down because of this bug." If you start to talk to them in ways that they understand, then I think that's when organizations or people start to realize that maybe this thing isn't so bad, and maybe this thing is actually a good thing, and maybe this thing actually makes me and the entire team better. So, appealing to them, you know, either to their ego in that way, because they wanna be better, or basically talking to them in a way that they understand, I think makes it clearer for folks to see that it's a valuable process.

Paul Slaughter: I'd like to add a small addendum to what you said there: it's more than just catching defects in code. You've mentioned, too, the knowledge sharing aspect of code review, breaking the silo where developer A has built something, and the defect usually doesn't come from developer A, but from when developer B tried to build something according to the quirky way developer A did it.

Adrienne Braganza Tacke: Right, right.

Paul Slaughter: Code review is one of the best ways of getting us all on the same page and being able to contribute to each other's stuff.

Adrienne Braganza Tacke: Absolutely.

Recommended talk: Amazing Code Reviews: Creating a Superhero Collective • Alejandro Lujan • GOTO 2019

Measuring Code Review Success

Paul Slaughter: We talked a little bit about code review and how it can go bad. And I think, across software land, people always have some bad stories or cases of this. But trying to be optimistic and positive, how can an organization know that their code review process is effective and profitable? What are some signs that we're doing this well?

Adrienne Braganza Tacke: I think if a team can quickly turn around their code reviews, the review process itself, and they have it achieve the goals that they want it to, those would be the signs. And I'll dive into that a little bit. So, in the book, I talk about, let's say you're building your first code review process ever, or even just refining your existing one. What do you want it to do? Because it's one thing to just slap a PR on there and say, "We have to do it this way." But it's another to say, "Why are we doing this?" And that's a really big part of also convincing a team, to get that buy-in, why should we do this? So, your goals could be anything from knowledge sharing and transfer, so that the entire team does not get siloed, like you mentioned. It could be having an official record of the evolution of your codebase. So, making sure that every change that goes out, you can trace it exactly back to the exact commit. That might be helpful for your team, depending on what kind of application you're working on, or system you're working on. Sometimes it's mentoring, or catching defects. What your team's goals are, that's the number one thing to set.

Once you've agreed to that as a team, then you can start measuring yourselves to say if it's working well or not. Do you happen to find that you're still having one person only review code because they're the only person that knows anything about this part of the codebase? Maybe you're not doing it so well. Maybe you need to tweak that part of your process, and actually involve more people, or have auto-assignments to other people, or have informational reviewers, to start sharing that knowledge. Are bugs actually being caught? I mean, that could spread into many different things. Are you using automation during development, so that reviewers can spend time on higher-stakes issues, rather than formatting or indentation, in the code review? Do you have a proper CI/CD process in place? Because not everything can be found by a human in a code review. It's impossible to expect someone to review that kind of stuff and find everything in the code review, so that's something to take a look at.

If your issue is that your pull requests or your review processes are taking too long, then you start to look at why that is. Is it because they're so large, and there's so much code being reviewed? Is it confusing? Are people bottlenecked when it comes to responding? Those are the places where you can start to see that it may not be working, and then start to change or refine things so that they do start working. Then you can also start to measure how quickly you can push out that value to your customers. One really common default example is the DORA metrics. Some people have contentious feelings about those, and I agree if you only use them as the one single metric to tell a story, but they can be a good starting point for your team if you have not started collecting any stats on your team or how you're doing the code review process. So, I think those would be a good start to see: are they aligning to the goals that you've set for your team, and what you've wanted the code review process to do or be for your team?

Paul Slaughter: Just to clarify, the DORA metrics don't measure code review process specifically, but code review could be a means to achieving better results in your organization.

Adrienne Braganza Tacke: Correct. Yes.

Paul Slaughter: What are some things that are really important but hard to measure when it comes to code review?

Adrienne Braganza Tacke: That is a great question. Let me think about that. So, what's really hard to measure is, I think, the dynamic of the team, in terms of how open or objective they can be with their feedback. There are ways to say, you know, yeah, you can read the comments, you can see how many comments there are, you can measure how quickly they answer the comments or address feedback. You can see how quickly they look at something and then merge it into the codebase, but that doesn't really tell you how the reviewer and the author really interact with each other. I think that's one of the really hard things to measure, because that, again, falls back to my first story, which is, you can have good communication within your team, or you can have not-so-good communication within your team. Then, even broader, it's that psychological safety. How open do you feel to telling your colleague that this is not the right thing to do, but here's the probably better thing to do? Or, how safe do you feel knowing that someone who's giving you feedback is giving it to you because it's the right thing to do for the entire codebase and for the whole team, or because you feel like they just don't like you? So, I think that's a little bit harder to measure, to say the team is still cohesive, the team still likes working with each other, even though they may have to get into some debates about code, and people get very, very heated about code, especially if it's something that they've written.

So, that's one thing that's really, really hard to measure. Even though there's no number or metric you can point to and say yes or no, good or bad, what I can say is that if you can have a pretty intense debate with your teammate, but then still go out to lunch afterward, I think that's a good sign that you've got a good thing going on your team. You not only keep the morale of your team, you keep the cohesiveness of your team, but you also make your codebase better for it, because you get the points from both views, or possibly the whole team, and that friction tends to result in better code, and better teams.

Paul Slaughter: That's a really good point. I think the ability to disagree and commit, it's just, that's a really necessary sign of a healthy team, if you can find that, hey, people are disagreeing, but still happy to work with each other. I recently received a review from a colleague, and we really didn't agree. And I told them, I don't think...there's...we're at an impasse. Do we need to get someone else in or whatever? And this colleague of mine, I'm gonna drop his name, Chad Woolley. He's a fantastic person I'd like to collaborate with. But props to him for prioritizing efficiency, and just collaboration at that moment. It was like, "No, we can keep going. I'm gonna disagree, but we can keep going." It's like, that's why I like working with you, Chad.

Adrienne Braganza Tacke: That's awesome.

Recommended talk: Reasons & Ways to Improve Code Quality • Venkat Subramaniam • GOTO 2021

Unexpected Lessons Learned from Writing a Book on Code Reviews

Paul Slaughter: I've got so many questions for you. And I'm gonna throw a random one at you.

Adrienne Braganza Tacke: Sure.

Paul Slaughter: What's something unexpected you learned while writing this book? You're obviously passionate about code review, that you would even start on this journey, but clearly, through writing it, maybe there were some things that you weren't expecting to learn while putting pen to paper.

Adrienne Braganza Tacke: I mean, this might just show the industries that I'm in, but I was very surprised to learn that folks were still using Team Foundation Server as their version control system.

Paul Slaughter: That's a good thing to learn.

Adrienne Braganza Tacke: I'm so sorry. Because, when I was writing this, it was like, yeah, there's GitLab, there's GitHub, there's Bitbucket. There's, like, this, what I thought was the lowest common denominator of a pull request or merge request or change request, those types of things. And so, my mindset was focused on that. And there have been a couple developer groups that I've spoken to where I've wanted to talk to them about code reviews, what are their experiences like, what are their obstacles like? And they're like, "Well, we agree with everything that you've shared with us, but how do we frame this in the aspect of Team Foundation Server?" I'm like... I was caught off guard. And so, that was one thing I was very surprised to learn. I was not as surprised to learn that there are still groups out there that don't have a code review process. That, I wasn't so, unfortunately, surprised to learn, but, yeah. I think that's the most surprising thing, was that.

Paul Slaughter: That's fun... So, to clarify, for those that aren't familiar with TFS, why doesn't that toolset lend itself to the code review process you'd expect?

Adrienne Braganza Tacke: This is a little bit different from what we may be used to, where, you know, with version control, we each have our own copy, so there's less of a likelihood to step on each other's toes. And I've worked with TFS before. So, what happens is, you have the same shared codebase as a team. And let's say you want to work on a specific JavaScript file. You check that file out, much like you check a book out of a library. And during this time, that file is locked. So, just like you have that physical book with you, that file is locked; nobody else can make changes to it. You go along your merry way, you make those changes. Then there's the process of checking something back in. And I think what's missing in this process is that there was no good way to fit a review mechanism in as formally as we do in a pull request process, where we say, "Before we contribute or merge it back into the greater codebase, can you double-check it before I actually do that?" If we were to fit that in, if there were a mechanism to look at the code before we actually check it back in, like we would check the book back in, then it would still be similar to how we do, you know, code reviews now. But because there was just check in, check out, the next time you look at your codebase, you go, "Oh. My colleague checked out that file yesterday, and now the changes are different." And so, it's a little bit harder to keep track of those changes, or to insert a review mechanism, unless you do, I guess, a review over the shoulder before you actually check those files back in.

Paul Slaughter: For those teams that are using something like TFS, and are not receiving the benefits of something like code review, do you think that the benefits of injecting a review process into their code and their process would outweigh the cost of them moving version control systems, or, let's just be honest, moving to Git?

Adrienne Braganza Tacke: I mean, my gut says yes. It depends on the team. But yes, anytime you can insert more safeguards for your team and for your codebase, the better. Of course, it's going to be cumbersome because it's a process change. There are people who are used to the way things are done. It's the same with a team who has never done a code review, and now you're asking them to do code reviews. So, yes, I think it outweighs any negative effects of possibly moving over. And it's still very much encouraged to actually move over to Git, and then start doing the code reviews that way. But in the meantime, and this is what I ended up saying to those teams I spoke to that were on TFS, at least do a sort of over-the-shoulder review before doing a check-in, to kind of compensate for that part that they don't have.

Recommended talk: Code Complete • Steve McConnell & Jeffrey van Gogh • GOTO 2023

Navigating Code Reviews in Open Source vs Closed Source Projects

Paul Slaughter: What are some of the differences when you're researching code review and how different people do it, and different organizations and projects? Did you ever come across some of the challenges or differences with open source projects versus closed source projects?

Adrienne Braganza Tacke: Yes. Many. Open source is a different beast because you have a lot more folks that you don't know, that you may not trust, so you basically have to have even more safeguards and as many checks in place as possible in something that is open source versus closed source. I would argue the only similarity is if you're in a very regulated industry, then you might see the same amount of extra checks or safeguards in your process. But that's the biggest difference I've seen: you just don't know, because anybody can contribute. Anybody can add to your codebase. While that's a great thing, it makes the pull request, or that little step in between, even more valuable, because that could be your only check at all to see if that code is good enough to pass muster, if it's safe, if it's not something malicious. So, you almost over-rely on that step to check that the code is okay. And then of course, ideally, you have a very robust CI/CD process to do some other checks after the fact, maybe do some pre-build checks. But because you're dealing with such a large amount of unknown folks, and anybody can contribute, yes, open source certainly requires the code review process, I would argue.

Recommended talk: Engineering Documentation • Lorna Jane Mitchell • GOTO 2022

Paul Slaughter: From a different perspective, having the privilege of working on a large open source project, I love that the open source community is our primary audience. So, we very rarely are not documenting things. Everything is always, "How can someone with no context come in and contribute?" And if there's anything I could evangelize, it would just be that if closed source projects were thought of in that sense, they would find their future selves thanking their past selves, instead of asking, "What in the world was I thinking six months ago?" or whatever.

Adrienne Braganza Tacke: I couldn't agree with you more. I think a lot of my book focuses on that, which is to just give as much of that context as possible, so that you set up your colleague for success. And what you said is absolutely true. Because everybody doesn't know everybody, you need to have that context. You need to over-communicate. Someone who has never seen this project before needs to be able to understand, comprehend, and know what to do. What you said sums it up perfectly. If closed source groups and teams could think in that way, I think everybody would be better off for it.

Paul Slaughter: I got a question for you. I wanted to ask a question as a reviewer. And I wanna ask some questions as someone who's writing code to be reviewed. But, as a code reviewer, similar to, like, we look at code in the past that we wrote, like, a year ago, and we're like, "Oh, my gosh. What was I thinking?" Have you ever looked at a review you've given in the past, and it was like, "Oh, my gosh. What was I thinking?"

Adrienne Braganza Tacke: Oh, yes. Yes. This is where some of those lessons have been learned. Specifically, those of being a reviewer. I don't know if folks have had that same experience. I would think so, because we still have some of these issues. But as a reviewer, you're kind of like, "Well, the CI/CD pipelines are gonna catch it." Or, "Oh, you know, it's Friday. I need to go." Or, "Oh, I have PTO. Let me just go." There are a whole bunch of reasons, not good reasons, but sometimes good reasons, where you just kind of skim, or you look at it and you just say, "Looks good to me," right? That's the whole point of the book. And I do cringe, because there are sometimes things that have been missed, where if I had just taken five more minutes, I would have seen a glaring issue there. Or, for those who may not have a CI/CD process in place that's robust enough, sometimes it takes a couple extra minutes to just pull down that version of code, test it locally yourself, and make sure everything is running correctly, and that would have shown that it wasn't running correctly.

So, the reviewer has just as much responsibility as the author. Sometimes...this is something I talk about a lot, which is, we always blame the author. We're like, "Oh, if there's something wrong, it was the author who had a bug in their code. It's not our fault." But I try to change that mindset to say we're just as responsible. We should be thinking, not, oh, it's the developer's fault. We should be thinking, oh, how did I miss that? Because when it comes to our plate, and it's our turn, that's our responsibility. And so, I think changing the mindset of every reviewer to have that would also be very valuable. And there are also reviews I've had where I was extremely nitpicky. So, I've also cringed at that, where, you know, I was one of those who left, like, 30 comments because there's a missing semicolon, and you have a space here, and this was that... You know, all these silly comments, the lower-stakes issues, the kind that we dread, that could be easily fixed with automation during development. These are the things I have also looked back on and cringed at myself for being a terrible reviewer.

Paul Slaughter: But I think there's also just something so human about code review, too, where it's not gonna ever be... It's not like a compiler, where it's deterministic what's gonna happen. Different people are gonna say different things at different times. But it's the human element that makes it important, because we're humans writing code, for now. As a reviewer, if I receive a pull request, and let's say that the author has left a lot of context, and even, like, inline comments, and there's past discussion on it, I'm looking at the code... Like, there's a lot of information to receive. I'm looking at the code, and it's not making sense to me. How do I know when... is it because this isn't the right approach? Or did I not put enough time into it?

Adrienne Braganza Tacke: Are you asking in terms of what to think as a reviewer, seeing something like this?

Paul Slaughter: As a reviewer, so... Right. Knowing that there's some element of time you need to give to catch yourself up to the context of a change. But if something's not making sense, or certain approaches are like, I can't understand why this would happen, what do you recommend when someone's there? Maybe the code is problematic, and it isn't obvious, and it needs to be changed. But maybe as the reviewer, I haven't done due diligence. What does the line look like there?

Adrienne Braganza Tacke: The line, to me, is if you have any sort of confusion on your end, it's always best to talk to the author directly. So, if you're privileged enough to be in the office with them, you know, tap on their shoulder real quick or message them: "Hey, I'm looking at your PR right now. I'm a little confused about a couple things. Do you mind if we chat about it for a bit?" And this solves so many issues, because sometimes you'll see PRs that have discussions that are, like, 67, you know, items long, and it's impossible to get the context there in a meaningful way. If you're the reviewer looking at it at this time, I always err on the side of just communicating with the author. Communicate with the person. It's the same when you're actually leaving feedback. If you find yourself rewriting something, because you're trying to explain, you know, the feedback that you're trying to give, and you can't really put the words into it, that's another sign. Just take it offline. Take the discussion offline. You know, have a video chat, or talk to them in person.

Same thing if you have, I would say, like, five or six comments in, meaning back-and-forth comments, let's say you leave a piece of feedback, author responds and says, "No, this is what I meant," or "This is the intent behind it," and you're still not understanding it, that's also another sign. Have a discussion. All of these things that get lost in translation on the screen and through text, sometimes it's just easier to get it through conversation, with video, with body language, with gestures, and more synchronous communication. And that seems to be a very good way to solve those kinds of issues when it comes to confusion with reviewers.

One thing I'll add to that is, let's say you've had these conversations. Something I write about in the book is, once you've figured out the outcome of that conversation, please, please, please go back to the pull request and actually, you know, leave a summary. Just a quick sentence or two on the outcome of your discussion, because then you don't get back to that siloed knowledge, where you two may have talked about it, you two may have gotten onto the same page, but anyone else that goes through these historical records, or the rest of your team, should know about it too. It shouldn't be stuck between you two. So, something as simple as going back, saying you've discussed it, this is what the result was, bam, now you have your whole team on the same page.

Paul Slaughter: I love that you highlighted that in your book. Reading it also caused me to reflect: I've been working at GitLab for a while, and it's easy to forget, "Oh, yeah. People work in person, and they can just talk to somebody." But I really appreciated that you were promoting code reviews as this documentation and artifact of the decisions that were made and the conversations around them. At GitLab, one of our values is transparency, and this would be one of the ways we achieve that value: keeping everything as public as possible, and keeping the code review conversations up to date.

For all the software engineers out there that aren't sure about introducing a code review process, that transparency of discussion is so helpful, because I use it in the code a lot. When there's, like, here's some weird code, but here's the context behind it, and this is a link to the pull request conversation, or the merge request conversation, as we call it at GitLab. I love that. And I appreciate that you highlighted that in the book. When you were talking about taking these discussions offline, I was like, "Ah, that's not very transparent." And then you said to go back and summarize the outcome, and I was like, "You got me. That was good."

Adrienne Braganza Tacke: Thanks, thanks. I like that.

Improving Code Reviews Through Communication

Paul Slaughter: Staying with the reviewer, I wanna just pause and appreciate how much the reviewer has to keep in mind: looking at maintainability and security, and all of that, and also having the soft skill of packaging my questions, comments, concerns, or praise into communication. It's a huge ask. Do you think it's best when all software engineers are required to do code review? Do you feel like this is something that you can expect from software engineers, that kind of soft skill of communication?

Adrienne Braganza Tacke: Yes. I think anybody that writes and reads code on a team should be expected to have this, because writing code is, like, 5%, 10% of the time. Reading code, comprehending code, understanding it, seeing the edge cases, looking at it in the context of the bigger picture of your system, those are the more important things that we actually spend more time on, or should be spending more time on. And because of that, those skills need to be extra, extra good. So, yes. Those skills absolutely make you a better engineer, because understanding it is one thing. Being able to communicate it, even to another engineer with the technical experience, with the technical background, is still something that's hard to do unless you practice it, unless you refine it and you exercise it, through something like the code review process. So, I truly believe that it's something like riding a bike, or anything else that you might practice. The more you do it, the more that you exercise it, the better at it you'll get. And it's absolutely something I know, for my team, I would expect those skills to be there in an engineer.

Paul Slaughter: If someone realizes that they're really bad at communication, what do you recommend they do to improve their soft skills?

Adrienne Braganza Tacke: The first thing I would say is probably try to take a look at existing comments that they've written, and see if there are ways that they could make them clearer, or could make them more objective. In the book, I write about what terrible comments, or not-useful comments, are. And those are usually very subjective. They are not outcome-focused, and they are unclear. And I think if you take a look back at any of the comments you have left as a reviewer, I'm willing to bet, especially if you've now realized you may not have the greatest communication skills, or it seems like you have really long discussions, longer than most of your colleagues, take a look at your PRs and look for what might be perceived as, "that sounds actually really subjective." Like, for example, if you're asking somebody to make a change in one of your comments, do you kind of just say, "Hey, do this," and then that's it? And then try to read that from the point of view of the author. Does it come across as what you thought it came across as? Or are you now seeing that, oh, this actually might sound like a command, and it actually sounds kind of subjective because I don't have any backing for why I'm asking that person to make that change. You see a lot through your old written communication.

So, I think that's a great place to start: take a look at those, and then actually take the time to rewrite them. See what would have made them a more objective, clearer, and more outcome-focused comment, and then read it back to yourself and say, "As an author receiving this, as someone getting this feedback, I don't think I would have any problem with that," or, "I don't think this could be misconstrued." I think you would start to get the hang of what those types of communication skills might look like. And then, once you've done that, obviously try to apply that in your future pull requests, or future reviews, and take it from there, and it should get better over time.
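To make that concrete, here is an invented illustration of that kind of rewrite, borrowing the label style of Conventional Comments mentioned earlier: a comment like "Why would you do this?" might become "question: what led to fetching the list inside the loop? If it isn't needed there, fetching it once before the loop would avoid the extra round trips." The rewritten version states the concern, the reasoning behind it, and the outcome the reviewer is hoping for.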

Paul Slaughter: That chapter you reference in your book, I think, is a really great starting point. I really appreciate the objectives that you set for a comment. While we were discussing, I realized that we do something fun at GitLab that, unintentionally, might have the effect of improving people's communication. We'll pair a lot, on just anybody's work. Sometimes I have code reviews, and so we're, like, pairing on a code review that I have. And then we're all, like, how are we gonna comment on this? Does this make sense to everyone? And it's like, "Oh, yeah. That was nicer." So, even just code reviewing with somebody, I think, could be a helpful way of improving your soft skills.

Adrienne Braganza Tacke: Absolutely.

Paul Slaughter: All right. Coming up to wrapping things up. If I'm an engineer, and I despise the way our code review process is, it makes me feel bad and others feel bad, and I know that there are things that could be done, or I'm an engineer and there is no code review process, and I've watched this video and I'm inspired, what can one individual do to make a difference in their organization?

Adrienne Braganza Tacke: What can they do? They could buy my book. No.

Paul Slaughter: Okay.

Adrienne Braganza Tacke: It's the easiest answer, because it has all the answers that I couldn't say in this. But, all jokes aside, the one thing that they could do is start by being the example. So, if you realize that they are bad, terrible, no-good things, you start by doing the code reviews yourself. And yes, this is kind of weird, because, you know, we say you're not supposed to review your own code. But what I mean by that is you put your code up for review, you ask your colleagues, "Hey, can you take a look at my code?" You set up the initial system, and you start to lead by example. And then, every now and then, you'll start to get a couple colleagues, ideally, to buy into that, because they'll say, "Oh, this is actually a great way to, I don't know, mentor, to share knowledge, to get answers to questions that I've had about this particular part of the codebase." But I think that's the one piece of advice I could give to start out, is lead by example. You start it out, and I'm pretty sure you can get your team to come along for the ride.

Paul Slaughter: I think that's great. And even if it doesn't convince everybody, leading by example definitely has positive impacts. Yeah. When do we expect the book to come out?

Adrienne Braganza Tacke: That's a great question. It is still in the final, I think, copy editing stages. So, I myself am waiting for a date. It's slated for October 2024. So...

Paul Slaughter: Nice.

Adrienne Braganza Tacke: ...a specific date, I will absolutely let you know, on all the...

Paul Slaughter: It's probably too late for me to ask for you to add some sections to the code review book. I'm joking.

Adrienne Braganza Tacke: For version two, I'll keep it in mind. Second edition.

Paul Slaughter: How would you compare and contrast this book-writing review process versus the code reviews you've been involved in? Are there practices you think that could be applied across industries?

Adrienne Braganza Tacke: Absolutely. I mean, I would argue they're one-for-one, because, as someone writing this book, I'm very attached to what I'm writing. The drafts that I send in, I'm like, "These are perfect. These are great. I'm a wordsmith. Everybody's gonna understand what I meant by this." And so, having this review process in place, not only having multiple people but having larger, more, I guess, formal review mechanisms: rather than having each, say, page of my book reviewed, I've had maybe chapters of my book reviewed at once, with 10 to 12 reviewers looking through that. That has been an eye-opening process, because I now have more perspectives to say, this actually didn't make sense to me. This doesn't flow for me. What do you mean by that? There are items I've talked about that I assumed everybody knew, and they're like, "What is this?" Or, for example, I was talking about Selenium, a testing framework. They're like, "What is Selenium?" So, all of these little pieces of feedback made my book better, because I made it clearer, I made it more succinct, I explained things that were fuzzy or unclear. I took out a lot of things, because I like to, I don't know, I mean, I ramble here, but I write a lot as well. So, making things more to the point. There were a lot of things that were made better through these review processes.

Having a technical reviewer, having someone else focus on the more technical aspects of the parts that could be reviewed in that way, also made it better, because now you get this very targeted perspective and targeted feedback on my book. So, the practices there, I would say, are: the more eyes that can look at your code, the better; the more perspectives that can look at your code, the better. And it's the same here. The more objective you can make your review, the better. I did receive comments that were kind of mean. There were parts where I was trying to make it a little lighthearted, and they're like, "Yeah, this point is useless. Just take it out." And I'm like, "Okay. Tell me why it's useless." Or, "Is it breaking the flow? Is it distracting, because now you lost your train of thought? Does it not relate to the chapter? Tell me why." But no, it was just, "This is useless." So, this is the same as those comments that are like, "This could be better," and then you're left to figure out what could be better. So, I would argue, this entire process, especially from the author's perspective, is very much aligned with the code review process, and a lot of what we can do to make them better in code review absolutely applies in writing a book.

Paul Slaughter: Cool. That's really interesting. I've always been curious about that. I felt like us software engineers like to over-optimize things, and I was like, do other industries ever think about their review processes, and the feelings they leave people with? Cool. Well, as a final question, I know we're at time, but Adrienne, where can people find you and follow you?

Adrienne Braganza Tacke: I am, unfortunately, still on Twitter, or X, I think. So, you can find me there by my name, Adrienne Braganza Tacke. I'm on LinkedIn as my full name, Adrienne Braganza Tacke. I'm also on Instagram. And then my website. My website's probably the best place, because it's the most updated: adrienne.io. Really awesome that I was able to get that domain name.

Paul Slaughter: Cool. All right. For those listening on audio, that's A-D-R-I-E-N-N-E.io.

Adrienne Braganza Tacke: Good looking out.

About the speakers


Paul Slaughter (interviewer)


Adrienne Braganza Tacke (author)

Senior Developer Advocate at Cisco & Author of "Looks Good To Me: Constructive Code Reviews"