
Small PRs, Big Impact: The Art and Science of Effective Code Reviews

Wonder what's behind those "Looks Good to Me" approvals on your PRs? Listen to Adrienne Braganza and Saša Jurić discuss why smaller, better-organized pull requests lead to more meaningful code reviews and stronger development teams.


About the experts

Saša Jurić

Saša Jurić (interviewer)

The ultimate beacon in the Elixir space; author of one of the most important books, "Elixir in Action" #elixir #erlang

Adrienne Braganza Tacke

Adrienne Braganza Tacke (expert)

Senior Developer Advocate at Cisco & Author of "Looks Good To Me: Constructive Code Reviews"

Read further

What is Code Review and Why Does It Matter?


Adrienne Braganza: Hello, everybody. Welcome to another episode of GOTO Unscripted. This time, I am Adrienne Braganza, author of "Looks Good to Me" and a senior developer advocate at Viam Robotics. And I'm here today with Saša Jurić. Saša, would you introduce yourself?

Saša Jurić: Yes, thank you. My name is Saša Jurić, and I am an independent Elixir mentor and freelancer, helping people work with Elixir in production. I guess the topic that we settled on for today is going to be about pull requests, right? And Adrienne humbly omitted that you wrote a book about pull requests, right?

Adrienne Braganza: I did. I'm very excited about it. It is finally in print. It's called "Looks Good to Me: Constructive Code Reviews." And it's two years of my life that I put into this book, basically, trying to show developers that pull requests or code reviews in general can be so much better than what we may have experienced, so.

Saša Jurić: I had the pleasure of reading the book recently, and I have to say, it's a wonderful book.

Adrienne Braganza: Thank you.

Saša Jurić: It's something that, as far as I know, does not exist in the space of technical literature. So it's pretty much unique and a very much-needed book, right? So the reason I wanted to be a part of this conversation is because, as a part of my job, I use a lot of code reviews and pull requests, because I'm not doing, like, regular development of features. I'm working with teams a lot, you know, trying to see what kind of code they write and how they can improve, and I find that pull requests are a natural driver of that part of my job. In the past five years, I think I've actually been doing more code reviews than writing code. So I feel like it's a subject that is very close to my heart. And even having done a lot of code reviews, and even before that as well, I feel that I have learned a bunch of tricks from your book. So, congratulations. One author to another. Excellent job. And I'm very happy for you.

Adrienne Braganza: Thank you so much.

Saša Jurić: So, why don't you tell us? Let's start very lightly. I know that people probably watching this, like, understand, but we need to ease into this. This is an unscripted episode, so we're kind of trying to ease into the topic. So, what is code review anyway? And why should we care, right?

Adrienne Braganza: Yes. So code review, most people are familiar with it via pull requests. But at the lowest common denominator, because it's not always done through pull requests, at the lowest common denominator, a code review is a process that software developers use to inspect each other's code and to make sure it passes a set of agreed-upon standards. So everything there was very ambiguous but on purpose because it's different for every team. And there are teams who review via over-the-shoulder or pair programming. There are teams who don't use any automation at all. Maybe they just do some mob programming, and then the set of agreed-upon standards could be maybe two people reviewing and approving a pull request. Maybe someone just says, "No, this is okay." Maybe it goes through an automated process. There's so many different ways that they can go. So at the lowest common denominator, it's a process. We inspect each other's code, make sure it's good to go.

Now, why is this important? There are a lot of reasons why code reviews are important and good for software development teams, but the two that I like to call out, and wrote about in the book, are these. Number one, code review tends to produce better applications, whether that's because you have another set of eyes looking at the code to point things out that you may not have seen, or because you're running automations to double-check for silly things like formatting or syntax issues or something like an early return. Things that we may have missed initially can be caught and removed, and ideally, we find them before we deploy the code. So a lot of the things that we do in code reviews, and the things that lead up to code reviews, tend to lead to better applications.

The other part of it is an elevated team understanding, and I think that's the more important thing with code reviews. This could be a place where teams can kind of transfer the knowledge about the code base that they're all working on. For some teams, that's the only place that they ever kind of share knowledge and know what's going on in another part of the code base that they may not be touching. So it's a crucial part for people to transfer that knowledge. And also, the more that people get used to it and review others' code and see how other people work, especially on their team, the better they can get the context, the nuance, the history, the decisions behind all the other things that go into actually writing the code. Sometimes you get that in the code review, and that's very, very important for you to not only have a better application but for you to also write better code. Because, as I'm sure we all know, the more parameters we have and the more context we have, usually, we tend to produce better code from that. So those are the two things, the best benefits, I think, that come out of doing code reviews.

Saša Jurić: I would definitely agree with both of those things, especially sharing the knowledge, you know. So I had a lot of experience working on projects without code reviews, right? And there was, like, you know, basically, it was like almost each one for themselves, you know. So, like, I write some of my own helper functions and then I know someone else writes a similar set of those functions, because we don't know that those even exist. And so there will be, like, a lot of overlapping, and nobody really knows anything about anyone else's code. There was no...there was less sense of collective ownership of the code. It was more like, you know, there is my code and your code.

Another thing is that because, of course, again, there were no reviews, we didn't have any sort of, like, shared enforcement of a standard way of writing things. And so, like, you could literally tell by looking at the code who wrote the code, right, without looking at the history, you know. By looking at the variable names or whether they use long functions or short functions, you can be pretty certain, "Okay, this is probably Joe's code," or, "This was written by Alex," or something like that. That's not necessarily good.

Adrienne Braganza: I totally get that, and that's actually part of why I wrote this book, is because sometimes the practices that you experience in your first few jobs, that's kind of the standard that you bring to the rest of your other jobs or your other teams. So if you start out, like you said, and you start out with a place where there's no code reviews and then you go to another one and there are code reviews, hopefully, you see the difference of why it's important and why it's great. And then vice versa, if you start with a place and you have always started with a place with code reviews and some sort of structure and some sign-off conditions that were by the team and then you go to a place where it doesn't happen, it's also very frustrating. And hopefully, you try to share that knowledge and try... You'd be the pioneer and instill the practices on that next team to make the team better.

So a lot of these things that you explained, I can relate with, especially that you kind of know who wrote the code, because people have their own kind of signature in how they write code, which is, again, not a good thing. If you want a cohesive code base, a code base that you can maintain, you should not have these one-off things where you can say, "Oh, only that person can take care of it." It should be consistent throughout.

Saša Jurić: I have to say, basically, about 10 years ago, a little bit more, I switched from one position to another, and in the new position, they were doing code reviews, right? This was my first proper, constant code review, if you will. Before that, we would maybe have, like, occasional reviewing of juniors' code and things like that. But this was the first time where we were doing it continuously. I was scared shitless, you know, I'm not going to lie. At that time, I was, like, senior and confident about my code and everything, but still, I was kind of very worried about it, both as the author and as the reviewer. But after doing this for a while, it became one of my favorite practices, right? But yeah, the thing is, for people or teams starting with code review, how can you start with that, you know? What should the author do? What should the reviewer do? That is a huge subject that you cover in your book.

Adrienne Braganza: That's a great question, because you'd be surprised at how many people and teams I talk to and have worked with that did not have a code review in place. So that's number one. I won't go through it too much here; I'll do a TL;DR version. If you don't have a code review, that's a great place to start, and there's no better thing to discuss with your team than introducing a code review into your process. Maybe you had an experience like you talked about, Saša, where you're duplicating work, writing the same functions without knowing it, and wasting time because two people are writing the same thing. Or, in my case, there were a lot of silly nitpicks and issues, formatting issues, debates, PRs that go on way too long because people are fighting about some standard that they're going to hold on to, because this is the hill they're going to die on, because this is the right way to write code.

So all of these little things that you think won't cause an argument between your team, like, start talking about that on your team. What are those standards? What does everyone think a code review should be? You'd be surprised at how different these answers are for people. Some developers will have a very stringent idea of a code review, while others are, like, "Mm, I mean, if somebody looks at it and one says it looks good, great." There's very different standards. So I think that's the first place to start, is to talk about, what are those standards? How do you want to review? What are you looking for in a review? What can you automate away?

That's another thing I dedicate a whole chapter to: a lot of these arguments can go away because you put some automation into your process before the code review, for things like spacing, formatting, and code standards. Again, a lot of people don't like to hear it, because it involves a lot of talking, but these are the things you've got to agree upon with your team. Because if you can't agree on those things as a team, then some of these tools you might not be able to use. We say, "Use a linter," for example, but a linter involves some rules, and those rules you all need to agree on. You can't have some people saying, "This rule works, but this rule I'm not going to follow." In order to get a cohesive code base, you all need to agree to those standards and to those rules in the linter.
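As a concrete illustration of the kind of formatting rules a team might agree on and then enforce automatically before review, an `.editorconfig` file is one language-neutral option. Every value below is a hypothetical choice the team would have to make together, not a recommendation:

```ini
# Hypothetical team-agreed formatting rules, enforced by editors and CI
# so they never become code review comments.
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

[*.{js,ts}]
indent_style = space
indent_size = 2
```

Once rules like these are written down and checked by tooling, a disagreement about spacing becomes a one-time team discussion about the config file, not a recurring argument on every PR.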

So once we get past that part, assuming you now have a code review but you're like, "Oh, it's not really working for us," or, "There are some bottlenecks, or there are things we can improve": for the author specifically, be your own first reviewer. I cannot stress this enough. So many folks make their code review process so much longer because they missed something themselves. What I mean by this is they didn't put enough context in their PR, their title is not descriptive, they're missing a file, they have files that don't belong in the PR, or their PR is too large and cannot possibly be reviewed by somebody in a coherent way. It is the responsibility of the author to create a pull request that is manageable for the reviewer to review. The better the information and the context you provide... you know, just imagine you are going to be the reviewer of that PR. Would you be able to review it? Would you need to talk to yourself, call yourself, ask yourself questions? Or is everything pretty much there and self-explanatory? You want to get to that state of self-explanatory and have everything in the PR so that the reviewer can do their job.

Another thing for authors, and there's a lot of things, but the one I'll call out is: do not take feedback personally. A lot of us, myself included... when I had my first code review, I was very self-conscious and very worried, because I felt like it was a critique, right? I'm working so hard on this code. I think it's good code. And then I'm just going to get critiqued and shit on. And that's terrible, you know; you're not supposed to do that. There's a way to give critique back, and that's on the reviewer. But if you do get constructive criticism or comments on your review, just as we're asking the reviewer to keep it objective, we also need to be objective and not assume we're being attacked. We're just getting feedback, and it's an opportunity to improve our code. It has nothing to do with us. It just has to do with the code. So that's another thing I say: don't involve yourself in there.

Recommended talk: "Looks Good to Me" Constructive Code Reviews • Adrienne Braganza Tacke & Paul Slaughter • GOTO 2024

The Art of Code Reviews: Why Small, Structured PRs Matter

Adrienne Braganza: Has any of that ever happened? Before I go into the reviewers, have you felt any of that at all, Saša, in your career?

Saša Jurić: Yes. I think that's... So it feels to me that a lot of the culture in development is that the task of the author is to implement the feature and hand it off, you know, and this needs to...

Adrienne Braganza: Here you go. No longer my problem.

Saša Jurić: Then the reviewer is going to make some comments, and the author is going to address them. And it's all, you know, good, then. This leads to, I think, two major problems, from my point of view, for efficient and nice reviews: overly large pull requests and, in general, unorganized pull requests, which are not split into commits, which I think you mention as well in your book. You touched on the topic. So we often conflate implementing a feature with a single pull request, and that doesn't really have to be the case, right? You can split it into multiple pieces. And I usually do this, not upfront, necessarily. I just kind of start coding, and then at some point, I think, "Okay, this seems to be dragging on, and I'm nowhere near finished." Then I look at my history and see what I can carve out and submit for some well-shaped review, so I don't hand you off thousands of lines of code, right?

I think you mentioned something along the lines of a few hundred lines of code, preferably, which is something I agree on. I'm not a fan of hard rules about size because, obviously, there are going to be exceptions, like when you make a global, codebase-wide search and replace or something like that, and there are going to be some cases where it makes sense to have a large PR. But mostly, it should be small, something which is reviewable within, like, 5 to 20 minutes, depending on whether you actually have some comments or not. And my main message to the wider community, based on my small experience, is: make small PRs. Split the feature into small PRs.
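A team that agrees on a size guideline like this can even automate the first pass. Here is a toy sketch of such a gate; the 400-line budget, the verdict strings, and the "bulk-change" exemption label are all illustrative inventions, not anything from the book or from a real tool:

```python
# Toy sketch of an automated PR-size check run before human review.
# The budget and the exemption label are hypothetical team choices.
def pr_size_verdict(lines_changed, labels=None, budget=400):
    """Classify a pull request by diff size before anyone reviews it."""
    labels = labels or set()
    if "bulk-change" in labels:
        # The team agreed that e.g. a global search-and-replace is exempt.
        return "large-but-exempt"
    if lines_changed <= budget:
        return "reviewable"
    return "please-split"

print(pr_size_verdict(120))                    # -> "reviewable"
print(pr_size_verdict(2500))                   # -> "please-split"
print(pr_size_verdict(2500, {"bulk-change"}))  # -> "large-but-exempt"
```

The point is not the exact threshold but that the exception path is explicit: a large PR needs a stated reason, which mirrors the "if it is large, say why" advice in the conversation.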

Another one is to organize your commits to tell the story incrementally, right? So this is another thing, you know, just people hand you off this single commit, right, and it tackles five different things, like, you know, adding something, changing the database structure, adding some business logic, doing some refactoring along the way, and it's all kind of crammed into that single thing. And it's very hard to understand, right? So I'm reading, like, a few lines, and they make one type of change. And then I read some other lines of code, and it's another type of change. And then the next file in the same commit is, again, about the first type of change. And my mind blows, you know. So I really cannot fathom all of this.

And what do I do? I do LGTM, or I focus on nitpicks, but then it becomes very counterproductive, because I want to be a constructive reviewer, but I'm just submitting some unimportant comments because I fail to see the forest for the trees. So try to organize your commits incrementally. Actually, once you get into this habit, it's not so time-consuming. You just are mindful, like, "Okay, devise some plan. I'm going to change the database structure, commit, and then I'm going to implement some business logic, the happy path, commit." So I kind of try to make this thin vertical slice going from the bottom to the interface, dealing with the smallest subset of the happy path, and then I start making it wider, adding, like, one edge case, another edge case, with the tests and what-not.

That's one approach. There are, like, a bunch of different approaches. But what I want to have is small commits showing you the story incrementally such that when you read those commits in, say, GitHub or whatever you're using, you see the change usually on a single screen and you look, "Okay, it makes sense." Maybe you type some comments. Then you go next, makes sense. And in my experience, like, it's very straightforward to review such pull requests. You're actually going to be able to provide faster feedback, right? So you're going to be more inclined to actually do the review sooner rather than later. The feedback is going to be of better quality because you're going to be able to focus on important issues. And so it's a win-win situation, right?
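To make the "commits tell the story" idea concrete, a hypothetical feature branch shaped this way might look like the following in a `git rebase -i` todo list, read top to bottom. The hashes, messages, and the feature itself are invented for illustration:

```text
pick a1b2c3d Add reviews table and migration
pick b2c3d4e Implement happy-path review submission (service layer)
pick c3d4e5f Expose review submission in the API, with tests
pick d4e5f6g Handle edge cases: duplicate and empty reviews
```

Each commit is one screen-sized, self-contained step, so a reviewer can walk the list commit by commit instead of untangling one large mixed diff.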

And this is how I think, in big part, authors should think about making their pull requests presentable. Just like us: we are the authors of books, right? I presume, before you submitted yours for the first review, you did a bunch of re-reads and corrections yourself, just as I did with my book. I think I did at least five passes of each chapter before I would let anyone else even read it. But that's why I got good feedback for the book, and I was able to make it even better owing to that feedback, right? So, does that all make sense?

Adrienne Braganza: Absolutely. I really am happy that you brought up the fact that even someone like you gets lost in a pull request that's too large, so much so that you're just like, "Well, I don't think I could possibly give a thorough review. There's too much to go through here." You now give extra mental load to the reviewer, who has to ask, "How does this make sense? How does this fit?" So instead of focusing their time on reviewing and actually giving some good feedback, if there is any to give, they're spending the time putting the pieces together, drawing a spider web of connections, just trying to understand it in the first place. So I totally agree with you that this is another thing authors should focus on: the smaller, the better. Obviously, there are exceptions, but even if a PR is large, that's all the more reason to explain why it's large and why it is the exception.

But other than that, do it incrementally. Do it story-wise. It's not even just for the reviewer. In the future, when you look back at this or you may need to work on this again, you can now see the evolution of your code and fully understand, "Oh, okay, this is how we made this change. This is why we made this change." And it's a little bit easier for you to understand in the future for yourself. So it's not just the reviewer. It's yourself as well. But yeah, everything that we've been talking about is just...it could be summed up with that. Make your reviews smaller. Just make them smaller and it will all be okay.

Saša Jurić: A very important message here is: be aware, and I think you mention this even in the book, if I remember, that there are more reviewers than there are authors, right? So there is one author, and usually, you would expect at least two approvals in an at least moderately sized team, right? But even if it's just one, you might still expect multiple people to read this thing. So whatever time the author spends organizing the pull request is saved for multiple people, and for the team in general.

Adrienne Braganza: Absolutely. To the book analogy that you made, when I was going through my chapters, I would go through them, but I would do them in very specific categories, which, to me, is the equivalent of doing small pull requests. So for example, right before I'm about to send my chapter back, I first do one check, spelling and grammar. That's, like, formatting, syntax issues, right? Get those out of the way. Automated, also. I'm not going through them one by one. Automate that stuff. Then, number two, go through and actually read through and read the flow of every single paragraph. Does it make sense? Am I rereading something because it's not making sense? Is it too long? Are there some sentences that are too long? Do those types of checks. Then try to go through, again, are there any code samples or technical things that I'm explaining? Do they make sense? Are they matching?

Then another category is, if I do refer to figures, are the figures there? Are they the correct figures? Are they in the right place? Are they labeled correctly? Is the subtext or subtitle of the figure correct? So doing it and chunking it in that way is so much easier to understand for a person, an author, anyone, a human to do. And it's the same in a code review. When you're able to give the reviewer something small, something concise to focus on, it's much easier for them to critically think about it and look at it and give that feedback that we're all looking for.

Saša Jurić: Exactly.

Recommended talk: Elixir in Action • Sasa Juric & Erik Schoen • GOTO 2020

Effective Code Reviews: The Reviewer’s Role and Responsibility

Adrienne Braganza: So you mentioned that, again, I'll bring up the fact that if it's too large, that we're just like, "Okay, it looks good to me." And I know I've done it. I'll admit it. Before, when I've gotten pull requests that are that long, I'm like, "I trust you. I trust my colleague. It looks good to me." And so that's a great place to bring up what reviewers should do. And even though it's so tempting to do this, the one thing I will say is the one part of it is you should not be afraid to send those large PRs back to the author. That's number one. So if you receive something or you're assigned and there's, like, 100 files or something that's crazy that you know you can't go and you yourself are getting confused, don't be afraid to send it back to the author and be like, "Can you split this up into something more cohesive? I can't understand it." That's fair.

And then, number two, we also have a responsibility as a reviewer to actually review the pull requests. So a lot of folks are just like, "If it passes through my review but there's a bug, it's the developer's fault." But nobody ever thinks, "How did I miss that as a reviewer?" And I think that's a mindset shift for reviewers to try to switch to because if we're going to talk about collective ownership of a code base, then that's part of it. That's not blaming the developer because you did a really crappy review or a quick review, or you didn't even look at it and just rubber-stamped it through. So that's now on you if something does go wrong. So that's one thing reviewers can be doing better.

The other thing I will say and I talk about extensively in the book, so much I dedicated a whole chapter to it, is how to write comments effectively. And to do comments effectively, you need to be objective, you need to have an outcome focus, and you need to be specific. I don't know how many times I've gotten comments where it's like, "Oh, I think there's something wrong here," and that's the only comment. I'm like, "What am I supposed to do with this as an author?" Now, I'm spending time searching through, what do you want me to do? Other times, I've had the comments where they're really mean, like, "Oh, this is an idiotic way to do this."

There are better ways to give constructive feedback. Or sometimes people try to give their version of, "Oh, I would have implemented it this way." And if you only give your way without any other reasoning, then, as an author, I have full rein to be like, "Why is your way better than my way? You didn't give me any objective reasoning why your way is better." Is it faster? Does it align with code standards better? Do we do this in other parts of the code base? Give me something that helps justify why your way is better. Otherwise, it's just very subjective that you're pointing something out.

So those are two things that I seem to find myself talking about a lot when it comes to reviewers, is you have a responsibility too, and please, just be nice to each other and give some justification when you're leaving feedback on a pull request.

Saša Jurić: The snarky comments are probably, like, the pain point number one, right, of all the reviews. And I'm hoping I didn't make those at all, hopefully, or very little, trying to be nice. And I think it really, really helps because, when you get, like, that kind of comment, it really hurts you, and it's kind of hard to focus on, right? I do, occasionally, and I'm going to have to think about it, I do this thing of, like, "I would have done this like that," although my job is kind of a mentoring and trying to teach people a different way of using Elixir as a sort of a specialist of the language. So that's part of the reason.

Adrienne Braganza: That's fair.

Recommended talk: Amazing Code Reviews: Creating a Superhero Collective • Alejandro Lujan • GOTO 2019

Code Review Clarity: Tagging Comments for Better Collaboration

Saša Jurić: But when I write that kind of comment, I typically say, "We should get back to this, I think. Let's circle back on this thing." So I try to say, "Okay, this is not mandatory. Your way is fine as well. This is how I would have done it." And so you can maybe take your pick; I'm just sharing it out there. And this is kind of a good question, really. Should I only leave comments which are completely actionable? So, like, you will solve them, or you will say, "Oh, no, what you think is an issue is not an issue because of that." Should I only leave those kinds of comments, or is it fine if I leave some side comments which kind of add to the discussion?

Adrienne Braganza: That is a great question, and I think that depends on the team. So in my opinion, this is where the things I call comment signals come into play, and that is basically labeling your comments to classify what type of action the author should take. So for example, on my team, we did "needs work," which means, "This absolutely needs to change. This is an actionable comment. You need to take care of this." And then there's an optional comment or possibility or...what was the tag that we called it? It was, oh, level up. Level up was a comment that we added, which you don't need to do right now, but it could improve the code base at a later time. So those were the ones that we used on my team.

So to answer your question, no, it doesn't always have to be actionable at the moment. And like we said, this is a place of knowledge transfer and knowledge sharing. So if you have something to share with the team that is applicable, just make sure you are specific about it, whether or not you need to act on it right then and there. Another way that you can classify comments like that are non-blocking versus blocking comments. So you basically say blocking and then you put your comment, like, something very serious needs to be changed, you're not going to approve it. And then non-blocking, which, again, that would fall under something like that, something that you would want to share that could be helpful for the author. This is also very great for juniors or, in your case, because you are being asked to in a consultant capacity, it makes sense to make comments like that. But as long as they know what kind of comment it is, I think that's the more important part.

Saša Jurić: This is something that I've been thinking about myself and wasn't aware of, but, of course, it makes sense that you have this practice of tagging the comments, right? Because what I would do is make 20 comments, and of those, maybe 3 or 4 are really, let's call them, mandatory, something that needs addressing. But then I would, as a reviewer, have to go through the list of all these comments and write a summary of, "This is what you actually have to address, and the rest is optional." Thinking back, I should have just marked it immediately in the comment, like, "Okay, this is an issue," or, "This is an aside, so do what you will."

Adrienne Braganza: I'm glad you still made a summary though. That's nice. See, if you put in some extra effort, other folks don't do that. Because when you think about it, let's say you receive 20 comments as an author and they're not tagged, you don't know. Now you have to go like, "Oh, crap, now I have to go through and fix all of this." I know I always had a heart attack whenever I would get a bunch of comments on my pull request, because I automatically assumed that every single one meant I had to do something, I had to fix something. But in reality, like you said, there's only maybe two or three real actionable things that had to be done. The rest were just like, "Hey, this is good to know," or, "Hey, this is something, an opportunity to improve for the future. You can do it at a later time."
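If the team agrees on machine-readable prefixes like "blocking:" and "non-blocking:", the mandatory-versus-optional summary can even be produced automatically. This is a toy sketch under that assumption; the prefixes follow the blocking/non-blocking convention discussed above, and the function itself is hypothetical:

```python
# Illustrative triage of review comments that use agreed-upon tag
# prefixes. Only the "blocking:" prefix marks a must-fix comment.
def triage(comments):
    """Split tagged review comments into must-fix and optional buckets."""
    blocking, optional = [], []
    for comment in comments:
        text = comment.strip()
        if text.lower().startswith("blocking:"):
            blocking.append(text)
        else:
            # "non-blocking:", "level up:", "nit:", untagged, etc.
            optional.append(text)
    return blocking, optional

comments = [
    "blocking: this query runs once per row; batch it",
    "non-blocking: we could extract this into a helper later",
    "nit: trailing whitespace",
]
must_fix, later = triage(comments)
print(len(must_fix), len(later))  # -> 1 2
```

The value for the author is exactly what the conversation describes: they see at a glance that only one of three comments actually gates the merge.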

Saša Jurić: This can actually lead to a feature creep or a scope creep, right? I think you actually mentioned this as one possible pain point. And I think I had this situation a few times in my career, both as the author and as the reviewer, which I probably shouldn't talk about publicly but here we are, that you just add some comments and then the author addresses those comments, and it kind of drags on and on and on and just starts being this monstrosity of a pull request. So that's one issue that can happen with unfocused review.

Handling Disagreements

Adrienne Braganza: Absolutely. The thing is, when it starts to go even just five, six comments back and forth in a thread between author and reviewer, that's where I'm like, just take it offline. Give a quick call. If you're in-person, tap them on the shoulder and be like, "Hey, let's just talk this out really quick." Or do a video call if you're remote. But almost always, you will get more context, more resolution, much faster if you just talk synchronously rather than asynchronously. Sometimes this is not possible. Sometimes your teammates are in a time zone that doesn't allow you to do that. But I still think it is worth it to try to make that time to talk synchronously if you're just going over each other's heads and it doesn't make sense to you.

Another one: if you find yourself as a reviewer writing a novel in one of your comments, that's a sign. Don't try to put everything in your head in there. Just, again, make a call, do it synchronously, because the issue is usually too complex. There's too much context you're trying to fit in, and it's just better to have that conversation.

Then the other thing I'll add to that is, anytime you have an offline conversation, please, please come back to the PR and add another comment that says what you decided, what the outcome of your conversation was. Because that helps so much with knowledge transfer. If you and I talk it through, for example, and we don't add a comment back on our pull request, the rest of our team is not going to know what we talked about. The rest of our team is not going to understand, "Oh, that's why that decision was made: Saša Jurić and Adrienne Braganza talked about it offline and this is the conclusion of that conversation." So just add a simple comment at the end that says, "Hey, Saša and I spoke offline. We realized we didn't actually have to do it this way, so we're removing it because of X, Y, Z." Done. Now the whole team is on the same page as you, and you do a better job of eliminating those knowledge silos that tend to happen.
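A resolution comment like the one Adrienne describes can be just a few lines. This template is purely illustrative; the names, date, and decision in it are made up for the example:

```text
Resolved offline (call on 2025-01-15: Saša, Adrienne)

Decision: drop the custom cache layer; the ORM's query cache covers this case.
Why: benchmarks showed no measurable win, and the extra invalidation logic
     added risk.
Follow-up: none. Closing this thread.
```

The point is that the decision, the reasoning, and who was involved all land back in the PR, where the rest of the team can find them later.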

Saša Jurić: Another useful technique, I think, for when you have these debates, which inevitably happen, is to try to pair and solve it together. Because when we are debating on a pull request, we are in a sort of power struggle, you know: I'm trying to enforce my way and you're trying to enforce yours. But when we pair, we actually have a common goal of solving this thing, right? And I think that shifts the perspective, and all of a sudden we actually start working together to make it happen. So I find that sometimes it can help.

Adrienne Braganza: I find that when you're having that argument, a common case is that you think your implementation is clearer, and the other person thinks, "Actually, no, I think mine is clearer or more straightforward." So you start pairing, and typically what happens is you end up finding a better solution, a middle ground or a completely new solution that you develop together, and now you're both happy and you've come up with an even better implementation. I like that a lot as a solution, because instead of just arguing it out and debating it in the comments, you work together. You're two adults. You're two highly paid developers. You can pair-program and develop something together.

Recommended talk: A Path to Better Programming • Robert "Uncle Bob" Martin & Allen Holub • GOTO 2021

Pair Programming vs Code Reviews

Saša Jurić: Exactly. And I just want to say this, so I know there's, like, a camp of people who suggest that pairing can replace reviews. And again, this is something you touch on in your book as well. It's a very complete treatment of the whole topic, right? I don't think that pairing can replace reviews. I think it's a different kind of technique that can complement them, right? And this would be one great example where it can literally enhance or unblock the process, if you will, right?

Adrienne Braganza: A lot of the people in the camp that says it can replace reviews like it because they do review, right? They do a faster, iterative review of each other's work while they're pairing. So, in a sense, yes, you still do the same things. You have the review mechanism in a faster form. But the same problems remain, because the context and the nuance now sit with two people instead of one. So if you completely replace code reviews and only say, "Oh, you know, the one other person, the person I'm pairing with, says it looks good," well, yes, they've been in the weeds with you. They have the same narrow scope you do.

Saša Jurić: Exactly.

Adrienne Braganza: So you still need fresh eyes to look at it. And then the last part, a hill I will die on, is that code reviews are a very good place to document what is happening in your code base. So even if you pair-program and do the review that way, just make your code review a little smaller and less formal. Have other folks take a look at it and see if there's anything to leave feedback on. That way, you still get those benefits of knowledge sharing, knowledge transfer, and documentation of what's happening in your code.

Saša Jurić: I would also add that, to me, pairing means authoring. So the pair, both people, are the authors. They are reviewing as they go, but it's a form of self-review, so to speak, and it's bound, I think, to suffer from the author's bias, because things are much clearer to the author than to other people looking at the code. I don't think it's going to be as efficient as someone looking at the code who was not involved in making it, right? And this is, to me, actually the main thing I want from a review. The very first question is: can I understand this code? Because if I cannot understand it, then I cannot find any problems with it anyway. Any other benefit falls away if I don't even understand what this code is about. To do this properly, you need a fresh pair of eyes, someone who was not involved in the process of making that code.

Adrienne Braganza: Totally.

AI and Code Reviews

Adrienne Braganza: This is a good segue. Maybe the last thing we can talk about is that you can even have AI take a look at your code, maybe do a very superficial first-pass review. I know a lot of folks are probably thinking about AI and whether it can do code reviews for us. There are already tools out there that do this. And some of the ones I have seen do a good job of summarizing. They do a good job of code suggestions, especially if you let them learn how your team works and what type of fixes your team usually suggests, so they start to learn what your team would typically suggest for certain problems. But I will say, for anybody listening, at this moment in time, it's still January 2025, I do not think AI can replace code reviews.

It can help. It can do all of the mundane stuff. Have it summarize your PR for you. So all that context we want you to add as an author, let AI write that for you. Have it write a title. Have it enforce that titles follow the same convention on all your PRs, so that you can do some automation in your CI/CD builds. Those are the things I think AI can definitely do. But it can't understand the nuance just yet. It can't understand your team, how it operates, or what your project's quirks are. It can get there. I'm not saying it will never be able to do that. But in its current state, I still don't think that's possible.
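Title enforcement like this doesn't even need AI; a small CI check can do it. The sketch below assumes a Conventional Commits-style title format, which the speakers don't specify, and the allowed types and the `is_valid_pr_title` helper are illustrative choices, not a standard your team necessarily uses:

```python
import re

# Hypothetical CI helper: accept PR titles shaped like
# "type(optional-scope): short description". The allowed
# types here are an assumption for the example.
TITLE_PATTERN = re.compile(
    r"^(feat|fix|docs|refactor|test|chore)(\([\w-]+\))?: .+"
)

def is_valid_pr_title(title: str) -> bool:
    """Return True if the title matches the team's convention."""
    return bool(TITLE_PATTERN.match(title))

if __name__ == "__main__":
    print(is_valid_pr_title("feat(auth): add session timeout"))  # True
    print(is_valid_pr_title("fixed a bug"))                      # False
```

A CI job could run a check like this against the PR title and fail the build with a message explaining the expected format, which keeps the convention enforced without a human having to comment on it.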

The last part I'll say about that is, especially now because more and more folks are using AI to write code, I think it's even more imperative that we do code reviews. So take advantage of having AI tools help you author the code and write it, but know that your human eye is more needed than ever to take a look at that code and to make sure it's actually right, to make sure it's not hallucinating, to make sure it doesn't have some very glaring security issues in it. These are the things that we are definitely going to need to do now more than ever.

Saša Jurić: This is a great point really. I feel, these days, that I have to review a lot of the machine-generated stuff, right? So as software developers, we are going to have to read more and more. Maybe that's not a bad thing, though. We're going to have to learn how to actually understand the details and the ins and outs of someone else's code, and maybe that's going to make us better reviewers. What do you think?

Adrienne Braganza: I hope so. My hope is that code reviews help us actually become more critical thinkers because we have to review so much more. I mean, we read so much code. But that is my hope, that we become better critical thinkers and we read a lot more.

Saša Jurić: Okay. So on that thought, shall we wrap it up? What do you say? We don't have a script, as the title says.

Adrienne Braganza: This was really, really great. I appreciate hearing your experiences, especially in your capacity as a consultant and freelance mentor. I really enjoyed talking about this. Hopefully, we can do part two. There's so much more that we could talk about that we haven't even touched on. But, yeah.

Saša Jurić: I have to say the time flew by too fast. I was hoping we were going to be able to touch on some other things as well. It's a very fascinating topic. And as I said, for me, it became one of my favorite programming practices, and I started doing it somewhere in the middle of my career, at a point when I had already tried a bunch of different stuff. I'm a huge fan of code reviews. When done right, they are really pleasant. And your book, Adrienne Braganza, actually tells us how to do them right. It's full of practical advice. It's a very complete treatment of the subject. I was really amazed at how many different areas you covered in there, including even the AI at the very end, right? So I totally recommend it. I'm not invested in any way in this book. I just read it, and I think it's a great book. So I totally recommend it to anyone, and I wish you the best of luck. And to all people watching this and listening: do the code reviews, do them well, and if you have problems, ask Adrienne Braganza how to fix them.

Adrienne Braganza: Thank you so much, Saša Jurić. That means a lot. Thank you. And, yes, please ask me the questions. I want to make code reviews better for everybody.

Saša Jurić: Thank you for doing that work.