
The End of Engineering's Blank Check: Accountability in Software Leadership

CTOs must evolve from technical experts to business leaders. Laura Tacho on bridging the gap between code and business impact.


About the experts

Charles Humble

Charles Humble ( interviewer )

Freelance Techie, Podcaster, Editor, Author & Consultant

Laura Tacho

Laura Tacho ( expert )


Introduction and Role Overview

Charles Humble: Hello and welcome to this episode of GOTO Unscripted. I'm Charles Humble. I'm a freelance techie, editor, author, and consultant, and this is part of a series of podcasts that I'm doing for GOTO Conferences talking to software engineering leaders. Today we're joined by Laura Tacho, CTO at DX. She's been building developer tools and working on improving developer productivity for over ten years, all the way from the heyday of infrastructure as a service and platform as a service on cloud through Docker and containers, CI/CD, and now as part of DX. She's also an executive coach for engineering leaders and an expert in building world-class engineering organizations that consistently deliver outstanding results. Laura has coached CTOs and other engineering leaders, from startups to the Fortune 500, and also facilitates a popular course on metrics and engineering team performance. Laura, welcome to the show.

Laura Tacho: Thanks for having me, Charles, and looking forward to our conversation today.

Charles Humble: Tell me about your role and what you do day to day as a CTO.

Laura Tacho: CTO roles can be really different from company to company. DX is a very unique company. We're an engineering intelligence platform. We're out there to help our customers measure and, most importantly, improve developer productivity. On any given day, I spend most of my time externally facing, as a partner to our customers. That might look like me doing an executive alignment session with VPs, having a CTO conversation, or coaching some of our champions on how to refine their messaging when talking up to their executive team or their board. It's very different day to day. I don't have a typical schedule, which I really enjoy, and I really like getting to spend a lot of my time working with our customers and learning about their problems, because that helps us build a better product.

Charles Humble: Is this your first CTO role? Have you done it elsewhere before?

Laura Tacho: This is my first CTO role. Before this I moved through the traditional path: director of engineering, senior director, VP at a few companies, and then into a CTO role.

Charles Humble: What do you like about the role? Do you enjoy doing it and what is it you enjoy about it?

Laura Tacho: As I mentioned, CTO roles can vary a lot from company to company. Some CTOs are focused fully internally on managing and building the team from the inside. Some CTOs are focused more externally. What I like about this particular role is that I have a lot of autonomy to make improvements on the inside. But most importantly, I really enjoy being able to be a partner for our customers and use the decade-plus I've spent on developer productivity to give them some shortcuts, so they don't have to make the same mistakes if they don't need to. That gives me a lot of satisfaction.

Charles Humble: As well as your role as CTO at DX, you also coach other CTOs. How did you get into that?

Laura Tacho: I left my cushy corporate VP of engineering role during the pandemic and decided that I wanted to do something different, which I know is a risky move. But it felt like the time was right. I was feeling like I wanted to scale my impact, and I knew that one way to do that would be to work with people at many different organizations. Throughout my engineering leadership career, it had become abundantly clear that coaching and supporting other leaders in their own leadership journey was a skill that I'm particularly good at, and one that is somewhat uncommon, especially focused specifically on engineering leadership. I built my MVP, got proof from the market that it had product-market fit, and decided to build up a coaching practice. At the same time, I built a course on developer productivity: measuring it and improving it. Those two things together got me into a lot of different organizations, working with my clients and getting a lot of breadth in seeing the different problems across the industry right now. It was really rewarding to help those leaders find their voice and find their solutions, but also, for me personally, to complete my encyclopedia of knowledge and keep me up to date on the challenges that real leaders are facing right now, so that I can better serve them in the future.

Recommended talk: Structures Shape Results: Software Insights • Elisabeth Hendrickson & Charles Humble • GOTO 2024

Common Leadership Challenges and Skills Gaps

Charles Humble: What are the skills that you see that are most commonly missing from the CTOs that you coach?

Laura Tacho: Most CTOs, and most leaders in general, have a deficit of practice when it comes to what you might think of as foundational leadership skills: things like giving difficult feedback and setting clear expectations. There's a fine line, and I think the line is particularly sensitive and maybe particularly narrow in engineering, between micromanagement and setting clear expectations. For most of the CTOs and VPs and other senior leaders that I work with, their biggest fear is becoming a micromanager, and because we have that fear, we tend to pull back when it comes to giving direction and giving necessary detail. This leads to what I call the micromanagement spiral of doom. Because we don't give our teams enough context, they underperform, or they don't perform to our expectations. So we have to go and get into the details. That's the micromanagement we were trying to avoid. We feel bad about that; it doesn't feel good for anyone involved. Then the next time we have an opportunity, we pull back, and the cycle repeats itself. This is a fundamental skill gap. Most CTOs who are new to their role, and even CTOs who have been doing the job for 20-plus years, don't recognize that pattern, and they've never been taught how to set a really clear expectation so that people do what you need them to do on the first try, without you having to intervene.

Charles Humble: That all sounds very familiar. Trying to give people the direction we're going, and then stepping back and letting them find a path, is a hard thing to do. As well as that, I think the shift into management broadly is quite hard, particularly if you're technical as I was, because you suddenly find that all the things you were good at before, the things that got you where you are, are suddenly not the things that really matter anymore. Do you have any advice for people who are feeling that way?

Laura Tacho: Any CTO who's moving into their new role and having the self-awareness of, hey, what got me here isn't going to get me there, that's really important. It's your metamorphosis. Your evolution as a leader, one of the best pieces of advice that I can give leaders in that role is that you have to think of yourself now, not as an engineering leader, even though that's your area of specialty. You are a business leader. Until you fully embrace the fact that you are operating on a business level, you're going to hold yourself back because you're going to stay thinking about the engineering side of things and not apply your leadership to the whole business, which is what the business needs and expects of you.

The sooner you can overcome that hurdle, the better, and a lot of it is putting our ego aside. We've gotten praise, money, and celebration based on our technical skills up until this point, and then suddenly we're in an environment where that is no longer going to give us the praise and validation we seek. Not only that, but it actually works against you when you do those things. That is really difficult on a human level. It really rattles your whole identity. There's a bit of self-acceptance that needs to happen, introspection, figuring out: who am I? It can be a very jolting transition, even if it feels like a very natural next career step. Give yourself time to go through that. Let yourself grieve the brilliant technologist that you were before, and understand that you are now entering, not a new chapter, but a new book. Be okay with that and recognize that it's going to happen. Ideally, you have this dialog with yourself before taking the role, but I think a lot of CTOs don't quite understand the gravity of the change, even moving into a VP role, until they've already gotten into the role and are starting to feel some of those pains.

Charles Humble: A related question, I guess, is how do you maintain your old technical skills, your old programming skills, to try and limit the atrophy? Or do you just accept that they are going to atrophy and that's the reality of it?

Laura Tacho: You will not find me coding something that's going to production. I'm definitely not. Every CTO role is different and every company is different. Some companies that are very big have huge engineering headcounts, have a CTO that codes, and that's okay if that's the contract that you have. For me, in my particular role right now, that's not part of the contract that I have. That's not the interface that we have. I would definitely be a bottleneck. That was also the case in my VP roles. I am not a hands-on coding type of leader.

You do, though, need to keep your technical skills up to date because it's very difficult to lead a team if you don't have any expertise in what they're doing anymore. Two things about that: one, the principles more or less stay the same. The physics of software development stay the same even if the tools are new. As long as you have a very good, solid technical foundation, a lot of those things aren't going to change from month to month. The second thing is do enough on your own-it's a side project, it's reading documentation, it's watching conference talks, whatever it might be-to make sure that you have the necessary technical skill required to do your job, to evaluate performance, to make technical decisions, to guide technical decision making if that's your flavor. You don't need to be coding every day, but you can't forget that that part of you exists either.

Charles Humble: There's something else which I've struggled with and I think a lot of people struggle with, which is that when you're programming, you have this very immediate feedback and reward loop, you sort of touched on this, that kind of instant dopamine hit. When you start managing for a living, that really isn't there anymore. Do you have advice on that?

Laura Tacho: Actually, anecdotally, this happened to me last week where something happened at work and I said, oh, finally it came true. The thing that I thought was going to happen. I went back to Slack and it was like a year and two months ago that I said, okay, we're going to do this and this is going to happen, but it's probably going to take a year. I was right. But man, it is difficult when you're used to doing something, running your tests, they’re green, pushing it up to your main branch, running all your CI integrations, shipping it to production. As an individual contributor, I was shipping code to production user-facing stuff six, seven times a day. Now I make a decision that takes me a year and two months to figure out if I was right or not.

That's part of the game. That's also part of the mindset shift of understanding you're a business leader now. One of the common things I hear is: how am I supposed to get work done now that I'm in all these meetings? The meetings are the work. Once we can start to understand that, it relieves a lot of this pressure that we put on ourselves to be a different person, or the sense that we're doing something wrong. We're not doing anything wrong. Your role is different.

I also find those moments in the day where you can do something, filling in a box on a spreadsheet with color, or posting on LinkedIn, that gives nice immediate gratification. There are definitely things in your day to give you that gratification. What's helped me is understanding that the impact of the decisions that I'm making is incredibly large. They take time, but I can break them down into smaller milestones so that I can still make sure I'm on track, without having to put something out there into the void and hope that a year and two months later the thing actually happened the way I thought it would.

Recommended talk: How to Take Great Engineers & Make Them Great Technical Leaders • Courtney Hemphill • GOTO 2017

Measuring Team and Manager Performance

Charles Humble: In these more senior roles, particularly if we have a lot of managerial reports, when we first step into a manager-of-managers role, we tend to need signals to understand how a particular team is performing. Change failure rate might be an example of that. How do you identify those signals?

Laura Tacho: That is a very thoughtful question. I think there are two ways that I have generally seen it done. One is that we have a common language and a desire for consistency in reporting from the top down. This is a situation where every team is responsible for the same measures; you mentioned change failure rate, so maybe they're looking at DORA metrics, or they're using a different framework. But we want to be looking at every service or every team, depending on where the lines fall, on the same criteria, so that I can see if something is amiss. That's definitely one strategy.

I've also worked in different scenarios where the teams themselves are highly variable, or they're working in very highly specialized engineering functions. One example: a client of mine was working on computations for the force of waves crashing against solar farms in the middle of the ocean, and how that impacts structural engineering. This is really different from shipping a JavaScript application. You've got lots of different types of engineering, complexity, and software in the same company. For that situation, the approach that not just this company but many have taken is that each team is responsible for defining productivity as it is important and useful to them to improve, and then for their own metrics and reporting on that. Depending on your situation, you might find that one or the other approach is more appropriate for the particular goals that you have.

Charles Humble: In the same way, I think that those signals can change depending on where you are in the organization, but they can also change over time, like at different phases within the business as well.

Laura Tacho: I do see that happen as well. The metrics and the signal of how is this team performing, how is this component or system performing, it does somewhat follow a trajectory of data literacy and continuous improvement mastery. Teams that are getting started with continuous improvement might be looking at a really granular, small focus of data, because that's all they have capacity to interpret and act upon. We might build that up over the course of six, twelve, eighteen months to something more robust and standardized across the whole organization. That's very common, especially also to see it happen only in a subsection of teams or a subset of teams. Maybe there's two teams who are the flagship teams for continuous improvement and using metrics for productivity and improving productivity. We can see them have some success and that might snowball a bit and then fan out into the other teams until we have a whole organization who's able to exploit the knowledge and lessons that that team has learned to bring those habits and processes to the rest of the organization.

Recommended talk: How to Deal with Software Complexity • Gail Murphy & Charles Humble • GOTO 2024

Understanding Developer Productivity and the DX Core 4 Framework

Charles Humble: There's something we talk about a lot in our industry, this idea of developer productivity or what that means and how we measure it. I had the pleasure of interviewing Dr. Gail Murphy for this show. We had quite a detailed discussion about this. Her work's been hugely influential on me, but because I want to talk about the DX Core 4 framework and the metrics for that, can we maybe start with what your definition of what developer productivity actually is?

Laura Tacho: Developer productivity is actually very difficult to define. We've had research trying to define it since the beginning of software development. I think as Gail mentioned in her interview, we used to define it based on lines of code. For my own definition, and I don't like to have my own definition for things, I want a common language that we use as an industry, so I'll fall back to the Core Four, because that really is my thinking codified in a framework: developer productivity is, first, multidimensional. It is about output that leads to outcomes. It's about quality, it's about the developer experience, but it's also about business impact. It has to be about those things together. Otherwise we get too fixated on one part and we can lose sight of the whole system.

Developer productivity is a team sport and it is very complex. We want to have a definition that is sufficiently complex to encapsulate it, which unfortunately makes it sometimes difficult to reason about, and difficult to work with. But one of our hopes of the Core Four was that we were simplifying and unifying, and we have seen that it does help organizations get their head around the definition of developer productivity, give common language to all of their leaders in the organization, and skip that debate "what is developer productivity?" at the beginning and get to let's try to start improving it. Here are some tools to do that.

Charles Humble: It's interesting you talk about that common language, because I think something that gets missed a lot is that the words really matter, because they shape how you think about something. It's actually quite an important point that people overlook a lot.

Laura Tacho: Yes.

Charles Humble: Tell me about the DX Core 4 framework you've recently released. Can you tell me a bit about it? How did you build it in the first place?

Laura Tacho: Let me give a very brief tour through developer productivity research. As Gail mentioned in her interview with you, we've been trying to measure developer productivity since the beginning of software. We started with some very straightforward measurements, lines of code, for example. I think where we landed a decade and a half ago was: perhaps this is not the best thing. There's variance in programming languages, and there's a lot of variance in the types of tasks that engineers are responsible for.

About a decade ago, DORA came on the scene with its four key metrics. These have been adopted as something like the canon of objective measurement of organizational performance, and that got interpreted as: these are developer productivity metrics. DORA was really meant to measure software delivery capabilities, not developer productivity broadly. But in the absence of a better alternative, DORA was a great one. We often latched on to those metrics as really useful, and we use them, but there is still quite a lot of the story left to tell.

Dr. Nicole Forsgren and some of the other researchers involved in DORA then came together to do research around the SPACE framework of developer productivity. SPACE is a very comprehensive framework. It is not a recommendation of what to measure; it describes the components or dimensions of developer productivity. One of the main theses of SPACE was that activity is only one part, one fifth of the picture: there's satisfaction, there's communication, there's lots of other things. Also, self-reported perception of productivity is really important. We're starting to get a more nuanced, more comprehensive definition.

Then the DevEx framework came on the scene, again from that same research team, identifying cognitive load, flow state, and feedback loops as the components of a really good developer experience, which leads to great organizational outcomes.

There's so much research out there. Despite all of it, we were still being asked the question: what should I even measure? Abi Noda, who is the CEO and co-founder of DX, and I came together and said: what can we do to simplify this, to unify these frameworks, so that people didn't have to sit there stalled for three months when they wanted to get started, and could instead pick up something well tested and well researched and get going quickly?

That's really what the Core Four was born out of: a desire to give leaders something that they could start using immediately, that's easy to deploy, and that is research-backed, evidence-based, and tested in the field. We came up with the Core Four, which unifies all of these frameworks so that leaders don't have to spend three or six months trying to figure out which metrics to pick. They can get started with the improvement part, because that's what we're after anyway.

Charles Humble: Let's dig into the framework a bit. So it has four areas. You have speed, effectiveness, quality and impact. And then they have key and secondary metrics for each one of those. Can you talk a little bit about some of the different metrics for each of those categories and how they were arrived at?

Laura Tacho: I will talk about the key metrics for each of them. Speed, effectiveness, quality and impact. The categories here really hold each other in tension. It's important that when we look at Core Four, we're looking at all of the metrics together. You can't look at one value from one of the categories in isolation because it doesn't mean much in isolation.

Starting with speed: this is actually the most controversial one. We're measuring it in diffs per engineer, so pull requests or merge requests per engineer, not at an individual level. This is incredibly important: not at the individual level, but in aggregate. This is a metric that, actually, you can find a conference talk of Abi on stage telling people not to use. I have definitely said the same thing in the past as well, because the risk of misuse for this particular metric is very high. People automatically go to: okay, Charles had four PRs last week and now he has three this week. What's wrong? Are you slacking? Are you on vacation?

But when we look at it as a system metric, as a measurement that answers the question, how easy is it for engineers to get work done in the development system they have access to at this particular company, it is so useful, and this has been shown in research at Microsoft. Meta uses this. Uber, I believe, as well. The benefit outweighs the risk, as long as we felt that we could sufficiently educate people on how to use it, and putting it within the boundaries of a framework helps with that. The authors ourselves debated: is this the right metric or not? One of the things that we wanted to include in the framework were peer benchmarks; benchmarkability and ease of measurement were also things we considered when choosing each metric. Diffs per engineer fits both of those criteria. It's a very useful metric for system performance. That is the speed part. I'll reiterate: not at an individual level. I'm not interested in how many PRs you closed last week.
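To make the aggregation point concrete, here is a minimal sketch in Python of how "diffs per engineer" can be computed as a system-level signal rather than an individual one. The team names, counts, and the `Team` structure are made-up illustrations, not DX's actual methodology or API; the key idea is simply that the function only ever divides team-wide throughput by headcount and never reports a number for a named individual.

```python
# Hypothetical sketch: "diffs per engineer" as a system metric, in aggregate.
# All names and figures here are illustrative, not a real DX API.
from dataclasses import dataclass
from statistics import median

@dataclass
class Team:
    name: str
    engineer_count: int
    weekly_merged_prs: list[int]  # team-wide merged PR counts, one entry per week

def diffs_per_engineer_per_week(team: Team) -> float:
    """Median weekly merged PRs for the whole team, divided by headcount.

    Deliberately answers "how easy is it to get work done in this system?"
    rather than "how many PRs did this person ship?"
    """
    return median(team.weekly_merged_prs) / team.engineer_count

teams = [
    Team("payments", 8, [24, 30, 27, 25]),
    Team("platform", 5, [9, 11, 10, 12]),
]
for t in teams:
    print(t.name, round(diffs_per_engineer_per_week(t), 2))
```

Using the median rather than the mean keeps one unusual week (a release freeze, a holiday) from skewing the signal; comparing the value across teams or over time, not across people, is what keeps it a system metric.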

Charles Humble: I think it's really important to highlight that issue, because it's one of those things: if you measure at an individual level, it's so easy for developers to game. If they think they're being measured on that, well, I'll submit more PRs. That's the problem. I think with all of these things, you have to measure at an overall team level. The other thing that you said that I want to pull out and highlight is that some of the metrics you've chosen are effectively in tension with each other. They act as a bit of a self-balancing force on the overall framework. Is that right? Can you talk a little about that?

Laura Tacho: Let's take the next two, effectiveness and quality, into consideration. Effectiveness is a measure of how effective engineers can be in the systems in which they work. This is measured by the Developer Experience Index, an average of a set of developer experience drivers. That gives you an indication of how good your developer experience is in comparison to others.

For example, let's say we're trying to increase the number of PRs per week and then developer experience goes down, or we're trying to increase quality by adding in a bunch of manual QA steps and other things that create friction. First of all, we should expect to see speed go down; that throughput of PRs is going to go down, but the developer experience is going to go down too. What we want to see is these numbers moving together. That's the real signal. That's why we need to see them all together and not look at just one, because looking at one, even over time, doesn't really tell us much. We want to see how they're moving together.

What we want to see is increased speed and higher quality; those two things should move together, along with a better developer experience. Then teams that have a great developer experience and high quality are able to get work done, and are also able to spend more time on innovation, on new features, versus keeping the lights on and maintenance. That's the fourth category: impact.

This is something fairly new and novel for a developer productivity framework, because what was missing for all of the leaders we were working with in the field was: how do I actually connect things like change failure rate or developer experience back to the business, and make it useful and meaningful for my peers or for the executive team I have to present to? The way to do that is to help them understand what those things get you, which is more time to spend on innovation, because a lot of this performance is contextual. If your competitor has more time to spend on innovation than you do, because you have a poor developer experience or they're investing more in theirs, it's a question of when, not if, they're going to outpace you and overtake you. We wanted to make that connection really clear.

Developer experience is not ping pong and beer and better toilet paper for developers. That's not what this is. Truthfully, there has been a bit of definition drift around "developer experience," and executives and the people making the decisions about where the money goes might not see that this isn't about ping pong and beer. This is about reducing friction for the biggest investment that your company is making, the place where innovation is truly make or break. It's not a vitamin, it's a painkiller, is how I try to phrase it. That's another reason we need to have this conversation and talk about it with authority in terms of business impact. That's why including the impact dimension in the Core Four was critical to its success.

Charles Humble: I really like that. I've had a lot of conversations with people the other way around, where it's been: oh, the business won't sponsor me to pay down technical debt, or whatever it is. I'm like, well, explain to them what that even means. Actually, can you explain to me what that even means? Because I'll wager your definition and my definition may well not be the same. A lot of the time, I think we use technical debt to mean stuff we don't like. It's that importance of tying it back to something. I did a bunch of shortcuts for O'Reilly last year, and there's an example in those along the lines of: our app is crashing and people are abandoning the checkout, we're losing one in five, we think it'll take us four weeks to fix, and if we fix it, that drop-off should halve. Then you've got something that you can actually measure, but the business also understands why you're doing the thing. As you say, it's not ping pong tables, it's not beer. At the end of the day, it's about what impact this is actually having, something the people who are sponsoring you can see and reason about. I really like the impact part of the framework.

Laura Tacho: Technical debt is a phrase or a term that I really don't like, but I still use it because when I talk to other technologists, they know exactly what I mean. But when I talk to people who are not technologists, they don't know what it means. I think for too long it's been used as a catchall phrase for things that we want to do again, things that we want to get rid of, things that you shouldn't ask us questions about because we know better than you. For a long time, engineering did sort of have a blank check to do a lot of what we wanted to do self-directed. The accountability piece wasn't necessarily there. I think we had a really good run. But that time is really over.

Accountability is here. Not that companies weren't holding engineering leaders accountable before, but the environment of the last five years is very different from fifteen years ago. There's a whole generation of engineering leaders who have really underdeveloped business-case skills, because they came up in their leadership during a time when we weren't really expected to talk about the ROI of a particular project. It was fine for us to wave our hands and say: we know better, trust us, we have to do this. It wasn't a negotiation. We weren't engaging with the question: what should we do first, this external customer-facing thing or this internal one? Because once we think about paying down technical debt as serving your internal customers, the people who are developing on your platform, we can start to draw parallels between an external customer-facing thing and an internal customer-facing thing, and calculate the ROI.

When I think about tech debt, I think about it in terms of: let's actually be clear about what this is doing. Is this making our system more reliable? Is this helping us avoid a security issue? Is this helping us fix a current security issue? Get really clear about what is happening. Then we need to know the audience: who is this impacting? Is it ten individual developers who work on this component twice a year, or does it impact something that every single developer in our company has to do once a week, and it's taking them three times as long as it could? The ROI is going to be not just time recovered and its salary equivalent, but also the answer to the question: what could we bring to market faster?
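The reasoning above, time recovered times its salary equivalent, weighed by how many developers are affected and how often, can be sketched as a back-of-the-envelope calculation. Everything below is a made-up illustration (the function name, the 48 working weeks per year, the hourly rate), not a DX-recommended model; the point is only to show how a broad, weekly friction point can dominate a narrow, twice-a-year one.

```python
# Hypothetical back-of-the-envelope ROI for paying down a piece of tech debt.
# All figures and names are illustrative assumptions, not recommended values.

def tech_debt_roi(devs_affected: int,
                  hours_lost_per_dev_per_week: float,
                  loaded_hourly_cost: float,
                  fix_cost_hours: float) -> float:
    """Annual salary-equivalent recovered, minus the one-off cost of the fix.

    Assumes roughly 48 working weeks per year and a single blended
    hourly cost for simplicity.
    """
    annual_hours_recovered = devs_affected * hours_lost_per_dev_per_week * 48
    savings = annual_hours_recovered * loaded_hourly_cost
    fix_cost = fix_cost_hours * loaded_hourly_cost
    return savings - fix_cost

# A friction point costing 40 developers 1.5 hours every week versus one
# touching 10 developers a few hours per year, with the same fix cost:
broad = tech_debt_roi(40, 1.5, 100.0, 400)   # strongly positive
narrow = tech_debt_roi(10, 0.1, 100.0, 400)  # negative: the fix costs more than it saves
print(broad, narrow)
```

A fuller version would also try to price the "what could we bring to market faster?" side, which is harder to quantify but, as the discussion above argues, is often the larger number.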

These are the ways that we need to start putting together business cases around technical debt in order for it to be relevant to the business. Going back to how we started our conversation today: you are a business leader if you are an engineering leader, and you must think about your decision making in terms of money; that's the language of business. I will catch the flak and all the negative comments on LinkedIn whenever I say something like this. But better you hear it from me now and get mad about it than lose your job over it in twelve months' time because you're not practicing what you need to practice in order to be an effective leader.

Charles Humble: 100%. I really like your point that lots of the current crop of leaders grew up in a time when engineering was not really held accountable in the way that it is now. That's a fantastic insight, because you're right, it's a relatively new thing. Because we are such an expensive resource now, for all kinds of reasons, good and bad, people want to understand the impact of what you're doing.

Laura Tacho: I have to say, I don't have a ton of evidence yet other than my own anecdotes, and the plural of anecdote is not evidence. What I do find out there in my coaching practice are engineering leaders between their late twenties and early forties who maybe started their engineering leadership journey after the 2008 economic crash. I think that's where I've been able to pinpoint it: late twenties to early forties. Then when I coach leaders who are in their fifties, or in their late fifties and sixties, they tend to have a different amount of business acumen. I noticed this very recently, which is when I was able to put together the hypothesis. I think it really is about when you came of age as an engineering leader, in this sort of heyday where money was free and we could do whatever we wanted to, when it was about ping pong and beer. I'm interested to see how that plays out in the future, and how long it takes for those leaders to correct themselves, or educate themselves up to where they need to be in order to thrive in the later stages of their career.

Recommended talk: Prioritizing Technical Debt as if Time & Money Matters • Adam Tornhill • GOTO 2019

Choosing Frameworks and Avoiding Pitfalls

Charles Humble: We have a lot of different frameworks for measuring productivity. Obviously we've mentioned DORA, SPACE, and the Core Four, and there are others. How would you go about choosing a framework? I presume you would say, well, I'll just use the Core Four and everything is good. But in all seriousness, how would you pick a framework?

Laura Tacho: I think there's some organizational context that needs to be considered here. You need to understand who the audience is and what we are actually using this for, because there are different use cases for metrics, and often leaders don't quite think through the subtleties between those use cases. You should think about your metrics as a product: a product mindset for metrics. Your metrics are a product. They need to be solving a problem for your customers that they are willing to pay for. I know we're not actually exchanging money here, but attention and use are the currency that they're paying with.

You need metrics for local teams for continuous improvement, so that they can identify bottlenecks in their own workflows. That's going to be a really different set of metrics than the metrics you want on your board slides to report on once a quarter. I might recommend the Core Four for that unifying common language across the whole organization, to use for performance reporting and reporting to the board. But an individual frontline team isn't going to have a lot of success improving software quality by only looking at change failure rate. It's a great diagnostic metric for them to look at once a quarter, but it's an output metric, and they might need smaller metrics that don't belong to a particular framework; rather, they use the framework to inform which metrics they pick.

Of course, my short answer is always to start with the Core Four, because it is a unification of DORA, SPACE, and DevEx. You don't have to choose between those, even though it's often set up as: you're going to do DORA or you're going to do SPACE. The reality is that you do both of them at the same time, and that's really clearly expressed in how we designed the Core Four framework. My short answer, of course, is to start there. But truly think about your metrics as a product. Who are they for? What information are they looking for? What questions do they have to answer? What decisions do they have to make? That will inform where you want to start, and how you want to market or position these internally.

Charles Humble: We've talked about some metrics already. But what are the pitfalls with metrics? They seem to be having a bit of a moment, but do you have any concerns around using them?

Laura Tacho: They are having a bit of a moment. There's definitely a lot to be concerned about when introducing metrics, so I'll give you a high-altitude tour of all the things that can go wrong. You mentioned gamification of metrics at the beginning of our discussion around this. That is a real concern for a lot of leaders. My bottom line here is that we know how groups of people behave when metrics are being used to measure performance, especially in public, or somewhat in public, within a company. It's our responsibility to design a better system. If you're worried about gamification of the metrics, it's not a problem with your developers; it's a problem with the system of metrics. Using a robust framework like the Core Four, which also includes developer experience and qualitative data, can help insulate against that. That's definitely one concern, but it's not a reason not to use metrics. That threat of gamification is universally there.

There's also the argument that developers are going to feel spied on and lose trust in leadership, and that absolutely can happen. Make sure that you're being very transparent about what decisions are going to be made, who has access to the data, why it's being collected, all of these things. If you don't give people answers, they make up their own. Don't let that happen.

I think the other challenge from a leadership perspective, and also the worry, is: are we measuring the right things? Are we missing a really important part of our performance, an important piece of the puzzle that, because we don't have visibility into it, means we'll be making a mistake for eighteen months before we even realize we're making it? There are a lot of ways that people can use metrics incorrectly, over-optimize on certain things, and lose trust with the organization. So you need to be really thoughtful and, most importantly, transparent with your organization about what these metrics are doing, what purpose they serve, what decisions are being made with them, and what happens next.

AI's Impact on Developer Productivity

Charles Humble: Thinking of things that have been having a bit of a moment, I know you've been studying gen AI in the context of software development. So what are some of the things that you found out there?

Laura Tacho: We actually have a new field guide to improving the use of gen AI in engineering organizations. It's a vendor-neutral study in which we looked at 180 companies and tried to understand, for the companies that are having a lot of success with gen AI tooling, what they are actually using it for. Some of the findings were, even to me, very surprising.

One of the big themes was reiterated in the report that DORA did a few months ago about gen AI; this was a standalone gen AI report, not their normal State of DevOps report. They found that companies that have very strong executive sponsorship, very clear compliance rules, and a very clear top-down mandate, maybe mandate is not the right word, but very clear instructions on what's acceptable and what's not, had, I think it was 451 if my memory serves me correctly, a higher percent increase in adoption, simply because developers weren't left asking questions like: am I going to get fired for this? Is this okay? Does this violate our licensing? Am I going to be replaced by AI because now I'm using AI to do my job? So a clear definition of what's acceptable and what's not, and compliance from the top, is incredibly important for adoption.

Then on the other side: like any tool, engineers need training and support in order to know how to use AI-assisted development in the best way. It's not: I'm going to buy GitHub Copilot licenses for my whole organization, then check in three months later and magically we have ten-x productivity. We still need to train engineers how to prompt well and how to do recursive prompting.

Surprisingly, one of the things I'll share from the report is that code generation was actually not the use case where engineers saved the most time. It was stack trace analysis, particularly in the context of Java development. Being able to feed it this really long, gross error and say: tell me why. Why is this happening? What are your suggestions for fixing this? That is a huge time saver, because the cognitive load to do that independently is so incredibly high, and AI doesn't ever get tired; it basically has unlimited cognitive capacity. That was a huge time saver for developers, who reported saving an hour or more each week by using gen AI coding assistants.

Charles Humble: That's really interesting. In a way, a bit of this ties back to our conversation about the early days of metrics, about measuring lines of code as developer productivity, which we've moved away from because it's a terrible measure of productivity. I find some of the early marketing messages around gen AI similar: you'll be able to code faster, as if typing was the hard bit. But typing has never been the hard part; those of us who do this for a living know that. I think stack traces are actually a really good example that makes total sense, because you're not asking it to do anything creative, but it can absolutely analyze stuff it's seen before and say, well, it looks like this. That could be a huge time saver.

Laura Tacho: Yeah.

Measuring Manager Performance

Charles Humble: How do you go about-we've talked a lot about measuring developer performance. How do you measure manager performance? How do you go about that?

Laura Tacho: Again, something that's really nonstandard across organizations. You can do a little experiment: if you go to LinkedIn and look at job descriptions for software engineers, you'll see we've sort of landed on a common definition of a software engineer one versus a software engineer two versus a senior software engineer. Those distinctions in career ladders exist because those roles are hugely prevalent in our industry; there are a lot of them, so we have some standard definition. Management is not quite as standardized. Company to company, there's a lot of different criteria.

Managers have, in my opinion, a few essential functions: they need to support their team, and they need to keep their team unblocked. If you don't feel that's happening, there are a few things you can try. Of course, managing up is a skill in and of itself that I think every software engineer needs to understand how to do; it's a very important career skill. If there's information that you're not getting from your manager, or ways you believe they are blocking you or holding you back, try to understand why that might be and figure out the right combination of messaging from you to get the information that you need. If that doesn't work, most organizations do, or should be doing, skip-level check-ins: talking to someone who's not your manager, but your manager's boss, essentially. Sharing feedback that way can be a good route.

I think the other way, which is maybe unconventional, is talking about these problems in public. I am a very big proponent of public performance management. This is probably one of my more controversial management philosophies, but I believe that teams need to be coached as a team. You're not going to find a sports team where the coach brings each player into a private room to tell them to run a drill again; it happens in public, and seeing someone else have their performance or their work corrected helps the team get stronger. It helps reinforce what good looks like, and that should happen not just from the manager down, but also from the team up.

Have these conversations, maybe in a retro or maybe in a different venue. Saying "we can't make this decision, this decision, and this decision, because we're missing information A, B, and C" shouldn't be a personal conflict between two people. It's a business conflict between the team and the person responsible for managing them, and I think those kinds of business conflicts are appropriate to bring up in public as a team and work through. There's more accountability that way, and it reinforces the expectations for everyone on each side. That's what I would do if I had a manager who wasn't meeting my expectations and I was trying to figure out what my next move would be.

Recommendations and Resources

Charles Humble: That's really interesting. I haven't heard that approach before, but I think it makes total sense. That's really fascinating. We are more or less at time, so maybe to bring this to some sort of conclusion: do you have any recommendations of books or other resources that people might find helpful, perhaps when stepping into a CTO or VP type role for the first time?

Laura Tacho: There are two books from Will Larson that I'll recommend. One is An Elegant Puzzle. This book is really practical for engineering leaders, I think, of all levels. There are resources and tables and things to fill out and do to help you understand the map of what's around you, process design, and other things. That's one. Then The Engineering Executive's Primer is a newer book from him that's more about the executive role. I would definitely recommend both of those.

I think the other side of my recommendation would be to lean more into your cross-functional, adjacent functions: for example, understanding more about product management and decision making in product, or understanding more of the go-to-market side, which I think is a big deficit for a lot of engineering leaders. Make sure that your literature library, when you're moving into this role, really reflects the scope of your role and doesn't stay focused only on engineering. You'll see a lot of those themes reinforced in The Engineering Executive's Primer as well. Make sure that you're becoming a well-rounded business leader and not just a really, really good technologist.

Charles Humble: Fantastic. Laura, thank you so much for your time today. I really enjoyed that conversation.

Laura Tacho: Lovely conversation. Thanks so much.