
How to Deal with Software Complexity

Charles Humble interviews Dr. Gail Murphy about the challenges in software engineering today. They discuss how productivity isn’t just about lines of code but is more about focus and minimizing task-switching. Gail also talks about the difficulty of managing the rapid evolution of system architectures, stressing the need for regular restructuring and refactoring to avoid issues like increased coupling and decreased performance. The conversation moves to open-source development, where Gail highlights how using open-source components can create complex, brittle dependencies, and the need for better communication within these ecosystems. They wrap up by discussing the evolving role of technical leadership in navigating these challenges.


About the experts

Charles Humble

Charles Humble (interviewer)

Freelance Techie, Podcaster, Editor, Author & Consultant

Gail Murphy

Gail Murphy (expert)

VP Research & Innovation & Professor of CS at The University of British Columbia


Introduction

Charles Humble: Hello and welcome to this episode of GOTO Unscripted. I'm Charles Humble. I'm a freelance techie, editor, author, and consultant, and this is the seventh episode in the series of podcasts that I'm doing for GOTO, talking to software engineering leaders. Today I'm joined by Dr. Gail Murphy. She is a professor in the Department of Computer Science and Vice President, Research and Innovation at the University of British Columbia.

She was also a co-founder of Tasktop Technologies. Her research interests are in software engineering, with a particular focus on improving the productivity of knowledge workers, including software developers. I've learned so much from her work over the years, having first come across her via a paper called "Who Should Fix This Bug?", which presents an approach to semi-automating the assignment of a bug report to a developer with the appropriate expertise to resolve it, using a supervised machine learning algorithm. Dr. Murphy, welcome to the podcast.

Gail Murphy: Thanks a lot, Charles. Thanks for having me. And it's just Gail. So let's be really informal here.

Charles Humble: Okay. Thank you. So what's the current focus of your research?

Gail Murphy: Always a great question because, you know, it feels like sometimes it changes day to day. But we've been doing a lot of work on developer productivity for a number of years, trying to understand the conditions that affect when people feel that they're being productive or not, and how there can be trade-offs between when they are trying to be productive themselves versus helping members of their team, and how that affects team productivity.

Defining Developer Productivity: Measuring Impact & Overcoming Friction

Charles Humble: Right, yes. And productivity seems to be very much in the news at the moment, I think partly as a consequence of, you know, the rise of generative AI for automating some elements of code creation, and the argument that that makes us more productive. I'd be very interested to get your perspective on that. Maybe you could start by talking about why defining developer productivity is still quite elusive.

Gail Murphy: That's really interesting, right? Because we've been trying as a community to think about developer productivity since we started programming. Really early attempts looked at whether or not we could judge how productive a particular programmer was by the number of lines of code that they were writing in a given amount of time, and it didn't take very long, I think, for the community to realize that was a really suboptimal metric, because depending on the kind of programming language that you used, the metric might look really different.

Or depending on the kind of task that you were doing, you might spend several days tracking down a particular bug and writing one line of really crucial code. Is that developer productive or not? I think most of us would think if the quality went up, that was probably time well spent. And then people tried different approaches, right? They looked at things like function points to say, can we abstract away from the code, think about what sort of value was being added to the software, and measure it that way?

Turns out that was pretty hard. And then we sort of switched to things like, how can we eliminate waste? So all of the lean production work, pair programming: how can we actually make developers be more efficient in what they're producing, and producing the right kind of stuff? We've taken that a little bit further and said, well, how can we improve productivity by having developers perceive that they're more productive, actually increasing the factors that give them that feeling, helping them get into flow and feel better about their work, and as a result actually produce more for the organization that they're working for.

Charles Humble: I know you've talked to many developers in the course of your research about how they think about their own productivity. So what have you found?

Gail Murphy: Well, it's always an interesting discussion. We use a lot of different empirical methods to try to think about, understand, and investigate productivity. So sometimes that's surveys, sometimes that's actually sitting behind a developer and watching them work. Sometimes it's putting logging or tracking on their machine to understand at a really detailed level what they're doing, minute by minute.

So we use a lot of different techniques. And I would say there are, you know, a couple of themes that come out. There is no one way that developers measure themselves in terms of their productivity. Some think that commits are a really good way. Others think it's the number of bugs that they have solved. Way down on the list is always the number of lines of code.

Going back to that thought, we find that half of them think meetings are super productive, and some of them would rather never have another meeting in their life; they think it's an unproductive thing for them to be doing. But there are some trends that do start to come out about how they perceive their productivity, in terms of their ability to really focus on their work, their ability to be engaged, and their ability to eliminate or minimize the number of times that they're switching between different tasks.

So those are some of the factors that come to mind in terms of the global learnings that we've had out of some of the studies that we've run.
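To make the measurement side of that concrete, here is a minimal sketch, with invented data, of how a minute-by-minute activity log might be reduced to two of the signals Gail mentions: focus time per task and the number of task switches. This is an illustration only, not the research group's actual tooling.

```python
# Minimal sketch (not the group's actual tooling): reduce a hypothetical
# one-entry-per-minute activity log to focus time per task and the
# number of task switches.
from collections import Counter

log = ["bug-123", "bug-123", "email", "bug-123", "review-7",
       "review-7", "email", "bug-123"]

minutes_per_task = Counter(log)
switches = sum(1 for prev, cur in zip(log, log[1:]) if prev != cur)

print(minutes_per_task)  # Counter({'bug-123': 4, 'email': 2, 'review-7': 2})
print(switches)          # 5 task switches in an 8-minute window
```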

Charles Humble: Have you looked at all at the productivity impacts of generative AI? And if you have, what are your observations on that specifically?

Gail Murphy: Well, certainly that's all the rage in software engineering research now, looking at how generative AI can help any kind of task that a software engineer is doing. Of course, generative AI has been improving by leaps and bounds, which is totally amazing. So we've been doing some work in trying to understand where developers choose to use generative AI.

Where do they see it making a difference in their work? And certainly you see them using it to understand pieces of code that they might come upon, using these tools to summarize what that code might be trying to do. We're all aware of Copilot and its ability to find a snippet of code that it knows about and recommend it as a developer is working. We see them using it to look up documentation.

So instead of doing a Google search about a framework or an API, they're going to do a generative AI query and learn maybe an example of how you might use that API. But we also see some changes in terms of going to the tool and asking a question, either to prepare themselves to talk to their human teammate,

or sometimes instead of talking to their human teammate. So I think we're going to face some really interesting challenges down the road if we see generative AI being used more, because there might be a decrease in the knowledge that's being shared amongst the team, and that could have long-term ramifications on the structure of the software and the quality of the software that you're building.

Recommended talk: Balancing Tech & Human Creativity • Kaiser, Greiler, Carpenter, Terhorst-North & Wardley • GOTO 2024

Software Architecture’s Role in Developer Productivity

Charles Humble: I think it's fascinating. How do you see the role of architects in the context of productivity?

Gail Murphy: Architecture is always such a difficult thing to talk about, in a way, because in so many systems it feels that there often isn't a lot of time spent anymore on the architecture of the system. It's sort of set at the beginning, or it evolves as the project is going on. And so we know that there are challenges that come from the architecture simply emerging from the code that the developers are writing, instead of there being that upfront kind of architectural design, and adherence to the design, which 20 years ago was sort of the perception of how systems were built.

So it's going to be really interesting to see, as people glue more code together that's being recommended, let's say, by a generative AI agent, what's it going to do to the architecture? Are we going to be able to predict the performance of the system as it grows this way? Are we going to know anything about where it's going to be changeable?

Are we going to lose modularity as we grow the system this way? So huge implications, I think, for the overall structure or architecture of the system, if we're not paying specific attention to that as we build the systems in this new kind of way.

Charles Humble: Right, yes. And I know your research group also looked at how you match the perceived architecture to the actual architecture, the sort of idea of architectural drift. Can you talk about that a bit?

Gail Murphy: Absolutely. That was actually my own PhD thesis research, a long time ago. We were in an era at that time when there was a lot of attention being paid to how we could write down the architecture of a system more precisely and analyze it before we even wrote any code, or perhaps do it in a model-oriented way and actually generate the system from the architecture.

I had been working as a software developer in industry at that time, and it seemed to me that, you know, being able to predict and write down the architecture upfront, and have that stay stable as you built the system, was not what I had experienced as a developer myself. So we built an actually pretty simple tool that just allowed a developer to sketch down in boxes and arrows:

Here's what I think the architecture is. And then we had a language to make it really easy to say these parts of the code relate to these parts of that boxes-and-arrows diagram. And then we could extract information from the actual source code, like where were the coupling dependencies from a structural perspective, where were components interacting from a runtime perspective, and map these two worlds together.

And so we came up with a diagram we called a reflection model, which showed where your predictions as a developer about the architecture were still holding in the source code and the system, and where there were divergences. So there might have been one component calling another that you had never expected, and places where you thought there would be interactions that actually weren't interacting.

And so this turned out to be a really good way to go up to a large body of source code, pretty quickly write down "here's how I think it works", and then try to do this mapping. Typically you would find out that there was a lot more coupling and connectedness in the system than you had predicted from the architecture.

And some of those things were there for a good reason, right? Like, to make it performant, two pieces had to talk directly to each other. We had the opportunity to work with the Microsoft Excel group on this and, you know, have the developers of that system start to learn about where the architecture might be different than they had anticipated, once they were able to sort of raise the level of abstraction from the source code and the system.
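As a rough illustration of the idea, the sketch below computes a toy reflection model: it compares the dependencies a developer predicts between high-level boxes against dependencies extracted from the code, reporting convergences, divergences, and absences. All module names and data are invented, and the original tool differed considerably in its details.

```python
# Toy reflection model (invented data; the original tool differed in detail).
# Compare predicted box-to-box dependencies against dependencies
# extracted from the source, after mapping code entities to boxes.

predicted = {("UI", "Core"), ("Core", "Storage")}   # the developer's sketch

box_of = {"editor.py": "UI", "engine.py": "Core", "db.py": "Storage"}

# Dependencies extracted from the code, e.g. by import analysis.
code_deps = {("editor.py", "engine.py"),
             ("engine.py", "db.py"),
             ("db.py", "engine.py")}                # an unexpected back-call

actual = {(box_of[a], box_of[b]) for a, b in code_deps}

print("convergences:", predicted & actual)  # predicted and present
print("divergences: ", actual - predicted)  # present but never predicted
print("absences:    ", predicted - actual)  # predicted but missing
```

Run on this toy data, the divergence set contains the Storage-to-Core back-call: exactly the kind of surprise coupling Gail describes developers discovering.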

Charles Humble: It's interesting you mention coupling, because something I've seen more recently is sort of bidirectional coupling occurring in microservices systems, again without anyone really realizing that it's even happening. And I've been sort of wondering if that's a symptom of the same thing, of that sort of architectural drift.

Recommended talk: When To Use Microservices (And When Not To!) • Sam Newman & Martin Fowler • GOTO 2020

Gail Murphy: You know, it's always hard to say without looking at a particular system, but generally it's not that developers, when they're actually doing the implementation, are trying to subvert an architecture that's been put in place. It's more that they're trying to find a service, in this case, that they need for the code that they're writing. And so those dependencies just creep in over time.

And if people aren't paying attention to that, then you can end up in a situation where you can't make the changes that you want to make anymore, because the system has become so coupled together and so connected. So it's really a hard thing to balance that need for performance, that need to get the job done, with respect for longer-term architectural considerations. And it's a place where I think we probably still don't spend enough time when we're developing software: thinking about that architectural drift, and having enough time to come back and say, are we maintaining the conceptual integrity, in Fred Brooks's terms, of the system we're trying to build?
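As a small sketch of how such creeping coupling might be surfaced, assuming you can extract a service-to-service call graph from tracing data, the snippet below flags mutual dependencies. All service names here are invented.

```python
# Sketch: flag mutual (bidirectional) dependencies between services in a
# call graph, e.g. one extracted from tracing data. Invented data.
calls = {
    ("orders", "billing"),
    ("billing", "orders"),     # the kind of back-dependency that creeps in
    ("orders", "inventory"),
    ("inventory", "catalog"),
}

mutual = {tuple(sorted(p)) for p in calls if (p[1], p[0]) in calls}
for a, b in sorted(mutual):
    print(f"bidirectional coupling: {a} <-> {b}")
```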

Charles Humble: Can you unpack that a little bit more for me? What are some of the sort of symptoms that you would expect to see in terms of architectural drift?

Gail Murphy: I think some of the symptoms that we would typically see would be a lack of changeability: from the perspective of what you think the architecture is, you should be able to slot a new component in and stitch a few pieces up and suddenly have that functionality added into the system. Instead, that estimate you had of maybe a week for adding that feature to the system slips,

and suddenly it's four weeks later and you're trying to understand why there's some secondary interaction in the system that you never expected, which has emerged because this coupling is there. So I think the first one is changeability; I think we've all experienced that. In some of these systems, the performance also tends to decrease over time, because you might have more going on in that system than you ever expected to be going on.

And then of course there's the quality issue, right? If you don't really know how the system works and you start to put more stuff in, not unexpectedly, we get bugs coming out the other side that no one anticipated would arise.

Recommended talk: Learning Systems Thinking • Diana Montalion & Charles Humble • GOTO 2024

Complexity Challenges in Modern Systems

Charles Humble: Right. And obviously different architectures have different trade-offs. But something that I've always found immensely challenging is how you predict what is likely to need to change ahead of time, so that you can put that sort of flex point in the architecture, as it were.

Gail Murphy: That's why things like restructuring and refactoring are so important, right? Because I don't think any of us really know, for most of the systems being built, exactly what's going to need to change a year from now. And so whatever architecture we have up front, we've designed it to allow certain kinds of changes in the future, and then the environment around the system changes.

And you need to be able to put different things in than you expected. It's really hard to still do that restructuring and refactoring, right? Because it takes a lot of thoughtfulness, a lot of careful work, and even though we have some tooling that can help, it really is a hit to the march of "we need to add new features to the system."

And so being able to take that step back, try to restructure things from time to time, try to get out of that technical debt that you've really, really accumulated is a tough thing for organizations to fit into the cycle of their business. But sometimes it's a necessary thing for the lifetime of the business.

Charles Humble: Yes, absolutely. And it is that thing of, as I say, if we're evolving quickly and we're adding features and we're changing, at some point we're going to have to go back and revisit all of that, and at what point you do that is always very tricky, I think. There's something else which is somewhat related, which is the extraordinary complexity of the systems that we now build.

And I'm not quite sure how to articulate this, because I think, you know, we now build systems from a multitude of open-source libraries and frameworks. And we can build faster, but we lose the ability, I think, to really understand how our system works. So maybe we don't have quite the right engineering principles yet to manage it. What are your thoughts on that?

Gail Murphy: You know, I remember at the start of my career, the whole excitement was really: how do we build systems more like they were Lego? You know, the amazing flexibility of a Lego block system, where as a kid you can imagine things and put them together so easily and build these amazing creations. And I remember it was Brad Cox, I think, way back in the day, who was talking a lot about software components and trying to build things like Lego blocks.

We've gotten a long way towards that now, really. You can sit down with Node and other frameworks and there's so much you can build so quickly, but we haven't really thought about the interactions that start happening as you build systems in that way. So how do you analyze that whole system? How do you think about all the code that's in there?

And we're starting to create these really, really long software supply chains where, as with the infamous left-pad example taking down the internet, we have such a long line of dependencies that we have no line of sight into them. We really, I think, don't have much predictive power about where that system is brittle, or how that system might change as libraries evolve.

If you think of the DevOps-type reports that are out there on the state of open source, you know, we know that there are a lot of vulnerabilities sitting around in systems simply because people haven't updated the components that they built them with, and there were vulnerabilities in those components. So I think we need a new approach to software development, to start thinking about some of these system-level emergent properties that come about after we're able to sit down with our Lego blocks, our software components, and snap them together.

And now we really have to think, from an engineering perspective, about what it is that we actually just created.
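A toy sketch of that missing line of sight: walking a transitive dependency tree and flagging components that are behind their latest release or carry a known advisory. Real audits use tools such as `npm audit` or the OSV database; every package name, version, and advisory below is made up for illustration.

```python
# Toy sketch of auditing a transitive dependency tree: flag packages
# that are behind their latest release or carry a known advisory.
# All names, versions, and advisories are invented.
deps = {"app": ["web-framework"], "web-framework": ["string-utils"],
        "string-utils": ["left-pad-like"], "left-pad-like": []}
installed = {"web-framework": "2.1", "string-utils": "1.0", "left-pad-like": "0.3"}
latest = {"web-framework": "2.1", "string-utils": "1.4", "left-pad-like": "1.0"}
advisories = {"string-utils": ["CVE-XXXX-1234 (hypothetical)"]}

def walk(pkg, seen=None):
    """Depth-first walk of the dependency tree, reporting risk."""
    seen = seen or set()
    for dep in deps.get(pkg, []):
        if dep in seen:
            continue
        seen.add(dep)
        if installed[dep] != latest[dep]:
            print(f"{dep}: {installed[dep]} is behind latest {latest[dep]}")
        for advisory in advisories.get(dep, []):
            print(f"{dep}: known advisory {advisory}")
        walk(dep, seen)

walk("app")
```

Even in this four-package toy, the risk sits two hops away from the application, which is the point: without walking the whole tree, you never see it.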

Charles Humble: I have a related thing, something I've actually discussed at length with a friend of mine. He is, I'm going to say, the most brilliant engineer that I know, far better than I ever was, but he has a particular frustration that he bangs on about endlessly, which is that we can build these things really quickly by wiring them together.

Open Source Ecosystem

Charles Humble: But a lot of the components we build on top of aren't actually that great, because we have this tendency to kind of start again all the time rather than making things better and making them mature. And obviously, if you think about this, you can come up with counterexamples. But I think the premise is at least interesting.

Do you kind of agree with that as an observation?

Gail Murphy: I think that actually really resonates with me. And I think one of the interesting things about the open-source ecosystems is that we often pay a lot of attention to the actual source code and the actual components. We spend some amount of time thinking about the ecosystem around the components. So maybe if I'm going to choose to use a particular library, I might look to see if the issues are really active, right?

Are problems being solved? Does it seem to be used? Does it have stars on GitHub? But one thing we don't actually pay that much attention to is the communication that surrounds that ecosystem. So we've done some work looking at open-source ecosystems and the kinds of social and technical dependencies that exist between components. So as an example, say you're a software developer today,

and you go and choose a component that's part of a GitHub repo to depend upon. You're essentially making a technical dependency on that component when you add it to your code, but there's some requirement that you also think about the social dependency you're about to make on that code. So you're probably going to need to monitor the health of that component.

You might need to be filing bugs into the issue repository for that component. And you would hope then to have communication with the developers at some level, unless it's a super stable component that you're using. When we've done these studies, we've actually seen what we call backward social dependencies, where the individuals who have created the library component, when they do upgrades, will sometimes reach back out to those that depend upon them to say, hey, we've updated something, you should update.

So we spend a lot of time on that technical piece of wiring it up. I think there's a lot more that we could do about what people-to-people communication we actually need, so that we're improving the health of those components, trying to make better components by having this bidirectional relationship with the users of those components, if that makes any sense.
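A simplified sketch of that overlay, with entirely invented data rather than the study's actual method: put technical dependency edges and maintainer communication edges side by side, and flag technical dependencies that have no social counterpart.

```python
# Simplified sketch (invented data, not the study's method): overlay
# technical dependency edges between projects with communication edges
# between their maintainers, and flag deps that lack a social link.
technical = {("my-app", "lib-a"), ("my-app", "lib-b")}

maintainer = {"my-app": "carol", "lib-a": "dana", "lib-b": "evan"}
communicated = {("carol", "dana")}   # e.g. carol files issues on lib-a

for user_proj, dep_proj in sorted(technical):
    pair = (maintainer[user_proj], maintainer[dep_proj])
    if pair not in communicated and pair[::-1] not in communicated:
        print(f"{user_proj} -> {dep_proj}: technical dependency, "
              f"no social dependency")
```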

Recommended talk: Structures Shape Results: Software Insights • Elisabeth Hendrickson & Charles Humble • GOTO 2024

Charles Humble: Yes. As you were talking, I was thinking about Gene Kim's views. Gene came to this series specifically around the book that he and Dr. Steven Spear published as co-authors, Wiring the Winning Organization. And the core of that book is really an examination of the way communication works in large organizations. And actually, I've also talked very recently to Diana Montalion, who's done a lot of work on systems thinking.

And part of her research was around how organizational success is related to the kind of knowledge flow within an organization, but how that conflicts with the way we often judge employee performance, for various reasons. So as you were speaking, I was sort of wondering what your studies around open source tell us about communication patterns within open-source projects, and how that might relate?

Gail Murphy: We haven't done a lot in that area. But we have looked at these ecosystems, both the Ruby ecosystem and the Java ecosystem, and we do see similar trends, in terms of most of the interactions being really the technical ones: I'm going to go out and I'm going to depend upon you. There's a little bit of the social dependencies following that, but less of this interaction, shall we say, between different repos and different components.

When you look at the actual people who build them, you can actually map it out and see that most of it seems to be based on preexisting social relationships between the developers. They know each other, or they're from the same organization. And so we know from the writings that Gene's done, and others, that it's so important to think about the structures: the organizational structure, the communication structures.

And we're seeing tools come out, you know, thought tools, about how you do that kind of design within your organization. But as you've mentioned, Charles, open source is such a big thing, and we're not thinking about what that means across these open-source ecosystems that have multiple organizations interacting. That's a whole other level that I hope people like Gene and others will start to look at in the future.

Charles Humble: There's a sort of related question, which is Conway's Law, the classic observation that essentially the software we build reflects the organizational structure that built it. But again, in the context of open source, what does that even look like? Because in an open-source project, the organizational structure is often pretty informal and has evolved pretty organically.

So what effect does that have on the structure of the software?

Gail Murphy: Super good question. I think we know it can end up being kind of an interconnected mess, in at least some cases. But as I said, you know, so many of the open-source projects really end up with interdependencies because people know each other. And so is that really as much open source, in the sort of purest sense that we think about it, that anybody can go and grab this component and use it? Or is there something else going on in terms of how that open-source ecosystem itself is evolving and the people that are involved in it?

It'd be really interesting to go in and do more mapping of the individuals. It's often hard to tell who's who, because they use different identifiers in different systems. But we should really try to unpack some of these social-technical interdependencies and how those mappings evolve over time.
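As a tiny sketch of that who's-who problem, here is a naive identity-merging step of the kind such mining studies need, keyed only on a normalized email address. Real studies use much more robust heuristics, and the accounts below are invented.

```python
# Sketch (invented data): naive identity merging across systems, a
# common step in mining socio-technical data. Keys only on a
# normalized email; real studies use more robust heuristics.
accounts = [
    ("github", "gmurphy", "gail@cs.example.edu"),
    ("mailing-list", "Gail Murphy", "GAIL@CS.EXAMPLE.EDU"),
    ("issue-tracker", "gail.m", "gail@cs.example.edu"),
]

people = {}
for system, handle, email in accounts:
    people.setdefault(email.lower(), []).append((system, handle))

for email, handles in people.items():
    print(email, "->", handles)  # three accounts resolve to one person
```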

Research Opportunities in Open Source

Charles Humble: Yes, yes. This might be a sort of obvious question, really, but you've obviously spent a lot of time looking at open-source projects in general. So what is it about those projects that makes them a good place for the kind of research that you're doing?

Gail Murphy: Great question. I think they are both a great place to do research and a terrible place to do research. You know, they're obviously very attractive to the software engineering research community, especially if you're looking at source code analysis and the quality of the systems, because you've just got access to repos and repos and tons of material as your raw data to work with.

As soon as you start to want to ask questions about social interactions, organizational structure, and process, it gets a lot harder to do that kind of research, and people end up working more within the closed-source corporate world. Another interesting thing that happens when you look at open source is that you're lacking a lot of real quality information.

You don't know that much about how people are actually working together to build that software. And so you're missing massive, interesting data points that are affecting the engineering of that system. And you often don't know exactly where that community is going with it. So it's a very particular kind of development, often, and one in which you can almost think of yourself as being blindfolded for some of the questions that you want to ask.

While open source has been fantastic for software engineering research, in a way I think it has also limited the kinds of questions we've been asking. And the kind of conversation we're having now suggests that we have to do a lot more: either go and do an anthropological sort of study, be embedded in those systems, watch them evolve, and understand some of those processes, or, as many of us do, also try to work with closed-source situations, because of the richness of the environment that you can start to learn about.

Charles Humble: Yes. And your Excel example is an interesting counterexample there. I mean, I've never seen the actual code for it, but as someone who has used Excel throughout my entire career, at times quite a lot, I just think it's an astonishing piece of software in its own right, for all sorts of reasons.

So it's a sort of interesting counterexample, I think, to be able to go and look inside a proprietary system that's used on pretty much every desktop computer in the world; I should imagine it is running, or has run, on almost all of them at some point. It gives a very different and kind of alternative perspective, I think.

Gail Murphy: Exactly, a system that's evolved for so very many years and is at the crux of so many businesses, right? So many people, as you said, depend upon it. So obviously there's a lot of pressure on that system to continue to deliver the functionality it delivers, but presumably it always has a need to start integrating new features.

So it's really interesting to be able to think about systems like that, which have been around for, I want to say, 30 years or more.

Charles Humble: Yeah, I would think 30 years at least, probably even more. And it has, you know, at various times had different full programming languages: first the macro language, then Basic, and then Visual Basic for Applications, and the formula language, which is, you know, a sort of functional programming language, which is in itself, when you stop and think about it, fairly astonishing in the level of expressiveness that you can have.

And the formulae in Excel are something else I find fascinating, for all sorts of reasons, because it's probably the largest functional programming environment in the world, but no one ever thinks of it as a functional programming environment. But it kind of is.

Gail Murphy: It’s kind of astonishing what we can do with that piece of software, and it's really such a testament to how software has changed the world. Right?

It's really kind of mind-boggling. We get so used to the fact that we can use it that it's almost hard to explain to your kids that there was a time when you had to do that with pen and paper, with a calculator at your side.

Charles Humble: It's an extraordinary example, I think.

Reflections on Technical Leadership

Charles Humble: We're running towards the end of our time. The main theme for this series of podcasts is really around technical leadership, so perhaps we can finish off with this: what do you think the most valuable or most important things are for a software leader to understand?

Gail Murphy: Wow, leave the hardest question till last.

You know, what I'm always struck by, and what was fascinating to me when I used to do more teaching, teaching introductory software development, is how much one always still has to learn, and how hard it is to teach some seemingly simple concepts. We talked about a few of them today, right?

I mean, if you're teaching intro software development, you talk about modularity, you talk about cohesion, you talk about coupling. And if I just use those three words, we all sort of have an idea what they mean, and we can define them technically at some level, but they have massive ramifications as we build these really complex artifacts that we build in software.

And yet we don't really have fantastic principles to give students, for instance, to say, well, here's exactly how you modularize, and this is why you would modularize this way rather than that way. There are lots of design approaches, there are lots of design patterns, but it's still so much of an art at some level: for this problem, how do I modularize it to meet these kinds of changes in the future?

How do we build things so that they are performant, they have quality, and they have the right coupling, coupling that doesn't just mimic the structure of our organization? So I think some of the super basic concepts are the ones that I always go back to.
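As a tiny, hypothetical illustration of why those basics stay hard: even a crude coupling metric, counting fan-in and fan-out across modules, is easy to compute, but the numbers alone cannot tell you whether the modularization is right.

```python
# Tiny illustration (hypothetical data): a crude coupling metric.
# Fan-out = modules a module depends on; fan-in = modules depending on it.
imports = {
    "ui": ["core", "utils"],
    "core": ["storage", "utils"],
    "storage": ["utils"],
    "utils": [],
}

fan_out = {m: len(deps) for m, deps in imports.items()}
fan_in = {m: sum(m in deps for deps in imports.values()) for m in imports}

for m in imports:
    print(f"{m}: fan-in={fan_in[m]}, fan-out={fan_out[m]}")
# A high fan-in 'utils' module may be fine (stable, widely shared) or a
# dumping ground; the metric alone cannot tell you which. That judgment
# is the "art" Gail describes.
```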

Charles Humble: That's a great answer. And I think it's one of those things where, even with seemingly very simple concepts, my understanding has evolved so much over 30 or so years. I've been in the industry roughly 30 years at this point, and my understanding at 23 versus my understanding now is very, very different. I think it's just really useful to revisit those kinds of things, because that assumption I had, that understanding I had, was often a little bit off.

It's fascinating, I think. That's a brilliant answer, Gail. Thank you very much indeed. I've really enjoyed chatting to you today. And as I say, thank you very much for joining me on this episode of GOTO Unscripted.

Gail Murphy: Thanks a lot, Charles. Always a pleasure. Thanks for having me on the podcast.