
Quarkus Unveiled: Efficiency & Green Impact

Charles Humble and Holly Cummins delve into the transformative power of Quarkus in the Java ecosystem. From addressing compatibility challenges to reflecting on GraalVM's impact, the discussion unfolds the nuances of Quarkus adoption, its influence on workloads, and the surprising environmental efficiency it brings. Discover how Quarkus is reshaping microservices deployment confidence and making strides in sustainability, offering developers a paradigm shift that not only enhances efficiency but also aligns with the crucial need for environmental responsibility. Join the dialogue to stay informed about the latest developments and insights driving the evolution of Java in the era of Quarkus.


About the experts

Charles Humble

Charles Humble (interviewer)

Freelance Techie, Podcaster, Editor, Author & Consultant

Holly Cummins

Holly Cummins (author)

Senior Principal Software Engineer on the Red Hat Quarkus team

Read further

Challenges in Shoehorning Architectures

Charles Humble: So, welcome to this episode of GOTO Unscripted. We are live at GOTO Aarhus, which is very exciting. It's my first time in Denmark and indeed my first time at GOTO, so that's been lovely. I'm Charles Humble. I am currently Container Solutions' chief editor, and I'm joined by Holly Cummins from Red Hat, where she is a senior principal software engineer on the Quarkus team. Did I get that right?

Holly Cummins: More or less. 

Charles Humble: It's a very long job title. We met briefly last night at the conference dinner, where the brilliant Sam Aaron was doing his live coding DJing thing, which was phenomenal. In the middle of that, we were talking at the same table, but to different people. And we both had the same cloud-native story, which was so funny, which is essentially this thing of: we are going to microservices because we want to move faster, but then we have a change control board that meets, like, twice a year and says, yeah, possibly microservices are not the answer to your problem. I found myself thinking as we were chatting off camera that there is this thing about sort of shoehorning architectures into cultures where they don't belong. And I thought, with some of your background from your consulting days at IBM Garage, that might be an interesting thread to pull on a little bit. So, do you have thoughts on that?

Recommended talk: Sonic Pi - BEAM Up The VJ! • Sam Aaron • GOTO 2023

Holly Cummins: I think there's sort of two anti-patterns that we see. One is shoehorning the architecture into a place where it just doesn't fit. It's solving the wrong problem. The other thing I see a lot is maybe the architecture is the right architecture, but we haven't looked enough at the surrounding context and tried to figure out, well, what's necessary for this to be successful. So, with microservices for example, what happens a lot is, first of all, a lot of organizations don't ask what problem am I trying to solve by going to microservices. They assume that because everybody else is going to microservices, it must be the right thing for them. But then once we go one step further than that, it's, okay, I want to go to microservices because I'd like to move faster, which is a reasonable goal.

And that's something that the style is suited to. Then it's, okay, so I've switched to microservices, so I've made my application distributed and now I'm going to go faster. Well, no, making your application distributed doesn't make it go faster. What you have to then look at is, well, how often do I release? What's the process for a release? Also, how do I test these things? And if I have to test everything as one big homogeneous monolith, because otherwise I don't have the confidence, then actually the fact that there's distributed communication doesn't matter, because it is deployed as a monolith. So, then it's just, yeah, that slightly bigger picture of: what are the conditions for success? Have I looked outside the code to make sure that those conditions for success are there?

Charles Humble: Yes. I mean, there's a couple of things there. So, there's the classic getting your sort of service boundaries right, so you can deploy things independently, which is incredibly hard a lot of the time. I have certainly worked on systems and I wouldn't be surprised if you have as well, where it's like, we've got a bunch of microservices, but this lot will have to be updated at the same time because otherwise the whole thing breaks, right?

Holly Cummins: Yes, exactly.

Charles Humble: And what you've done is make everything more complicated.

Holly Cummins: Exactly.

Charles Humble: But then also I think there's the cultural aspect of it. It's a bit like, I mean, there was that time when everyone was doing an agile transformation, and I think now we hear "agile transformation" and kind of shiver a little bit inside, right?

Holly Cummins: Yes. And yet it's still going on.

Charles Humble: And it's still happening. But for this to be effective, you need to change the way people work. If you want to move faster...I mean, there are various reasons why people go this route, but that's certainly one of them, and it requires you to work in smaller chunks, have less work in progress, all the stuff that we kind of know. But in large enterprises, in my experience, actually making that change stick is incredibly difficult. And I think that's kind of interesting as well.

Holly Cummins: I think a lot of it is about trust, and there are two sides to the trust. One is the cultural side: is there a culture of fear, or do people have autonomy, or does everything have to go through a central silo of regulation and control? Part of that is about regulated industries, and that ends up being somewhat necessary, but not entirely necessary. But then the other part of it is the technical trust. Often the reason organizations are unwilling to deploy their microservices individually is because there's... You know, it's not paranoia. There is quite a high likelihood that if deployed individually, they would break. So, it makes sense to have a QE phase where everything is tested in a batch.

But the thing is that both are resolvable. There are things like contract testing, fuzz testing, and matrix testing, and that kind of thing can, to some extent, give an organization the confidence to be able to deploy without having to test everything in a big batch. Which gives you such a win for speed and brings you much closer to those goals that you were probably trying to achieve by going to microservices.

Charles Humble: Yes. I've also seen the thing of developers or senior developers or architects being kind of attached to particular styles of architecture that maybe don't quite work in the distributed world. The classic used to be that you had a load of microservices and then one central database. I think we've probably got away from that more or less, but you do see things like, you know, I really want transactions, but distributed transaction coordination is quite hard. And you get into, you know, compensating transactions or, you know, sort of saga pattern stuff. I wonder if there's a connection there, in the same way that changing the way people work at a cultural level is hard, so is changing the way software architects think about what they're doing.

Recommended talk: When To Use Microservices (And When Not To!) • Sam Newman & Martin Fowler • GOTO 2020

Holly Cummins: I think so. And I think it's really easy to sort of judge as well and to go, "Oh, look, you're doing things the way that you used to do them 5 or 10 years ago. Silly you." But there's such a lot of cognitive load and now, you know, there's sort of all these interesting charts that show how our cognitive load as developers has increased so much compared to where it used to be. And so, then that does mean that realistically it's hard to keep up and, you know, it's not because we're stupid, it's just because a lot is coming at us and sometimes you have to say, okay, a new style of transactions, that's just a bit too much for me to digest at the moment. I will stick with what I know because that's what humans are good at.

Evolution from Proprietary to Open Source

Charles Humble: I think something else I was sort of reflecting on quite a lot, I mean, we started in the industry similar time. I think I was a little before you. But that sort of point where everything was sort of proprietary and, you know, everything was closed source and knowledge sharing was a real problem. In the second part of my career, I spent a lot of time just thinking about how we as an industry get better at sharing knowledge and passing on what we know. But I was interested to get your reflections on that. Because you've kind of come through that same sort of transformation of a sort in our industry, I guess.

Holly Cummins: I just think it's so incredibly positive, and it feels almost utopian actually, the shift from proprietary to open source, and it just seems to benefit everybody, which is nice. It's nice when you get something where it's not, you know, one set of winners and one set of losers. All of us are benefiting. So, you have the enterprises, like, I work for Red Hat now, and they've made a very successful business on 100% open source. But then for me as an individual developer as well, it's just so delightful that if I need something, I can go and access it, because it's open source. If I find it doesn't work, I can fix it, because it's open source. It's hard to imagine, even now, going back to the days when everything was just so closed and you didn't know what was going on under the covers. Sometimes there would be quite a simple bug, but you couldn't fix it because you just didn't have that access.

Charles Humble: It's a huge, huge change. And of course, we also came up through the sort of very early stages of Java. So, if I remember right, you were on the WebSphere team.

Holly Cummins: I was, yes.

Charles Humble: And then on WebSphere Liberty, I think it was called, the sort of...

Holly Cummins: Yes, exactly.

Charles Humble: The kind of sequel, as it were, to the original WebSphere product. I think it's quite interesting to think about that as well, because Java has been around for a very, very long time. It's still a very widely used language, but the world that it was conceived for, from an architectural or a software point of view, is radically different from the world we find ourselves in now. WebSphere was slow to start, but then it would sit and run pretty much forever if you left it alone.

Holly Cummins: Exactly. Forever. And the scale of forever, I think, is one that sort of doesn't even make sense to us now, that it would go and it would not be restarted for six months, a year, that kind of thing. It was designed to be incredibly dynamic because you were changing the engine as the plane was running. And then when we went to WebSphere Liberty, we made it even more dynamic in fact, so you could change every single part of it as it was running.

There was an interesting thing with WebSphere Liberty actually, because originally WebSphere wasn't conceived for the cloud. After all, the cloud didn't exist when it was being written. And with WebSphere Liberty, the problem that they started out trying to solve was how do we keep the same programming model as people are used to with WebSphere.

100% compatibility, really, but make it a friendlier experience for developers, because developers do not start their application server and leave it running for six months. That's just not how we work. And so, we made it much quicker and lighter to start, and that kind of thing. And that was just around the dawn of the cloud. Then we realized that catering to those developer requirements made it incredibly well suited for the cloud, which was just a happy coincidence. And I think if you said, "Did you know that developers are exactly like clouds?" it would be a nonsensical statement.

But the requirements end up being similar. And we've sort of seen that same thing again. Now I work on the Quarkus team, and Quarkus was very much designed from the ground up to be a cloud-native way of running Java. The dynamism that you have in more traditional runtimes, you pay a tax for. But when you're running in a container, you don't need that dynamism, so it makes no sense to be paying the tax. Things like reflection are expensive at runtime.

What we've done with Quarkus is we've massively reduced the amount of reflection. We've moved to being more build-time optimized. And that means that at runtime you have a quicker startup, which may or may not be helpful, but you also have lower memory requirements, which almost certainly is helpful because in the cloud memory is money, and you have faster throughput, which is always helpful.
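To make that reflection tax concrete, here is a minimal, self-contained sketch (the class and method names are invented for illustration; this is plain Java, not Quarkus code). Both paths return the same answer, but the reflective one does a method lookup, access checks, and boxing on each call, which is the kind of per-call machinery a build-time-optimized framework tries to resolve ahead of time.

```java
import java.lang.reflect.Method;

public class ReflectionCost {
    public static int square(int x) { return x * x; }

    // Direct call: resolved at compile time, trivially inlinable by the JIT.
    public static int callDirect(int x) {
        return square(x);
    }

    // Reflective call: lookup, access checks, and boxing happen at runtime.
    public static int callReflectively(int x) {
        try {
            Method m = ReflectionCost.class.getMethod("square", int.class);
            return (int) m.invoke(null, x);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(callDirect(7));        // 49
        System.out.println(callReflectively(7));  // 49, same answer, more machinery
    }
}
```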

But then, as a sort of consequence of that, it goes in the other direction: because it does more at build time, for a developer that's not necessarily ideal, because we're building all of the time. So, then the question is, okay, what do we do so that you don't have to do all of that build-time optimization for the whole application every time you change a line of code? And so, what we've done is we've built a good live reload experience and a good continuous testing experience. Again, it's that same combination of being optimized for the cloud, but also this delightful developer experience.

Then there are sort of two aspects to the developer experience. One is just the liveness and the hotness, but then there's another, again, I think slightly unintended consequence, which is that if you do more at build time, you understand the context in which the application is running much better. This means that a whole bunch of boilerplate that we used to have to write as developers to say, "Okay, so this is there and this is there and, you know, could you please do that," isn't necessary anymore, because there's a deeper optimization phase at build time. So then it ends up being better for developers just because they have to write less code as well.

Quarkus and Cloud-Native Java

Charles Humble: So, can you talk a bit more about how that works? Because as you say, you are effectively building...you are using GraalVM, I think, right? So, you are doing Native Images, but I'm presuming you are not doing that when you are doing your normal development work, because that would be too slow, right?

Holly Cummins: Native compilation seems to take about three minutes. It depends on the application, but that's the sort of order of magnitude. But one of the things about Quarkus, which is interesting actually, is it can run in two modes. So, it can run as a Native application or it can run on the JVM, and Native, I think, is getting a lot of the headlines because the startup times are just dazzling. Like, I benchmarked it and it's faster than an LED light bulb to start up; it's just flat-out incredible.

Charles Humble: It's kind of extraordinary, isn't it?

Holly Cummins: Yes.

Charles Humble: Absolutely.

Holly Cummins: When I first did my Native application, I just, like, started it and stopped it and started it again, because it was just so magical. And that's all built on GraalVM. But what we do see with the Native applications, and I think the gap will shrink, is that the throughput does tend to be lower. So, the startup time is just absurdly fast. The memory is quite small. So, if you're running in a constrained environment, or if you're doing something like serverless or any kind of scale to zero, or if you want to have a kind of cloud-bursting pattern, then it makes sense.

If you're running an application the way you've always run your application, Native maybe isn't the best choice. But the optimizations that the Quarkus team did to make it work well on Native turn out to be optimizations for Java as well. So, if you run on the JVM, some of those things, like doing more at build time and getting rid of reflection, make it faster on the JVM too. So, we see even on the JVM, the resource consumption is about half of what an application that wasn't using Quarkus would be, which is pretty incredible.

Charles Humble: That's extraordinary, isn't it?

Holly Cummins: Yes, it's just so good. That's sort of across the board. The throughput is higher, not twice as high; I think, again, it depends on the application, but certainly throughput is significantly higher. Your memory consumption, everything, means that you can run in a much more economical environment.

Charles Humble: I want to pick that up in a second, but I'm curious about what the programming model feels like, how you handle things like dependency injection and that sort of stuff. Because again, it's quite a change from, you know, Java EE and J2EE of old, and even the early days of Spring. It's quite a shift in terms of the programming model in some ways.

Recommended talk: Writing For Nerds - Blogging For Fun and (Not Much) Profit • Charles Humble • GOTO 2023

Holly Cummins: Yes and no. What they've done is, in general, they've stuck to the standards. So, if you're using MicroProfile then your MicroProfile application, that's the sort of the base programming model for Quarkus. So, your MicroProfile application will work fine. If you're using Hibernate, for example, the Hibernate team virtually sits next to the Quarkus team. So, again, you know, there's a nice integration there. What you can do though is you can do less of it. So, things like, yeah, just some of the boilerplate that you might have to do with Java EE or with MicroProfile, there's less of it. We have some libraries that, again, build on top of it just to give you that slightly slicker experience. So, for example, we've got a library called Panache, which builds on top of Hibernate.

It means that quite a lot of the things that you do with Hibernate, you end up having to do sort of in every application, let me have a method that gets everything, let me have...and it just auto-creates those methods. So, it's quite nice. One of the other things, and again, injection is core to the programming model. It's the MicroProfile injection and we do it for all sorts of things and in quite cool and clever ways.

So, for example, if you're using Hibernate and a database, in your application properties you can configure where your database lives. But if you don't configure it, because we understand what's in the application, we can look and say, "You are using a database and yet you do not have a database. That will be a problem for you. Let me use Testcontainers to magically spin up a database." Because you're using the injection, you don't need to tell me about Testcontainers. You don't need to say, "Please, could you use Testcontainers to give me a database?" We'll just find those injection points, put the Testcontainers database in there, and then it just magically works. So, that's useful for testing, but it's also useful, again, in Dev Mode. So, it means that you can just start going, and you haven't done anything to define a database and you have a database. You don't want to deploy to production like that. But, you know, it gets you a lot of the way.
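As a sketch of what that zero-config experience looks like in an `application.properties` file (the property names follow Quarkus's datasource configuration; the URL and credentials are invented placeholders): if only the production profile supplies a JDBC URL, Dev Mode and tests fall back to a container-managed database automatically.

```properties
# Tell Quarkus which database kind you use. In dev and test, leaving the
# JDBC URL unset lets Dev Services spin up a matching container for you
# via Testcontainers -- no further configuration needed.
quarkus.datasource.db-kind=postgresql

# Only the prod profile points at a real database (placeholder values).
%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://db.example.com:5432/mydb
%prod.quarkus.datasource.username=myuser
%prod.quarkus.datasource.password=changeme
```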

GraalVM Reflection Challenges

Charles Humble: That's interesting. I was just thinking as well, so one of the things I remember reading in the early days of the GraalVM stuff was, you know, you can't use reflection. There were various other things, there were limitations that came along with essentially trying to run a virtual machine language natively, which is hardly surprising. I haven't followed how much that's kind of shifted and evolved with time. But in terms of... I mean, I'm presuming the sort of the big... I mean, you mentioned Hibernate, so I'm presuming the big libraries have done work to make themselves more GraalVM compatible. But how's that sort of story evolved? Because to be honest, I haven't followed it. So...

Holly Cummins: So, it's not my area of expertise, but the understanding that I have is that you do need to do work. You need to do things like declaring what happens by reflection, and that kind of thing. What we've ended up doing for a lot of it is we've put that extra stuff in Quarkus. So, a question came around on one of our internal mailing lists recently to say, "I'd like to do a Natively compiled application and I want to use these libraries; could I do it with just straight GraalVM?" And the answer was yes, but it will be an awful lot of work and you'll have to chase a lot of bugs, and if you do it with Quarkus, it will just work.

We've tried to take on those extra things that you need to do. Some of them are trivial; some of them, again, are extra declarations, but some of them are extra steps that happen in the build phase. So, that's hard to do without something a bit external, something that has those injection points in the build phase to say, "Okay, now let me look around and make sure that everything's going to work." And we have seen that, yeah, sometimes when frameworks take those libraries and try to put them in GraalVM, it doesn't always work.
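For a sense of what "declaring what happens by reflection" means in practice: GraalVM Native Image accepts a `reflect-config.json` file listing the classes that must stay reachable via reflection (the class name below is invented for the example). In Quarkus you would more typically annotate the class with `@RegisterForReflection` and let the build step generate this metadata for you.

```json
[
  {
    "name": "com.example.OrderDto",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```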

Charles Humble: Right. Yes. Yeah. Are there specific kinds of sweet spots for Quarkus? Are there specific places right now where you think that's, you know, absolutely the right kind of use case? Where would you recommend people pick it up and choose it over one of its competitors or one of the alternatives?

Holly Cummins: I mean, to be honest, almost everywhere. So, a lot of the conversation about Quarkus has been because of the Native mode focused on those serverless applications, that kind of thing, and microservices. But we are seeing it being used as well for the larger monoliths. At some point, if you have a very large monolith, you may end up with some classpath contention and conflicting dependencies, and that kind of thing if your dependency space becomes too big. And that so far is not the sweet spot for Quarkus because it does have a flat class path. But assuming you're not at that scale of dependency hell for almost everything else, Quarkus works well because it's the Java EE programming model, it's the MicroProfile programming model. But the resource consumption is so much smaller.

So, everywhere it will be a benefit. The benefit will probably be multiplied for things like microservices. So, going back to that discussion that we were having at the beginning, are we applying these patterns of thinking, and are they perhaps creating problems? One of the things that we're starting to see now is businesses coming to us and saying, "I've switched to microservices and my business agility is great, but unfortunately my cloud bill has quadrupled, which I wasn't hoping for." Because when you imagine it, for each application there's the cost of the application, and then there's the cost of the framework, and then there's the cost of the infrastructure, so your control plane and, you know, your nodes and that kind of thing. And if you have so many more nodes, then that means that your infrastructure tax is going up, and sometimes, you know, your memory consumption and that kind of thing is much higher. And so, those organizations, when they switch to Quarkus, can get their cloud bills back to a much more acceptable level.

Quarkus Compatibility and Impact on Environmental Efficiency

Charles Humble: I know you've also done some work, I mean, you sort of touched on this already, but you've also done some work looking at effectively the sort of carbon footprints of running Quarkus versus other leading frameworks we could name but probably shouldn't.

Holly Cummins: Exactly.

Charles Humble: Can you talk a bit about that? I know that's an area of interest, but how's that kind of worked out? What are you seeing? What are you measuring? Because measuring carbon is in itself quite a...

Holly Cummins: Quite a challenge.

Charles Humble: ...quite a challenge still. I mean, it's something as an industry we need to get better at and we're starting to look at. But it's an interesting, very difficult area in many ways.

Holly Cummins: There's a side conversation there of, as you say, measuring carbon is hard, and it really shouldn't be, because it's really important. Can we make this easier? So, some of my colleagues are looking at that. But when I joined the Quarkus team, we knew it was light. We knew it was really fast. We thought intuitively that should translate to being greener, but we didn't have any evidence for that. We hadn't done the measurements. And so, I did a series of measurements. One was doing it by inference: if you know what cloud instances you're running on, you can work backward to an approximate carbon footprint. And that was for real-life load over an extended period. And the other one was in a more controlled environment where we had instrumentation on the CPU, so we could see exactly how much energy was being used and then translate that to carbon.

In both cases, what was good is that when you do a measurement in two different ways, you hope the results are consistent. That gives you a sense of confidence. And the results were pretty consistent. In both cases, we saw that the carbon footprint with Quarkus was reduced by a factor of about two or three. So, that was nice. But one of the things that was interesting and surprising to us, when I first got the results back, was when we compared Native and JVM applications. So, again, we looked at Quarkus on JVM, Quarkus Native, alternative frameworks on Native, and alternative frameworks on JVM.

Because Native has such tiny resource usage, you would assume that it has the lowest carbon footprint. And certainly there are ways that you can run it in which that is the case. If you have it in a highly elastic mode, where you scale to zero when it's not being used, Native definitely enables that. But if you just take it as, I have a steady load and I'm running it, the carbon footprint of Native is higher than the carbon footprint of running Quarkus on JVM. And the same for other frameworks.

It's because there are a couple of things that contribute to the footprint. One is the memory, where Native is much better, but the other one is the throughput. So, to handle the same load, you need more Native instances. And then that means that overall your carbon footprint ends up being a bit higher. Yeah, it's interesting.
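The tradeoff can be sketched as a toy model, with entirely made-up numbers: native instances draw less power each but serve fewer requests each, so meeting the same steady load can require more of them and more total energy.

```java
public class FootprintModel {
    // Instances needed to serve `load` requests/s at `perInstance` requests/s
    // each (ceiling division, since you can't run a fraction of an instance).
    public static int instancesNeeded(int load, int perInstance) {
        return (load + perInstance - 1) / perInstance;
    }

    // Total power draw in watts for a fleet (illustrative per-instance wattage).
    public static int fleetWatts(int instances, int wattsPerInstance) {
        return instances * wattsPerInstance;
    }

    public static void main(String[] args) {
        int load = 10_000; // requests/s, invented for the example

        // JVM mode: higher throughput per instance, higher per-instance draw.
        int jvmWatts = fleetWatts(instancesNeeded(load, 2_000), 12);
        // Native mode: lower draw per instance, but lower throughput too,
        // so more instances are needed for the same steady load.
        int nativeWatts = fleetWatts(instancesNeeded(load, 900), 7);

        System.out.println("JVM fleet watts:    " + jvmWatts);    // 5 * 12 = 60
        System.out.println("Native fleet watts: " + nativeWatts); // 12 * 7 = 84
    }
}
```

With these invented figures the native fleet draws more in total, matching the counterintuitive steady-load result described above; with elastic scale-to-zero the arithmetic would flip.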

It's also, I think, slightly annoying, because it was such a counterintuitive result. When you're reducing carbon, ideally you want it to be that you can just do what seems to be the intuitively right thing and get the right outcome. And here we saw that, you know, you do need to be a bit more data-driven and evidence-based and actually measure it. But it's not a disaster either: if you're running on Quarkus, either way you're better off. But if you wanted to maximize the greenness, you would run Quarkus on the JVM.

Charles Humble: Yes. Which as you say, it's not what I would've thought either. Right? It's sort of...

Holly Cummins: No.

Charles Humble: And so, because there's so much of this, if you want to be greener, you, like, rewrite and rewrite everything in Rust or something. There have been some studies done on the relative carbon footprint of different programming languages as you've probably seen. I'm not, honestly, sure how much weight I would put behind some of those, but you know...

Holly Cummins: They're micro benchmarks.

Charles Humble: Right. Yes.

Holly Cummins: However, one interesting thing with that study that I will call out, again, is a bit counterintuitive. Normally we have this mental hierarchy where we imagine that the compiled languages are better and the lower-level languages are better. Which, you know, we sort of see in that result. So, C, C++, and Rust were all extremely efficient. There wasn't much to choose between them. And then we imagine a language like Go, which is kind of low-level and compiled, that's going to be better. But actually, Go had a higher carbon footprint than Java, and that was even before looking at something like Quarkus, which halves the carbon footprint of Java.

So, it does mean that you're definitely in a good place if you are using Java. Overall I think they looked at maybe 60 languages, and Java was fifth. So, Java, even before Quarkus, is really... Because it's so fast, because it's just been so optimized, the GC is optimized, the JIT is optimized, it means that if you're using Java and you want to keep using Java because you like that programming model, you're in an okay place. You can keep doing that.

Charles Humble: I found it genuinely surprising how high it was. I was also kind of amused by how terrible Python turned out to be, which was also a bit of a... I mean, you kind of figure it's not going to be great, but it was startling how bad it was. Given that, obviously, we're at a tech conference and everyone is talking about machine learning in the coffee spaces and whatever, because of ChatGPT and all of those sorts of things, it's just kind of interesting to reflect on how much data science-type work in particular is done with things like Python and R, I suppose, and that kind of thing. And a lot of those languages are quite bad from a carbon point of view, which is kind of interesting, I guess.

Holly Cummins: I think the data science workloads are not quite as bleak as they seem from those micro benchmarks, because a lot of the heavy lifting is outsourced away from the actual Python runtime. But still, not all of it is. And so, you know, it is worth thinking about.

Charles Humble: I think the other thing now of course, like, training a large model is incredibly expensive.

Holly Cummins: Incredibly expensive.

Charles Humble: So that means that people don't do it that often. There are only really three or four companies that have the kind of resources and the money to do that. And then everyone else is building on what they're doing. So, it may not be quite, as you say, it's not quite as bleak as we might think it is. Thinking about sort of sustainable software more generally, are there other things that you think as developers we ought to be thinking about when we're thinking about our kind of environmental impact on the world and sort of thinking about demand shaping or all those sorts of areas?

Holly Cummins: There are some really interesting and cool and kind of challenging, not-yet-built technologies in this area, which is always exciting, because everybody loves a problem. And there are some easy no-brainers in this area, which is also good. The first thing to think about is where your workload is running in general. Not all regions of the world have the same electricity mix. In some, it's dominated by coal. In others, for example the Nordics, it's much more focused on renewable energy. So, in a lot of cases, you can move your workload and slash the carbon footprint for no work at all. And often, as well, the hosting costs can be cheaper in an area with clean energy than one with dirty energy. So, again, it's kind of: why wouldn't you do that?

Sometimes there are latency reasons not to do it. There's a site called Electricity Maps where you can look and see. In most areas of the world, except for maybe Asia-Pacific, where the energy mix overall is a bit more challenging, there's somewhere with acceptable data residency and acceptable latency that will be on greener energy. So, that's the no-brainer.

But then, as you say, demand shaping is the next thing, because a lot of renewable energy tends to be intermittent. I think a lot of us think, "Oh, cool, run this overnight and that will be greener." But not all of your energy is solar, right? We're starting to see this kind of carbon-based dispatching, where you can get real-time information that says, "Well, the sun isn't shining in Denmark, because it's Denmark; however, the wind is blowing in Scotland, because it's Scotland. So, why don't you run it on the wind energy in Scotland?" Whereas tomorrow the sun will be shining in Sweden, so then you can move your workload there.

As long as you've written your workload in a modern way, where it has that idempotency and where it can be scaled down and up and moved around, then you also get your disaster recovery for free, because you've proved that you can take your workload and pop it up in various places and it works. The business will thank you and you're also lowering your carbon footprint.
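The carbon-based dispatching Holly describes can be sketched in a few lines of Java. Everything here is illustrative: the region names and carbon-intensity figures are invented, and in practice you would fetch live numbers from a service such as Electricity Maps rather than hard-coding them.

```java
import java.util.Map;

// Hypothetical sketch: given a snapshot of carbon intensity per region
// (gCO2eq/kWh), pick the greenest region to run a movable workload in.
public class GreenestRegion {

    static String pickGreenest(Map<String, Double> intensityByRegion) {
        return intensityByRegion.entrySet().stream()
                .min(Map.Entry.comparingByValue())   // lowest carbon intensity wins
                .map(Map.Entry::getKey)
                .orElseThrow();
    }

    public static void main(String[] args) {
        // Illustrative snapshot only; real figures change hour by hour.
        Map<String, Double> intensity = Map.of(
                "eu-north-1", 30.0,        // Nordics: mostly hydro and wind
                "eu-west-2", 210.0,        // UK mix
                "ap-southeast-1", 470.0);  // coal-heavier grid
        System.out.println(pickGreenest(intensity)); // prints eu-north-1
    }
}
```

The interesting part is not the comparison but what it implies about the workload: as Holly says, it only works if the job is idempotent and can be stopped, moved, and restarted in another region.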

Charles Humble: The other thing that's related to that, I think, is asking whether you need all of your workloads to be real-time. When I started in the industry, we used to do a lot of stuff in batch, and we went away from that because computers got fast and cheap. But maybe you can batch something and, as you say, run it somewhere else when the sun is shining or the wind is blowing, or whatever renewable source you've got. If you don't actually need it in real time, that's maybe something to factor in.

Holly Cummins: Yes.

Why Java?

Charles Humble: The other thing I was interested in: you've spent a lot of your career, one way or another, working with Java. So, what is it about Java itself that you like? Why are you drawn to it as a language and a platform?

Holly Cummins: I mean, language is such an interesting question, because I've worked with Java for so long that I speak Java. I was quite surprised a while ago when I was working with people who were coming from JavaScript, and I was watching them trying to program Java. I was pair programming with them and they were stumbling, and I was thinking, "Why is this not obvious to you?" Similarly, when I go to some other languages, sometimes I find it hard and the people who are more accustomed to that language say it's easy. So, if I'm honest, part of the appeal of Java for me is that kind of incumbency and inertia. But on the other hand, when you look at the Java ecosystem as a whole, there's so much to love about the language.

I think it has a nice combination of stability and progress. I was watching someone a while ago: they did a demo where they took code from 2008 and ran it. And it wasn't "Hello, World!"; it was using their product and using the library, and it still worked. There are not many ecosystems where you could take code that ancient, code realistically written before some Java programmers were born, and it would just work. And yet, at the same time, the Java ecosystem is not stagnant. So, you have the emphasis on backwards compatibility, but also the emphasis on forward progression.

So, if you look at GraalVM, for example, that's just such a huge change and it's so exciting. And that's, you know, sort of in the JVM, or it now is. But then if you look at something like Quarkus, in the ecosystem that's built up around Java, you get all of these incredibly exciting, cool things. And I think that's exactly what you want, isn't it? Type safety to save you from yourself, stability to save you from the past or the future, depending which way you're trying to run it, and yet forward progression so that you still get the new shinies and it just keeps improving. And you still get that little endorphin rush as well. When you go to a new Java version, you're like, "Oh, this is cool."

Recommended talk: 5 Tricks To Make Your Apps Greener, Cheaper & Nicer • Holly Cummins • GOTO 2023

Charles Humble: I think I would add as well that Java has a particular quality about the way people write it. Everyone's Java code looks very similar, which means you can pick up someone else's code, follow it, and make sense of it in a way that is not, I think, true of all languages. I'm not quite sure what it is. I think some of it is probably what people mean when they talk about the basic verbosity of the language. There isn't a lot of magic, the sort of template metaprogramming stuff you get in C++ or Boost, where you're like, "I've got no idea what this does anymore." Java is a very easy language to follow.

Holly Cummins: There's not the technical capability, and there's not the culture of saying, "I could write this in 10 lines so it's understandable, or I could write it in six characters to show that I'm a rock star." You just don't get that in Java.

Charles Humble: I think that's interesting too. So, in terms of what you're working on at the moment, what's exciting to you? What's motivating you right now?

Holly Cummins: So, a lot of what I'm looking at now is that sustainability: looking at Quarkus and sustainability, and at sustainability more generally. Another thing I'm looking at is going back to microservices and how we can deploy microservices with confidence. Because if we can't deploy microservices with confidence, what is even the point of microservices? I think contract testing is a huge part of that, and contract testing isn't very widely used. So I'm trying to figure out: why isn't it widely used? What are the barriers? Part of it is just talking about it, but I think there's something more fundamental too. Why aren't we all contract testing? What do we have to change?
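To make the contract-testing idea concrete, here is a deliberately tool-free sketch of the core mechanism that frameworks like Pact automate. The field names and types are invented for illustration: the consumer records the shape it depends on, and the provider's test checks that its real response still satisfies that shape, even as it adds new fields.

```java
import java.util.Map;

// Illustrative sketch of consumer-driven contract testing. A real setup
// would use a tool such as Pact; this just shows the underlying check.
public class ContractCheck {

    // The contract the consumer recorded: field name -> expected Java type.
    static final Map<String, Class<?>> CONSUMER_CONTRACT = Map.of(
            "id", Long.class,
            "name", String.class);

    // Provider-side verification: every contracted field must be present
    // and of the expected type in the provider's actual response.
    static boolean satisfies(Map<String, Object> providerResponse) {
        return CONSUMER_CONTRACT.entrySet().stream().allMatch(entry ->
                entry.getValue().isInstance(providerResponse.get(entry.getKey())));
    }

    public static void main(String[] args) {
        // The provider may add fields freely, but must keep the contracted ones.
        Map<String, Object> response = Map.of(
                "id", 42L, "name", "widget", "extra", true);
        System.out.println(satisfies(response)); // prints true
    }
}
```

The point of running this check in the provider's build, rather than in an end-to-end environment, is exactly the confidence Holly mentions: a breaking change to a field a consumer relies on fails fast, before deployment.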

Charles Humble: Great. Well, thank you so much. Been lovely to chat with you.

Holly Cummins: Yeah, my pleasure. Thank you.