
Optimizing Cloud Native Java
Want to master Java performance in cloud native environments? Watch Ben Evans discuss key insights from his latest book, Optimizing Cloud Native Java, with Holly Cummins.
Transcript
Intro
Holly Cummins: Hello, and welcome to this episode of the GOTO Book Club. I'm here with Ben Evans to talk about his most recent book. And I should say this is definitely not your first book, Ben, is it? I think this is one of about five or six books. Do you know how many you've done?
Ben Evans: ...think this is actually book seven. And it's kind of interesting because I do actually have a prop, you see this is the book. You've...
Holly Cummins: See, I've got it too. I've got it too.
Ben Evans: So, this is called "Optimizing Cloud Native Java." If you look behind me, sort of here-ish, you can see that on the wall is actually the cover of the first edition, which was just called "Optimizing Java." Maybe we'll get into it in a bit and I'll explain the change of title and some of the other changes.
Holly Cummins: See, that's cheating, publishing the same book under two different titles.
Ben Evans: Well, that was actually the publisher's idea, not mine. But as we'll explain, it does actually kind of make sense because the world has kind of changed quite a lot since the original edition came out. So, there we go.
Holly Cummins: We'll come back to that. Ben, I've known you for many years, but because I've known you for such a long time, I hadn't actually ever read your biography. And so when I was preparing for this, I did read your biography. And it included some things I knew about like co-founding JClarity, and then sort of popping up at all sorts of conferences and that kind of thing. But it had a whole bunch of things that I didn't know about. So, like you've been a chief architect for Deutsche Bank, and you've been a web developer, and you've been a performance engineer, and you've done all sorts of things. So, what else have I forgotten? How would you introduce yourself?
Ben Evans: I've always been interested in, well, architecture and performance, and the interplay between people and systems. Now, increasingly, that's led me in the direction of thinking about performance, and of course, performance, you need data. So, it's become a very sort of empirical view of the world of software. And of course, our systems have become more complicated.
That's naturally evolved into things like observability, which is where I've spent the last six years. I mean, arguably, JClarity was also an observability company. But it really was when I came to New Relic in 2019 that things really started to move very strongly in that direction. And I see that as a continuation of my performance work, but focusing more on the tooling than I had previously. I also think there is something in that I like to build tools. I like to make things which other software engineers are going to use. I mean, it is fun to implement features, but I'm not really as much of a feature sort of person. I like to build capabilities and tools for other people. And is there a connection to the other things that I really like to do, to write and to explain and to teach people? Maybe, those are capabilities for people as well as capabilities for systems. I don't know, maybe.
Holly Cummins: I suppose it's all about enabling people and enabling others, isn't it? And being that multiplier.
Ben Evans: Yes, absolutely. I mean, I'm very much of the opinion that if there is such a thing as a 10x engineer, it's someone who makes other people more productive. I believe that's the only way to really do it. I don't really think that there are people who can output 10 times the amount of code as others.
Holly Cummins: Well, I think there are, but they perhaps shouldn't. But that's a whole different conversation.
Ben Evans: Well, indeed. And then the question is, how much time does the rest of the team have to spend unpicking what they've done at great speed?
Holly Cummins: Exactly. But we should also introduce your co-author, who's not here, but we should not forget him. So, can you give a bit of an intro to Jim and how you started working together, and what he does?
Ben Evans: Jim, James Gough is a Distinguished Engineer at Morgan Stanley. And a funny story, I actually hired him into his first job out of college, which was also at Morgan Stanley. I was working there at the time. He actually left Morgan Stanley and came back, I think, maybe even twice. So, it hasn't been a complete unbroken run for him there. But I actually interviewed him as a graduate, just coming into the Morgan Stanley graduate program.
The style of interview at that time, and I expect it's still similar, is that when you're interviewing for a particular position, you're not trying to see just what the level is. You want to keep going with people to see where they are. Because I believe in this as part of my interviewing style, I just want to see how good you are, really at anything. Because if it's any technical subject, even if it's outside of what we're hiring for or whatever, I believe in the transferability of skills. I believe people are teachable. So, if you can be this good in this area, even if it's not quite the right one, well, I can get you there in the areas that we do actually need. So, what that translates to in interview terms is that we'll just keep going.
We'll keep talking and I'll be asking more and more difficult questions, including things that I won't expect a graduate engineer to get. And then I'll start asking things which should take maybe one or two years of experience. We'll just keep going. We'll find your level and where you top out. And with Jim, it was extremely difficult because it took me quite a long time to hit his top level. The question that I finally broke him with was the details of how threads are implemented, the differences between Solaris and Linux. And this would have been about 2005 or 2006. And he just, yeah, I remember him shaking his head and going, "No, sorry, this is it. This is my limit." Now, he comes out of that interview thinking that things have gone terribly and he's like, "I'm never going to get a job in this industry." And I'm writing the form up and I'm putting, "If we don't hire this person, I'm quitting, because our hiring process is terminally broken if we can't bring Jim on." It's funny how different those two takes were. And then when he was hired and he got through the grad program, I met him again at a company drinks reception and we've been friends and colleagues ever since.
We've helped each other out in various ways over the years. When we did the first edition of Optimizing Java, he was a very natural person for me to talk to because of our shared interest in performance analysis and the things which go on under the hood. That's really one of the things that I like to do is I like to take subjects which are a little bit frontier, a little bit esoteric, things which aren't necessarily in the mainstream of developers and try to bring them and break them down. I mean, I'm not an OpenJDK committer. I don't necessarily commit a lot of technical work into those frontiers. I like to understand them and kind of popularize them and explain them because I think a lot of people are interested in how these things work. I think that increasingly, especially as performance in the cloud becomes even more important, that more people will need to understand them as well. So, I think there's a good space to kind of explain that, which is where the book comes from.
Recommended talk: Cloud Chaos & Microservices Mayhem • Holly Cummins • GOTO 2022
Who Is Optimizing Cloud Native Java For?
Holly Cummins: Who do you think the book is for? Because there's sort of, I mean, it's a big book and there's a lot in it. And certainly, some of what's in it is very low level. And some of it is, you know, sort of low level in terms of like, you know, understanding the foundations that you need to know. And then some of it is really quite, I think, practical and high level in terms of like here's the tools. And so do you envisage this as sort of something that you hope everybody in our industry, you know, is buying and understanding? Or is this a book for sort of when you get to a certain level, this is what takes you to that next level or...
Ben Evans: I think a bit of the latter. I would hope that there is something in it for a wide range of, well, maybe not beginners, but maybe from intermediate upwards, I would hope there is something for everyone. The aim of the book, and this isn't the first book I've written in this style, is that it's a starting point. You know, your journey begins with the book, but it doesn't end there. It should have lots of paths that you can follow from it. It can act as a jumping-off point.
Now, it is fair to say that some people don't like that style of book. I have had previous books, complaints, and reviews where it just didn't gel with what the reader was expecting. They wanted, you know, very, very in-depth things. But those types of books I find incredibly difficult to write, because it's such a fast-moving field, and there's so much to cover, that you can't possibly satisfy everyone, because everyone has their own slightly different journey.
I think the style of providing some starting points and providing resources, providing references, and letting people explore where they want to, where their desires take them, I think is better overall. Although, of course, it doesn't satisfy everyone.
Performance Tuning: Art & Science
Holly Cummins: I think that perhaps particularly makes sense for performance, because when I was preparing for this conversation, I was thinking about when we first met each other. I think it was at Devoxx Belgium, about 15 years ago. I think I was speaking about performance, and you came up to me with a performance question, and you showed me a bunch of charts. And I just looked at these charts, and my heart just sank, for a couple of reasons. One was that even though I was the one speaking about performance, I was fairly sure that you knew more about performance than me. But then as well, one of the things that you talk about in the book is that there is no magic go-faster button. There isn't a thing where you can say, "Okay, in order to go faster, here are the six steps that you have to take. And if you follow them in this order, your application will always be faster." There just isn't that recipe. There isn't one command line to rule them all. And I think that the way you focus on the principles and the techniques works for that domain.
Ben Evans: I think so. I think what you're getting at there is something that's very true and very, I think, unique about performance. Performance is different to application development. That's why I always kind of consider it really to be related to architecture because it is a blend of things which are technical and things which are much softer than that.
I will always remember during my time at Deutsche Bank, when I was working in Listed Derivatives, that we had a futures and options system, and it was underperforming and the users were complaining. And I had this program of work that I wanted the team to implement, which was going to improve performance. And it suddenly dawned on me that this is a people problem as much as anything else.
Because, consider these two scenarios. You have one scenario where someone comes to you and says, "It's slow, make it faster, right?" So, you make it faster, and they go, "It's still too slow, make it faster again." Okay, well, you can go around that loop many times. And what you have is a dissatisfied customer.
Alternatively, when someone says it's too slow, you say, "Okay, what do you mean by that?" And then they can say, "The average latency is too large." Average or 90th percentile, right? And so immediately you're engaging in a conversation, you're bringing them with you. They are being drawn into the problem, they haven't just shoved it onto you and said, "You deal with this." They are engaged in the conversation at this point.
Then you say, "Okay, so we're going to bring in the 90th percentile by 20 milliseconds." Because they probably don't know how much they want to move it by. So, that means you get to define the target. So, always pick something you're pretty sure you can hit, right? So, you bring it in by 20 milliseconds, and they say, "Well, it's still not good enough." "Okay," you say, "what should we do? Do you want to do another 20?" "Yeah, we do another 20, right?" But what that means is that in the second case, you have two successfully completed deliverables, and they've been engaged in the process throughout. Unless the person is really difficult to work with, hopefully they'll see that as progress and as positive engagement. So, that sort of psychology can be important.
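An editor's aside for readers who want to make the percentile targets Ben describes concrete: a minimal sketch of summarizing latency samples so that a goal like "bring the 90th percentile in by 20 milliseconds" can be tracked release over release. The class and method names here are invented for illustration, not taken from the book.

```java
import java.util.Arrays;

// Hypothetical helper: summarizes latency samples so a target like
// "bring the p90 in by 20 ms" can be stated and verified.
public class LatencySummary {
    // Nearest-rank percentile: smallest sample with at least p% of samples at or below it.
    static long percentile(long[] samplesMillis, double p) {
        long[] sorted = samplesMillis.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank, 1) - 1];
    }

    static double average(long[] samplesMillis) {
        return Arrays.stream(samplesMillis).average().orElse(0.0);
    }

    public static void main(String[] args) {
        // A few slow outliers barely move the average but dominate the tail.
        long[] samples = {12, 15, 14, 13, 90, 16, 14, 13, 15, 85};
        System.out.println("avg = " + average(samples));       // avg = 28.7
        System.out.println("p90 = " + percentile(samples, 90.0)); // p90 = 85
    }
}
```

The point of the toy numbers is the one Ben is making: "average latency" and "90th percentile latency" are very different targets, so agreeing which one you are moving is part of the conversation with the user.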
It's also the case that (and we actually did this at Deutsche) - I knew that there were lots of these small performance enhancements that we could make, but I felt that if I trickled them out and just bundled them with feature releases, the system wouldn't necessarily seem any faster.
So, what do you do? Well, you take a high-level meeting and you say, "Okay, well, we hear your concerns, and we understand and we're going to fix this. But we need you to understand that we're going to pause feature development. And I need to do a performance release, which has no features in it." You know, they're very much, "We want our features too." "Okay, well, you know, if that's what you want, but you can't have both, we only have so much engineering resources." So, they actually said, "Yes, let's do the performance release." And sure enough, by putting all of the small enhancements together, we were able to move the needle by about 30% or 40%.
They came back to it. "Yep, this is visibly faster." Okay, so, we worked through it, we got to, a decision together about the trade-off, and you were actually able to see the performance going up. So, the fact that there are those bits of psychology and softer stuff mixed in is important.
It's also the fact that – as we discussed in the book – this is empirical science. You do need to be able to handle data. And performance data is somewhat different to other forms of data that you might encounter. It's actually for Java performance data, it's very noisy, and getting clean data is hard. It's also easy to be misled by small-scale performance measurements. Micro-benchmarks are really hard. Partly, well, for all kinds of reasons, which we discuss at length in the book. And so it's actually much easier to benchmark entire systems. So, thinking about things which directly relate to a user's perception and experience of the system is always much better than trying to focus on those small effects. Just if you...
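An editor's aside: the micro-benchmarking hazards Ben alludes to are easy to see with a deliberately naive timing loop. This is a sketch of the pitfall, not a recommended technique; the class name and loop sizes are invented for illustration.

```java
// A deliberately naive micro-benchmark, showing why single measurements mislead.
// The first timed run includes interpreter time and JIT warm-up; later runs of
// identical code are typically much faster, so one number tells you very little.
public class NaiveBenchmark {
    static long work() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += (long) i * i;
        return sum;
    }

    public static void main(String[] args) {
        for (int run = 1; run <= 5; run++) {
            long start = System.nanoTime();
            long result = work(); // use the result so it can't be dead-code eliminated
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println("run " + run + ": " + micros + " µs (result=" + result + ")");
        }
    }
}
```

Harnesses such as JMH exist precisely to manage warm-up, dead-code elimination, and noise; the book discusses these hazards at length, which is why whole-system measurement is usually the safer default.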
Holly Cummins: It's so counterintuitive, isn't it? Because we're taught, you know, break the problem down, break the problem down, break the problem down. And then you break the problem down, and you write a benchmark. And then you realize that that was the exact wrong thing to do.
Ben Evans: I don't know if you recall these, but Google used to have a set of web benchmarks, not for Java, but for JavaScript, called Octane. And they actually had to retire them, because they found that in a large number of cases, applying what the benchmarks told you to do produced worse overall performance.
Because the idea is that performance should be reductive, you should be able to break it down. But the fact is that with a sufficient amount of complexity, our software systems become emergent.
The example I always think of and go to is if I have a bucket of water, it's made up of water molecules. Individual water molecules don't have surface tension. So, where does the surface tension come from? Or the specific heat capacity? It's buried somewhere in the quantum mechanics. But somehow, when you have a bucket full of it, the property just emerges as a result. And I believe that our software systems now are sufficiently complex that there's that aspect to them as well.
There's also the fact that, particularly for complex managed runtimes like the JVM, and .NET as well, we have the problem that the low-level subsystems are not independent of each other. So, for example, we can ask a question like "why is reflection slow?" and the answer is not easy to figure out, especially once the JIT compiler gets involved as well.
And when we talk about this, I might mention it in "Optimizing Cloud Native Java." My other book, which I've got here, is "The Well-Grounded Java Developer," Second Edition, which is this one. We'll definitely talk about it here. Because one of the things that people might not know is that the implementation of reflection in Java changed with Java 17. Was it 17? I think it was 17. Originally, it was implemented in a very low-level way, which essentially used native code until a threshold was reached; when you called a reflective method more than, I think, 20 times, it would auto-generate bytecode and produce a little bridge method in bytecode. Now, there are all sorts of problems with that, because it has to be specially marked to handle verification and to go through the class loading process.
You have to say, "Well, this is a very simple, special bytecode. You don't have to worry about it. You don't have to security check it," and all that kind of thing. So, it adds a lot of complexity into the internals of the JVM. Whereas with Java 7, a technology called Method Handles was introduced alongside invokedynamic. That technology has been maturing ever since, and it is now effectively a modern replacement for reflection. It's now very performant indeed. Now, of course, you can't take the reflection API away, because so many things rely upon it, but you can change the internals. And in fact, what's happened now is that the existing hacky JNI-native-call-then-spin-some-bytecode approach has been replaced all the way through by method handles. And it's completely opaque to the user. Nothing has changed from the point of view of any calling code, but the internals are completely different. So, now to the performance question: what's the performance difference between the two? The answer is nobody knows. Really. And even if you talk to some of the folks that work on the VM, it's very, very difficult to give a general answer to that question.
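An editor's aside for readers who haven't used method handles: a minimal sketch of the same call made through classic reflection and through a method handle. The reimplementation Ben describes was JEP 416 ("Reimplement Core Reflection with Method Handles"); the class below is invented for illustration.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.Method;

// The same static call made two ways. On modern JDKs (post JEP 416),
// the reflection path is itself implemented on top of method handles.
public class ReflectVsHandle {
    public static int twice(int x) { return 2 * x; }

    public static void main(String[] args) throws Throwable {
        // Classic core reflection: boxed arguments, boxed return.
        Method m = ReflectVsHandle.class.getMethod("twice", int.class);
        int viaReflection = (int) m.invoke(null, 21);

        // Method handle (available since Java 7): exact, unboxed signature.
        MethodHandle mh = MethodHandles.lookup().findStatic(
                ReflectVsHandle.class, "twice",
                MethodType.methodType(int.class, int.class));
        int viaHandle = (int) mh.invokeExact(21);

        System.out.println(viaReflection + " " + viaHandle); // 42 42
    }
}
```

The command-line switch Ben mentions for comparing the old and new implementations is, as far as I know, `-Djdk.reflect.useDirectMethodHandleAccessor=false` on JDKs that still carry the legacy path.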
Again, you can find benchmarks and you can do micro-benchmarking, but only within a very limited range of circumstances. And if you take an application and run it with the two different implementations of reflection, which you can control using a command-line switch, the difference doesn't rise out of the noise. So, it's almost impossible to make any kind of general statement about the two. That's one of the facts of life that you encounter when you start delving into performance at this level. I'm also recalling Martin Thompson, the performance engineer, and some of the great stuff he's done, where with a bit of profiling and a bit of intelligent thought about what the important parts are, you shouldn't be looking to shave 5% or 10%. Martin frequently finds cases where there are orders-of-magnitude savings. And so I think that's a great way to look at it. How much do you need to move the needle by? If you only need to move it by 25% or 30%, that should be fairly achievable.
And if it needs to be more than that, then you need to be thinking about design as much as anything else. Getting the right numbers is also very important, and so is measuring what matters, not what is easy to measure. I was doing some work with the OpenTelemetry folks, helping to define and standardize the metrics that are now available in OpenTelemetry for the JVM itself, as well as application-level metrics. And one of the things that we found was that not every metric that we could generate and collect actually made sense. So, for example, there was something like, I forget, it may have been a 15-minute CPU average. Okay, well, in a Kubernetes environment, if your 15-minute CPU average is high enough to trigger an alert, your pod is already dead. So, why even generate it? Why collect it? Why even define it at all? Because all it's doing is eating bandwidth for a number which is never going to be any good to anyone. So, it is important to focus on what is actually relevant for the problem you're trying to solve.
Recommended talk: Java, How Fast Can You Parse 1 Billion Rows of Weather Data? • Roy van Rijn • GOTO 2024
The Evolving Landscape of Java Performance
Holly Cummins: And I guess that goes back to sort of two of the things that we were just talking about as well. So, one is, you mentioned that if you're doing Java performance, the JVM is a complex environment. But of course, now none of us are running just in a JVM. We're running a JVM in a much larger system. Then that comes back to the sort of the updated title of the book. When you went from the first edition to this one, like how... Because I sort of naively, I imagine that once you've got a book, then you can just produce updated editions. And it's a tiny amount of effort. But I have a feeling that it was actually quite a large amount of effort for this update. So, what was the difference?
Ben Evans: In the case of "Optimizing," it really is about a one-time shift. When the first edition came out, the original "Optimizing Java," I think that was really probably the last time you could write a Java book and have it find its audience by speaking solely about single-JVM performance. That world is gone. It's not coming back. Our applications are clustered and we are well past the tipping point now where most of them live in the cloud. That's just the world we live in. So, you need more than just single-JVM performance. It's a yes-and. It's not that that stuff doesn't matter anymore. It very much still does. It just needs to be seen in a broader context and combined with other things which relate to cloud-scale applications. Now, more generally, this idea that a book can be reissued and just accrete new material, again, it's superficially compelling. But one of my other titles is, where is it? This one, "Java In A Nutshell." This one, by the way, is available for free courtesy of Red Hat.
We will include a link in the materials where you can download a copy of this one for free. Thank you, Red Hat. That is the eighth edition. Now, the first five were written by David Flanagan, who also wrote "JavaScript: The Definitive Guide." And it's a great book. It's been around since Java 1.0, if you can believe that. And the first five editions cover the first five versions of Java. David decided he didn't want to work on it anymore, and I came along to do the 6th edition, which corresponds to Java 8. So, we broke the nice numbering there. But it's an interesting project because it does have this very accretive style, or it did when I first took it over. What that means is that the more time passes and the more editions you do, the historical approach of explaining when different things turned up makes less and less sense. I mean, if I'm a Java developer in 2025, do I care whether a feature was introduced with Java 4 or 5? Probably not.
Holly Cummins: Your grandparents' Java.
Ben Evans: God, it really is, isn't it now? Instead, I have tried to change that and to introduce things in a way which makes more sense to the modern reader. Now, that's more work for the author. But I feel like that provides a better learning experience. And that's what we've done with "In A Nutshell": starting from the 7th edition, I've started to remove some of those historical details and blend things in. But it's been an interesting journey working on somebody else's titles versus working on things which are purely my own, or with a co-author. So, yeah, I don't know which I prefer more. I think probably one of the biggest learning experiences I had while writing was actually when I was writing "Well-Grounded," because, I don't know if you can see this, this is the first edition cover, and this is the second edition. Let me compare them. If we just put them next to each other, you can see the difference in size.
Holly Cummins: That neatly shows the work.
Ben Evans: The second edition was my lockdown book. I started writing it with Martijn Verburg, and we brought on a new co-author, my colleague at New Relic, Jason Clark. And we started writing and I thought, "It's about 30% to 40% new material." And I just wrote and wrote and wrote, and the lockdown came to an end, and I'm like, "We don't seem to be finished yet. Why is that?" And the real answer was that although we were writing, our percentage completion was not actually increasing, because the book was just expanding as we went. So, eventually, we had to call a halt to it and realize that, no, you can write, but you have to actually finish what you write.
Holly Cummins: And so for this one, so this is the "Optimizing Java," upside down. So, it's about that thick. So, it is not a small book. Do you have the previous one to compare?
Ben Evans: I've got the first edition on my bookshelf. My books live on the part of my bookshelf which is just out of shot here. So, this comparison, I think, is less stark. This edition is longer, but the difference is nothing like as pronounced.
Holly Cummins: No, based on that, you did no work at all.
Ben Evans: Absolutely.
Holly Cummins: So, what did you change?
Ben Evans: Well, so one of the things that we did is we moved out quite a bit of material to an appendix. We also did things like, we had some stuff about open source libraries for high-performance messaging and stuff, which was removed because we needed to make space for things like the observability content.
So, we have three chapters which deal with observability, which is a huge topic. One of the other things we do is introduce concepts of deployment and of Kubernetes and so forth earlier on in the book, which wasn't in the first edition. I think these days, Java developers have tended to lag behind developers in other languages, simply because in our part of the industry, people have tended to think of themselves as developers rather than DevOps for longer. Now, I think that's coming to an end. But what it means is I think there is a knowledge gap about Kubernetes and similar cloud native technologies. So, I felt that it was important to introduce some of those concepts as part of the journey that we went on.
The other thing, of course, is that we have a major example that we develop throughout the book, which is what we called Fighting Animals. It came to me that a lot of the examples out there for dealing with microservices and deployments are in many cases much too simple. One thing which I was shocked by is how many of them rely on quirks of loopback networking. As soon as you put them onto something which has an actual real network, they stop working, and in many cases it's very difficult to get them to work at all. So, I determined that every example in the book would be tested across a real network. You can't see it, it's not in shot, but in addition to my lovely Mac, I have a Linux machine which is there, and that is the server that runs all of the examples. So, I would connect from this client machine to the server across a real network. And that, basically, exposes a lot of bugs in examples and config.
And it means that the examples are properly tested and can be generalized for people who want to start deploying their first sets of microservices. Now, Fighting Animals, here's where it originally came from. There is a certain large entertainment conglomerate, known for being litigious, that shall not be named. And they have a number of well-known properties of different universes, shall we say. And I thought it would be fun to have a combat game where you would fight different characters from different universes. So, each universe is represented by a separate microservice. That way you can build up a non-trivial topology of microservices, which of course is then helpful for showing things like distributed tracing. And it's just a bit of fun. But of course, when we started writing the book, I had to talk to the publisher, and the publisher was like, "Yeah...
Holly Cummins: No.
Ben Evans: ...hard to use that." So, I came up with the idea of, "Okay, we'll replace the fictional universes with something like animal clades." So, what we have is a top-level gateway service, and then one service which represents mammals, another which represents fish, and so on. So, if you pick fish versus mammals, say, you go to the fish service, and then you go to the mammal service, and the mammal service will then send you off to one of, you know, is it a feline or a mustelid or something like that. So, you have a tree structure which is non-trivial. And that produces better results than something which only has one microservice or just a one-level structure; multiple levels of structure actually do help. So, what we were trying to find...
Holly Cummins: And your own brand for O'Reilly as well...
Ben Evans: Of course, yes.
Holly Cummins: ...of the animals.
Ben Evans: I hadn't even thought of that. That's great. And what I like about that as well is I think it provides the right balance of complexity. Because, with a lot of the cloud stuff, again, it comes back to this theme that a lot of the examples out there are in many ways too simple. So, you need to have things which actually encourage people to go beyond the basics. How-tos are all very well, but quite often you come to the end of one and then you think, "Okay, well, now what? How do I apply this? How do I actually take this and turn it into building a real system?" Which relates back to the subject of architecture. Architecture is how we do that. But it can be difficult to take architectural principles that you may understand and see how to apply them to a new technology or a new set of concepts that you're learning.
Recommended talk: Structured Concurrency in Java: The What & Why • Balkrishna Rawool • GOTO 2023
Concurrency in Modern Java
Holly Cummins: Can we talk a bit about concurrency? And that's not the sentence I say very often, because I try to avoid thinking about concurrency whenever possible. But I think, again, you know, increasingly in our modern world, you just can't avoid thinking about concurrency. So, what do you think about concurrency? How do you approach it?
Ben Evans: I see concurrency as an aspect of performance. If you don't need performance, if you don't need to saturate your hardware resources, don't use concurrency. That's the first rule: why do you need it? And only if you genuinely do, should you. Now, there are a couple of different things here. These might be kind of edge cases or small quirks, but let me see if I can unpack them. So, there is a slight oddity which people don't necessarily appreciate. Let's rewind back to 2003. Most machines have a single CPU, and yet we have Linux, which runs fine on them. You have different processes, and the different processes are controlled by the scheduler, which time-slices. And you can even have different threads within the same process, you can have multi-threading, even though there is only one CPU. However, when you have multiple CPUs, there are multiple places where execution can happen at the same time. Or rather, because, you know, relativity and stuff, "at the same time" actually isn't a well-defined concept, but it's good enough for now.
That enables a class of algorithms called lock-free algorithms, which are not possible on a single core. A classic example of this is a spin-lock. Because normally, if you want to take a lock, you're doing a context switch: you say, "Okay, I'm blocked on this lock, take me off the core, swap in something else which is going to make a change, and maybe it'll unlock me. And then when I get back on the CPU later, I can carry on with what I'm doing, because now I'm unblocked." That involves the operating system moving things in and out. If we have two separate CPUs, the thing which is blocked can sit on the CPU and say, "Has that bit of memory changed? Has it changed? Has it changed? Has it changed?" Because there is another place for execution to happen, it might have done. If there's only a single place where execution can happen, it hasn't. So, those are the fundamentals of a lock-free algorithm, and related to this are things like compare-and-swap.
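The spinning-on-a-bit-of-memory idea Ben describes can be sketched as a minimal spin-lock built on `AtomicBoolean.compareAndSet`. This is an illustrative example, not code from the book; the class name and `race` helper are invented for the sketch:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    // Busy-wait until compare-and-swap flips the flag from false to true.
    // This only pays off when another core can be running the thread that
    // will release the lock; on a single core we would just burn our time
    // slice while the lock holder cannot make progress.
    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that we are busy-waiting
        }
    }

    public void unlock() {
        locked.set(false); // volatile write publishes the release
    }

    // Run nThreads threads, each incrementing a shared counter under the lock.
    static int race(int nThreads, int increments) throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] counter = {0};
        Thread[] threads = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < increments; j++) {
                    lock.lock();
                    try { counter[0]++; } finally { lock.unlock(); }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return counter[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(race(2, 100_000)); // prints 200000
    }
}
```

Note that spinning trades CPU cycles for avoiding the context switch Ben mentions, which is exactly why it is only sensible when the lock holder can run on another core at the same time.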
And those sorts of features enable modern concurrency, including modern concurrent garbage collectors. Which is interesting, because if we look at the statistics, probably the best numbers we have are from the New Relic survey of JVMs, which is a survey that I initiated when I was at New Relic in about 2020. That is based on, I think, these days, about 65 million production JVMs. So, according to Gartner, that is about 1% of all production JVMs in the world at any given time, which is a good statistical sample size. What it shows us is that, overwhelmingly, containerized Java applications are running in single-core containers, which means that even if you think you're running G1, you're not, because G1 is a concurrent algorithm. So, an awareness of concurrency and the low-level details, all of these things interplay with each other. Of course, one of the really hot topics is virtual threads, which I think are an amazing achievement in Java 21. They're great as they are.
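A quick way to see the effect Ben describes is to ask the running JVM how many CPUs it can see and which collectors its ergonomics actually selected; in a single-CPU container the JVM typically falls back to the serial collector rather than G1. This is a diagnostic sketch, not code from the book:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcCheck {
    public static void main(String[] args) {
        // In a container, this reflects the container's CPU limit,
        // not the host's core count.
        System.out.println("CPUs visible to the JVM: "
            + Runtime.getRuntime().availableProcessors());

        // The names reported here reveal which collector ergonomics chose,
        // e.g. "G1 Young Generation" vs "Copy"/"MarkSweepCompact" (serial).
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("GC in use: " + gc.getName());
        }
    }
}
```

Running this inside a container pinned to one CPU, versus on a multi-core host, makes the ergonomics-driven difference in collector selection visible.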
They have been getting better. In Java 24, one of the last major performance hurdles with virtual threads has been eliminated, which is the pinning of virtual threads inside synchronized blocks. That's gone in 24. And there are two other features which I really hope are going to go final for Java 25, which comes out in September and will hopefully be the long-term support release: structured concurrency and scoped values. With those three things put together, we have a really powerful next-generation concurrency story. One of the many things that's interesting about virtual threads is that they're another example of the thing we talked about earlier with reflection. The way that it works relies upon an implementation change that was done back in Java 14 or 15, which was to re-implement the socket IO. So, the blocking socket IO has been completely changed underneath. And even if you use the original Java IO socket blocking API, underneath the actual IO is non-blocking. That's the trick. That's how virtual threads work. Even when you're using the blocking API, it dispatches the non-blocking call and then swaps out the virtual thread.
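The model Ben outlines, where a "blocking" call parks the virtual thread and frees the underlying carrier thread for other work, can be sketched with the standard Java 21 API. This is an illustrative example, not code from the book; the class name and helper method are invented for the sketch:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadsDemo {
    // Launch one virtual thread per task. Virtual threads are cheap enough
    // that creating thousands of them is fine; each "blocking" sleep parks
    // the virtual thread, and the carrier OS thread is reused elsewhere.
    static long sumWithVirtualThreads(int nTasks) throws Exception {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int i = 0; i < nTasks; i++) {
                final int task = i;
                futures.add(executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(10)); // parks, does not pin an OS thread
                    return task;
                }));
            }
            long sum = 0;
            for (Future<Integer> f : futures) sum += f.get();
            return sum;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumWithVirtualThreads(10_000)); // prints 49995000
    }
}
```

With platform threads, 10,000 concurrent sleepers would need 10,000 OS threads; with virtual threads, a small pool of carriers services them all, which is the payoff of the non-blocking IO re-implementation underneath.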
The way that you can change the implementation underneath without affecting the applications that run on top is, I think, a key feature of runtimes like Java and .NET. There's a great deal of really interesting work in the virtual threads piece. I talk about it in the book, and I'm going to share some links as well to articles about how it's been implemented. The implementation has changed a lot over the years, but what I'm describing is what goes into Java 21. Other than that, I think virtual threads, non-blocking algorithms, thinking carefully about how your deployment patterns work, those are some of the things that really do move beyond classic texts like, of course, Java Concurrency in Practice, which is showing its age in a couple of places now, but it's still such an absolute classic. I hope that Brian Goetz will get around to updating it one day, but I think he has a few other things to do.
Holly Cummins: One or two.
Ben Evans: One or two.
Holly Cummins: I think this has been a really nice gallop through some of the book. And it's such a good book and it's such a useful book for people whose job is performance and also people who maybe don't know whether they need to know about performance just in terms of the foundational aspects and then the tools that it provides as well. So, I think with that, yeah, I think we're done unless you have any last 10-second words.
Ben Evans: No. I mean, thanks for having me this morning. It's always nice to come and talk about this stuff. And, yeah, I hope people, if they find the book interesting, I hope you like it and hope it stirs some thoughts and helps you on your journey.
Holly Cummins: Fantastic. Thanks, Ben.
Ben Evans: Thanks, Holly.
About the speakers

Ben Evans (author)

Holly Cummins (expert)
Senior Principal Software Engineer on the Red Hat Quarkus team