
Intro to Roc & Innovation in Functional Programming

Join Richard Feldman and James Lewis as they unpack a new programming language and what it brings to the ecosystem. They navigate through the nuances of language selection, exploring the sweet spot between fun and standardization. From Elm's role in front-end development to Scala's adoption patterns and Dart's transformation into Flutter, the discussion takes you on a journey across diverse programming landscapes. Discover the ins and outs of Roc, a fresh face in the coding scene, and the driving force behind its creation. Learn about its architecture, design principles, and standout features, including parsing strategies and a candid comparison with other languages. Explore the excitement around Roc's innovative traits and its knack for performance optimization, unveiling its potential in the dynamic world of functional programming.


About the experts

Richard Feldman (author)

Functional programming language expert & author of “Elm in Action”

James Lewis (interviewer)

Software Architect & Director at Thoughtworks

Read further

Trade-offs in Language Selection: Fun vs. Standardization

James Lewis: Hello, everyone, and welcome back to GOTO Unscripted. My name is James Lewis, and today I'll be in conversation with Richard Feldman. Welcome, Richard. I think we're just gonna explore some ideas around things like language design. Richard will introduce himself in a moment, but he is the creator of the programming language Roc, and author of the wildly successful, I'm told, he paid me to say that, "Elm in Action." So, we'll just have a conversation about languages, and programming in general, I guess. But, welcome. Richard, maybe you could introduce yourself.

Richard Feldman: I created the Roc programming language, which is still kind of a work in progress. But I wrote the book "Elm in Action," and I've also spent a lot of time with Rust. Later in the conference, I'm gonna be talking about Rust and Zig, sort of together. Very interesting language design. You were telling me earlier, before the cameras were rolling, about how you were there when Rich Hickey announced Clojure at a conference.

James Lewis: I was. I was at JAOO, I think it was still JAOO at the time, in Aarhus, in Denmark. Rich turned up and sort of blew everyone away with this announcement of Clojure. And obviously, it's been pretty successful since. We've had many projects in Thoughtworks that have used Clojure over the years. And I know there's a consultancy, JUXT, in London, that is very successful as a pure Clojure consultancy, actually, in the fintech, mainly in the fintech industry. What do you think? I mean, it'd be interesting to get your take on that. Do you think there are different domains where different languages are more or less suited?

Richard Feldman: Well, there certainly seems to be an element of suitability, but there also seems to be an element of just, sort of, cultural momentum. Like, something will get traction in a particular domain. Maybe it is, maybe it isn't, like, especially well-suited for it, but then it just sort of perpetuates. So, the example that comes immediately to mind is Rails and Ruby.

James Lewis: Right.

Richard Feldman: I mean, if you were to zoom out and say, aliens land, and they're gonna pick which of the programming languages are gonna become big in web development, I don't know why anyone would say, "Well, it's gonna be the one created by the Japanese guy that's only big in Japan right now, that's, the tagline is, 'Let's make programming fun.' That's what's gonna be used widely in industry, you know, and blow up in the next 10 years." I don't think anyone would have predicted that.

So, I don't think it's necessarily just about, you know, like, how well-suited it is, it's, like, the perfect fit, as much as it is, like, well, you know, one person, like DHH, made Rails, that resonated with a lot of people, and because Ruby was the language that he chose to make that in, yeah, he could have made it in Python. And he would probably say, "Nothing else but Ruby would have inspired me to make Rails." But I think you could pretty easily make the case that someone could have made something as successful as Rails in a different language.

James Lewis: The thing is, you mentioned Python. That's super interesting. Because I remember when Rails was massively taking off. In North America in particular, and in India, Rails became a huge thing. I mean, our founder at the time was very taken with it, and some very persuasive people were talking about it. Obie Fernandez, for example. And it seemed that we suddenly had a load of projects in North America and India using Rails. And we still do. I think the world's largest Rails project was a Thoughtworks project in Atlantic City, maybe.

Richard Feldman: Really?

Recommended talk: Elm in Action • Richard Feldman & Thomas Anagrius • GOTO 2020

James Lewis: But the weird thing is, it didn't spread to the rest of the countries that we're present in. So, the UK still remains very much a Python shop. So, maybe it's not just a domain-specific application. There's also, like, a geographical thing going on.

Richard Feldman: Well, I think you could generalize that to culture. Certain pockets of culture might be geographic, or might be other things that contribute. I've spent an increasing amount of time over my career learning about why things get adopted and why they don't. And the more I learn about it, the more reasons I discover. It seems like there's just an inordinate number of variables. And as programmers, we like to look for simple solutions and simple explanations for things. Another area where, the more I get into it, the more variables I discover, has been performance optimization, where, when I was in school, you know, all the focus was on big O notation, like, what's the asymptotic complexity of this algorithm, and stuff like that.

And now I'm like, "That's, like, this much of it." That's the tip of an extremely large iceberg. Similarly, with adoption, languages or technologies in general, it's like, you know, I would have thought, you know, early on, it's like, "Oh, well, people use that because that's the best thing for it. What else is there?" And it's all these cultural, and, all these timing, factors that come into it.

James Lewis: I remember doing a thing with one of my colleagues about programming language adoption. And certainly, it was as much about culture and availability as anything else, it seemed. You know, are you better off picking something where you know there are a lot of people out there who you can just hire for it? Or, a counter-example: we had a publishing client who deliberately chose Scala because it meant that they could offer a potentially more "fun" programming environment for developers to come into, because they couldn't pay the same rates as the banks. So there's almost this trade-off: do you offer this more interesting, exciting environment, versus, okay, whatever it is, a thousand euros a day for a standard developer job in a bank?

Richard Feldman: I think that's an underrated trade-off: if you're a company and you're considering a novel technology, and I've talked about this before in other settings, but both my previous job and my current job have used Elm on the front end as, like, the entire front end, not just a little part of it. Embracing that means that you get to be very selective about who you hire. We just filled a front-end role, and the recruiter was talking to me about, yeah, we hired this guy, and it was close. We had to decide between him and, like, a couple of other people who wanted this role. Usually, it's the other way around, where employers are like, "I just wanna find anybody who fits this description and meets our criteria, and it's really hard to find people." But you flip the script when you're offering a technology that people wanna use, but that not a lot of employers are using. And that's almost sort of a self-fulfilling prophecy, in the sense that if enough employers do that, then it flips back around. But by then, it's mainstream, so it's not notable anymore, and now you're saying, "Oh, well, I can just find lots of people to do it, because it's become mainstream." I think most people are aware of that side of the dynamic, but they're not familiar with what happens before a language goes mainstream, and what the dynamic's like over there.

James Lewis: I guess the big question is, did the person you hired have 25 years of Elm experience?

Richard Feldman: Well, let's see, Elm was created in 2011, so yes. Of course.

James Lewis: Which, I guess, when it comes to Elm, I'm gonna throw my hands up and profess to not generally being any good at all at front-end software development.

Richard Feldman: That's fine.

James Lewis: It's not something I've done in my career, to be honest. Any front end that I program tends to look like a bad implementation of Excel. That's pretty much it. 

Richard Feldman: Well, Excel is not an easy thing to implement, so, you know.

James Lewis: Well, funnily enough, coming back to Ruby, the creator of RSpec, Nicholas Nielsen, showed me an implementation of a spreadsheet written entirely in Ruby's const_missing method.

Richard Feldman: Wow.

James Lewis: Because if you think about having a spreadsheet, you have to be addressing the cells, you know, by capital letter first. If you didn't have that defined as a...

Richard Feldman: Oh, that's hilarious. Wow.

James Lewis: And you could do all the calculations in const_missing.

Elm's Adoption and Niche in Front-end Development

James Lewis: Have you been surprised at the adoption of Elm and how successful it's been?

Richard Feldman: Yes, but maybe not in the way you might guess. So, I would have thought that it would have been sort of all or nothing. I would have thought that either a language like Elm would just take over the world, or it would just peter out into nonexistence, and people would, you know, walk away from it. Because I've seen that happen with various languages. Like, TypeScript would be an example of taking over the world. That's happening right now. And then CoffeeScript would be an example of something that sort of petered out and, you know, is not used anymore. Elm seems to have sort of, like, found a solid niche. There's just, like, a chunk of people who are like, "Yes, this is how I wanna do front-end development," but it doesn't seem like it's on track to take over the world. It seems like it's on track to be... Well, it already is, like, a self-sustaining thing, and it seems like it's on track to sustain. So, that's something that we've seen with a lot of backend languages. There are plenty of backend languages; no language has taken over the whole backend. People just have preferences in the backend world. Whereas on the front end, it's very much been, you can use any programming language you want, as long as it's a JavaScript dialect. Like, it could be JavaScript or it could be TypeScript or it could be CoffeeScript, all of which have the tagline, "It's just JavaScript," explicitly, or implicitly in the case of JS itself.

All of the other ones have been, like, kind of niche players. But if you think about it, I mean, like, on the backend, it's really common to have a language that has, like, low market share, but is, like, quite a healthy, active community, with lots of people in it. It's just on the front end, that's, like, a weird thing to be. And Elm being a front-end-focused language, I just never guessed that. I thought it was, like, oh, it's either gonna take over or it's gonna peter out. I didn't expect it to become more like a backend language, in that it's just, yeah, there's a chunk of people who like to do it this way, and it's fine.

James Lewis: You mentioned TypeScript. So, that's the elephant in the room in some ways, right? So what would you ascribe it to...can you see, sort of, any particular reasons that TypeScript has sort of eaten the world? There's some discussion about it at the moment. They're both, on the surface, fairly similar ideas.

Richard Feldman: Elm and TypeScript? Or...? Which two things?

James Lewis: Well, so, essentially taking something that's gonna be able to be used in the browser, but that offers maybe a safer, allegedly more productive perspective on programming the front end.

Richard Feldman: I think, like, when I think about comparing Elm to JavaScript and TypeScript to JavaScript, and I guess also TypeScript to Elm, like, TypeScript and JavaScript, I mean, TypeScript is really like, "This is gonna feel like JavaScript, but with types." Elm is like, "I am a programming language, and I run in the browser." It has no relation to JavaScript other than, like, as a compilation target. So, you mentioned, like, Clojure earlier. I would liken Elm to Clojure, except, like, even more separated from the host. Like, Clojure is very much like, "I'm a programming language, but I intentionally have some Java-like elements inside," but I don't think anyone who's written Clojure and has written Java would say, like, "Oh, this is a Java dialect," you know? But they do, like, share data structures and things. Whereas Elm, it's even less than that. It's just kind of like, well, we use the same, like, string representation under the hood and stuff like that, but that's kind of about it. It's, like, this feels like a different programming language. Whereas TypeScript feels like this is a new take on JavaScript, I would say.

Recommended talk: TypeScript vs KotlinJS • Eamonn Boyle & Garth Gilmour • GOTO 2022

Scala Adoption Patterns and Paradigms

James Lewis: I guess that's maybe, it's a good comparison, I think, with Clojure as well, because if you look at something like two different JVM languages, like Clojure and Scala, say, I mean, most people's entry point into Scala was programming Java without semicolons. That was the old joke, wasn't it? And, whereas Clojure is a fundamentally different paradigm, a fundamentally different way of approaching writing code.

Richard Feldman: That's a good point. I've talked to people in the Scala community who talk about there being sort of three different ways that people do Scala. So, one is, like, Java++, or Java without semicolons, maybe. Another is, I want a hybrid OO/FP language. I want a language that has a lot of OO support and a lot of FP support, and I'm gonna use them together. And I can't get that from Java, so Scala is the way to go. And then the third group is, I want Haskell, but my boss won't let me use it, so I'm gonna use Scala as my Haskell stand-in, and that's also a popular way of using it. But I don't see the same thing in Clojure or Elm. Pretty much nobody's using Clojure as, like, Lispy Java. Everyone's using it as, like, Clojure. The same thing in Elm.

James Lewis: Would you say... Maybe a bit random, but I remember a few years ago, when Google first, sort of, "published" is the wrong word, but created and then started talking about Dart, the programming language. We have a thing called the Thoughtworks Technology Radar, where every six months we take new stuff, think about it, and assign it, like, an assess or trial or hold. And, at the time, we put Dart on hold, on the basis that we were super worried that adoption was gonna be limited by the fact that other browsers weren't gonna jump on board, right? Because it was very much a Chrome...

Richard Feldman: With the VM part of it?

James Lewis: The VM part of it, yeah, yeah, yeah. And of course, that's now come back, right? I mean, it shows what we knew. Like, some years later, we now have Flutter, which is kind of, you know, very much being adopted quite rapidly at the moment. I kind of find that kind of interesting, where you've got something that sort of, at one point in time, wasn't the right time for it to be adopted, but then later on, it suddenly is the right time.

Dart's Evolution into Flutter and Killer App Adoption

Richard Feldman: Well, I think that's a great story. Dart, to me, fits the same category of adoption as Ruby, where it existed for quite a while, like Ruby was just big in Japan for a while. Ruby was created to be, like, "Let's make a language..." I mean, Matz was like, "I wanna make a language that's fun to program." That was the word he used. Dart, as I understand it, was created basically because of the VM, because Lars Bak, you know, had done V8, and was frustrated by how difficult it was to do certain optimizations around JavaScript, and he was thinking, "If we just had a different language that felt a lot like JavaScript, but which was different in certain very specific ways, we could make a much more efficient VM implementation," and that was kind of the motivation behind creating Dart. And, you know, if you think about it, why would people want to adopt that unless you're a VM author? It's like, okay, but I'm over here doing my web development job. What's the pitch to me? I don't, you know, care about how easy it is to optimize the VM or how optimized it can be. Especially since, you know, you and your team, Lars, did such a good job making V8 a lot faster.

Recommended talk: Why Static Typing Came Back • Richard Feldman • GOTO 2022

James Lewis: Thank you, Eric, as well.

Richard Feldman: What's in it for me to switch from JavaScript or CoffeeScript, which was big at the time, to Dart? But then the answer comes with Flutter. And again, you could make the point, Flutter didn't have to be implemented in Dart, but it was, the same way that Rails didn't have to be implemented in Ruby, but it was. And that, I mean, if you look at what percentage of Dart usage in the industry is not Flutter, I would guess it's very small, similar to Ruby and Rails. I mean, it's, like, overwhelmingly Rails, it's overwhelmingly Flutter. So, the term I use for this is, like, this is, like, the killer app adoption explanation, is, like, there's some application of the language that's so popular that it just brings the language's popularity along for the ride because people wanna use that thing, and that thing is implemented in that language, and they want it so bad they'll use whatever language it happens to be implemented in.

Quick Intro to Roc: a New Programming Language

Choosing Rust for Roc Compiler

James Lewis: That's quite a nice segue for me to go to talk a little bit about Rust, maybe. Because you mentioned about your new language, Roc, that you're writing. We'll come on to that maybe in a minute.

Richard Feldman: Sure, sure, yeah.

James Lewis: But you mentioned the fact that the compiler is written in Rust, and that's another... I mean, I think, well, we are starting to see, in terms of Thoughtworks, and our clients, adoption in very specific areas, for Rust. Specifically, there's lots of interest, for example, in automotive, or, you know, sort of safety-critical systems and these kinds of things. What made you choose Rust yourself?

Richard Feldman: This is going to bring a little of my talk into this conversation.

James Lewis: No, that's cool. This'll be published a lot later, so...

Richard Feldman: Basically, it's important to me that the Roc compiler be very, very fast. I want it to run as fast as possible, and I certainly did not want to get to a point where I'd built this whole compiler out... I say "me," because that's what I was thinking at the time. Now it's a bunch of people working on it, and a lot of them are better at this stuff than I am. But, you know, I didn't wanna end up with a compiler that was very feature-complete and very done, and then we're like, "And we can't squeeze any more performance out of it because of the language we've chosen, which is, like, garbage-collected and whatnot, and there's just this ceiling we cannot possibly exceed, no matter how many hours of performance work we put into it, unless we rewrite it in, like, Rust or C or C++ or something."

And I thought, "I don't want that to happen. I want this to be as fast as it can be, and I don't wanna hit that ceiling." So, that meant one of a couple of different options. One was to do C or C++, which I'd had some really bad experiences with earlier in my life, around, like, getting memory unsafety-related bugs that were painful to track down. And I was, like, well, the pitch of Rust is that you have no performance ceiling, but, somehow, and I didn't really know at the time how, they do compiler things to help you not run into those memory problems. And so I thought, "Well, that seems like kind of the only game in town that fits all my criteria. There's no performance ceiling, and yet I'm not going to get these memory unsafety bugs that are a nightmare to track down." So, I took the plunge. I'd done a little, like, toy thing in Rust, a command-line app that I'd never quite finished. So I had a feeling for the language, and I was like, okay, I can get this. I can stumble through it. And now I feel very comfortable in Rust. But when I started, it was just because I had this list of criteria, and that was the one language that fit them all.

James Lewis: And you got to choose as well, which is the nice thing, right?

Richard Feldman: Yes. Very important. 

James Lewis: I remember my colleague, Erik Doernenburg, he's based in Germany, where he's head of tech at the moment. And he did a great talk at one of these events on Rust. It was back at a time when not that many people were adopting it, so it was quite early on. [inaudible 00:17:54] and it was a bit of an overview on why Rust, and, actually, why some of the other languages that had started to appear, you know, like Go and, oh God, I always forget the Apple one, Swift, is it?

Recommended talk: The Ideal Programming Language • Richard Feldman & Erik Doernenburg

Richard Feldman: Yes, Swift.

James Lewis: And why they, you know, what problems they were attempting to solve, you know, which is around memory safety. It's something like, I can't remember the exact number, but some very high proportion of bugs...

Richard Feldman: Seventy percent of CVEs, yeah.

James Lewis: ...at Microsoft. There you go, right? So, yeah, I mean, this was around that. But he did this lovely little thing at the end of it, where, I think, it wasn't Conway's Game of Life, but it was a similar kind of agent-based implementation. And he always uses that when he's learning a new language, right? You need something, you know, some framework to understand when you're learning a new language. And he started running it, running multiple iterations of it, and he was looking at the performance. He was like, this is a lot, lot faster than, I think it was, a JavaScript implementation, ridiculously faster, orders of magnitude faster. But he thought, "Actually, I thought it'd be better than this." And he realized, and I'm gonna get this wrong, but he hadn't turned on, there's some kind of, like, setting in Rust, I think, which you can turn on. It's, like, "production mode" versus...does that make sense?

Richard Feldman: Oh, yes. This is an optimization flag, yeah.

James Lewis: Right. And he'd forgotten to use that. And then suddenly it was, like, three or four orders of magnitude faster. Which I quite like as an idea, yeah.

Richard Feldman: That sounds about right. That flag makes a big difference.
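An aside for readers wondering which setting this is: with the standard Rust toolchain, it's Cargo's build profile. A plain `cargo build` or `cargo run` produces a largely unoptimized debug build, while adding `--release` turns optimizations on, which is the difference James describes. A minimal sketch, assuming the default Cargo profiles, that reports which profile a binary was compiled under:

```rust
// `debug_assertions` is enabled by default in Cargo's debug profile
// and disabled in the release profile, so it works as a rough probe
// for which kind of build this is.
fn build_profile() -> &'static str {
    if cfg!(debug_assertions) {
        "debug"
    } else {
        "release"
    }
}

fn main() {
    // With `cargo run` this prints "debug"; with `cargo run --release`
    // it prints "release", and optimizations are enabled.
    println!("compiled as a {} build", build_profile());
}
```

Benchmarking a debug build is the classic way to be disappointed by Rust's speed; the release profile is where the orders-of-magnitude difference shows up.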

James Lewis: Maybe let's generalize it. There's Rust in particular, which is spiky, I'm told. I've only read some books. I've not made any serious attempt to learn it. But I'm informed it's quite spiky. There's quite an adoption curve. How do you go about adopting or learning new languages?

Richard Feldman: Yeah, so....

James Lewis: Or do you know enough now that you just go, "Oh, it's that sort of thing?"

Richard Feldman: Well, it's funny, because you mentioned, like... I know a lot of people who like to do the same thing: "I'm gonna learn a new language, I'm going to pick a project, like Game of Life, that I'm very familiar with, and implement that in the new language." I'm almost the opposite, where I always need to have some specific project in mind first, where I'm like, "I wanna build this in this language," or whatever the new technology is, and then that motivates me to push through whatever the learning curve is, because I'm like, "Well, I can't get it any other way, so I gotta do it." So, I guess maybe I don't tend to seek out languages just for the sake of learning them. It's more like, there's some problem I wanna solve. This seems like the right tool. All right, let's go. So, I don't think I've ever done the...

James Lewis: And picking the easy things, right? I'll just write a compiler.

Richard Feldman: I'll just write a compiler, which I'd never done before either. But, I guess, I don't know, at least for me, the hard part of learning something new is generally finding the motivation to climb over the obstacles that I hit, whatever they might be. And I'm also aware there's an element of, if you pick a project that's too hard and a language that's too hard, those can kind of compound, for sure. But I had previously done this little command-line app in Rust, where, actually, the motivation was the Elm test runner, and now somebody else has separately gone off and done a different Elm test runner implementation in Rust. But at that point, it was mostly just frustration with Node.js APIs, which is what the one I'd written previously was in. And, I one day...

James Lewis: Not because Node is blazingly fast.

Richard Feldman: No, it was nothing to do with performance. I wanna write this in something that has a different set of APIs, shall we say. And I didn't really wanna use Go, because I didn't have any particular interest in Go. And I was like, "Well, I wanna learn Rust, and I want to have a codebase that I can maintain that's not Node.js anymore. So I'm just gonna rage-rewrite it in Rust." And I got, like, 70% of the way through that, and I was like, okay, I have a feel for this language now, and, you know, I'm not great at it, but I can at least stumble my way through doing things. And I have this codebase where, as happens with many projects at around the 70% mark, I was sort of like, okay, yeah, but do I really wanna do the rest of the work to get this over the finish line, and then maintain that codebase, and then new contributors are not gonna know what they're doing, and so on. So I ended up kind of putting it on the shelf and not finishing it. But somebody else separately went and did it.

I definitely would agree that the learning curve on Rust is a downside. It's quite high, and it's also not... Like, some languages, I think, have a high learning curve because, like Haskell, for example. Haskell, I would say, has a high learning curve in part because you're encountering a lot of the concepts for the first time. I've never heard of these concepts before, I don't know what they're about, and there's just kind of a lot of stuff to learn. In Rust, I would say the thing that's hardest about the learning curve...people often talk about "fighting the borrow checker." The borrow checker is kind of Rust's, like, marquee feature. It's what sets it apart from other languages. It's what gives you the memory safety. But, at the same time, it's not so much that you can just sit down and, once you wrap your head around the borrow checker, you've got it, and it clicks. It's more that there's a whole lot of things that all fall under the umbrella of the borrow checker, but there are various scenarios.

And I remember one time, it took me, I'm embarrassed to say, like, two months or something, where this part of the compiler was blocked, and I couldn't figure out how to do the thing I wanted to do. The borrow checker gave me an error, and said, "You can't do this." And I was like, why not? I know this is possible. If this were in, like, C or something, I would just be like, "Here, take this thing and put it over there. Put it on this thread." And it was like, "No, you can't do that." And I was like, "Well, why not? Why can't I do that?" And I eventually realized, wait a minute, do I just need to use...it was IterMut versus Iter. The difference is, Iter is, like, I wanna iterate through these things, and IterMut is, I wanna iterate with the possibility of mutating them. But it didn't occur to me to use IterMut, because I didn't wanna mutate them at all. The problem was I needed to use IterMut to prove to the borrow checker that I had permission to mutate it, which meant that it was safe to put it on a thread. So, in this case, mutable was sort of a stand-in for "is uniquely owned by this particular instance." And I switched it from Iter to IterMut, and this thing that I had been stuck on for, like, two months, it was like, "Okay," right? And…

Recommended talk: Rust in Action • Tim McNamara & Richard Feldman • GOTO 2023

James Lewis: I would love to have been in the room at the time, it was like, oh my God.

Richard Feldman: But I bring this up as an analogy: even though I already knew the mental model, that mutable means "is uniquely owned, and therefore has permission to do certain things," I hadn't put two and two together with the implications of that: that if I want to put these things on threads, I need to use IterMut, even though I'm not gonna mutate them. So, it's just a lot of stuff like that.
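As an illustration of the scenario Richard describes, here is a minimal sketch (not Roc compiler code) of handing each element of a collection to its own thread. The `&mut i32` references that `iter_mut()` yields are exclusive borrows, which is what convinces the borrow checker that no two threads can share an element. Unlike Richard's case, this sketch actually mutates, just to have an observable result:

```rust
use std::thread;

// Double every element, giving each element to its own scoped thread.
// The `&mut i32` from `iter_mut()` is an exclusive borrow, so the
// borrow checker knows no two threads can touch the same element.
fn double_all(values: &mut [i32]) {
    thread::scope(|s| {
        for v in values.iter_mut() {
            // Each thread gets unique access to exactly one element.
            s.spawn(move || *v *= 2);
        }
    });
}

fn main() {
    let mut values = vec![1, 2, 3];
    double_all(&mut values);
    assert_eq!(values, vec![2, 4, 6]);
}
```

Here the `&mut` borrow is doing exactly the job Richard mentions: "mutable" acting as a stand-in for "uniquely owned," which is what makes per-thread access safe.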

James Lewis: It's almost like you're being more restrictive than you need to be, in some senses, right? But because the mental model is, okay, this is a restrictive memory model, so I wanna be overly restrictive...

Richard Feldman: And I think, in this case, it was more of a language terminology thing, in the sense that, if instead of calling it IterMut, they called it, you know, IterUnique... I'm not saying that they should rename it. It's more just, like, if they called it that, I think I would have more quickly realized, like, "Oh, yeah. To hand these things out to the threads, they have to be unique, because the whole point is I don't want them to be shared across the threads." That's, like, another aspect of Rust that makes it tricky: part of what the borrow checker does has to do with lifetimes, like, when things are alive in memory and when they can be reclaimed.

It's also about mutation access, like, whether this thing can or cannot mutate that. And also multi-threading: which things have permission to mutate things, which has to do with preventing data races, in addition to memory safety. So, there's just a lot of different things that all kind of come together, and when you put it all together, you get a big learning curve.

Motivation for Roc and Tagline Inspiration

James Lewis: You've spent a lot of time building this compiler. But the aim of it, presumably, is to compile this new language. So, maybe you can talk a bit about Roc, and what makes it unique...

Richard Feldman: Yes.

James Lewis: ...and why? Why do you decide to write a new language?

Richard Feldman: For Roc, the tagline is "Fast, friendly, functional." And I was just talking to Dave Thomas, and he mentioned that he knows someone who made another language. I think it was, was it K maybe?

James Lewis: Yes.

Richard Feldman: ...the tagline was "Fast, fun, functional," which I did not know existed, but it's very, very close to what I independently came up with. But the basic idea is, I really wanted a language that felt like Elm in terms of the ergonomics and the overall user experience, but which, instead of being focused on browser-based UIs, which is sort of Elm's bread and butter, is for, not just, like, one other domain, but sort of, like, the long tail of domains. So, I'm not just thinking about, like, servers and command-line apps, although those are the two things that people are most interested in it for. Or desktop GUI applications, which I'm also interested in. But also things like...

James Lewis: If you can replace Electron, the world will be a happier place...

Richard Feldman: Well, that's a very big challenge, right? It's not an easy thing. There's a reason Electron's so popular. But definitely, I've always run into these little cases, where it would be, and Vim script is gonna be the one that comes first to mind. I wanna write a Vim plugin. I don't wanna learn Vim script. I don't wanna use Vim script. I've heard, you know, it doesn't have a good reputation as a language. But what I wanna use is I wanna have, like, an Elm-like experience, this really pleasant experience I've had with Elm. But Elm, being a focused language, is not ever gonna get into that. There's never gonna be an Elm for Vim script. So I wanted to make a language that was capable of being used in lots of different domains, while still feeling like it was, to some extent, domain-focused, like how Elm is.

Roc's Architecture, Design Principles & Innovation

Richard Feldman: So, without getting too much into how we achieve that, there's this basic, high-level concept of platforms and applications. So, what we mean by that: an application is basically just, like, you know, my project. I'm building a thing. A platform is something like a framework, in the sense that it's sort of the foundation that you build on. You never have more than one platform. You always have one. But unlike most languages, in Roc, you have to pick a platform. There's no such thing as, like, a platformless Roc application, or, like, a, you know, frameworkless one, if you will.

And the reason for that is that platforms, although they kind of feel like frameworks, they're scoped differently. So, a framework, typically, like, let's use Rails for example, Rails will be in charge of things like database access, and how do you do routing, and, like, request handling and stuff like that. In Roc, sure, that would be true too, but also, it's gonna be in charge of all of your low-level IO primitives. So, it's gonna say, here are all the things you can do, in terms of HTTP and, you know, database access and this and that. And for a web server, maybe you have, like, the full range, but you probably don't have, like, reading from standard in on a web server. Does that make sense?

James Lewis: Yes, it does.

Richard Feldman: Maybe you leave that one out. Now, a better example, though, is let's say that you wanna make a platform for, like, a database extension. When you're writing a Postgres extension, do you even wanna, like, have network access? Do you wanna have arbitrary file system access? Does that make sense? So, the way most languages do this is the standard library has all these really low-level IO primitives, and then there's certain use cases where it's like, eh, don't do that. Don't write to that. But a problem this creates in the ecosystem, for these, sort of, long tail of use cases, is that you use a library, and that library is like, "Oh, I can just, like, create a temp dir, and put stuff in there, right?" And it's like, I don't know if I want you doing that on my database server, you know? And so, the idea is that, by basically making it so that you have to pick a platform, and the platform says which primitives are available, the ecosystem will sort of naturally design itself to be accommodating to that, and to be aware of that, and to be like, "Oh, if I choose to, you know, use a temp dir or whatever, that's gonna restrict which platforms I can potentially run on. If I read from standard in, that's gonna restrict which platforms I can run on."

Another thing is that the platforms, because they're in charge of the IO primitives, can implement certain, like, sandboxing features. So, one example of something that I hope someone builds in Roc, because they now can, and which I would love to use, is a sort of sandboxed script runner. For example, this is something that Deno has at the language level, but in Roc, anyone can implement it in user space. Which is basically, like, you know, if I download a script from the internet, and I run it, I know it might mess up my machine. Like, it might give me a virus, it might write to places on my disk that I didn't want it to write to. But because in Roc you have this platform-application split, if I have a platform that's like, "I'm a command-line runner, but I'm a sandboxed command-line runner," and because I'm in charge of every single one of the IO primitives, I can say, "Yeah, look. I give you access to all the IO primitives, but guess what? If you try to write to this part of the file system, or you try to read from there, I'm gonna prompt the user, and there's nothing you can do about it." So it's now as safe as a web browser, in terms of, you know...

James Lewis: That's very interesting. 

Richard Feldman: But at the command line. And I would love to have that, because I run stuff that I download from the internet all the time, and I'm either doing it in a VM, right?

James Lewis: You heard it here first, folks. You shouldn't run stuff you download from...

Richard Feldman: Yeah, well... And we all do, right? And I would love to have something where I just had this confidence that I don't need to audit the whole thing. I just need to look at what platform you're using. Okay, it's the sandboxed one. Great. Done.

James Lewis: I think this is a really interesting idea, because, I mean, I've only sort of come across this maybe a couple of times before, but it seems like people aren't talking about it now. But five years ago, there were lots of people talking about unikernels, for a different reason. That was about security, and about, you know, the attack surface area, essentially. Can we limit the amount of stuff we're gonna compile into our OS so that it's not available? You can't even use any of it. It's just not there. And I think I had a line at one point that Docker is 30% of the way to unikernels. You know what I mean? That was five years ago I was talking about it, but it seems like, in some ways, a similar idea, coming at it from a different perspective.

Richard Feldman: It's definitely about, I mean, I would say the thing that you have in common there is the idea of security through just, like, absolutely not making things available in the first place, rather than having them be available and trying to make sure you played Whac-a-Mole and locked everything down, right? Just saying, like, it's not even there by default, and we are only gonna opt into giving you access to the minimal set of things necessary to do whatever you need to do.

James Lewis: Cool. And what sort of language is it? Is it a purely functional language? You said it's functional? 

Richard Feldman: It's functional, and I would say, like Elm, there's a very heavy focus on usability and user friendliness and stuff like that. There's different sort of schools of thought of, like, functional programming languages. So, I would say that, like, Haskell is very focused on, like, mathematics, or at least, like, it culturally feels that way. Maybe different people would disagree with that, but... And I would say, like, Clojure is a very, like, you know, it's all about Lisp, and, like, macros and, like, this particular set of primitives that are not necessarily required for functional programming, but, like, fit together in interesting ways with functional programming. And, like, Elm and Roc are very much, like, typed, purely functional, very focused on, like, having a small set of simple language primitives that work well together, and then nice compiler error messages and ergonomics and stuff like that.

I would say we're, on the tooling side, we're drawing a lot of inspiration from Go, where we're like, we have the test runner built in, we have the formatter built in. We wanna make it so, you know, you download the Roc binary, and then you can just go. You don't need to, you know, pick a bunch of things off the shelf, you know, to get things that... Everybody agrees you should have a testing system, but you don't need to go pick one off the shelf. It's like, it's there. It's right there, built in.

James Lewis: And have you taken the same tooling decisions as Go? I mean, have you taken the same decisions around things like testing with...or is it Rust? With Rust, you test inline. You have the tests in the same file.

Richard Feldman: You can do that, yeah. So, we do have inline tests. The keyword is called "expect," so you can just, like, write your function, and right below it, next line, expect whatever, and then you're done.

James Lewis: Ah, super cool.

Richard Feldman: Actually, I guess a nice example of ergonomics. This is always something I've liked. Power Assert is the one that comes to mind that I've used, and also, back in the day, I did a little bit of development with Groovy, and they had that built into their test runner, and I always thought it was cool. When you run your tests in Roc, you can just write normal booleans. Like, you don't need to do, like, assert this or that. You just say, like, you know, expect x == 5, and that's it. And what it'll do, if that test fails, is it'll show, first of all, it'll print out the source code of the actual test that you wrote, and then also, any named variables that you had, it'll just tell you what their values were. So you don't have to go back and be like, "Oh, wait. What was this and that?" Just trying to give you...and we've also talked about maybe expanding that a little bit to tell you, like, what's on either side of the equals, or if you had, like, a less-than, you know, show you those things, because you might wanna know. Just try to give you the info that you want anyway, and don't make you go back and, like, debug log the test.

James Lewis: Yes, cool.

Richard Feldman: That's the first thing you usually do anyway, right? So might as well save you the trouble.

James Lewis: I was always of the opinion... I don't write as much code as I used to anymore, has to be said, but I was always of the opinion if you use the debugger, you're failing somehow, but it was a...I come from a very, sort of, purist TDD kind of background, if you like, so…

Recommended talk: Learning Test-Driven Development • Saleem Siddiqui & Dave Farley • GOTO 2022

Richard Feldman: Well, but regressions still happen.

James Lewis: Yes, of course.

Richard Feldman: Yes.

James Lewis: So, is it out? Is Roc out now?

Richard Feldman: I would say it's pre-release. So, we don't have a numbered version yet. You can download a nightly release. We're in the process of making a real website right now. Depending on when you watch this, maybe it'll be out. But right now, it, like, as of this exact moment, there's kind of a placeholder website, that sort of describes the language, but it's very bare-bones. But now we've gotten to the point where it's useful for things. So, before this point, like, last year, I would say, like, "Well, you can try it out and play around with it, but it's not, you know, really that useful," but now it is useful. I would say it's useful, but very immature and early, and there are bugs and stuff like that, but you can, like, build stuff with it for real now. And now that we're at that point, we're like, "Okay, now we need a real website, and," you know, so it's ready to be used by early adopters who aren't afraid to sort of roll up their sleeves with a new technology. But, like, I have a lot of fondness for my time at the beginning of Elm, because, on the one hand, when you have a small set of people using the technology, yes, there's sharp edges and bugs and stuff, and the ecosystem's not there yet. But on the other hand, you know, I used to work with Bill Venners, who made ScalaTest, and I remember thinking, "How could you have made something that's used by so many people?" and I asked him about that, and he's like, "Oh, that's very easy. Back then, there was no testing thing, so I made one." And that's how it is in the early stages of a language. Somebody's gotta be the first person to write whatever X is for that particular, you know, use case.

James Lewis: My career goes back through, you know, before, my programming career goes back through before Java, essentially, and that sort of completely changed my life, right? So, when Java came out, and the internet, essentially, well, the World Wide Web, and Java really sort of changed pretty much the way, I think, many programmers went about their job. But the interesting thing with that, and especially in Thoughtworks, is everything was a first. You know, everything you were doing was a first, in a lot of ways. The kind of, the testing frameworks were a first. The continuous integration servers were a first. The, you know, acceptance testing frameworks, like Selenium and these, they were a first. All these sort of things were, the innovations that were happening were because people were facing, were hitting these issues, and then kind of trying to come up with a way of solving a problem that they were experiencing on a day-to-day basis. I do sort of wonder now, are we still seeing that, or are all these sort of solved problems now? It's just when we have a new thing, new language, say, like Roc, we need to create the test runner for them, you know, and there's someone who's gonna be the first person to do that. There's someone who's gonna be the first person to do X, rather than it being...

Or, another example would be things like machine learning, you know, applying engineering discipline to machine learning. So, you know, there was a period, not so long ago, where the idea that you might version control your model was, like, a crazy idea. Why would you think about doing... But that's now a kind of normal thing, so things are repeatable and so on. Is this a case of, sort of, we're applying, I guess, a set of tested and known patterns to the new things? Is that a kind of…

Parsing Strategies and Experience in Roc vs Other Languages

Richard Feldman: I'd say it's a mix. So, an example that comes to mind is, in Roc, we have an approach to serialization and deserialization that is, as far as I know, unique; I don't know of any other language that does it this way. So, two different ways that this is, like, commonly done today... There's, like, the JavaScript way, the Ruby way, where you get some JSON in, and you just say, like, JSON.parse(), and it's like, cool. Now you have a JavaScript object. And of course, the downside of this is, you know, you get partway through your program...

James Lewis: Cool, now you have a JavaScript way

Richard Feldman: What if the JSON doesn't match what you thought it was gonna match? You're gonna find out about that eventually, but it might be pretty distant from where that original problem happened. So, that's one way of doing things. Another way of doing things is, I'm thinking of Rust, but, I mean, I know in Java, you can do it the same way, where you have a schema up front, and you say... So, this would be, like, Jackson in Java. So, you say, "Here is exactly what I expect it to look like, and, you know, come and parse the JSON, and if it doesn't match that, fail right away, right there."

So, that, in terms of, you know, how easy it is to debug later, I would say that's easier to debug later. But a downside of that is that you do need to actually write out the whole schema, and, you know, sort of keep it in sync with your program, and so forth. So, something we've introduced in Roc, that as far as I know is novel, is that we kind of have both. So, you write the equivalent of, like, JSON.parse(), and you don't have to write a schema, but what it does is it uses type inference to infer the type that you're parsing into, based on how it's used in the rest of the program. And so it actually will decode it right there at the call site, and if it doesn't match how you're going to be using it throughout the rest of the program, it fails right away.

James Lewis: That's super interesting.

Richard Feldman: Yeah. Now, what's interesting about that is that that's not specific to JSON. It's something that's just, like, we call it, you know, "decoding" is the general term for it. So, in order to make it work for, let's say, JSON, somebody needs to write a particular, like, JSON-aware parser, that works with this framework, so that it can, you know, translate between JSON and Roc values. So, on the one hand, you could look at that and say, "Well, this is just somebody needs to write a JSON parser for Roc." But on the other hand, structurally, it's different from how it's done in other languages. It's not like you're just translating it into a normal JavaScript object.

James Lewis: Is there a TypeScript library called io-ts or something like that?

Richard Feldman: I've heard of this, yeah. I believe that that works like it works in Java, and in Elm and Rust, where you do make a schema, and, you know, somehow you define, in code, like, you write some code that, you know, does this. I assume, I don't know for sure, but I assume that you either write it by hand or you run some code that generates it or something like that. But as far as I know, in TypeScript, it's either you do that, or else you'd just say JSON.parse(), and, you know, that part's just not type-checked.

James Lewis: Yes, right.

Richard Feldman: But, yeah. But the point being, like, you know, if you're writing this, it's like, you're doing it in a different way than has been done before. But on the other hand, it is still just, you know, for JSON, for XML, for CSV, whatever.

James Lewis: It's good. We're talking about functional programming languages, and we've finally got to the point where something's a bit monad-like. Which is good, right? Because that is interesting, right? That's why I found it interesting about TypeScript: you're parsing stuff over the wire, and you've got this lovely type safety within the environment you're working in, which is the front end. But, as you say, you could be sent garbage, and you've essentially got no way of knowing until you try to parse it, decode it, whatever. So, I kind of like the idea that actually there's maybe an attempt to solve some of those problems, where you're actually being type safe across the entire, I guess, back end, front end, etc. And across the wire. And one thing I do... I did a lot of integration, a lot of XML parsing in my day. And, you know, we used to use XML. We used to... What was it called? XPath, that was the thing.

Richard Feldman: Oh, yes. I remember that.

James Lewis: Where, rather than do the, kind of, like, take the schema, basically have a client that's generated from the schema, and you kind of, you know, when you receive a message, you turn that into the object, and if it doesn't match the schema, you blow up. You'd say, instead of that, you'd use XPath to just pick out, and Schematron, actually, was the thing, you pick out just the bits from the message that you wanted, and therefore, you would know if...you were insulated from changes to the schema, if you like. So, you know, if someone changed the schema, you weren't just suddenly gonna blow up. Because this is the main problem, right? I mean, how do you avoid that issue, of, essentially just falling over in a heap if the thing that turns up isn't what you were expecting? So, if it doesn't conform to...If you can't decode it, right? Do you just blow up, and just, like, sorry, we're done?

Richard Feldman: Well, the default is, I mean, it's not, like, throwing an exception, it's just, like, you get back a value that says either it succeeded, and here's your answer, or it failed, and then here's, you know, the error that it failed with, such as, like, you know, this field is missing or something like that. So, recovery is sort of up to you as the application author. It's not, you know... I don't think there's a one-size-fits-all way to recover from data being missing.
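The "success-or-failure value" shape Richard describes can be sketched like this. It's a hypothetical Rust toy, with made-up `decode_name` and `DecodeError` names (Roc's actual decoders return an analogous result value rather than throwing): the error is just data, and the recovery policy belongs to the caller.

```rust
#[derive(Debug, PartialEq)]
enum DecodeError {
    MissingField(&'static str),
}

// A toy decoder: extracts a "name" field from a drastically
// simplified input format.
fn decode_name(input: &str) -> Result<String, DecodeError> {
    input
        .strip_prefix("name=")
        .map(|s| s.to_string())
        .ok_or(DecodeError::MissingField("name"))
}

fn main() {
    // Success: the caller gets the decoded value.
    assert_eq!(decode_name("name=Ada"), Ok("Ada".to_string()));

    // Failure: the caller gets a descriptive error value right away,
    // at the decode site, and chooses the policy -- here, a default.
    let name = decode_name("age=37").unwrap_or_else(|_| "anonymous".to_string());
    assert_eq!(name, "anonymous");
}
```

Because the failure is an ordinary value, "blow up," "fall back to a default," and "try an alternative" are all one-line decisions made by the application, not baked into the decoder.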

James Lewis: Which is the compile-time versus runtime checking of these things, right? So, that's what we used to do. We used to do it at build time. So, we'll generate a library based off a schema, and then that library's gonna be quite fragile in the face of changes elsewhere, if you like, and you'd have to recompile your application if someone's schema changed somewhere, which, like, sucks.

Richard Feldman: Now, having said that, if you want to write something that is more flexible at runtime, like, you can say, well, it's okay, if this field is missing, I wanna default to this or that. You can do that, but then at that point, you need to, at least in Roc's case, you would need to sort of, I'm gonna use the term eject, you know, like, translate the automatic thing that's happening into, like, an actual, like, written-out schema, like a decoder that you can then customize. So, this is how we do it in Elm, is, like, it's always done that way, which makes it very easy to customize. Another nice thing about that is, if you have it all written out, that it means that if you wanna change your variable names or something like that, you can do that without worrying that you're accidentally causing a regression in the decoding, which, you know, hopefully, a test catches, but it might not.

But then again, there's another trade-off there, which is that when you have it all written out, it becomes a little bit more brittle to internal changes. Like, so if I need to, like, you know, add a field somewhere that happens to be in a data structure that's used quite often throughout this thing, I have to go through and change it in a bunch of different places. And so, certain things, like being synchronized, either can be a source of bugs or can be a source of convenience, and it's just an innate trade-off. But yeah, if you do sort of eject the decoder, and have it all written out, then you can be a lot more flexible in terms of, if the runtime value is this or that, or this is missing but that's not, or, you know, I can say, well, I'll accept any of these three names here, and I'll just internally convert them to the same thing. So, a lot more flexibility if you go that route.

Exciting Features in Roc and Performance Optimization

James Lewis: I feel like we've gone quite deep into some random part of the language, which is, like, parsing responses. But let's maybe chunk it up a bit. So, what are you excited about in terms of features?

Richard Feldman: For Roc?

James Lewis: For Roc, yeah.

Richard Feldman: Great question. I mean, that is, to be fair, one of the things I'm excited about. So, in general, like, it's 100% type inference, so you can, you know, you don't need to write any type annotations if you don't want to. I mentioned that, like, you know, it's fast, friendly, functional. So, in terms of fast, the thing that I'm excited about, there are two parts to that, one is really fast compile time. So, we've spent a lot of time doing that. We still have a number of projects to go, but one of the things that, I mean, you mentioned, like, TDD earlier, one of my hypotheses for why there's a really strong testing culture in Ruby, like, for example, and I think in Python also, more so than I've seen in, like, type-checked languages, I think part of the reason for that is that you get a really fast feedback loop when you have a dynamic language for two reasons. One is that there's no compile step. So, we wanna just make our compiler so fast that you don't notice it. But the other part of that is that, from a workflows perspective, if I am writing a test in Ruby, or let's say I've got a bunch of tests, and I'm refactoring something, all my tests go red, because, you know, I've changed this thing. Okay, fine. Well, I can go and fix them one at a time. I can go, like, change my implementation, fix whatever, and then they go green one at a time. Now, in a type-checked language, the norm, today, is that I make my changes and I get a bunch of type errors, and all of my tests are not runnable anymore, until I fix every single one of the type errors. So, the whole, like, make the tests green one at a time by fixing implementation details, that workflow is inaccessible until you've fixed every single one of the type errors.

But quite often, I don't wanna do that. I wanna go through and, like, you know, change the behavior one at a time, and make sure that the new behavior actually passes all the tests. And then maybe there's still some leftover type errors because I changed the interface, but those are just going through and updating, you know, callers to do the new thing. In isolation, I still just wanna do this thing. Or, let's say I'm trying something out because I think the new implementation will have better performance, or I'm trying something out and I just wanna see how it feels to use it. Again, I don't wanna have to go and fix every single implication of that. So, this gets me to another thing that I'm excited about, which is that we've designed the compiler, it doesn't 100% work this way yet, but we've at least designed it, and, you know, will get to a point where it does work this way, where the compiler always type checks your code, and always tells you about problems, but they don't block you. So you can still run it, even if it's got type errors or naming errors or whatever. So, the idea is that, much like a dynamic language, you still have those workflows available.

So, I wanna get that same experience. This is always something I missed when going from dynamic to statically-typed languages, is that workflow of, like, I can always run my tests, no matter what's going on. And I can see which ones fail, and, you know, if they have a type mismatch, fine, that's a failure. Failed test. But only if that affects that test. If the type mismatch is some distant part of the codebase, I don't wanna see that. Don't block me. Just let me run these tests, and I'll come back to that later. So, that requires sort of building the whole compiler with that in mind. And when I say it's not ready yet, it's because there's stuff that has bugs that we need to fix, but really, I'm really excited to use that, like, when I get to a bigger Roc code base.

James Lewis: Does sound like a really interesting feature, a cool feature, yeah.

Richard Feldman: Because I like both. I like having the workflow where, you know, tell me about the type errors up front. And I also like the workflow where, you know, sometimes I just wanna run the thing and see what the answers are.

James Lewis: It's become quite common, and certainly I quite like using, you know, basically monitoring the file system for changes, and running your tests every time there's a file system change, which kind of blows that completely out of the water, right? If suddenly a type error is gonna stop everything...

Richard Feldman: Right. It's like, "Oh, they all fail."

James Lewis: Yeah. Everything's gone, right? 

Richard Feldman: It's zero successes, 100% failure. 

James Lewis: No, that's actually really cool. 

Richard Feldman: So, one more that, this is in the design stage, but it's, again, something that we've designed the language around, and the platforms and applications, and, also, the fast runtime performance is a big part of this. But something that I really want to exist in the world, and we're gonna make it happen, is... So, package ecosystems, I think, after, like, garbage collection, have been one of the biggest, like, levers for making programming a lot easier, and making people more productive. And when you get a package, like, I install a new package, I always get the code, and then I get the documentation. And then sometimes, occasionally, there might be, if it's, like, a really popular, widely-used package, I might get, separately from all that, some editor tooling for my particular editor. So, you get like a... I think, like, the React community has done some cool stuff with this. So, like, I remember, like, the Redux dev tools. I know Redux is, like...it's falling out of favor, but I do remember, like, oh, people built tooling for that, but it didn't run in people's editors. It ran in the browser. And I think that was, like, kind of a hint of, like, hey, our packages and stuff, we could be a lot more productive with them if we had tooling for them. But, in a lot of cases, people don't integrate them into editors because it's like, well, what, I'm gonna write...you like VS Code, and this person likes IntelliJ, and this person likes Emacs, and this person likes Vim. I'm not gonna write, you know, 10 different implementations of this.

And so, what ends up happening is that you get zero implementations. People just don't bother doing it at all. So, what we wanna do is we wanna solve this at, like, the language level. And my specific, concrete goal is to make it as easy to write editor tooling as it is to write a function in Roc. Like, you can just write a function and press enter, go down a line, and write, like, a piece of editor tooling right there, and that gets distributed with packages. So, it's just part of the language, and when someone implements, like, the VS Code, you know, Roc extension, part of what they do is they implement a way to handle these things because we have sort of a...it has to be kind of a simple vocabulary for this. I realize that, of course, we want these to be accessible, and if you're thinking with accessibility in mind, you already have to have the language for describing these tools be pretty general, so that it can either be rendered on the screen or rendered for a screen reader or something like that. And at that point, you can sort of adapt that to whatever primitives, like, Vim has different primitives than VS Code, which has different primitives than IntelliJ.

But if you're describing the functionality that you want at a sufficiently high level, the hope is that, you know, we talked earlier about ejecting a decoder, right? I would love for the JSON package that I installed to just add, like, something to my context menu where I can just say, hey, see this, like, you know, type inference-based JSON decoder? I wanna just right-click on that, and say "Extract explicit decoder." And here it is, right? And then, that works in Vim and it works in Emacs and it works in this. And it's like, nobody needed to write a separate plugin for each of those. It's just that when the author of that JSON package shipped it, they included that little bit of functionality, and through the Roc extension, everybody gets it, and if they wanna do the customized version, they can do that. It's trivial. That's exactly the type of thing that I think could make the Roc ecosystem do unprecedented things, where everybody can not only ship the code but also this tooling that's like a force multiplier for everybody else. And then everybody's a multiplier for everybody else. That can be a compounding effect that I think would be powerful. And that's one of the things I'm most excited about with the language. I could go on, but...

James Lewis: No, no, no, that's very, very cool. It reminds me of the...I think there's something called the principle of least surprise, right? The reason I love certain tooling over other tooling is because it just lets me...I can almost guess how to achieve a particular thing. "Oh, I need to extract a method. I wonder if I can do that... Oh, cool, it works," you know? And it's sort of like, tooling that's designed that way, I think, is incredibly powerful, because, as you say, it acts as a force multiplier. So, the idea of building that into the language tooling itself, I think that's super interesting. Yeah. Maybe we should chunk up again. What else are you excited about that's going on?

Richard Feldman: In Roc, or elsewhere?

James Lewis: Elsewhere. Just in general. Maybe one or two things.

Richard Feldman: I'm very excited right now about learning more about performance optimization, on a personal level. So, this was something where, like, when I was in, as you would say, university... we in America always say college for some reason, even though it says university in the name. Whatever.

James Lewis: That's what...bizarrely, I was actually in a college, but it was a university.

Richard Feldman: I don't even know what the formal difference is, to be honest. We use them kind of interchangeably. But when I learned about performance optimization there, it was a very heavy focus on asymptotic complexity: like, you know, as N, the number of elements, gets bigger, what does the behavior look like? And the stated reason for doing all that was, well, this is knowledge that translates across hardware, because different CPUs have different optimizations and yada, yada, and that was sort of hand-waved away. And the more that I've gotten into it, because like I said, I'm trying to make Roc's compiler fast, the more I've learned that, okay, if you wanna make it run fast on particular hardware, like, you know, the modern Apple laptops, or Intel servers or whatever, there's a particular set of techniques that you use that do require knowing about the hardware. And as I've more and more come to learn, the stuff we learned about asymptotic complexity is just the tip of the iceberg. If you really wanna get stuff going fast, it's now learning about, like, CPU memory caches, and TLBs, and, you know, virtual memory, and paging, and SIMD, and...
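The "tip of the iceberg" point can be sketched with a tiny example: two loops with identical Big-O but different memory-access patterns. This is an illustrative sketch (the grid and its size are made up), and pure Python only hints at an effect that is dramatic in compiled languages, where the sequential row-major walk streams through the CPU cache while the strided column-major walk misses far more often.

```python
# Two traversals of the same N x N grid, both O(N^2): identical asymptotic
# complexity, very different memory-access patterns. N and the grid contents
# here are invented for the example.

N = 256
grid = [[row * N + col for col in range(N)] for row in range(N)]

def sum_row_major(g):
    # Visits elements in the order they sit in memory: g[0][0], g[0][1], ...
    # In a compiled language this stays in the cache line it just loaded.
    total = 0
    for row in g:
        for value in row:
            total += value
    return total

def sum_col_major(g):
    # Same arithmetic, same Big-O, but jumps to a different row on every
    # access, striding N elements at a time through memory.
    total = 0
    for col in range(N):
        for row in range(N):
            total += g[row][col]
    return total

assert sum_row_major(grid) == sum_col_major(grid)
```

In C, Rust, or Zig, the two loops over a large enough grid can differ substantially in wall-clock time despite doing exactly the same work, which is the kind of gap a purely asymptotic analysis never sees.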

There's this great talk by Andrew Kelley, who made Zig, from a couple of years ago at Handmade Seattle. I forget what year; it was, like, 2021 maybe, something like that. Maybe it was pre-pandemic. Might've been 2019. But he talks about how he made Zig's compiler a lot faster. And he's not talking about any of that asymptotic complexity stuff. He's talking about, like, you know, here's the memory management techniques, the strategies that we used, and data-oriented design, and structure of arrays, and "We're trying to avoid CPU cache misses. That's the name of the game," and all these things. It's this whole world, and I didn't realize how superficial my understanding of it was. And it's been really exciting to get into it and be like, "Wow, I can make things so much faster than I realized."
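As a rough illustration of the structure-of-arrays idea from data-oriented design (the record type, field names, and data here are invented for the example): the same records can be laid out as one object per record, or as one flat array per field. A pass that reads only one field then touches only that field's array, which in compiled languages means contiguous, cache-friendly memory and an easy target for SIMD.

```python
# Array-of-structs vs struct-of-arrays: the same particle data laid out two
# ways. The Particle type and its fields are purely illustrative.

from dataclasses import dataclass

@dataclass
class Particle:  # array-of-structs: one object per particle
    x: float
    y: float
    mass: float

aos = [Particle(x=float(i), y=0.0, mass=1.0) for i in range(1000)]

# struct-of-arrays: one flat, contiguous array per field
soa = {
    "x": [float(i) for i in range(1000)],
    "y": [0.0] * 1000,
    "mass": [1.0] * 1000,
}

def total_x_aos(particles):
    # Pulls in the whole Particle (x, y, mass) just to read x; in compiled
    # code the unused fields still occupy space in every cache line loaded.
    return sum(p.x for p in particles)

def total_x_soa(fields):
    # Touches only the x array: dense, sequential, SIMD-friendly in
    # compiled languages.
    return sum(fields["x"])

assert total_x_aos(aos) == total_x_soa(soa)
```

The Python version only shows the shape of the transformation; the cache-miss and vectorization wins Kelley describes come from applying this layout in a language like Zig, Rust, or C.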

James Lewis: Are you familiar with Martin Thompson's work on this? He's been talking about the idea of mechanical sympathy for many years now.

Recommended talk: Mythbusting Modern Hardware to Gain 'Mechanical Sympathy' • Martin Thompson • GOTO 2012

Richard Feldman: No. I've heard that term, but...

James Lewis: Mechanical sympathy, which is this idea of being sympathetic to the hardware, right. But what I find fascinating is that there are often so many levels of abstraction between the code you're writing and the hardware it's running on these days that that sort of thinking has, I wouldn't say fallen out of fashion, but I don't think people think enough about it, certainly. But of course, where you wanna think about it is in the compiler, right?

Richard Feldman: Yes.

James Lewis: You don't wanna be second-guessing that.

Richard Feldman: I mean, the way I think about it is, something that's been pretty consistent in my career for the last, like, I don't know, 10, 20 years has been, like, trying to work backwards from the user experience I want. And for some applications, the performance is not a big concern there. It's just like, well, you're gonna be bottlenecked on the database, and the database is gonna be about as fast as the indices you set up for it, and that's kind of it. I guess if you wanted to...I mean, now I know enough that I'm like, okay, if you really wanted to, you could roll your own database alternative, that's highly optimized for your specific use case. But in a lot of cases, it's like, yeah, but people don't care about that performance difference. They're like, you know, they're gonna be waiting for the network anyway, so you're never gonna get sub-millisecond, you know...

James Lewis: And it probably wouldn't pass a Jepsen test anyway, so, yeah...

Richard Feldman: Yes. No chance. Your hand-rolled database probably wouldn't. But there's a lot of use cases I can think of where I really do want... performance is part of UX, and it's a big part. And a compiler is absolutely one of them. I mean, I like Rust in a lot of ways. I'm very frustrated by the compile times, a lot of the time. Especially the caching. Like, part of it is how fast the compiler is, but also, which work is it doing? And on some level I almost wish they didn't tell me this, but sometimes I'll be rebuilding my project, and I just made one little change, and it's like, "Hey, I'm recompiling your, like, you know, JSON crate." And I'm like, "What? I didn't do anything. Why are you rebuilding that? Nothing has changed." And I'm sure someone can explain to me why it needed to do that. But now that I have more of an appreciation for "Oh, it doesn't have to be this slow, I don't have to be sitting here waiting for this," it's all the more frustrating when the tools I use, whether they're compilers or otherwise, are slow and I know they don't have to be.

James Lewis: I think there's another, quite serious reason, actually, these days, to think about mechanical sympathy. User experience is a fantastic example, but I think the amount of energy we're using is a super, super important consideration as well. And if we can be more sympathetic to the hardware we're running on, then potentially we need less of it, right? And that can only be good, I think, given the state of the world at the moment.

Richard Feldman: I'm curious about how people measure that, because one of the Roc server projects that's in progress right now, we got a research grant to do it. We probably don't have time to go into the reasons why it's novel and interesting, but it does, like, interesting memory management behind the scenes. Basically, it never has garbage collection pauses or things like that. And there was a researcher, not directly related to the project, who became interested in it because of the question: could this mean that you have servers that use less energy? Because, you know, a garbage collector, in addition to slowing down your UX, also requires energy to run. And that then led to the question, how do we measure what the difference in energy is? And I guess that researcher knows more than I do about that. But it's something I'd never even thought about: yeah, put a number on it. What number do you put on it? How do you measure that number? I have no idea.

James Lewis: It's certainly something we've been looking at in Thoughtworks, because there are some interesting design decisions around things like, even, how often you build, right? So, we're used to, you know, "Hey, we'll commit, and build, and deploy a thousand times a day." Well, that's actually pushing your code through quite a few potential stages of a pipeline. That's quite a lot of CPU cycles every time you do that. So, there's probably gonna be a U-curve optimization, where you're looking at the optimum number of deploys per day: when you go over that, you're using more energy than you need to for the amount of value you're getting from the software, and all of that. Anyway. I think, probably, Richard Feldman, we should call it a day there. We've covered an awful lot. So, thank you so much for coming along and chatting with us today. It's been brilliant.

Richard Feldman: Thanks for having me.

James Lewis: This is James Lewis and Richard Feldman, saying goodbye from GOTO Unscripted at GOTO Copenhagen. Thanks very much.