
The AI Engineer's Guide to Surviving the EU AI Act: Navigating the EU Regulatory Requirements

Larysa Visengeriyeva • Barbara Lampl | Gotopia Bookclub Episode • November 2025


Hot take from Larysa Visengeriyeva & Barbara Lampl: everyone fears AI regulation, but the truth is most teams can't even document, version, or govern their models. Forget the hype; fix the fundamentals first.


Transcript

The Godmother of MLOps

Barbara Lampl: Hello and welcome. It's go-to book club time. My name is Barbara, but that's really not the important part, because I'm here with Larysa. She wrote a great book. I'm really excited to tell you all about it while asking her great questions. So before we start, Larysa, tell us: Who are you? What are you doing? Where are you? Give us a rundown, please.

Larysa Visengeriyeva: This is the most difficult question. Let me try. I'm basically a software engineer with multiple years of engineering career. I have a PhD in data engineering. Currently I'm working on the intersection of military AI and defense tech. And right now I'm sitting in Ukraine.

Barbara Lampl: That's why we needed to close the window.

Larysa Visengeriyeva: Exactly. We had to close the window because the air raid alerts are loud enough to interfere with this glorious podcast.

Barbara Lampl: That brings us directly into the book you wrote. It's kind of like from Ukraine war to the AI engineer's guide to surviving the AI Act. I want to start with my first and probably one of the only criticisms I have. I need to present this correctly: Larysa is and will always be my godmother of MLOps.

Larysa Visengeriyeva: Oh my God, you said this again. I still remember when you introduced me on stage last year at the Women in Data AI festival in Berlin. I was like, oh my God, now I have to fill that role. Thank you.

Barbara Lampl: You fill that role really easily. That's why I gave you that name. As someone who has been developing and implementing strategies for many years now, my big pet peeve is always: where's my ops team? Where is it? I need it. I started over 25 years ago, when there wasn't even data versioning and all of that. I know how important ops is. That's why I gave you the title.

Barbara Lampl: My second criticism is that the title of the book is off. I'm really not happy with it. To our audience, if you think "surviving the EU AI Act" isn't your problem—that's exactly why I think the title is off. Cut off the EU AI Act part. Even if we go into it today and understand it from a hopefully different perspective than classic discussion, it's the AI engineer's guide to surviving the next 5 to 10 years, because we have a lot of work ahead of us. Larysa's book will give you the one thing you need to get through that.

But before I criticize more, I want to get into the backstory. How did we end up with this book?

Larysa Visengeriyeva: Barbara, I'm really grateful for this criticism. I don't see it as criticism—I see it as a compliment. These statements can be made only by a person who truly read this book. Thank you for reading it.

Back to your question: how we ended up writing this book. Before I wrote this book, I created ML-Ops.org, which became the Bible of MLOps. I'm the mother of MLOps. What I can do is write the Bible of MLOps.

In the meantime, there were already enough books about machine learning engineering—great books from people who really knew what they were talking about. A lot of books from Google engineers who wrote about reliable machine learning. For example, the book about machine learning design patterns written by Google folks—excellent books. I didn't have that urgency to write another book.

One day I woke up and went into my LinkedIn feed. I was quite shocked. Why was everyone screaming about the EU AI Act? It was a huge issue, quite emotional. I thought, okay, what is the EU AI Act?

Discovering the Engineering Problem Behind the Regulation

Larysa Visengeriyeva: After looking it up on the internet, it sounded like legal stuff. Boring. Whatever. Then I asked for a summary of the EU AI Act. I looked into this high-level summary and realized one thing: if you think about compliance with the EU AI Act, you can think about it as roughly ten steps you have to perform. Most are legal-related stuff, but the last two points are remarkable.

The remarkable part about compliance with the EU AI Act is that it's an engineering problem. The act mandates that if you want to deploy your AI system and put it on the market, you have to ensure data quality. You have to guarantee data and AI governance. You have to guarantee trustworthiness and ethical fairness.

Then I'm reading: you have to provide documentation. And I was like, oh my God. Finally, the law forces people to write technical documentation for AI systems. At this point, I realized why people were freaking out about the EU AI Act. From what I saw with former clients and projects, documentation of processes, AI systems, data governance, data quality management, and metadata management is literally absent.

For anyone who isn't aware, the EU AI Act gives AI products a CE mark, just like every electronic product has one. You can think of AI products like electronic products: they have to be safe and you have to provide quality. That's it.

How do you provide quality in AI products? We start with data, then we go into model engineering, then into the operational part and post-deployment monitoring. Basically, we're talking about MLOps—machine learning operations in a nutshell.
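For readers who want to picture that lifecycle as something executable, here is a minimal sketch, with entirely hypothetical stage names and gates, of a pipeline where every stage has to pass a quality check before the next one runs:

```python
# A minimal sketch (not from the book) of the lifecycle described above:
# data -> model engineering -> operations -> post-deployment monitoring,
# each gated by an explicit quality check. All names are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    run: Callable[[dict], dict]           # transforms a shared pipeline context
    quality_gate: Callable[[dict], bool]  # must pass before the next stage starts

def run_pipeline(stages: list[Stage], context: dict) -> dict:
    for stage in stages:
        context = stage.run(context)
        if not stage.quality_gate(context):
            raise RuntimeError(f"Quality gate failed after stage: {stage.name}")
    return context

# Example wiring with trivial placeholder stages.
stages = [
    Stage("data_engineering", lambda c: {**c, "rows": 10_000}, lambda c: c["rows"] > 0),
    Stage("model_training", lambda c: {**c, "auc": 0.88}, lambda c: c["auc"] >= 0.8),
    Stage("monitoring_setup", lambda c: {**c, "alerts": True}, lambda c: c["alerts"]),
]
print(run_pipeline(stages, {}))
```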

Barbara Lampl: That's the important part. If you're non-IT or haven't looked at the EU AI Act, it's a risk-based approach, not a technical approach. That's throwing people off, because you have to classify by risk, and if you're above a certain risk level, it's not deployable except in certain situations. There's no guidance like "you're good if you just use this algorithm." It simply leaves the architecture open for discussion.

I feel you totally. Every time it comes up, it's like a forum of legal discussion. You're sitting there like, yeah, but legal is literally the last step. If we don't have anything else, how should legal handle it? I love my lawyers. The main lawyer who works for us even has his PhD in cybersecurity, but still, his take is: I'm a lawyer, I'm not an engineer. Even if you have technically advanced lawyers, they always say legal is the last step.

With every regulation, it's totally backwards: the legal part comes in first and then we're supposed to figure the rest out. But where is everything else? That's the big takeaway: kill the EU AI Act part in the title. Today, you need all of that. You need MLOps, you need documentation. Let's be honest: because of missing documentation, I'm flying around the world to fix models. Guess what? No documentation, and my business model benefits from that.

Larysa Visengeriyeva: Don't tell anyone about that! Otherwise my readers will make you rethink your business model!

Barbara Lampl: Oh no, I think there's enough room for the rest of us. That's the important takeaway for me. While reading it—funnily enough, I read it on the plane to San Francisco flying to meet with clients—nothing related to their work, but going through the book, it's like: yes, that's the correct way. That makes sense. That makes sense.

That's why I want to engage with you to say: hey, take out the EU AI Act part. It's a great book to really guide you through. When we talk about not just prototyping—and I think that's another big takeaway—this book helps you take your prototypes, your MVPs, to full production and scale.

Let's be honest: if you look at the current discussion, everyone is prototyping and nearly none gets to production and scale, right? Because guess what? It's an operational engineering question to get probabilistic systems, which are the baseline operating systems, to scale. That's the hardest part. It's not building the prototype. It's getting to full product scale. Without a guideline for how to do that, you will fall flat on your face.

The Power of Checklists and Structure

Barbara Lampl: After reiterating and hopefully getting everyone in on why you should buy this book, there's another reason. I'm not a mathematician by training—I studied finance and psychology—but I'm a mathematician at heart. I die for a good structure and a good clipboard with a checklist. Larysa must have read my mind. I stopped counting, but how many tables and checklists are roughly in your book? I'm loving every one of them. First, how did we end up here? And do you have an idea how many are in the book?

Larysa Visengeriyeva: Now I understand—the reason I wrote this book is because of this interview with you, Barbara!

Answering your second question: No, I don't know.

Barbara Lampl: Someone please read the book and comment below, and we'll get you coffee with us or something.

Larysa Visengeriyeva: When we prepared for this interview after our discussion yesterday, I was thinking about this book on the meta level. Basically, this book is about tackling complexity. How can we tackle something really huge, complex, and incomprehensible by taking parts of the whole? How can we eat the elephant?

Barbara Lampl: The elephant, the salami, whatever food metaphor works.

Larysa Visengeriyeva: Exactly, slice by slice. How do you understand something really complex? How do you use something really complex? By understanding the parts of the complex process. Surprise, surprise: now I understand why processes are so often absent in the industry. They're complex.

But luckily we have enough frameworks and methodologies already there to apply. To get an overview of something complex like a machine learning project or data engineering project, one of them—my favorite part—is the Machine Learning Canvas, developed by Louis Dorard. I was partially involved in the development process of the next version of the canvas. I exchanged ideas with Louis.

I think this is the easiest, most collaborative way to design machine learning projects and understand them. The application of this canvas works both ways. You can design a machine learning system from scratch, or you can take an existing system and try to understand what would otherwise be a huge, scary black box for people entering the project. You analyze it by asking questions about one concrete piece at a time.
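As a concrete illustration of the kind of questions the canvas asks, here is a rough sketch that captures a canvas as a structured record; the field names paraphrase common versions of Louis Dorard's canvas and are an assumption, not the official template:

```python
from dataclasses import dataclass

# Hypothetical, simplified rendering of a Machine Learning Canvas as a record.
# Field names paraphrase common canvas sections; they are not the official template.
@dataclass
class MLCanvas:
    value_proposition: str      # what business problem the system solves, for whom
    prediction_task: str        # what the model predicts (input -> output)
    data_sources: list[str]     # where training and serving data come from
    features: list[str]         # signals derived from the raw data
    offline_evaluation: str     # metrics and validation strategy before deployment
    live_monitoring: str        # how predictions and impact are tracked in production
    decisions: str              # how predictions turn into actions

churn_canvas = MLCanvas(
    value_proposition="Reduce churn by flagging at-risk subscribers",
    prediction_task="Binary classification: will a customer cancel within 30 days?",
    data_sources=["billing history", "support tickets", "usage logs"],
    features=["days since last login", "ticket count in last 90 days"],
    offline_evaluation="Precision/recall on a time-split holdout",
    live_monitoring="Weekly precision on observed cancellations, drift alerts",
    decisions="Top-decile risk scores trigger a retention offer",
)
print(churn_canvas)
```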

Larysa Visengeriyeva: When we talk about machine learning and AI systems, the system has distinct parts. There's data engineering, but really we start with understanding the problem.

Barbara Lampl: Understanding the problem is exactly the problem.

Larysa Visengeriyeva: What business problem are we actually solving? Why are we saying machine learning or AI? What's the business problem? What are we trying to solve, optimize, improve, make more efficient?

Then we go into the next part: data engineering. No data, no AI.

Barbara Lampl: And to repeat it: there is no AI without data, datasets, and data quality. We should repeat that at least three times a day, please.

Larysa Visengeriyeva: For someone who did a PhD in data engineering and data quality management and developed error detection algorithms, you can imagine I know what I'm talking about. Still, it's always a surprise for people that data quality is essential for model quality. This is basically what the EU AI Act requires: AI products of good quality.

That should automatically translate to: good AI products need good quality data. What does that mean? Everything involved: formats, normalization, data error detection, error correction, whatever. It's a huge part. It will take longer. It will be more painful. It will take a lot of effort, but it's essential to guarantee data quality.
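To make that less abstract, here is a tiny sketch of the kind of automated data checks being described (missing values, duplicates); the thresholds and columns are made up for illustration, and a real project would likely use a dedicated validation tool:

```python
import pandas as pd

# Minimal sketch of basic data quality checks: missing values and duplicated rows.
# The threshold and example columns are illustrative assumptions, not recommendations.
def check_data_quality(df: pd.DataFrame, max_missing: float = 0.05) -> list[str]:
    issues = []
    missing = df.isna().mean()
    for column, fraction in missing.items():
        if fraction > max_missing:
            issues.append(f"{column}: {fraction:.1%} missing values")
    duplicates = int(df.duplicated().sum())
    if duplicates:
        issues.append(f"{duplicates} duplicated rows")
    return issues

df = pd.DataFrame({"age": [34, None, 29, 29], "plan": ["pro", "basic", "pro", "pro"]})
print(check_data_quality(df))  # e.g. ['age: 25.0% missing values', '1 duplicated rows']
```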

Plus we have the whole data governance part, which is an engineering problem but also an organizational and team issue. The next part is model training. Then model inference.

You see all these parts involved in machine learning and AI projects. It's complex because machine learning projects, products, or models—let's keep it this way—machine learning models are always part of the whole software engineering system. They don't exist alone. They're always parts of a system within a system. This increases the complexity.

To decrease complexity and understand, we need frameworks. This is why I use these canvases. This is why for processes I use CRISP-ML.

Barbara Lampl: Cross-Industry Standard Process for Data Mining, now adapted for machine learning.

Larysa Visengeriyeva: Right. It was the follow-up to the data mining process, and those data mining frameworks are still being used today. We had to adapt it to the quality requirements, and it was like, wow, this is a perfect fit for fulfilling the requirements.

With this book, I took everything that was on the market—all the frameworks—and integrated them into the requirements of the EU AI Act.

Translating Legal Requirements into Engineering Practice

Larysa Visengeriyeva: One intellectual exercise I did within the book: what you read in all the articles is about legal requirements, and indeed, these are high-level legal requirements. The intellectual exercise in the book was to translate these legal requirements into engineering requirements.

Luckily, we have everything there. There's already academic research about quality attributes and translating legal requirements into quality attributes of AI systems. We automatically get the engineering practices for implementing AI systems.
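As a rough illustration of that translation exercise (an editorial paraphrase, not the book's exact mapping), a few of the act's high-level requirements for high-risk systems line up with familiar quality attributes and engineering practices like this:

```python
# Rough illustration of translating high-level legal requirements into quality
# attributes and engineering practices. The pairings are an editorial paraphrase,
# not the book's exact mapping or legal advice.
REQUIREMENT_TO_PRACTICE = {
    "data and data governance": {
        "quality_attributes": ["data quality", "traceability"],
        "practices": ["data validation in pipelines", "dataset versioning", "data lineage tracking"],
    },
    "technical documentation": {
        "quality_attributes": ["maintainability", "auditability"],
        "practices": ["model cards", "experiment and metadata tracking", "automated doc generation"],
    },
    "accuracy, robustness and cybersecurity": {
        "quality_attributes": ["reliability", "robustness"],
        "practices": ["offline evaluation suites", "drift and performance monitoring", "adversarial testing"],
    },
}

for requirement, mapping in REQUIREMENT_TO_PRACTICE.items():
    print(requirement, "->", ", ".join(mapping["practices"]))
```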

Barbara Lampl: I want to reiterate that point. That's what gets lost in translation on many levels in any discussion I'm having. It's like: hey, whether we have a regulation called the EU AI Act or not, can we please all agree that we want to deliver good quality products—any machine learning or AI products? Then we need AI governance.

Then suddenly you're in a very weird discussion. I studied psychology, so yes, transformation of the whole organization is a topic. But we're suddenly in that weird space between governance and legal frameworks. I keep saying: that's engineering. It's a mathematical and an engineering problem. If we don't deliver on that, you can make up the biggest framework, but we cannot measure it.

That's not mentioned at the beginning. I think many people, especially in corporations that are now suddenly entering the age of AI, are coming to understand: this is a math and engineering problem.

That's why your book has two target groups in my perception. One is non-technical leaders. Non-technical leaders of any department need to read the book to understand how to talk to the teams. The other way around: teams need to read the book to have proper understanding of what stakeholders want.

You won't find code in this book. I think some engineers will be thrown off by that. But guys, Larysa and I don't know which stack you're working with, so that makes sense. Most of us probably live in the hype world, but that's not the reality of what your architecture looks like, what your code stack looks like, or how much technical and documentation depth you have.

Throwing code around isn't the answer. The answer is proper frameworks, and you will find them properly laid out for you. The book is the big missing part in my perception of translating—I always tell people data science is a team sport. Team sport means everyone, non-technical and technical stakeholders, needs to have an understanding of what we're doing here. But the big thing is we're talking in such different languages.

Larysa Visengeriyeva: Exactly.

Barbara Lampl: To keep closing that gap, we need proper language and proper checklists and frameworks. You'll find the three big frameworks: the ML Canvas, CRISP-DM, and SMART. These are probably the best. To spoil it a bit: CRISP-DM's origins go back to 1996.

Larysa Visengeriyeva: But it's battle-tested.

Barbara Lampl: It's battle-tested in the best way possible. It's gone through transformation to where we are today. That, I think, is the essence of the book. Read it because it will give you the language and the translation.

Larysa Visengeriyeva: You are the best advocate for this book!

In general, it's the approach of how to think about machine learning projects. It doesn't matter if you have to comply with anything. One message I tried to give my readers is: act proactively. Don't leave it as an afterthought. Take this framework, take CRISP-DM. Understand each phase, understand what you need, and just write it down. Or adopt and expand this checklist for you.

What you see is just the base. Take these and adjust for your project. This is what I wanted to give: a tool, a framework for people who want to tackle complex projects.

Barbara Lampl: Complex projects. This is the current state—we see it every day. We throw tools and techniques and models at problems, but a single tool never solves a holistic problem.

Larysa Visengeriyeva: Exactly.

Barbara Lampl: That's how it works. Especially in a world where data science always needs a problem-data-model match that we have to work out. That means we need definitions for the problem, for the data, and for the model. The missing piece in many of these is just quality.

That's why I'm telling everyone: ignore the EU AI Act in the title because it's about how to survive. I mean literally surviving. Part of my job is fixing stuff, which means someone developed something, the team moved for whatever reason—we're not that many people in this whole world—and then they're gone. There's no documentation, no track, no log, nothing. You have to figure it out, and reverse engineering doesn't even help anymore because of too many runs, too many retrainings.

You sit there and have to tell your clients: we have to build from the ground again. The missing link is always quality understanding. The missing thing is always documentation understanding. As much as I'm ambivalent about certain parts of the EU AI Act, I think that's one thing where organizations can really make a difference: be proactive with this stuff.

Larysa's book is a good guiding companion for everyone in the organization. That's why I keep coming back to the non-technical audience. It can help you. We need good, not-necessarily-deep-dive technical leaders, because you have the business understanding. Engineers do funny stuff sometimes, and I can tell stories about what happens when engineers build models and what they think of how good the marketing and sales departments of their companies are. They have a very high perception of what they do.

Larysa Visengeriyeva: Things are so interesting with people.

Barbara Lampl: That's exactly the missing part. Sometimes it's a mismatch right at the beginning. If you think about quality in every step, and thinking about quality in every step means compliance by design, your legal and general counsel will be happy. I think that's one of the things your book really brings out: we need to think of it as an engineering problem, not a legal problem.

Larysa Visengeriyeva: Exactly. Still, I'm not a lawyer. This book is not legal advice. But honestly, it's an engineering problem. You have to solve an engineering problem before you solve the legal problem.

I have to say that I really enjoyed writing this book. Everyone was like, "Oh my God, you're writing a book, my condolences." But this is the most rewarding part. I had a really great experience with O'Reilly. By the way, my technical reviewer also came from the Women in Data festival we mentioned at the beginning. It was a Wonder Woman project.

Barbara Lampl: Women wrote the book!

Larysa Visengeriyeva: Exactly. I met you because of the festival. It was a female project. The book was also partially written in Ukraine.

Barbara Lampl: Two different kinds of battle-tested.

Larysa Visengeriyeva: You nailed it!

I enjoyed the research I did regarding documentation. The most boring part about every engineering or software engineering project is documentation. Everyone hates it.

Barbara Lampl: Nobody does it.

Larysa Visengeriyeva: Right. Then I realized: wow, this documentation task is also an engineering problem. All the requirements you'll read in the EU AI Act regarding documentation—what should the documentation include?

When I was researching all this, I realized: oh, it's basically metadata. Metadata about your data pipelines. You have to implement data versioning, code versioning, model versioning. Pipelines, all the hyperparameters, when it was run, what data was used, what data quality, all the metrics.

There was my "wow" moment: if you ensure that you implement a solid metadata management system, you can automate producing proper documentation for your system. Voila. Surprise, surprise.

If you go in this disciplined way, you basically can't do anything wrong. It's not about the EU AI Act—it gives you all the requirements. But in a nutshell, at the core, it's about surviving, about making AI projects sustainable.

Barbara Lampl: I think that's a good point to wrap up our great interview. Thank you so much for your time. I would highly recommend buying the book as a hard copy because you will read it fast, but it will be your companion probably for the next year. You'll go through it with Post-its. Buy the hard copy and carry it around with you.

Thank you so much for your time, not only today but also for writing a book that I think is really a survival guide for the next years, especially for scaling everything everyone is currently building. I think there are some cool ideas for how you can optimize agentic systems and work with these concepts in the future. We're happy to see how that will play out in the real world.

Larysa Visengeriyeva: Thank you so much, Barbara. Thank you for being my interviewer, for being with me, and for reading this book.

Barbara Lampl: You're welcome. It was a great read on the plane to San Francisco.

Larysa Visengeriyeva: Thank you.

About the speakers

Barbara Lampl (interviewer)

Behavioral Mathematician