Holistic Engineering: Organic Problem Solving
Your codebase is a crime scene of decisions nobody made. Vanessa Formicola explains to Andrew Harmel-Law how career frameworks, reorgs, and human psychology infiltrate your architecture, and what to do about it.
About the experts

Andrew Harmel-Law (interviewer)
Tech Principal at Thoughtworks

Vanessa Formicola (expert)
Principal architect and engineer
Read further
Principal Software Engineer Vanessa Formicola introduces the concept of "holistic engineering": the practice of making technical decisions while factoring in the non-technical forces that inevitably impact code and architecture. Through patterns like the "solo coding event," "everything but the kitchen sink," and "space abuse," she demonstrates how human behaviors, organizational dynamics, career frameworks, and company processes leave traces in codebases that no one intentionally designed. Formicola argues that technical leaders must look beyond pure technical excellence to understand the organic system they're working within, advocating for honest conversations about constraints and realistic architectural decisions that account for human factors. The discussion with Andrew Harmel-Law extends to the emerging need for ethical engineering as our technical decisions increasingly impact society at unprecedented speed, with practical advice for teams to analyze their own systems through the lens of tech, people, and product dimensions.
What is Holistic Engineering?
Andrew Harmel-Law: Welcome to another recording of the GOTO podcast. I'm Andrew Harmel-Law, my pronouns are they/them, and I am a tech principal at Thoughtworks. I worked with the person I'm going to chat with today when we were both at Thoughtworks. I'll let them introduce themselves. Vanessa Formicola, do you want to say who you are?
Vanessa Formicola: Thank you Andrew. Welcome everybody, and thank you for having me here. I'm a principal software engineer, my pronouns are she/her. I've been a software engineer for quite some time. In my time as a software engineer, I've actually worn many different hats with different names and different responsibilities, so it's hard to pinpoint one. I would say I'm close to the technology, but also very managerial—like engineering manager—and also as a consultant when I help companies and clients. I usually contribute to communities, whether it's diversity, inclusion or knowledge sharing.
Andrew Harmel-Law: The thing I want to talk about is related to your experiences in all those different jobs. The thing we were doing most recently together was you were doing a talk about something you were calling holistic engineering. Do you want to quickly describe what you mean by that and maybe explain how it brings a lot of different things together that you've seen across your experience?
Vanessa Formicola: I refer to holistic engineering as the practice of making technical strategy or technical architecture, your technical decisions, while also factoring in the non-technical factors of your organic problem space. When you look at your problem space as one organic system, one unit, then you will see that whether you factor it or not, those decisions, those issues will have an impact on your architecture and even on your code.
You might as well actually embrace the fact that there are other forces at play in your technical decisions that are not technical, and find ways to mitigate their influence, or even just simply factor them in. This comes from years of having to solve problems and finding that looking only at the technical angle wasn't necessarily enough to solve the bigger problem. Whether it was patterns of behavior I would see over and over, or an effort going in the wrong direction, simply making the by-the-book technical decisions wasn't enough.
Andrew Harmel-Law: What you're saying is that even when everyone thought they were practicing architecture and doing engineering, the end result, the code that you ended up with, had the trace of all these other forces that were at play?
Vanessa Formicola: Exactly. An example of this is when you go into your code base and you see decisions that have no foundations in technical good practices, and you speak with engineers and architects and they tell you a story of how things developed in a certain direction that has nothing to do with what we read in books or what your engineers would talk about in interviews as good practices.
What you notice is over a period of time, people will make decisions pushed by forces that have nothing to do with technical choices. One classic example that I see over and over is when you go to a new company, you review the code base, and you see that for very trivial features, your developers have actually fit in every single design pattern, every single feature of a library, all the possible libraries that are fashionable at that point in time, so they have a PR to showcase that they know all those things.
There are millions of these patterns, some technical and some not, but this one usually shows a trait of human behavior that has nothing to do with technical best practices—like needing or wanting to showcase breadth of skills instead of favoring simplicity, or over-engineering when under-engineering would have served better.
There are many behaviors that you will see in your code that are actually the history of decisions made by people. Big sweep changes like a reorg, like a difference in your career framework—things that are completely unrelated to what you would study in your architecture books actually impact things.
Imagine that you're doing a reorg and you're not taking into consideration the alignment between your domain, your teams, and your architecture. You end up with an alignment of these three parts which is not the ideal one. What you will see very shortly after are problems at the intersections that have not been laid out correctly.
This is in a good case when people would actually be thinking about these things. Usually what happens is that you will look at the history of all the architecture and see at the point in time the consequences of maybe a change of leadership that was more or less toxic, maybe a misunderstanding, maybe a difference in funding.
You will see things that actually change the course of your architecture. Off the top of my head, I can think of a client I worked with as a consultant that had three layers of strangulation patterns over a period of 20 years, and you could clearly see the consequences of never terminating that work, and the consequences of doing it at all. None of that was a by-the-book choice. None of that was the mistake or the choice of an architect or an engineer. It was the result of many different choices made at different levels, which had a direct impact on how things were taken forward over a period of time and how they were supported.
Patterns in the Wild
Andrew Harmel-Law: One of the things I think is fun about your approach—you've got some names for some of these patterns. Do you want to share some of the names? Because patterns are cool, but their names are the things that people think about, so they're quite descriptive.
Vanessa Formicola: The one I was mentioning before about these code bases that are overengineered, especially when showcasing, I call the "solo coding event," when your developers feel the need to showcase all of their skill in every single PR, in every single piece of a feature. You end up with this code base which is unnecessarily complex, not for good reasons. Usually this is a trait that is associated with some not well thought through endorsements from leadership or incentive frameworks that have nothing to do with technology.
Another one I call "everything but the kitchen sink"—for those not familiar, it's an idiom in English-speaking countries that refers to including everything imaginable in a group of things, so that you're missing only the kitchen sink. That often happens when companies have something like a library—many companies call it common utilities—that is used by all the services in the company. People from different teams will all add little features to it. What happens over a surprisingly short period of time is that this library becomes absolutely unmanageable: nobody knows what's there, there's duplication because people keep adding small things, and there are a number of blast-radius issues.
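As a hedged illustration of how that duplication accumulates (every name here is invented, not taken from any real codebase), this is what a "common utilities" module tends to look like after a few teams have each contributed their own little helper:

```python
from datetime import date

# Hypothetical "common utilities" module after several teams have each
# added their own small helper. The duplication below is the point:
# nobody knows what is already there, so near-identical functions pile up.

def format_date(d):        # added by team A
    return d.strftime("%Y-%m-%d")

def date_to_string(d):     # added by team B, unaware format_date exists
    return f"{d.year:04d}-{d.month:02d}-{d.day:02d}"

def to_iso_date(d):        # added by team C for "their" use case
    return d.isoformat()

# All three produce identical output, yet each may have unknown callers
# across the company, so nobody dares delete any of them.
sample = date(2024, 1, 2)
assert format_date(sample) == date_to_string(sample) == to_iso_date(sample) == "2024-01-02"
```

Each individual addition looked harmless in review; the unmanageability only shows up in aggregate, when removing any one helper has an unknown blast radius.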
The multi-layer strangulation that I mentioned before is like a constellation of patterns. There are patterns about actually understanding your domain, or having a crisis of conscience where you don't understand the domain. There are many different layers to looking at this problem: it could be from a product perspective, it could be from a people-management perspective. But at the end of the day, you will find some things that happen over and over.
Another very common one is "space abuse," where you see tools used over and over for behaviors they were not designed for. A classic example has been relational databases used for everything—even as a queue. I've seen this in multiple companies, by the way, just because there was a bad relationship between operations and developers, or simply because they decided not to invest in infrastructure at the time. There are historical reasons and patterns of human behavior that have nothing to do with the technology chosen or with the domain of the company, and they will reappear—clearly decisions made at a high level that have an impact at a low level, at your code level.
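As a minimal sketch of what that looks like in practice (table and column names are invented for illustration), here is a relational table pressed into service as a work queue: consumers must poll and claim rows themselves, rebuilding by hand what a purpose-built broker provides.

```python
import sqlite3

# A sketch of the "space abuse" pattern: a relational table used as a
# work queue. Schema and names are invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT, claimed INTEGER DEFAULT 0)"
)

def enqueue(payload):
    conn.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))
    conn.commit()

def dequeue():
    # Consumers poll and claim rows by hand. Under concurrency this
    # needs row locking, and delivery guarantees, acks, and retries
    # that a message broker would provide all have to be rebuilt here.
    row = conn.execute(
        "SELECT id, payload FROM jobs WHERE claimed = 0 ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    conn.execute("UPDATE jobs SET claimed = 1 WHERE id = ?", (row[0],))
    conn.commit()
    return row[1]

enqueue("send-welcome-email")
enqueue("rebuild-report")
```

It works, which is exactly why the pattern spreads: each individual step is defensible, and the cost only shows up later as polling load, lock contention, and retry logic scattered across services.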
The Domain Library Problem
Vanessa Formicola: One of the most challenging to solve is the one related to domain. When you find that you have something like a domain library—a library shared by multiple services in your company which defines the domain of your problem space—it's very unlikely, unless you have a very small problem space, that one domain can represent the entire domain of your company. You usually would have subdomains—that's the language I'm using for it, to level out common understanding.
What happens is that you have a library which represents this domain—somebody using one domain for the entire company—and this is enforced by the use of that library, because all the services depend on it. First of all, you have a very dangerous blast-radius problem: if something goes wrong, if there's a mistake, or if you have to change something, your domain change potentially impacts all the services in your company.
But most importantly, the most insidious problem is that everybody will think that there is one domain, and they will just add all the information they need from their perspective instead of modeling the domain of their own problem space. Even if you have more enlightened or more willing people who would design the subdomain correctly, align the architecture to the subdomain, potentially even align the teams—and there are many well-meaning people in management who would like to do that—they have a hard constraint: the use of a library that is imposed and that holds most of the functionality. People will not be able to move away from it.
There are many ways of getting this wrong, and there are many ways of picking up on what I call "phenomena"—phenomena that you see in nature, in your system. You can notice things like people using different names for the same thing and wonder why, and what the difference is. It's very unlikely that your code will represent your domain better than the way people talk about it: if people are not discussing the domain correctly, the code won't capture it correctly either. There are many ways for you to pick up on whether there is alignment on your domain or not.
This is at the heart, and this is a very insidious problem. The domain library I mentioned before—potentially there's no way around it. When you spread your domain in the entire architecture, it will require a significant amount of investment for you to break this.
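To make the shape of the problem concrete, here is a hedged sketch (every name is invented for the example): a single shared model that every service imports, contrasted with a service owning only the slice of the concept that its own context needs.

```python
from dataclasses import dataclass

# Hypothetical shared domain library: one Customer model imported by
# billing, shipping, support, and everything else. A field added for
# one service's needs leaks into every other service's model, and any
# change to this class has company-wide blast radius.
@dataclass
class Customer:
    id: int
    name: str
    vat_number: str = ""  # added for billing, now shipped everywhere

# What sub-domain modeling looks like instead: the shipping service
# owns only the slice of "customer" that its problem space needs.
@dataclass
class ShippingRecipient:
    customer_id: int
    delivery_name: str

recipient = ShippingRecipient(customer_id=42, delivery_name="A. Lovelace")
```

The second model can change freely without touching billing or support, which is exactly the decoupling the shared library quietly forbids.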
Learning from Modern Thinking: The Gap Between Theory and Practice
Vanessa Formicola: Now we are lucky that there are so many interesting thinkers who have written books that help us understand things. Even with the goodwill that we have and the things that we learn—we learn from Accelerate a lot about the performance of a company, and that we want decoupled architectures. From domain-driven practices we learn that we want to align our software with our domain. From Team Topologies we learn that we want to align all three of them together for flow—people, architecture, and domain.
Even if you have all of this in mind, if you have hard-coded dependencies in your code and, as an engineer, you don't have the authority or power to redesign your entire company and architecture, then even with the best developers, best architects, best people in the world working for you, they will not be able to unblock this.
When we talk about holistic engineering, we look at whether we are actually designing our system in the best way that we can. Are we giving people the right tools to make the decisions that we need? Because if we make those decisions together—I'm not saying that everybody should know everything about everything in their company—but if we can make those decisions together or at least factor in decisions to the level that we have authority, we would be able to make technical decisions that are more resilient over time and that are not impacted by forces that are not the ones that I usually design in my architecture diagram.
Taking a Broader View: The Challenge of Organizational Forces
Vanessa Formicola: I think this is the big leap of faith for many people. The big leap of faith is: should we actually concern ourselves with problems which are external to our technical practice? Should we take responsibility outside our role? This is very controversial because most companies will teach you that there is a career framework and we should be within that. That's good practice and that's what's actually enforced. I say this as somebody who has been an engineer and manager—that's the practice that is enforced. You want your people to align to the hard work that was done to design your career framework.
The problem is that the career framework is a static tool, and the nuances of individuals and the organic nature of the company as a system don't match the static nature of the couple of sentences that describe each role.
If we can't look at our roles as decision makers, as engineers, from multiple perspectives, or at the very least factor in the perspectives of our peers, we won't be able to make technical decisions that hold up in practice. How many times do we go from "this is what we've designed" to something at the end of the journey that looks nothing like it—not because of the decisions we made, but because there were forces that pulled your developers in a different direction?
We should look at what we can do to actually accept that there are forces that we don't control, mitigate what we can, change what we can, and just accept what we can't. But at least build realistic plans and realistic architectures that reflect what's possible within our companies.
Recognizing the Slippery Slope
Andrew Harmel-Law: That's really interesting. It reminds me of the book by Adam Tornhill called "Your Code as a Crime Scene." What's interesting about what you're saying is that it's very easy to slip into lots of these situations by accident, without intentionally doing it. Nobody is trying to create these things that cause everybody problems, but that's where everybody ends up because of these external factors. But what was interesting about what you just said was that noticing they cause problems isn't difficult either; it usually takes a lot more power or organizational leverage to get out of them—to do a big piece of refactoring or to do a strangulation and complete it. Do you think focusing more on the slippery slope that gets you into these messes is something that would help people? If you're more aware of getting into them in the first place because of these surrounding forces you're talking about, it might make it easier for people to avoid them, as opposed to just falling into them by accident.
Vanessa Formicola: This is an interesting question, because I think there are multiple ways to approach the problem. I don't think this is the kind of problem that could be fully eradicated, but I think it could be significantly mitigated. I also think most companies would start with something that they already have. Not many people start from scratch—an entire company, an entire project, the entire architecture of their company.
I think what people might want to do, first of all, is to actually accept the concept that these kinds of things could happen. To accept that there are other factors in addition to technical ones affecting decisions. I am looking specifically to technical leaders, architects, people who think—myself included many times in my life—that you make a design and you talk about it and people will actually implement it. You talk about that, but there are actually other forces that will influence how people behave that have not been factored there.
Usually people don't factor these in because they are unpalatable conversations. They're about human choices or dysfunctions or even characteristics—they don't all have to be negative—of other departments or simply the combination between a good choice in one context that doesn't apply to another one.
Strategies for Recognition and Mitigation
These are very difficult conversations that often we don't want to have as technologists. We don't want to go and say, "Your career framework is actually impacting the design choices of my developers." Lots of people will have lots of interesting things to say behind my back. But the actual truth, in my experience at least, has been that the way people will make decisions is influenced by a number of forces that are well beyond what I can do in my architecture diagrams. Understanding where they are in life, in their personal life, what's their relationship with the company, understanding how the company is treating them, understanding even what the career progression is or what the relationship between people in the team is—just touching on certain elements is going to make a difference.
Going back to your question: what can we do about it? We can look at identifying the dysfunctions that we currently have, trace back to see what force brought us there, and see if that force is still active—can I mitigate it from there? This is for when you already have something; it's a way of reverse engineering. You have a service that is behaving in a certain way, and maybe nobody made that decision at any point in time. Maybe you can go through your git history and see whether there was a decision behind it, and when you see that nobody decided it but it's there anyway, it is part of your architecture. Then you can try to figure out how that happened and reverse engineer it.
But then if you are in the lucky position of starting a new technical strategy or even a new project in an existing context, what you want to do is to actually examine more than your code. You have to examine the players, examine your stakeholders, examine all the other forces of your system that will have an impact.
For example—and this was especially when I was a consultant—I would always make sure that I understood when the review cycle happened. I would try to understand the dynamics in the product org, potential pivots, potential shifts in product—the things that might impact, positively or negatively, but in any case, the delivery of my project.
These are the kinds of things that architects, technical leaders, and developers in their teams can try to understand, to some extent, within the scope of what they can do. Understanding things like the direction of your product, how your product fits in the market, and the relationships with stakeholders will give you an idea of which parts of the architecture could be more flexible and which need to be more static, where you can invest more, and where you can invest less.
There's a direct connection between the choices you make in terms of resilience or performance and so many other technical choices that depend on the behavior of individuals in areas that are not yours.
One of the most important things—the advice I just gave is more about practices—but the biggest shift, in my opinion, is accepting that you have to look beyond your role, and that you have to try to convince others beyond your role that this problem needs to be solved as a group, organically, looking at the various aspects together.
When you present technical architecture, you're not just presenting the best technology for the problem on a piece of paper, or on any digital diagram you might have. You're looking at the problem for that company at that point in time, with those people, with those skill sets, in that position in the market. You have to include that information, because for sure—and I look forward to being surprised for once if that's not the case—in my experience, you will find that what you can afford to build is very different from the best-practices architecture that you want to present in your discipline.
You will find that the skill sets in the company are not what you think they are. The alignment between your architecture and domain is not where you think it is. Your code quality is not what you think it is. If you don't have a deep understanding of all of these elements, you won't be able to design something that is realistic for you. You will design something aspirational, and you won't necessarily hit your goals or build even something that resembles it, because instead of starting from something feasible that your company could do, you started from an aspiration, because everybody thinks that's where your company is.
That's one of the hardest parts—having the tough conversations in which you have the intellectual honesty, and help others have the intellectual honesty, to understand where you are in that journey, what the dysfunctions are, what the characteristics are. Again, none of this has to be negative, but if it impacts people in a way that you didn't intend, then you need to adjust for it.
That is the biggest leap. The biggest leap of faith is to actually be willing to say the truth about the situation, be able to demonstrate this truth, communicate it, bring people on the journey to show what the actual situation is, and then convince all your stakeholders what is realistic to build compared to what you would like to build, and then ideally find an agreement that will take you from something that you can build today to what you would like to have. And have all the conversations about why you can't have what you want today because of all the dysfunctions that you have.
This is an opportunity for you to have the conversations that decide where the money will go. Is it to improve something that today is stopping you from having the ideal scenario? Unfortunately, I've seen too many companies that don't have a pulse on what their maturity actually is, sometimes even just with the intent of motivating people. But often what happens is that people invest in things they can't afford—in terms of skill set, point in the market, funding, and many different variables—and they find themselves worse off than if they had invested in actually growing that maturity.
Having the tough conversations and being realistic about it—this can't be done by one person in the room; you have to have allies. You have to have people who will collaborate with you. But that is the key thing. Accept that you will have to have hard conversations, accept that you will have to factor in behaviors that are not necessarily super professional but are more human than technical. Consider how you want to deal with those and have open conversations about it, if what you're looking for is reliability of project delivery over aspirational "technical excellence."
The Connection to Architecture Decision Records
Andrew Harmel-Law: That's really interesting because it makes me think of a few things that are all tied together. So the first thing is, there's a quote from Alberto Brandolini, which was one of the three things that prompted me to write my book. I'm going to get the quote wrong, but it's something like the thing that gets shipped to production is the developer's assumption about what everyone has designed, as opposed to what might have been designed.
Like you say, it's the reality—the reality of what ends up in production is not necessarily what everyone hopes is in production, but the real architecture is the one in production. Then I think at the same Domain-Driven Design conference, I had a conversation with Rebecca Wirfs-Brock about ADRs because ADRs is something that I'm obsessed about and architecture decisions in general.
Rebecca told me about some studies they've done—I still need to read them properly—around including an additional factor in ADRs: getting people to record how they felt about the decision. Super fluffy, really about the furthest away from code that you can get. It's like: how do you feel? Do you feel happy or sad or scared?
Rebecca and, I think, some of her collaborators had done some work, and they found that even tracking this tiny thing started opening up the space that you're talking about. There's all this technical stuff we love to talk about, and then there's all the other stuff we don't talk about, but which probably has as much of an effect on our code, if not more.
Starting Small: Team-Level Analysis
Andrew Harmel-Law: Do you think—there are a lot of sources for these problems, but not all of these sources apply to all companies, so there's obviously a piece of work that goes on where you try and diagnose the things, the factors that are affecting your code. Maybe your career framework isn't affecting your code at your company because it's a good career framework, and it rewards delivered product value as opposed to using the new cool tech. Or there might be executives who love going fast and hate waiting for things to be done properly, which we might have seen quite a few times. That's a standard pattern. But do you think—if people wanted to start small, maybe just in their team, what would be some approaches or ways of thinking to see if you can find out what's affecting or causing problems in your part of the code base? That's a good place to start, right?
Vanessa Formicola: That's a good question, because I think being such a complex problem, it's hard to tackle as a whole. But also that's the way around it. When you look at the microcosm of the team, I think the best thing you can do is to start in a couple of dimensions.
First of all, you want to look at the processes you're sitting on top of. What are the organizational processes impacting your team? If you took your team and moved it to a different company, what would change? Those are the processes. In some companies, deployment to production—the way you deliver your software to your customers—is a company-wide service that your team cannot modify on its own. The career framework is usually one of those company-level, not team-level, things. So are the ways people have shared information in the past and the habitual patterns of behavior—some companies have chatty teams that communicate constantly; in others, the technology leads do that. Try to understand the things that are common to the company that your team, even a brand-new team, will have to sit on top of, because those already give you constraints.
Then you have your triad of dimensions: tech, people, and product. Again, the ideal is to have a full understanding of the mission and strategy of your product overall. But even within your subset of the product, try to gauge: how much do you actually understand it? This can sound trivial, but if you dig deep, the range of different perceptions people have will surprise you.
Then there's alignment on what your product subset is, where it's going, what is static, what is dynamic. I worked in healthtech for quite some time, in different areas of my life, and you will find that some medical concepts stay the same while the product concepts change, which changes the difficulty of providing different things. That's a good example of separating the things that are more static, where you make one type of investment, from the things that are more dynamic.
First of all, understand the product aspect: how well your team understands the product, including your product manager—how well the branch of the product org that sits within your team is relaying information across the system. Try to understand that flow and how clued in your team actually is to the product. And do domain modeling consistently—whether digitally or in real life, whether or not you work in the same location.
Then you want to have an understanding of your engineering, which again sounds trivial, but I find that often the more senior you are, the less time you have to understand the impact of the changes you make at the developer level. What are you doing to your developers when, on a diagram, you decide to change service A or service B? You need a deep understanding of whether your domain has been spread across your services, the level of quality of your code, how legacy it is (if it is at all), and what level of testing you have there.
This is information that very rarely gets factored into architectural decisions at a high level; you would usually weigh other, very valuable things. But there are some very concrete, practical things you can learn about your system by working side by side with people, and however you receive that information, it will make a drastic difference in understanding the impact of your changes. Because what looks good on the diagram is not necessarily the experience developers will have when they have to change a service.
Imagine a service that maybe is well designed, but hasn't been tested for the last ten years, and nobody wants to change it. I've seen that more times than I care to count. Understanding whether it's readable, understandable, whether it's written in an old framework compared to now, and hence putting some sort of weight towards what is going to be the cost for people to actually change this, even if it is the right decision from an architectural perspective.
Another engineering dimension that I find is undervalued is: do you actually understand the skill sets of the people who will do this work? By this I don't just mean how many seniors you have or how many Java developers, but the maturity of the approach, because you will be able to make some decisions, but hundreds of decisions are made every day by the team members, and you need to understand whether you have the right maturity there. It's not just understanding of the code base—the pragmatism to have, at that level, the kinds of conversations we're having now is also important.
It's one of the most difficult things to actually analyze, but you want to analyze your people like you analyze your code, to some extent. Are the key players of your project within your team? Is this project aligned with their career progression? What is going on in their lives? Do we need to pair somebody with them because they are valuable but are silos of knowledge, or because they might just quit the company?
That kind of understanding will give you safety in managing things and in supporting the people on your team. You want to take care of your code, and you want to take care of your people. I don't think we usually think of those together.
Those are the three main dimensions, I would say, on top of understanding what your organization impacts and how it impacts you.
The Future: Ethical Engineering and Changing Technology
Andrew Harmel-Law: So what's interesting to me now, and the reason we're having this conversation, is that it feels like software is changing more and more rapidly all the time, even if we don't mention Gen AI and LLMs, and so are the architectures and the practices and all of these different things. Then you introduce AI into it and it changes even more rapidly. You've been to a lot of conferences and given talks, so are there any other conversations you're starting to see people having? Maybe not directly in this area, but related things that you're interested to hear more about and that you think might change the way we think about software? Maybe things people could go and check out.
Vanessa Formicola: To me, the thing we don't talk about enough, or don't yet talk about in terms that make it clear, though I think we are getting there from multiple angles, is ethical engineering: looking at the decisions we make and the impact they have beyond the bottom line. Disciplines more ancient than ours have structures or codes of conduct tied to the impact of their actions and the kind of behavior their practitioners should have.
I think that when it comes to the things we use, we treat security and privacy as second-class citizens, like a box to tick for a different department. The same goes when we think of machine learning, all the information we retain, and the impact of so many of the tools that are so popular today.
I think one thing that has been missing from the conversation is starting to emerge: when people talk about holistic systems, holistic engineering, even just the impact on the environment, they are thinking, "Hold on, there's more to what we do than technical architecture." We receive impact from the world, and we have an impact on the world.
I think at some point this will translate into questions like: How is what we are doing today impacting humanity? Is what we do ethical? Do we have a code of conduct for making decisions in a certain way and prioritizing things in a certain way? That will become mandatory, because what we do is incredibly impactful in our lives.
I grew up without a mobile phone until my teen years, and I don't think new generations have that experience anymore. It's important to understand that what we are shaping today is really changing people's lives at a speed that not many generations have seen before. If we don't understand the impact we have on society, and if we don't self-regulate the way we behave ethically, this could cause serious issues. We already see the signs of that.
Code as a Mirror
Andrew Harmel-Law: I think that's really important. Like you said at the start of the call, it's not just something that matters for us as human beings: this stuff gets into the code, whether we like it or not. These ethical considerations, and their second- and maybe third-order impacts, don't just happen to us as people; they have an impact in the code. And like you said, they can slide into the code quite easily without us noticing, but when we try to remove them, it takes an incredible amount of time and effort.
Vanessa Formicola: If I may make a suggestion for anyone watching this and thinking there might be some truth here: go through your code and see how many things you find that nobody ever decided to put there. Then, if the people around you didn't make those decisions either, ask where they came from.
These are outputs of our minds, of our collective minds, and you will find behaviors and things that nobody can justify, yet you will see that they come from somewhere. You want to look for the forces outside your control and factor them in, so that you can regain control over your code and architecture.
Andrew Harmel-Law: Thanks, Vanessa Formicola. That's really something to go and think about and something to go and try. Thanks, it was a really cool conversation and I hope everyone else enjoyed it too.
Vanessa Formicola: Thank you, thank you, thank you.