Many things can go wrong in a machine learning project. We had a chat with Phil Winder, data scientist and founder of Winder Research, to dive into what he believes causes machine learning projects to fail, and whether there's anything you can do to prevent failure in your own projects.
What does machine learning failure look like to you?
Phil: I don’t consider a lot of what other practitioners might call failures to be failures at all.
With machine learning you can set off down five different paths that take you in totally different directions, and four of them will end at a dead end: you don’t have the right data, you can’t build a model that’s good enough, or you run into regulatory problems. But one of them might work and provide some value.
In a sense you’d say four out of five projects failed, but you knew that would happen when you started, since you were exploring what was possible. So I wouldn’t call that a failure at all; it’s really a success and just the nature of machine learning projects.
In a software engineering context...
“If we had a bug in our software, we wouldn't call that a failure.”
The scariest thing for me, and what I would consider a failure, is some sort of material impact on the company, on me, or on the company’s reputation. For example, a violation of GDPR that forces the company to pay a substantial fine, or bad PR: something has happened, people don’t like the results of the model, and they have taken to the media, causing a big uproar about it.
All other failures are essentially bugs that can be worked out and are all part of the process.
What causes failure in ML projects?
Phil: A lot of failures stem from not defining the problem well enough.
If you have a very specific problem, you try to solve it with machine learning, and it doesn’t work, you could call that a failure. But most business problems don’t turn up like that; they come up a lot broader: “We’re spending a lot of money here, what can we do to reduce that expenditure?” Or, if you’re a product-based business, “Customers are struggling to do X, what can we do to make that easier?” Again, you might try five things and four of them fail, but one will help your case.
Fairness and viability can also cause failure in ML projects
You have the problem, you have the data, you have the algorithm, but when you get to the point where you’re starting to show your results, the results you’re generating are potentially unfair, have potential regulatory issues, or there’s some other business-sensitive reason why you might not want to use the model.
Even then it’s hard to call that a failure, because we can go back and anonymise or fix the data, remove the things you’re not interested in, or change the problem slightly so that it is successful.
Is there a way to ensure zero failure in your ML project?
Phil: In short, no. Think about it from a software point of view: “Is it possible to avoid all bugs in software?” No, it’s not possible to avoid all of them; you’re always going to have issues. But you can follow best practices, recommendations, rules of thumb, quality assurance, checkpoints and tests to help reduce the risk of that happening.
Quality-control your ML at various points of development and that should catch a number of these “bugs.” Nonetheless, it’s likely some will still sneak through.
Want to learn more from Phil? Check out the GOTO Book Club episode with Phil Winder & Rebecca Nugent, How to Leverage Reinforcement Learning.