Artificial Intelligence (AI) adoption in companies is not about replacing humans with machines. Humans and machines have complementary capabilities, and their synergy is what makes AI adoption most powerful and impactful. In fact, a study involving 1,500 companies has shown that “firms achieve the most significant performance improvements when humans and machines collaborate”. As practitioners, we need to understand how to design AI systems that get the most out of this close collaboration. In the AI field, this concept is pursued as Human-in-the-loop Machine Learning, with the goal of combining the best of both worlds. Active Learning, an emerging technique in Machine Learning (ML), contributes immensely towards this goal.
From static to active learning
In a traditional machine-learning workflow, practitioners usually:
- gather data;
- analyse and preprocess the data;
- build and train a model;
- assess the quality of the result.
Analysis of the model’s quality often leads to further iterations of one or more of the previous steps. When the model achieves satisfactory accuracy, we consider the learning phase complete. The model can be put into production and used with real-time data. What is wrong with this approach?
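As an illustration, the static workflow above can be sketched with scikit-learn (the dataset, model and split sizes here are illustrative choices, not part of the article):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 1. Gather data
X, y = load_iris(return_X_y=True)

# 2. Analyse and preprocess (here, just a train/test split)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# 3. Build and train a model
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 4. Assess the quality of the result
accuracy = accuracy_score(y_test, model.predict(X_test))
```

Once the accuracy is deemed satisfactory, this model is frozen and deployed; in the static approach, it never learns from the data it sees in production.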
Let’s consider schooling as a metaphor. We learn important theories, concepts and skills, but learning doesn’t stop at the end of our curriculum. By facing new situations in the real world, we continue to learn, especially from the most uncertain and complex ones. What if we could exploit this continuous learning for ML models as well?
This is exactly what Active Learning is about. The key idea is that a model can achieve greater accuracy with a continuous learning approach if it is allowed to choose the data from which it learns. A model in production typically receives input instances and provides answers about them: it acts as a pure inference engine. A model equipped with active learning, instead, may pose queries, usually in the form of unlabeled data instances to be labeled by a human. The model is allowed to be “curious”, and the human is there to shape that curiosity. The model gives answers when it is confident about what it knows, but it is free to ask the human questions whenever it is in doubt or wants to learn more. But how does the model choose the right questions to ask?
The simplest strategy is Random Sampling: every now and then the system selects a random input instance and asks the human to label it. There’s no such thing as a stupid question, but we should maximize the benefit of each interaction between model and human.
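A minimal sketch of random-sampling queries; `X_pool` is an illustrative name for the pool of unlabeled instances, not something prescribed by the article:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for real unlabeled production data: 10 instances, 2 features each.
X_pool = np.arange(20, dtype=float).reshape(10, 2)

# Pick one instance uniformly at random and hand it to the human for labeling.
query_idx = rng.integers(len(X_pool))
query_instance = X_pool[query_idx]
```

Random sampling guarantees unbiased coverage of the input distribution, but it spends human effort on many instances the model already handles well.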
An alternative strategy is Uncertainty Sampling (Exploitation): the system identifies the input instances for which the model is uncertain or in doubt and asks for help. In a classification problem, these instances typically lie near a decision boundary. We would like to build non-presumptuous models: if you don’t know the answer, ask for help.
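A sketch of uncertainty sampling using least-confidence scoring, one common way to measure doubt; here `model` stands for any classifier exposing a scikit-learn-style `predict_proba`, and `X_pool` for the unlabeled pool (both names are illustrative assumptions):

```python
import numpy as np

def least_confidence_query(model, X_pool, n_queries=5):
    """Return indices of the n_queries pool instances the model is least sure about."""
    probs = model.predict_proba(X_pool)
    # Confidence = probability of the most likely class;
    # low confidence means the instance sits near a decision boundary.
    confidence = probs.max(axis=1)
    # Ascending sort: the least confident instances come first.
    return np.argsort(confidence)[:n_queries]
```

Other scoring rules (margin between the top two classes, prediction entropy) follow the same pattern: score each pool instance, then query the hardest ones.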
A third strategy is Diversity Sampling (Exploration). The system selects input instances that contain rare or unseen combinations of feature values. The model hasn’t “studied” enough to fully understand these instances.
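One simple way to sketch diversity sampling is to query the pool instances that lie farthest, in feature space, from everything the model has already been trained on; `X_labeled` and `X_pool` are illustrative names for this example:

```python
import numpy as np

def diversity_query(X_labeled, X_pool, n_queries=5):
    """Return indices of the pool instances most dissimilar from the labeled set."""
    # Pairwise Euclidean distances: shape (n_pool, n_labeled).
    dists = np.linalg.norm(X_pool[:, None, :] - X_labeled[None, :, :], axis=2)
    # Distance from each pool point to its nearest labeled neighbour.
    nearest = dists.min(axis=1)
    # A large minimum distance marks a rare, "unstudied" region of feature space.
    return np.argsort(nearest)[-n_queries:][::-1]
```

More sophisticated variants cluster the pool first or work in a learned embedding space, but the intent is the same: surface inputs unlike anything seen during training.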
Ideally, the best solution combines these three methods: the model (1) selects random instances to make sure it behaves correctly, (2) asks for help whenever it is in doubt about complex input instances, and (3) asks to know more about rarely seen combinations of features.
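Putting the three strategies together, one possible selection step might look like the sketch below. All names are illustrative, and the even split between strategies is a design choice, not something the article prescribes:

```python
import numpy as np

def select_queries(model, X_labeled, X_pool, n_each=2, seed=0):
    """Pick pool indices to send to the human, mixing all three strategies."""
    rng = np.random.default_rng(seed)

    # (1) Random: a small sanity-check sample of the input distribution.
    random_idx = rng.choice(len(X_pool), size=n_each, replace=False)

    # (2) Uncertainty: lowest top-class probability (near a decision boundary).
    confidence = model.predict_proba(X_pool).max(axis=1)
    uncertain_idx = np.argsort(confidence)[:n_each]

    # (3) Diversity: farthest from any already-labeled instance.
    dists = np.linalg.norm(X_pool[:, None, :] - X_labeled[None, :, :], axis=2)
    diverse_idx = np.argsort(dists.min(axis=1))[-n_each:]

    # The three sets may overlap, so deduplicate before querying the human.
    return np.unique(np.concatenate([random_idx, uncertain_idx, diverse_idx]))
```

Each selected instance is labeled by the human and moved from the pool into the training set, and the model is periodically re-trained on the growing labeled data.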
Implementing the right sampling strategy is one of the many steps required to build a valuable Human-in-the-loop ML system.
Shifting from perfection towards continuous improvement
The interaction between humans and machines is the main value we should focus on. By adding Active Learning, we get the most out of this coupling.
First, we get more transparency: the required interaction makes the model’s behaviour clearer. We will know what the model knows, and what its doubts and weaknesses are. Next, we gain an augmented dataset: after several iterations of this active loop, we will have new labeled data to re-train the model and potentially boost its performance. Finally, it shifts pressure away from building “perfect” models. With humans and interaction in the loop, the model is guided and corrected throughout its “life” and is exempted from getting everything right all at once.
Andrea Minieri is a Machine Learning Engineer at clearbox.ai. In this blog, he writes about Explainability, Monitoring and Active Learning.