People in the Entire ML Lifecycle

Editor’s note: Keith is a speaker for ODSC Europe 2022. Stay tuned for more information on his talk on human-in-the-loop in the ML lifecycle!

I recently attended ODSC East 2022 in Boston. As a data scientist with a particular interest in human-in-the-loop machine learning, I found myself drawn to the MLOps and AI Observability sessions. I had been a bit naive about MLOps. Because it is frequently mentioned alongside AutoML, I had encountered descriptions that seemed to deemphasize the role of the human, whether that means the work of the data scientist or the manual interventions needed for data quality and exception processing. On reflection, what I needed was exposure to the quality of thought leadership I found at ODSC East. The sessions I attended gave me the opposite sense: the emphasis was not only on continuous improvement but on how to collaborate efficiently across multiple teams. I'll be seeking more opportunities to learn from the MLOps community this year.

I should also acknowledge that I heard the phrase humans-in-the-loop from my colleagues at Northeastern's Experiential AI program, who were in attendance at my ODSC East session. Having interviewed Usama Fayyad, the program's inaugural director, last year as part of CloudFactory's thought leadership series, I'm not surprised that they are trying to upend stereotypes about AI, and I love what they are trying to communicate with this phrase. There isn't a clear consensus about what human-in-the-loop is. For me, it certainly includes both the manual labeling and annotation of supervised machine learning datasets and exception processing post-deployment. However, I agree with the Northeastern team that we should broaden it further.

So how might MLOps, AI Observability, and Human-in-the-loop all work together throughout the ML lifecycle? In short, quality control and continuous improvement identify shortcomings in our models, our data, or both. Once identified, those flaws need attention, and an automated model refresh is not enough: human attention is required. If it's a flaw in the model, a modeler needs to revisit the entire modeling process. If it's the data, person-hours are needed. My compliments to Pachyderm's team for emphasizing that the model and the data evolve on parallel tracks and that each needs its own versioning.
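
To make the parallel-tracks idea concrete, here is a minimal sketch of pairing each trained model version with the exact data version it saw. This is not Pachyderm's API; the class and field names are hypothetical, and a real setup would version the datasets and artifacts themselves rather than just their labels.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class TrainingRun:
    """One trained model, pinned to the exact data version it was trained on."""
    model_version: str
    data_version: str
    trained_at: datetime
    metrics: dict

@dataclass
class Registry:
    """Keeps model and data lineage on parallel, independently versioned tracks."""
    runs: list = field(default_factory=list)

    def record(self, model_version: str, data_version: str, metrics: dict) -> TrainingRun:
        run = TrainingRun(model_version, data_version, datetime.now(timezone.utc), metrics)
        self.runs.append(run)
        return run

    def runs_on_data(self, data_version: str):
        """All models trained on a given data snapshot. If that snapshot turns out
        to be flawed, every one of these runs needs human review, not just a refresh."""
        return [r for r in self.runs if r.data_version == data_version]

registry = Registry()
registry.record("model-1.3.0", "labels-2022-05-01", {"f1": 0.81})
registry.record("model-1.3.1", "labels-2022-05-01", {"f1": 0.83})

# A data fix to the hypothetical 2022-05-01 labels flags both runs above for review:
print(registry.runs_on_data("labels-2022-05-01"))
```

The point of the sketch is simply that a data-quality fix and a model retrain are separate events, and the lineage between them is what tells you where human attention is needed.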

The data scientists could periodically contribute a handful of hours to correct the data manually. It might even be desirable, as it helps them with diagnostics. However, it's not scalable. Without an external data processing strategy, either the internal team will get burnt out or the work won't get done. Critically, inconsistent labeling and annotation are so common with unstructured data that data quality might be the best place to investigate first. We see this often at CloudFactory. Many of our clients work with unstructured data, including text, images, and video, and they need the data quality that only a managed, scalable workforce can provide. Without that level of consistency and quality, the modeling never goes smoothly.
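
Because inconsistent labeling is such a common failure mode, a simple inter-annotator agreement check is often a useful first diagnostic. Below is a minimal sketch, assuming each item has been labeled by several annotators; the function, labels, and threshold are illustrative, not any particular CloudFactory tool.

```python
from collections import Counter

def flag_inconsistent_labels(annotations, min_agreement=0.8):
    """Flag items whose annotators disagree too much.

    annotations: dict mapping item_id -> list of labels from different annotators.
    Returns the item_ids whose majority-label agreement falls below min_agreement,
    i.e. candidates to send back for manual re-labeling.
    """
    flagged = []
    for item_id, labels in annotations.items():
        if not labels:
            continue
        majority_count = Counter(labels).most_common(1)[0][1]
        if majority_count / len(labels) < min_agreement:
            flagged.append(item_id)
    return flagged

# Hypothetical labels from three annotators per image:
example = {
    "img_001": ["pedestrian", "pedestrian", "pedestrian"],
    "img_002": ["cyclist", "pedestrian", "cyclist"],   # 2/3 agreement, flagged
    "img_003": ["vehicle", "vehicle", "pedestrian"],   # 2/3 agreement, flagged
}
print(flag_inconsistent_labels(example))  # ['img_002', 'img_003']
```

Items that fall below the threshold are exactly the ones worth routing back to a managed workforce for re-labeling before any model refresh.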

Data Observability may be a new phrase to some data scientists. What is it all about? Observability is a concept from control theory that describes how well the state of a system can be inferred from its outputs. Software engineers have applied the concept for some time to monitor for issues in software systems, and more recently, companies like Metaplane have been applying it to errors in data. The powerful implication for human-in-the-loop is that some of the errors found in data will need manual intervention. A challenge faced by almost any data scientist who has worked in anomaly detection is that cases with a high risk score need to be verified, and verification is labor-intensive. Often the development of an anomaly detection model does not take this into account. Similarly, identifying errors in data should be accompanied by an error processing strategy.
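
To illustrate what an error processing strategy might look like in practice, here is a minimal sketch of routing high-risk anomaly scores to a human review queue while respecting the review team's daily capacity. The threshold, capacity, and item names are assumptions for illustration, not a prescription.

```python
def route_for_review(scored_items, review_threshold=0.9, daily_review_capacity=200):
    """Split scored items into an automated path and a manual-review queue.

    scored_items: iterable of (item_id, anomaly_score) pairs.
    Items at or above review_threshold go to human reviewers, capped at the number
    of cases the team can actually verify per day; everything else, including any
    overflow above the cap, stays on the automated path for later sampling.
    """
    scored_items = list(scored_items)
    high_risk = sorted(
        (item for item in scored_items if item[1] >= review_threshold),
        key=lambda item: item[1],
        reverse=True,
    )
    review_queue = high_risk[:daily_review_capacity]
    automated = [item for item in scored_items if item not in review_queue]
    return review_queue, automated

scores = [("txn_17", 0.97), ("txn_42", 0.55), ("txn_88", 0.93), ("txn_91", 0.12)]
queue, automated = route_for_review(scores, review_threshold=0.9, daily_review_capacity=2)
print(queue)  # [('txn_17', 0.97), ('txn_88', 0.93)]
```

Budgeting verification capacity up front is the part that is usually missing when a detection model is built without the downstream reviewers in mind.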

I look forward to continuing my journey of exploring the ML lifecycle and more topics when I attend ODSC Europe in London in June. At the event, I will speak about human-in-the-loop in the ML lifecycle with examples of actual behind-the-scenes data annotation activity supporting obstacle avoidance for drones.

Keith McCormick serves as CloudFactory’s Chief Data Science Advisor. He’s also an author, LinkedIn Learning contributor, university instructor, and conference speaker. Keith has been building predictive analytics models since the late 90s. More recently his focus has shifted to helping organizations build and manage their data science teams.
