Now that machine learning is becoming more and more mainstream, some design patterns are starting to emerge. As the CEO of CrowdFlower, I’ve worked with many companies building machine learning algorithms and I’ve noticed a best practice in nearly every successful deployment of machine learning on tough business problems. That practice is called “human-in-the-loop” computing. Here’s how it works:
First, a machine learning model takes a first pass over the data: every video, image, or document that needs labeling. That model also assigns a confidence score, a measure of how sure the algorithm is that it's making the right judgment. If the confidence score is below a certain value, the data is sent to a human annotator to make a judgment. That new human judgment is used both in the business process and fed back into the machine learning algorithm to make it smarter. In other words, when the machine isn't sure what the answer is, it relies on a human, then adds that human judgment to its model.
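The loop above can be sketched in a few lines of Python. The model, the threshold value, and the annotation step here are illustrative assumptions, not any specific product's API:

```python
# Minimal sketch of the human-in-the-loop pattern described above.
# model_predict, ask_human, and the threshold are hypothetical stand-ins.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; tuned per application

def model_predict(item):
    """Stand-in for a trained classifier: returns (label, confidence)."""
    # A real system would run an actual model here.
    return ("cat", 0.95) if "cat" in item else ("unknown", 0.40)

def ask_human(item):
    """Stand-in for routing the item to a human annotator."""
    return "dog"  # the human's judgment

training_feedback = []  # human labels collected to retrain the model later

def label(item):
    prediction, confidence = model_predict(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction                          # machine is sure: use its answer
    human_label = ask_human(item)                  # machine is unsure: defer to a human
    training_feedback.append((item, human_label))  # feed the judgment back
    return human_label

print(label("cat photo"))     # high confidence: the model's own label
print(label("blurry photo"))  # low confidence: the human's label, logged for retraining
```

The key design choice is that the human's answer serves two purposes at once: it completes the immediate business task, and it lands in `training_feedback` so the model gets better at exactly the cases it found hard.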
This simple pattern is at the heart of many well-known, real-world use cases of machine learning. And it solves one of the biggest issues with machine learning: it's often very easy to get an algorithm to 80 percent accuracy but near impossible to get it to 99 percent. The best machine learning lets humans handle that remaining 20 percent, since 80 percent accuracy is simply not good enough for most real-world applications.
Self-driving cars are a great example of what we're talking about when we talk about "human-in-the-loop" computing. Smart people have been working for many, many years on making cars that drive themselves, and the state-of-the-art technology is actually very good. But very good isn't good enough. Ninety-nine percent accuracy means that people might die 1 percent of the time.
Tesla recently launched an automated driving mode that follows exactly the human-in-the-loop pattern. The car mostly drives itself on highways, but it insists that the human keep their hands on the wheel. When the machine learning vision system is in doubt about what's going on (maybe there's construction, snow, or something unusual on the road), it hands control back to the human driver. So while the car can indeed drive itself almost all of the time, it needs a human failsafe. Considering the possible consequences, you can understand why.
Facebook's photo recognition algorithm has gotten extremely good. In fact, when you upload photos it can often not only find faces but actually guess who each person is, with roughly 97.25 percent accuracy.
But in cases where its confidence is below a certain threshold, Facebook will ask you, the uploader, to confirm the person labeled in the photo. In cases where the confidence is even lower, it will ask you to label the photo yourself. All of this data is fed back into the algorithm to make it better.
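The tiered check described above can be expressed as a simple routing function. The threshold values and action names here are assumptions for illustration, not Facebook's actual system:

```python
# Illustrative sketch of a two-tier confidence check:
# high confidence tags automatically, middling confidence asks for
# confirmation, low confidence asks for a label from scratch.
# Both thresholds are hypothetical values.

CONFIRM_THRESHOLD = 0.90  # below this: ask the uploader to confirm the guess
LABEL_THRESHOLD = 0.50    # below this: ask the uploader to label from scratch

def route(guess, confidence):
    if confidence >= CONFIRM_THRESHOLD:
        return ("auto_tag", guess)   # tag the photo automatically
    if confidence >= LABEL_THRESHOLD:
        return ("confirm", guess)    # show the guess, ask for a yes/no
    return ("ask_label", None)       # too unsure: ask for a fresh label

print(route("Alice", 0.97))  # ('auto_tag', 'Alice')
print(route("Alice", 0.70))  # ('confirm', 'Alice')
print(route("Alice", 0.30))  # ('ask_label', None)
```

Each tier asks the human for progressively more work only as the machine's confidence drops, which keeps the human effort focused on the cases where it actually adds information.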
Once, when you deposited checks in an ATM, you had to tell the ATM exactly how much you were putting in. But thanks to massive advances in optical character recognition (OCR), your ATM can now generally use a vision algorithm to read not just the check amount but other pertinent information on the check, like routing numbers.
These vision algorithms have come a long way, but there are still cases where the handwriting is hard to read or the language is unusual. In those cases, your ATM will ask you to key in the amount, and the check itself is flagged for a human to look at (this is why some checks take a little longer to clear than others). And, as with the Facebook example above, your entering a specific value gives the visual algorithm more data to learn from for the next tough-to-read check.
When Deep Blue beat Kasparov many years ago, it was a major victory for A.I. Since then, of course, chess computers have become more and more dominant. While the concept of chess being "solved" is still considered rather remote (there are at least 10^43 board positions to account for), chess computers now routinely beat grandmasters, even when those masters are given considerable handicaps.
But a subculture of people has continued to play what they call "Advanced Chess." Advanced Chess is a game in which a human operator/chess expert works with a computer to find the best possible move. Computers are great at reading tough tactical situations but are still not as good as humans at understanding long-term strategy. The best Advanced Chess players use computers to limit (or eliminate) blunders while relying on their own intuition to force the opposing team into unusual board states the computer hasn't seen much of.
All of this means that human-computer interaction is much more important for artificial intelligence than we ever thought. In each case (chess, driving, Facebook, and ATMs), making sure computers and humans work well together is critical for the application to work. Notably, the interface between the computer and the human differs in each, but it's the pairing of human and machine, not the supremacy of one over the other, that yields the best results.
Artificial intelligence is here, and it's changing every aspect of how business functions. But it's not replacing people one job function at a time. It's making people in every job function more efficient by handling the easy cases while watching and learning from the hard ones. Which is to say: we don't wake up one day to find self-driving cars; we slowly cede driving functions one piece at a time.
This article is published as part of the IDG Contributor Network.