Don't fall into the trap of thinking that math-based analytics can predict human behavior with certainty.
We are a species invested in predicting the future -- as if our lives depended on it. Indeed, good predictions of where wolves might lurk were once a matter of survival. Even as civilization made us physically safer, prediction has remained a mainstay of culture, from the haruspices of ancient Rome inspecting animal entrails to business analysts dissecting a wealth of transactions to foretell future sales.
Such predictions generally disappoint. We humans are predisposed to assuming that the future is a largely linear extrapolation of the most recent (and familiar) past. This is one -- or a combination -- of the nearly 200 cognitive biases that allegedly afflict us.
With these caveats in mind, I predict that in 2020 (and the decade ahead) we will struggle if we unquestioningly adopt artificial intelligence (AI) in predictive analytics, founded on an unjustified overconfidence in the almost mythical power of AI's mathematical foundations. This is another form of the disease of technochauvinism I discussed in a previous article.
Science fiction author and journalist Cory Doctorow's article, "Our Neophobic, Conservative AI Overlords Want Everything to Stay the Same," in the Los Angeles Review of Books, offers a succinct and superb summary of technochauvinism as it operates in AI. "Machine learning," he asserts, "is about finding things that are similar to things the machine learning system can already model." These models are, of course, built from past data with all its errors, gaps, and biases.
The premise that AI makes better (e.g., less biased) predictions than humans is already demonstrably false. Employment screening apps, for example, often exhibit a bias toward hiring white males because the historical hiring data used to train their algorithms consisted largely of records of hiring such workers.
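The mechanism is easy to demonstrate. The following is a minimal sketch with hypothetical data (not drawn from any real screening product): a naive "model" that scores candidates by the historical hire rate of their demographic group. Because the training data is biased, the model's predictions simply reproduce that bias.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, was_hired)
# Group A was hired far more often than group B in the past.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

# "Train": tally hires and totals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def score(group):
    """Predicted hiring probability -- nothing but the historical rate."""
    hires, total = counts[group]
    return hires / total

print(score("A"))  # 0.8 -- the model favors group A candidates
print(score("B"))  # 0.2 -- purely because the past data was biased
```

A production screening model is far more elaborate, but the failure mode is the same: whatever pattern dominates the training data dominates the predictions.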
The widespread belief that AI can predict novel aspects of the future is simply a case of magical thinking. Machine learning is fundamentally conservative, based as it is on correlations in existing data; its predictions are essentially extensions of the past. AI lacks the creative thinking ability of humans. Says Tabitha Goldstaub, a tech entrepreneur and commentator, about the use of AI by Hollywood studios to decide which movies to make: "Already we're seeing that we're getting more and more remakes and sequels because that's safe, rather than something that's out of the box."
AI, together with the explosion of data available from the internet, has raised the profile of what used to be called operational BI, now known as predictive analytics, along with its more recent extension, prescriptive analytics. Attempting to predict the future behavior of prospects and customers and, further, to influence their behavior is central to digital transformation efforts. Predictions based on AI, especially in real-time decision making with minimal human involvement, require careful and ongoing examination lest they fall foul of the myth of an all-knowing AI.
As Doctorow notes, AI conservatism arises from detecting correlations within and across existing large data sets. Causation -- a much more interesting feature -- is more opaque, usually relying on human intuition to separate the causal wheat from the correlational chaff, as I discussed in a previous Upside article.
Nonetheless, causation can be separated algorithmically from correlation in specific cases, as described by Mollie Davies and coauthors. I cannot claim to follow the full mathematical formulae they present, but the logic makes sense. As the authors conclude, "Instead of being naively data driven, we should seek to be causal information driven. Causal inference provides a set of powerful tools for understanding the extent to which causal relationships can be learned from the data we have." They present math that data scientists should learn and apply more widely.
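The distinction can be made concrete with a toy example of my own (synthetic data, not taken from the paper cited above): a confounder Z drives both X and Y, so X and Y correlate strongly even though neither causes the other. Stratifying on Z, one of the simplest causal-inference adjustments, makes the correlation all but vanish.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Confounder Z causes both X and Y; X has no effect on Y at all.
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 0.3) for zi in z]
y = [zi + random.gauss(0, 0.3) for zi in z]

print(round(corr(x, y), 2))      # strong marginal correlation (around 0.9)

# Stratify: within a narrow band of Z, the X-Y correlation disappears.
band = [(xi, yi) for xi, yi, zi in zip(x, y, z) if abs(zi) < 0.1]
bx, by = zip(*band)
print(round(corr(bx, by), 2))    # near zero once Z is held fixed
```

A purely data-driven model trained on X would happily "predict" Y; only the causal adjustment reveals that intervening on X would change nothing.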
However, there is a myth here, too: that predictive (and prescriptive) analytics can divine human intention, which is the true basis for understanding and influencing behavior. As Doctorow notes, in trying to distinguish a wink from a twitch, "machine learning [is not] likely to produce a reliable method of inferring intention: it's a bedrock of anthropology that intention is unknowable without dialogue." Dialogue -- human-to-human interaction -- attracts little attention in digital business implementation.
Once accused of looking too intently in the rearview mirror, business intelligence has today embraced prediction and prescription as among its most important goals. Despite advances in data availability and math-based technology, truly envisaging future human intentions and actions remains a strictly human gift.
The myth that math-based analytics can predict human behavior with certainty is probably the most dangerous magical thinking we data professionals can indulge in.