
AI & Machine Learning Black Boxes

By Colin Lewis (Robotenomics) and Dagmar Monett (Berlin School of Economics and Law).

The black box in aviation, otherwise known as a flight data recorder, is an extremely secure device designed to provide investigators with factual information about any anomalies that may have led to incidents or mishaps during a flight.

The black box in Artificial Intelligence (AI) or Machine Learning programs has taken on the opposite meaning. Deep Learning, the Machine Learning approach behind the field’s recent ‘important empirical successes,’ raises significant concerns about transparency.

Developers acknowledge that the inner workings of these ‘self-learning machines’ add an additional layer of complexity and opacity to machine behavior. Once a Machine Learning algorithm is trained, it can be difficult to understand why it gives a particular response to a set of data inputs. This, as we describe below, can be a disadvantage when these algorithms are used in mission-critical tasks.
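To make the point concrete, here is a minimal Python sketch (using scikit-learn; the synthetic data and small network are our own illustrative assumptions, not a real system) of how little a trained model’s parameters reveal about any single prediction:

```python
# A minimal sketch of the opacity problem; the synthetic dataset and
# model are illustrative stand-ins, not a real deployed system.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic data: 1,000 records described by 20 numeric features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=1000,
                      random_state=0)
model.fit(X, y)

# The trained model answers confidently for a given input...
print(model.predict_proba(X[:1]))

# ...but its "reasoning" is thousands of raw weights with no direct
# human-readable meaning.
print(sum(w.size for w in model.coefs_), "learned weights")
```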

Furthermore, the fact that Machine Learning algorithms can act in ways unforeseen by their designers raises issues about the ‘autonomy,’ ‘decision-making,’ and ‘responsibility’ capacities of AI. When something goes wrong, as it inevitably does, it can be a daunting task to discover the behavior that caused the event when that behavior is locked away inside a black box, where discoverability is virtually impossible.

As Machine Learning algorithms get smarter, they are also becoming more incomprehensible.

Machine Learning algorithms are essentially systems that learn patterns of behavior from collected data to support prediction and informed decision-making. As Rayid Ghani, Director of the Data Science for Social Good Fellowship, describes it, “the power of data science is typically harnessed in a spectrum” between two extremes: gathering knowledge and insights from data at one end, and fully automated decision-making at the other.

To reflect these two extremes of knowledge gathering and automated decision-making, Machine Learning systems typically cluster into two types: systems that surface patterns and insights to inform human decision-makers, and systems that make and act on decisions automatically.
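As a hedged illustration of the distinction (the loan-style feature names and the 0.8 approval threshold below are hypothetical assumptions of ours, not from any cited system), the same trained model can sit at either extreme depending on how its output is used:

```python
# Hypothetical sketch: one model, two uses. The feature names and the
# 0.8 approval threshold are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=1)
model = LogisticRegression().fit(X, y)

# Type 1: knowledge gathering -- surface patterns for a human analyst,
# who interprets them and makes the final call.
for name, coef in zip(["income", "debt", "tenure", "age"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# Type 2: automated decision-making -- the system acts with no human
# in the loop.
def auto_approve(applicant):
    return model.predict_proba([applicant])[0][1] > 0.8

print(auto_approve(X[0]))
```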

However, it has been well documented that the design and build of these Machine Learning black boxes can lead to bias, unfairness, and discrimination through programmer and data choices. “The irony is that the more we design Artificial Intelligence technology that successfully mimics humans, the more that AI is learning in a way that we do, with all of our biases and limitations.”

There are other issues, such as concerns about the quality of the data and its processing, or about the quality of an algorithm’s outcomes and their ethical implications, to name a few. But managers should be aware of two core areas where problems frequently occur in Machine Learning systems, and where we feel executives should be concerned and take action: 1) Transparency, and 2) Leadership and Governance.

For people to use the predictions of an analytics model in their decision-making, they must trust the model. To trust a model, they must understand how it makes its predictions; that is, the model must be interpretable. Most current Machine Learning systems in operation that are based on deep neural networks are not easily interpretable.
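One model-agnostic way to probe a black box from the outside is permutation importance; the sketch below uses scikit-learn’s implementation on synthetic data (the dataset and the choice of a random forest are illustrative assumptions):

```python
# Sketch: probing a black-box model with permutation importance.
# Shuffling one feature at a time and measuring the accuracy drop shows
# how heavily the model leans on that feature.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```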

Such opacity can be very damaging for an organization that relies on an AI system. Taylor et al. (2016) have shown that “there are many possible hard-to-detect ways a system’s behavior could differ from the intended behavior of the designer, and at least some of these differences are undesirable.”

Monitoring the behavior of a Machine Learning system may prove difficult without careful design. Executives should strive to make their Machine Learning systems more transparent, so that an informed overseer can evaluate a system’s internal reasons for its decisions. As Professor Pedro Domingos writes: “when a new technology is as pervasive and game-changing as machine learning, it’s not wise to let it remain a black box. Opacity opens the door to error and misuse.”
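One such careful-design choice is to make every decision auditable. The wrapper below is a minimal sketch of our own devising (the AuditedModel class and the JSON log format are hypothetical, not a standard API): it records each input, score, and decision so an overseer has something concrete to review.

```python
# Hypothetical sketch: wrap a model so every decision leaves an audit trail.
# The AuditedModel class and log format are illustrative, not a standard API.
import json
import logging

logging.basicConfig(filename="decisions.log", level=logging.INFO)

class AuditedModel:
    def __init__(self, model):
        self.model = model

    def decide(self, features):
        score = float(self.model.predict_proba([features])[0][1])
        decision = bool(score > 0.5)
        # Record what went in and what came out, for later human review.
        logging.info(json.dumps({"features": [float(v) for v in features],
                                 "score": score,
                                 "decision": decision}))
        return decision

# Usage with any probabilistic classifier:
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
audited = AuditedModel(LogisticRegression().fit(X, y))
print(audited.decide(X[0]))
```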

Leaders should seek to enforce strict governance over Machine Learning algorithms, ensuring “value alignment” and “good behavior” in these new machine-intelligence systems, especially as they are increasingly used to make decisions toward business objectives.

Governance of these systems should incorporate systematic ways to formalize the hidden assumptions inside a black box and to ensure accountability, auditability, and transparency of a Machine Learning system’s internal workings. Furthermore, stricter checks on the selection and robustness of open-source Machine Learning algorithms and training data should be uppermost in developers’ and managers’ minds.
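As one concrete, deliberately simple governance check, the sketch below compares a model’s positive-prediction rates across a sensitive group before deployment. The 10% tolerance and the randomly assigned group labels are illustrative assumptions, not a recommended standard:

```python
# Sketch of a simple pre-deployment governance check: compare the model's
# positive-prediction rates across a sensitive attribute (a rough
# demographic-parity test). The 10% tolerance is an arbitrary example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(X))

model = LogisticRegression().fit(X, y)
preds = model.predict(X)

rate_a = preds[group == 0].mean()
rate_b = preds[group == 1].mean()
print(f"group A: {rate_a:.2%}, group B: {rate_b:.2%}")

# A gate a build pipeline could enforce before the model ships:
assert abs(rate_a - rate_b) < 0.10, "parity check failed -- investigate"
```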

Any decision-making Machine Learning system that optimizes itself for an objective misaligned with the organization’s interests could have significant and permanent effects. Recognizing the limitations of Machine Learning and AI algorithms is the first step to managing them better.

Ultimately, we need to be sure we are not putting machines in charge of decisions that they do not have the intelligence to make.

Colin Lewis is a Behavioral Economist and Data Scientist who provides research and advisory services in automation, robotics and artificial intelligence (www.robotenomics.com). His work on robotics and automation has been featured by The Financial Times, Bloomberg, Harvard Business Review, and others.

Dagmar Monett is Professor of Computer Science at the Berlin School of Economics and Law, Germany. She received a Dr. rer. nat. in Computer Science from the Humboldt University of Berlin in 2005. Her main research and teaching interests include different areas in Artificial Intelligence and Software Engineering (www.monettdiaz.com).
