The Data Daily

Mitigating the Risks of the AI Black Box

Last updated: 03-21-2020



Enterprises are placing their highest hopes on machine learning. However, machine learning, which sits at the heart of AI (artificial intelligence), is also starting to unnerve many enterprise legal and security professionals.

One of the biggest concerns around AI is that complex ML-based models often operate as “black boxes.” The models (especially “deep learning” models composed of artificial neural networks) may be so complex and opaque that it is difficult to determine how they actually arrive at their automated inferences. Just as worrisome, ML-based applications may inadvertently obscure responsibility for any biases and other adverse consequences that their automated decisions produce.

To mitigate these risks, practitioners are starting to demand greater transparency into how machine learning operates in practice, across the entire workflow in which models are built, trained, and deployed. Innovative frameworks for algorithmic transparency (also known as explainability, interpretability, or accountability) are gaining adoption among working data scientists. Chief among these frameworks are LIME, SHAP (Shapley Additive Explanations), DeepLIFT, Skater, AI Explainability 360, What-If Tool, Activation Atlases, InterpretML, and Rulex Explainable AI.
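Most of these frameworks share a common idea: probe the model around a single prediction and attribute the output to the individual inputs. The sketch below illustrates that idea in pure Python with a toy linear "black box" and a one-sided finite-difference attribution; the feature names and scoring rule are invented for illustration and this is not any particular framework's API.

```python
def black_box(features):
    # Toy stand-in for a trained model: a hidden linear scoring rule.
    income, debt, age = features
    return 0.5 * income - 0.8 * debt + 0.1 * age

def local_attribution(model, features, names, delta=1.0):
    """Perturbation-based post-hoc explanation: nudge each input by
    `delta` and record how much the model's output moves for this
    one prediction."""
    base = model(features)
    attributions = {}
    for i, name in enumerate(names):
        probe = list(features)
        probe[i] += delta
        attributions[name] = model(probe) - base
    return attributions

explanation = local_attribution(black_box, [40.0, 10.0, 35.0],
                                ["income", "debt", "age"])
print(explanation)  # "debt" receives the largest-magnitude attribution
```

Because the probes only observe the model's inputs and outputs, the same recipe applies to any opaque model, which is exactly why post-hoc methods are attractive for black boxes.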

All these tools and techniques help data scientists generate “post-hoc explanations” of which particular data inputs drove which particular algorithmic inferences under various circumstances. However, recent research shows that these frameworks can themselves be hacked, which undermines trust in the explanations they generate and exposes enterprises to a range of downstream risks.
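One published line of attack, sometimes called "scaffolding," exploits the fact that perturbation-based explainers generate probe inputs that fall off the data manifold: a wrapper detects those probes and answers them with an innocuous decoy model, so the explanation shows no trace of the bias. The toy sketch below illustrates the mechanism with invented models and a central-difference attribution standing in for a real explainer.

```python
def biased_model(features):
    # Toy biased scorer: leans on a protected attribute ("group").
    income, group = features
    return income - 5.0 * group

def scaffolded_model(features):
    # Adversarial wrapper: explainer probes push "group" off its
    # on-manifold values {0.0, 1.0}, so they are easy to detect and
    # can be answered by an unbiased decoy instead.
    income, group = features
    if group not in (0.0, 1.0):   # crude off-manifold detector
        return income             # innocuous decoy for probe inputs
    return biased_model(features)

def central_diff_attribution(model, features, names, delta=0.1):
    """Explain one prediction by symmetric finite differences."""
    attributions = {}
    for i, name in enumerate(names):
        up, down = list(features), list(features)
        up[i] += delta
        down[i] -= delta
        attributions[name] = model(up) - model(down)
    return attributions

names = ["income", "group"]
honest = central_diff_attribution(biased_model, [50.0, 1.0], names)
hacked = central_diff_attribution(scaffolded_model, [50.0, 1.0], names)
print(honest["group"], hacked["group"])  # bias visible vs. hidden
```

The hacked explanation attributes exactly zero influence to the protected attribute even though the deployed model still discriminates on it, which is the scenario that erodes trust in post-hoc explanations.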

Enterprise data professionals can explore a range of defensive strategies to mitigate the technical risks of algorithmic transparency.

In addition to these technical risks, enterprises that fully disclose how their machine learning models were built and trained may expose themselves to more lawsuits and regulatory scrutiny. Mitigating these broader business risks without sacrificing machine learning transparency will require a data science devops practice under which post-hoc algorithmic explanations are generated automatically.
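One way such a practice could work, sketched below with a hypothetical scoring function and perturbation explainer, is to have the serving layer emit a post-hoc explanation with every prediction and chain the records into a tamper-evident audit log:

```python
import hashlib
import json

def score(features):
    # Hypothetical deployed model: a fixed linear scorer.
    income, debt = features
    return 0.6 * income - 0.4 * debt

def explain(model, features, delta=1.0):
    # Perturbation-based post-hoc attribution for one prediction.
    base = model(features)
    return {f"x{i}": model(features[:i] + [v + delta] + features[i + 1:]) - base
            for i, v in enumerate(features)}

audit_log = []

def predict_with_audit(features):
    """Score a request and append a hash-chained explanation record,
    so any later edit to the log (or its explanations) is detectable."""
    record = {
        "inputs": features,
        "prediction": score(features),
        "explanation": explain(score, features),
    }
    # Chain each record to the previous digest, blockchain-style.
    prev_digest = audit_log[-1]["digest"] if audit_log else ""
    payload = prev_digest + json.dumps(record, sort_keys=True)
    record["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    audit_log.append(record)
    return record["prediction"]

predict_with_audit([40.0, 10.0])
predict_with_audit([55.0, 20.0])
```

Generating the explanation at prediction time, rather than on demand after a dispute, means the audit trail reflects the model as it actually behaved when the decision was made.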

Just as important, enterprises will need to continually monitor these explanations for anomalies, such as evidence that they, or the models they purportedly describe, have been hacked. This is a critical concern, because trust in the entire AI edifice will come tumbling down if the enterprises that build and train machine learning models can’t vouch for the transparency of the models’ official documentation.
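A crude form of that monitoring, assuming explanations are logged as per-feature attribution dictionaries (the feature name, baseline window, and threshold here are all illustrative), compares each new explanation against baseline statistics and flags outliers:

```python
from statistics import mean, stdev

def find_anomalous_explanations(baseline, incoming, threshold=4.0):
    """Flag incoming explanation records whose attribution for any
    feature lies more than `threshold` standard deviations from the
    baseline window's statistics."""
    stats = {
        feature: (mean(r[feature] for r in baseline),
                  stdev(r[feature] for r in baseline))
        for feature in baseline[0]
    }
    flagged = []
    for idx, record in enumerate(incoming):
        for feature, (mu, sigma) in stats.items():
            if sigma > 0 and abs(record[feature] - mu) > threshold * sigma:
                flagged.append((idx, feature))
                break
    return flagged

baseline = [{"income": 0.50}, {"income": 0.52},
            {"income": 0.48}, {"income": 0.51}]
incoming = [{"income": 0.49}, {"income": 5.0}]   # second looks tampered
print(find_anomalous_explanations(baseline, incoming))  # [(1, 'income')]
```

A sudden jump in a feature's attribution does not prove tampering, but it is exactly the kind of signal worth escalating for human review before trust in the documentation erodes.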

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

The original version of this article was first published on InfoWorld.
