Responsible AI is Not an Option. Here’s How You Can Achieve it

If you’re attending ODSC West, you’re acutely aware that artificial intelligence (AI) is today widely used to inform and shape business strategies and services. You’re probably also aware that the decisions made by AI algorithms can appear callous and even careless. But it doesn’t have to be that way; responsible use of AI is within reach of every data science organization. I’m delighted to be presenting my talk “Responsible AI Is Not an Option” at ODSC West, sharing a perspective gained over more than two decades of delivering industry-focused machine learning/AI innovation in a highly regulated environment.

Given AI’s powerful impact on business and society, Responsible AI is a standard for accountability; it requires a company’s Board of Directors to approve and support the use of a company-wide standard technical model governance framework to ensure AI implementations are safe, trustworthy, and unbiased.

Building Responsible AI involves identifying key business risks in adopting AI and proactively mitigating them. In this blog I’ll briefly describe a path toward achieving Responsible AI.

Building an analytics team for AI projects means knowing how to hire the right people. There’s no template or magic formula for getting it right. It’s an iterative process, full of hard questions that must be asked early and often to produce a successful outcome.

First and foremost, the data analytics team should represent a diverse set of perspectives and experiences, and also be able to appropriately balance the company’s current level of analytics sophistication with its AI aspirations. From there, you can determine the right size and capabilities of the team based on organization-specific needs and objectives. It is extremely important to not overstretch the capability or capacity of the team; it’s far better to have a few successful Responsible AI projects than a larger number of projects that fail because your talent was spread too thin.

In an age of cloud services and open source, there are still no “fast and easy” shortcuts to proper model development. AI models that are produced with the proper data and scientific rigor are robust and capable of thriving in tough environments like the one we are experiencing now. Responsible AI requires a robust development methodology grounded in that same data discipline and scientific rigor.

Perhaps most importantly, that methodology must be adhered to by the entire data science organization, and the AI’s explainability should be non-negotiable.

A growing number of consumers want to be empowered with a constant stream of individualized information to help them make better financial decisions. With control of their own data imminent (as part of the Open Banking movement), we are seeing consumers increasingly provide consent for specific, prescribed and constrained uses of their transaction data. Banks’ ability to obtain and manage specific customer consents will directly impact institutions’ ability to create that “next layer” of transformational data.

AI and machine learning (ML) technologies will be critical in delivering personalized experiences. But to be truly transformative, new data-driven features must be highly accurate, safe, unbiased, and offer personalized insights. Those that don’t will get a lukewarm consumer reception at best, weakening trust and future data access.

Model explainability is crucial. In fact, I have a belief that’s unorthodox in the data science world: explainability first, predictive power second, a notion that is more important than ever. AI that is explainable should make it easy for humans to find the answers to important questions, including how the model reached its decision and whether it can safely be used in a given situation.
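As a minimal sketch of what explainability-first can look like in practice, the example below (my illustration, not a prescribed methodology) uses a simple additive model whose per-feature contributions can be read off directly as reason codes. The feature names and training data are hypothetical:

```python
# A minimal sketch of explainability first: an interpretable, additive
# model (logistic regression) whose per-feature contributions double as
# reason codes. Feature names and training data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.array([[0.2, 5.0], [0.8, 1.0], [0.5, 3.0], [0.9, 0.5]])
y_train = np.array([0, 1, 0, 1])
feature_names = ["utilization", "years_on_file"]  # hypothetical features

model = LogisticRegression().fit(X_train, y_train)

def reason_codes(x):
    """Rank features by their contribution to this input's score."""
    contributions = model.coef_[0] * x  # additive on the log-odds scale
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], float(contributions[i])) for i in order]

print(reason_codes(np.array([0.7, 1.5])))
```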

The question of whether a model can safely be used illustrates the related concept of Humble AI. Here, data scientists determine the situations in which a model’s performance is suitable, and those in which it is not, often because of a low density of historical data. In those situations the model is unsafe, and potentially unethical, to use for similar customers.
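A minimal sketch of one way to implement that humility, assuming we flag inputs that fall in low-density regions of the training data; the neighbor count and threshold here are illustrative, not tuned:

```python
# A minimal sketch of Humble AI: measure an input's distance to its
# nearest training points and abstain when the neighborhood is too
# sparse for the model's answer to be trusted. Synthetic data; the
# threshold and k are illustrative.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))  # stand-in for historical data

knn = NearestNeighbors(n_neighbors=10).fit(X_train)

def score_or_abstain(x, model_score, threshold=2.0):
    """Return a score only where training-data density supports one."""
    distances, _ = knn.kneighbors(x.reshape(1, -1))
    if distances.mean() > threshold:  # sparse neighborhood: low density
        return None                   # abstain rather than guess
    return model_score(x)

result = score_or_abstain(np.array([8.0, 8.0, 8.0]), lambda x: float(x.sum()))
print("abstained" if result is None else result)
```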

One of the most common misperceptions I hear about bias is, “If I don’t use age, gender or race, or similar factors in my model, it’s not biased.” Unfortunately, that’s not true.

ML learns relationships in data to fit a particular objective function or goal. It will often form proxies for avoided inputs, and these proxies can exhibit bias. From a data scientist’s point of view, ethical AI is achieved by taking precautions to expose what the underlying ML model has learned and whether it could impute bias.
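One such precaution is a direct proxy test, sketched below on synthetic data: the protected attribute is withheld from training but retained for testing, and if the model’s scores predict it better than chance, some input is acting as a proxy. This is my illustration of the idea, not a prescribed procedure:

```python
# A minimal proxy check on synthetic data: the protected attribute is
# excluded from the model's inputs, but a correlated feature (here,
# "zip_income") lets the model learn it anyway. An AUC well above 0.5
# when predicting the protected attribute from model scores signals a
# proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
protected = rng.integers(0, 2, size=1000)              # never used in training
zip_income = protected * 1.5 + rng.normal(size=1000)   # correlated proxy
X = np.column_stack([zip_income, rng.normal(size=1000)])
y = (zip_income + rng.normal(size=1000) > 0.7).astype(int)

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

print("proxy AUC:", roc_auc_score(protected, scores))  # ~0.5 means no leakage
```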

Ethical models must be tested and any discrimination removed. Interpretable ML architectures allow extraction of the non-linear relationships that typically hide in the inner workings of most ML models. These non-linear relationships need to be identified and tested separately, because they are learned automatically during training, from data that is all too often implicitly full of societal biases.
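As one concrete form such testing can take, the sketch below computes a disparate impact ratio across two groups of model decisions; the four-fifths threshold is a common convention in fairness testing, assumed here for illustration:

```python
# A minimal disparate impact test: compare approval rates between two
# groups and flag the model if the ratio falls below the conventional
# four-fifths (0.8) threshold. Decisions and group labels are synthetic.
import numpy as np

def disparate_impact_ratio(approved, group):
    """Approval rate of the lower-rate group over the higher-rate group."""
    rate_0 = approved[group == 0].mean()
    rate_1 = approved[group == 1].mean()
    low, high = sorted([rate_0, rate_1])
    return low / high

approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # model decisions
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])     # protected group label

ratio = disparate_impact_ratio(approved, group)
print(f"ratio={ratio:.2f}", "FLAG for review" if ratio < 0.8 else "ok")
```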

Efficient AI simply means building it right the first time. To be efficient, ML models have to be built according to a company-wide development standard that mandates the use of standard, approved technology and IP components.

AI models are extremely complex, and production models are not a research playground. Any new technology must first go through extensive research cycles and be approved under the organization’s formal model standards, a process that encompasses approval of code, IP, and algorithm usage once an exhaustive study demonstrates safe use.

Boards of Directors must understand and enforce AI governance based on the four classic tenets of corporate governance: accountability, fairness, transparency, and responsibility.

The way to succeed with AI is by evangelizing Responsible AI throughout your organization. The AI evangelists on your team play an important role here. These scientists can expertly simplify and communicate complex data science solutions for each audience, whether it’s hardcore analytic skeptics, internal stakeholders, customers, or partners. Without these customer-facing experts, machine learning and AI remain science fiction concepts or, worse yet, are clumsily applied, which limits adoption and the consequent recognition of the technology’s benefits.

That’s it for my preview. See you at ODSC West, November 1-3 in San Francisco!

Scott Zoldi is Chief Analytics Officer at FICO, responsible for the analytic development of FICO’s product and technology solutions. While at FICO, Scott has authored more than 120 analytic patents, with 76 granted and 47 pending. He received his Ph.D. in theoretical and computational physics from Duke University.
