AI: 3 top risks and how to avoid them

Artificial intelligence systems and projects are becoming increasingly common as enterprises harness this emerging technology to automate decision-making and improve efficiency.

While its benefits are clear, AI can also introduce risk. How should you prepare if you are leading a large-scale AI project? Here are three of the most significant risks associated with AI and how to mitigate them.

1. Privacy

Worries about privacy go far beyond our clicking habits. Facial recognition AI is progressing rapidly in ways that raise ethical concerns about privacy and surveillance. For example, this technology could allow companies to track users' movements, or even emotions, without their consent. The White House recently proposed an “Artificial Intelligence Bill of Rights” to prevent such technologies from causing real harms that contradict core democratic values, including the fundamental right to privacy.

IT leaders need to let users know what data is being collected and obtain their consent to collect it. Beyond this, proper training and careful handling of data sets are essential to prevent data leaks and potential security breaches.
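
As a rough sketch of what consent-aware collection could look like, the snippet below refuses to record any data category a user has not explicitly opted into. The ConsentGate class and its category names are hypothetical illustrations, not part of any specific framework:

```python
class ConsentGate:
    """Refuses to collect any data category the user has not opted into.

    A minimal, hypothetical sketch; a real system would persist consent
    records and support audit logging.
    """

    def __init__(self):
        self._consents: dict[str, set[str]] = {}  # user_id -> granted categories

    def grant(self, user_id: str, category: str) -> None:
        """Record that the user has explicitly consented to a data category."""
        self._consents.setdefault(user_id, set()).add(category)

    def revoke(self, user_id: str, category: str) -> None:
        """Honor a withdrawal of consent."""
        self._consents.get(user_id, set()).discard(category)

    def collect(self, user_id: str, category: str, value):
        """Collect a value only if consent for its category is on record."""
        if category not in self._consents.get(user_id, set()):
            raise PermissionError(f"No consent from {user_id} for '{category}' data")
        return {"user_id": user_id, "category": category, "value": value}


gate = ConsentGate()
gate.grant("user-42", "location")
gate.collect("user-42", "location", (40.7, -74.0))    # allowed
# gate.collect("user-42", "facial_image", b"...")     # raises PermissionError
```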

Test the AI system to ensure it achieves its objectives without enabling unintended effects, such as allowing hackers to use fake biometric data to access sensitive information. Implement human oversight of your AI system, enabling you to stop or override its actions when necessary.
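
Here is one minimal pattern for that kind of oversight, assuming a hypothetical model object with a score(features) method that returns an approval probability; low-confidence decisions are deferred to a human review queue, and a kill switch lets an operator halt the system outright:

```python
class OverridableModel:
    """Wraps a model so humans can review low-confidence decisions or halt it.

    `model` is any object exposing a hypothetical score(features) -> float
    method; the class and method names here are illustrative, not a standard API.
    """

    def __init__(self, model, threshold=0.9):
        self.model = model
        self.threshold = threshold   # require this much confidence to act alone
        self.halted = False          # kill switch an operator can flip
        self.review_queue = []       # decisions deferred to a human

    def halt(self):
        """Human override: stop the system from making further decisions."""
        self.halted = True

    def decide(self, features):
        if self.halted:
            raise RuntimeError("System halted by human operator")
        confidence = self.model.score(features)  # probability of approval
        if abs(confidence - 0.5) < (self.threshold - 0.5):
            # Not confident enough either way: defer to a person.
            self.review_queue.append(features)
            return "PENDING_HUMAN_REVIEW"
        return "APPROVED" if confidence >= 0.5 else "REJECTED"
```

The exact deferral rule is a policy choice; the important property is that the system never acts autonomously on cases it is unsure about, and a human can always stop it.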

2. Opacity

Many systems that use machine learning are opaque, meaning it’s unclear exactly how they reach their decisions. One extensive study of mortgage data, for example, shows that the predictive AI tools used to approve or reject loans are less accurate for minority applicants. The opacity of the technology violates the “right to explanation” for applicants whose loans are denied.

When your AI/ML tool makes a significant decision that affects your users, make sure they are notified and can request a full explanation of why the decision was made.

[ Want best practices for AI workloads? Get the Ebook: Top considerations for building a production-ready AI/ML environment ]

Your AI team should also be able to trace the key factors leading to each decision and diagnose any errors along the way. Internal employee-facing and external customer-facing documentation should explain how and why your AI systems function as they do.
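
As one sketch of what such tracing could look like, the example below uses a linear model, where each feature's contribution to a decision is its coefficient times the feature's value. The loan-approval data and feature names here are synthetic stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic loan data: three made-up features and a label derived from them.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)
feature_names = ["income", "debt_ratio", "credit_history"]

model = LogisticRegression().fit(X, y)

def explain_decision(applicant: np.ndarray):
    """Return the decision plus each feature's signed contribution to it."""
    contributions = model.coef_[0] * applicant     # valid for linear models
    decision = "approved" if model.predict([applicant])[0] == 1 else "denied"
    # Sort by absolute impact so the key factors come first.
    trace = sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))
    return decision, trace

decision, trace = explain_decision(X[0])
print(f"Decision: {decision}")
for name, contribution in trace:
    print(f"  {name}: {contribution:+.3f}")
```

For non-linear models the same logging pattern applies; you would swap in a model-agnostic attribution method such as SHAP to compute the per-feature contributions.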

3. Bias and discrimination

A recent study shows that AI systems trained on biased data reinforce patterns of discrimination, from inadequate recruitment of minority patients in medical research to reduced engagement with scientists, and even the perception that minority patients are less interested in participating in research.

As an initial pulse check, ask yourself: If an unintended outcome were to occur, who or which group would it impact? Would it affect all users equally or just a particular group?

Take a close look at your historical data to evaluate whether bias has been introduced and, if so, whether it has been mitigated. An often overlooked factor is your development team’s diversity: more diverse teams tend to produce more equitable processes and results.
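
A simple first pass at that evaluation is to compare outcome rates across groups in your historical decisions. The sketch below computes a demographic-parity gap with pandas; the column names and the 10% alert threshold are illustrative choices, not a standard:

```python
import pandas as pd

# Hypothetical historical decisions; in practice, load your real records.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0,   1,   0],
})

# Approval rate per group.
rates = df.groupby("group")["approved"].mean()
print(rates)  # group A: 0.75, group B: 0.25

# Demographic-parity gap: a large spread between groups is a signal to
# investigate the training data and features, not proof of bias by itself.
gap = rates.max() - rates.min()
if gap > 0.10:
    print(f"Warning: approval-rate gap of {gap:.0%} across groups")
```

Parity on one metric does not guarantee fairness overall, so treat a check like this as a screening step rather than a verdict.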

To avoid unintended harm, ensure that all stakeholders from your AI/ML development, product, audit, and governance teams fully understand the high-level principles, values, and control plans that guide your AI projects. Obtain an independent evaluation to confirm that all projects align with those principles and values.

Which of these risks have you encountered? What steps and measures have you taken to prepare for them? We’d love to hear your thoughts.

[ Check out our primer on 10 key artificial intelligence terms for IT and business leaders: Cheat sheet: AI glossary. ]
