
Giving AI a Moral Compass

Artificial intelligence and machine learning technologies allow organizations to analyze and act on vast amounts of information quickly, but they also automate risk at the same speed and scale. A company that uses AI without carefully considering bias, privacy, and related issues can wrong large numbers of people. And when consumers, employees, and investors learn about these lapses, they don’t care that the resulting harm was unintentional.  

In his book, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, ethicist and former philosophy professor Reid Blackman offers a practical, step-by-step plan for developing, procuring, and deploying AI in a way that avoids, or at least mitigates, its ethical risks.

Blackman is the CEO and founder of digital ethical risk consultancy Virtue. We asked him to discuss what business leaders should be thinking about as they define the ethical boundaries for their AI systems and create the processes and quantitative metrics to support and enforce those boundaries.

Blackman: It’s becoming clear that AI exposes companies to serious risks – ethical and reputational risks trumpeted on the news and social media, as well as regulatory and legal risks. These are all bottom-line concerns. 

For example, in 2019, regulators investigated Optum, a health services company, for an AI that was allegedly recommending that doctors and nurses pay more attention to White patients than to sicker Black ones. Relatedly, in September 2022, California’s Attorney General asked hospitals in that state to provide an inventory of all their algorithm-based risk ratings of diseases and diagnoses. 

Business leaders who realize the risk of harming great swaths of people are being proactive about risks in their own organizations, even if they aren’t yet sure what those risks are. They don’t want to wrong people at scale! When you engage in unethical behavior through your use of AI, it never harms just one person. That, combined with regulations likely coming soon from the EU and Canada, is really moving leaders to take the issues seriously.

Blackman: The CEO and members of the board are responsible for the brand, so they should advocate for the program. But the chief data officer or chief analytics officer should own it, because they’re responsible for overseeing the production and development of machine learning algorithms and for the roles that do that work, like data scientists.

That said, the question of ethical AI isn’t a technology problem. There are quantitative techniques to help identify and mitigate risks, but they aren’t sufficient to solve a problem that’s actually about overall governance and who makes qualitative judgments about what. For example, AI that helps HR monitor employee e-mail can “read” every outbound message, evaluate the language for qualities like respectfulness and aggression, and flag e-mails that it scores as inappropriate. But it’s still HR’s responsibility to read the flagged e-mails in order to understand the context and decide whether the message justifies taking action against an individual.
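To make that division of labor concrete, here is a minimal sketch in Python. It is hypothetical – not a description of any vendor’s product or of Blackman’s own tooling: a toy keyword scorer stands in for the language model, and the assumed threshold only controls what gets queued for a person in HR to read in context.

```python
# Hypothetical sketch of a flag-for-review pipeline: the model never takes action,
# it only surfaces messages for HR to judge in context.
from dataclasses import dataclass

# Illustrative keyword weights standing in for a real aggression/respectfulness model.
AGGRESSIVE_TERMS = {"idiot": 0.6, "useless": 0.4, "or else": 0.5, "stupid": 0.6}


@dataclass
class OutboundEmail:
    sender: str
    recipient: str
    body: str


def aggression_score(email: OutboundEmail) -> float:
    """Toy scorer: sum the weights of aggressive phrases found in the body, capped at 1.0."""
    text = email.body.lower()
    return min(1.0, sum(w for term, w in AGGRESSIVE_TERMS.items() if term in text))


def screen_outbound(emails: list[OutboundEmail], threshold: float = 0.5) -> list[OutboundEmail]:
    """Return only the messages whose score crosses the threshold; humans review these."""
    return [e for e in emails if aggression_score(e) >= threshold]


if __name__ == "__main__":
    outbox = [
        OutboundEmail("a@corp.example", "b@corp.example", "Thanks for the update, looks great."),
        OutboundEmail("c@corp.example", "d@corp.example", "This is useless. Fix it or else."),
    ]
    for flagged in screen_outbound(outbox):
        # Queue for a human judgment call; the score alone is never grounds for action.
        print(f"Flag for HR review: {flagged.sender} -> {flagged.recipient}")
```

The design point is that the quantitative step only decides what a person looks at; the qualitative judgment about whether action is warranted stays with HR.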

The AI ethical risk program needs to include people in risk management, compliance, and cybersecurity, too, as well as someone from the general counsel’s office, and – in my strong opinion – an ethicist. That’s because an effective AI ethical risk program isn’t siloed; it’s woven through existing governance, policies, procedures, and so on. 

Blackman: There are lots of uses of AI that are legal but unethical. For instance, manipulative recommendation engines that suggest problematic content, like disinformation about election results. 

As for something being ethical but not legal, say you’re a bank. You’re subject to anti-discrimination laws that prohibit you from taking protected categories like race, ethnicity, or gender into account in your decisions about who gets a mortgage. As you try to build an AI to automate those decisions, you have to look at your model to see who gets mortgages across various populations and determine whether that distribution is fair. You realize that if you want to adjust your model so it scores mortgage applicants more fairly, you have to take those protected categories into account, which the law says you can’t do.

That may indicate that anti-discrimination law is outdated, but until it’s updated, it’s still the law, and your business has to comply with it. If you try to make your AI fairer, you increase your risk of being investigated, but if you don’t, you increase your risk of inadvertently discriminating and being investigated for different reasons. Companies need to establish a framework for thinking through these challenges.
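As an illustration of the audit Blackman describes, here is a minimal sketch with made-up numbers: it computes approval rates per group and a simple disparate-impact ratio. The group labels, the sample outcomes, and the four-fifths rule of thumb in the comment are assumptions for the example, not details from the interview.

```python
# Hypothetical fairness audit: the protected attribute is used only to evaluate the
# model's outcomes after the fact, not as an input to the lending decision itself.
from collections import defaultdict


def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group label used only for the audit, was the applicant approved?)."""
    counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {group: approved / total for group, (approved, total) in counts.items()}


def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group approval rate divided by the highest – one common screening metric."""
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    # Made-up audit log: 100 decisions per group.
    audit_log = (
        [("group_a", True)] * 62 + [("group_a", False)] * 38
        + [("group_b", True)] * 41 + [("group_b", False)] * 59
    )
    rates = approval_rates(audit_log)
    print(rates, f"disparate impact ratio = {disparate_impact_ratio(rates):.2f}")
    # A ratio well below ~0.8 is often treated as a signal to investigate further,
    # not as proof of discrimination – and fixing it may require the trade-off above.
```

The tension in the interview shows up here: the audit needs the protected attribute to measure the disparity, even though the model itself isn’t allowed to use it.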

Blackman: It’s a balancing act with no easy answers, but it’s also often less an AI issue than a straightforward business ethics issue. A company that needs goods manufactured has to decide whether it is willing to work with a supplier that’s allowed to run sweatshops, even if the AI points to that supplier as meeting all the production criteria.

You can’t expect data scientists to wrestle with complex ethical decisions while they wrangle complex data. They don’t have the training, experience, or knowledge to assess ethical impact. Instead, you need to involve other people – like your general counsel, a risk manager, or a civil rights expert – to train the people who are developing and using your AI to smell the ethical smoke and know whom to alert about it.

One company I know of established an “office of responsible AI” where teams developing and implementing AI can ask for help with ethically complex or questionable problems. Another company has one person who serves as an “ethical champion” and can elevate questions to a committee as needed. 

Blackman: Data scientists can’t math their way out of these problems. Every organization needs a cross-functional team to push things forward. And since you can’t address a problem you don’t understand, it’s good to start the ball rolling with a seminar or workshop that gives everyone on that team a deep and shared understanding of the ethical risks of AI.

From there, I recommend two steps. First, develop an AI ethics statement that actually does something. A lot of the time, companies produce such high-level statements that they can’t possibly guide anything – saying “we’re in favor of fairness” doesn’t explain what “fairness” looks like. So you need a statement articulating what your company’s values are and what would constitute a violation of those values.  

Second, once those standards are clear, it’s time to do a feasibility analysis. What does your organization look like now in relation to those standards? What existing governance structures, policies, and people can be leveraged to create an effective AI ethical risk program that meets your standards? What obstacles does your organization face in attempting to harmonize existing policies and practices with new ones to push your AI ethical risk efforts forward?  

The answers to these and related questions are essential if you’re going to efficiently and effectively create an AI ethical risk program that protects your brand and the people it affects.
