
The Data Daily

AI Has Resulted In “Ethical Issues” For 90% Of Businesses


A new report from Capgemini reveals that 90% of organizations are aware of at least one instance in which an AI system resulted in ethical issues for their business.

The report, titled “AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust”, finds that while digital and AI-enabled interactions with customers are on the rise as customers seek contactless interfaces amid the COVID-19 pandemic, systems are still being designed without due concern for ethical issues.

While two-thirds (68%) of consumers expect AI models to be fair and free of bias, Capgemini’s findings show that only 53% of organizations have a leader responsible for the ethics of AI systems, such as a Chief Ethics Officer, and just 46% have the ethical implications of their AI systems independently audited.

What’s more, 60% of organizations have attracted legal scrutiny, and 22% have faced customer backlash, because of decisions reached by AI systems.

This lagging implementation of ethical AI comes in the face of increased regulatory scrutiny. The European Commission has issued guidelines on the key ethical principles that should be used when designing AI applications, and in early 2020 the US Federal Trade Commission (FTC) called for “transparent AI”. It stated that when an AI-enabled system makes an adverse decision, such as declining an application for a credit card, the organization should show the affected consumer the key data points used in arriving at the decision and give them the right to change any incorrect information.

However, while 73% of organizations globally informed users about the ways in which AI decisions might affect them in 2019, that figure has since dropped to 59%.

Anne-Laure Thieullent, Artificial Intelligence and Analytics Group Offer Leader at Capgemini, comments: “Given its potential, the ethical use of AI should of course ensure no harm to humans, and full human responsibility and accountability for when things go wrong. But beyond that there is a very real opportunity for a proactive pursuit of environmental good and social welfare.”

“Instead of fearing the impacts of AI on humans and society, it is absolutely possible to direct AI towards actively fighting bias against minorities, even correcting human bias existing in our societies today.”

The report highlights seven key actions organizations should take in order to build an ethically robust AI system:

1. Clearly outline the intended purpose of AI systems and assess their overall potential impact.
2. Proactively deploy AI for the benefit of society and the environment.
3. Embed diversity and inclusion principles proactively throughout the lifecycle of AI systems.
4. Enhance transparency with the help of technology tools.
5. Humanize the AI experience and ensure human oversight of AI systems.
6. Ensure the technological robustness of AI systems.
7. Protect people’s individual privacy by empowering them and putting them in charge of AI interactions.
