
Weaving fairness, transparency and ethics into AI

In recent years, the business world has seen a rapid rise in the use of artificial intelligence (AI), from chatbots to image recognition and financial fraud detection.

Gartner has predicted that worldwide AI software revenue will reach $62 billion in 2022 alone, an increase of 21.3% from 2021. And there is huge potential for AI in the UK, with predictions that it could deliver a 22% boost to the UK economy by 2030. With such growth in the use of AI, it was only a matter of time before regulation was tightened (and rightly so) to ensure businesses and consumers remained protected.

Recently, the UK Government shared a new rulebook for AI innovation, intended to boost public trust in the technology. Among other things, its core principles require developers and users to ensure that AI is fair, secure, transparent, explainable and used safely.

With the Government's proposed rulebook on the horizon, how can businesses weave factors like fairness and transparency into their AI implementations?

The importance of transparency and fairness

Many of the principles shared by the Government are not just important for AI; these values are part of a larger umbrella of digital ethics. Digital ethics is crucial in creating a better digital world that is fair and inclusive to all.

Customers and everyday people interacting with AI want to be assured that the technology will not discriminate against them, and that it is safe to use; principles such as transparency and fairness therefore need to be communicated effectively. When thinking about transparency, organisations should consider sharing how their AI works, what data it uses, and other relevant information.

It is helpful when organisations can explain how AI affects people personally. For example, if a financial services institution uses AI to decide loan approvals, customers should be able to find out why they were accepted or rejected based on the information they shared. By explaining how their AI works and the reasoning behind its decisions, organisations are less likely to build in unconscious bias in the first place or, if that ship has already sailed, more likely to spot bias in the AI processes they run today.
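To make that concrete, the sketch below is a minimal, hypothetical example in Python: the features, data and linear model are all invented for illustration, not a description of any real lender's system. It shows one simple way to surface which factors drove a single automated loan decision.

```python
# Minimal sketch: explaining one loan decision from a linear model.
# The features, coefficients and data here are illustrative assumptions,
# not any real institution's scoring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [income (£k), debt-to-income %, years of credit history]
X = np.array([[55, 20, 10], [22, 45, 2], [70, 15, 15], [30, 50, 1],
              [48, 30, 7], [25, 60, 3], [90, 10, 20], [28, 55, 2]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

def explain_decision(applicant, feature_names):
    """Return the decision plus a crude per-feature contribution
    (coefficient x value), ranked by how strongly each factor pushed it."""
    contributions = model.coef_[0] * applicant
    decision = "approved" if model.predict([applicant])[0] == 1 else "declined"
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda pair: abs(pair[1]), reverse=True)
    return decision, ranked

names = ["income", "debt-to-income", "credit history"]
decision, reasons = explain_decision(np.array([26, 52, 2]), names)
print(decision)
for name, weight in reasons:
    print(f"  {name}: {weight:+.2f}")
```

Real systems typically use richer attribution methods, but even a crude ranking like this gives a customer something concrete to question or correct.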

When organisations provide information and explanation, people are reassured that decisions are fair and reliable.

Navigating issues around transparency and fairness is easier said than done. Taking fairness as an example: how should it be defined? And how should businesses choose between what is fair for the customer and what is fair for the business? In the eyes of a customer applying to a bank for a loan, the fair decision would be to approve the loan for the requested amount; the fair thing for the business would be to decline a customer who may be a risky investment.
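Part of the difficulty is that even the statistical definitions of fairness can disagree with one another. As a hypothetical illustration (all data and group labels below are invented), the following sketch measures the same batch of loan decisions against two common definitions, demographic parity (equal approval rates across groups) and equal opportunity (equal approval rates among applicants who would in fact repay), and the two can show different gaps:

```python
# Hypothetical illustration: two common statistical fairness checks
# can disagree on the same data. All records below are made up.
from collections import defaultdict

# (group, approved?, would_repay?) for a batch of past loan decisions
decisions = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", True, False),
    ("B", True, True), ("B", False, True), ("B", False, True), ("B", False, False),
]

def rate(flags):
    return sum(1 for f in flags if f) / len(flags) if flags else 0.0

by_group = defaultdict(list)   # all applicants per group
repayers = defaultdict(list)   # only applicants who would repay
for group, approved, would_repay in decisions:
    by_group[group].append(approved)
    if would_repay:
        repayers[group].append(approved)

for g in sorted(by_group):
    print(f"group {g}: approval rate = {rate(by_group[g]):.2f}, "
          f"approval rate among repayers = {rate(repayers[g]):.2f}")
```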

When it comes to transparency, it is important to share details of how AI works. However, some organisations worry that doing so might reveal their processes to competitors or, worse, to those looking to scam or defraud customers and game the system.

With concerns like those above, organisations must walk the fine line between the right thing for the customer and the right thing for the business.

Values and responsibilities of organisations using AI software

To find the right balance and address some of these challenges, organisations working with AI need to understand their values and responsibilities.

Arguably, an organisation's values are among the most important factors that determine how it uses AI technology. If a business values ethics, diversity and sustainability, then its structures, processes and technology are more likely to support those values.

For example, at Sopra Steria, our mission is to use technology for good to shape a better world. As a result, we developed a set of principles to guide our use of AI before the UK's rulebook was introduced. Our six principles are respect, equity, transparency, loyalty, mastery and security, and we have used them to guide our use of AI technology and ensure that clients and customers are treated fairly and ethically.

Values cannot be empty words; they must be put into action. When using AI, organisations are responsible for building it free from bias, and this is not a one-off task. AI must be monitored continuously to ensure it does not develop new biases or teach itself unwanted behaviour.
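In practice, that monitoring can start as something very simple: a scheduled job that recomputes outcome rates per group on each new batch of decisions and raises an alert when the gap drifts past an agreed threshold. The sketch below is a hypothetical example; the data layout, group labels and 0.10 threshold are all assumptions for illustration.

```python
# Sketch of an ongoing bias monitor: recompute per-group approval rates
# on each new batch of decisions and flag drift past a chosen threshold.
# The 0.10 threshold and the data layout are assumptions for illustration.

def approval_rates(batch):
    """batch: list of (group, approved) tuples -> {group: approval rate}"""
    totals, approved = {}, {}
    for group, ok in batch:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def check_batch(batch, max_gap=0.10):
    rates = approval_rates(batch)
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        # In production this would page a human reviewer, not just print.
        print(f"ALERT: approval-rate gap {gap:.2f} exceeds {max_gap:.2f}: {rates}")
    return gap

# Example weekly batch (hypothetical data)
check_batch([("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)])
```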

Organisations are also responsible for monitoring changes in regulation, policy and cultural attitudes if AI is to stay relevant and effective for the task it is designed to do. For example, HSBC has announced that it will stop collecting data on customers' gender. If this becomes commonplace, financial AI will need to be adapted to ensure that removing the data does not skew patterns and put people at a disadvantage.
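One way to test for that kind of skew is a proxy check: if the remaining features can still predict the removed attribute, a model trained on them can still discriminate by proxy. The rough sketch below uses fabricated data and an invented binary label standing in for the removed attribute; it is an illustration of the technique, not any institution's actual pipeline.

```python
# Proxy check sketch: if the remaining features can predict a removed
# attribute (here, an invented binary label), a model trained on them
# may still discriminate by proxy. All data below is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
removed_attribute = rng.integers(0, 2, size=200)  # dropped from the model
remaining = rng.normal(size=(200, 3))             # features still in use
remaining[:, 0] += removed_attribute              # feature 0 quietly leaks it

probe = LogisticRegression()
score = cross_val_score(probe, remaining, removed_attribute, cv=5).mean()
print(f"probe accuracy: {score:.2f}")  # well above 0.5 => proxy leakage
```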

Organisations are finding new and innovative ways to use AI to serve their customers and find more efficient ways of working. When implementing AI technology, businesses have a responsibility to weave in principles like transparency and safety, so there are fewer biases leading to discrimination.

Overarching values like ethics, diversity and sustainability must also be baked into the organisation’s core. Only then will AI be implemented in ways that promote a fair and safe society for all.

Andy Whitehurst is Chief Technology Officer at Sopra Steria UK. Sopra Steria, a European Tech leader recognised for its consulting, digital services and software development, helps its clients drive their digital transformation to obtain tangible and sustainable benefits. It provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a fully collaborative approach.
