
Breaking The AI Bias: How To Define Fairness To Deliver Fairer Models

At a basic level, AI learns from our history. Unfortunately, much of that history includes discrimination and inequality. It’s therefore essential that data practitioners account for this in their work, as AI built without acknowledging bias will replicate and even exacerbate that discrimination. This is particularly concerning when you consider the influence AI is already exerting over our lives.

McKinsey’s recent digital trust survey found that fewer than a quarter of executives are actively mitigating the risks posed by AI models, including risks around fairness and bias. Executives also reported incidents where AI produced outputs that were biased, incorrect, or did not reflect the organisation’s values. Advanced industries, including aerospace, advanced electronics, automotive and assembly, and semiconductors, were particularly affected by such issues: respondents from these sectors reported both AI incidents and data breaches more often than any other sector.

The same survey also found that consumers expect protection from such issues and that organisations that prioritise trust benefit financially. The research revealed that leaders in digital trust are more likely to see annual revenue and EBIT growth of at least 10 percent.

Mitigating bias through model development is only one part of dealing with fairness in AI. The process should involve stakeholders from all areas of the organisation, including legal experts and business leaders. From hiring to loan underwriting, fairness needs to be considered from all angles. Data practitioners have an opportunity to make a significant contribution to breaking the bias by mitigating discrimination risks during model development.

This series will outline the steps that practitioners can take to increase model fairness throughout each phase of the development process. We will start by discussing how practitioners can lay the groundwork for success by defining fairness and implementing bias detection at a project’s outset. We hope these articles offer useful guidance in helping you deliver fairer project outcomes.

A key step in approaching fairness is understanding how to detect bias in your data. Defining fairness at the project’s outset, and assessing the metrics used as part of that definition, will allow data practitioners to gauge whether the model’s outcomes are fair. This is a vital step to take at the start of any model development process, as each project’s definition will likely differ depending on the problem the eventual model is seeking to address.

It’s also crucial from the outset to define the groups your model should control for; this should include all relevant sensitive features, such as geography, jurisdiction, race, gender, and sexuality. For instance, males have historically studied STEM subjects more frequently than females, so if you use education as a covariate, you need to consider how discrimination by your model could be measured and mitigated. A minimal check for this kind of proxy relationship is sketched below.
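As an illustration, one quick way to spot when a covariate acts as a proxy for a sensitive feature is to cross-tabulate the two before any modelling begins. The sketch below uses Python with pandas; the DataFrame, column names (gender, studied_stem) and data are purely illustrative assumptions, not part of any particular dataset.

```python
import pandas as pd

# Hypothetical applicant data: a sensitive attribute ('gender') and a covariate
# we plan to feed the model ('studied_stem'). Both column names are illustrative.
df = pd.DataFrame({
    "gender": ["M", "F", "M", "F", "M", "F", "M", "F"],
    "studied_stem": [1, 0, 1, 1, 1, 0, 0, 0],
})

# Cross-tabulate the covariate against the sensitive attribute. A strong imbalance
# suggests the covariate may act as a proxy for the sensitive feature, so the model
# could still discriminate even if 'gender' is excluded from its inputs.
print(pd.crosstab(df["gender"], df["studied_stem"], normalize="index"))
```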

Next, we need to consider two principles of fairness assessment. The first is individual fairness, which holds that similar individuals should be treated similarly. The second is group fairness, which requires that outcomes do not systematically differ between members of one group and the broader population.

Let us consider some of the metrics used to detect existing bias concerning ‘protected groups’ (historically disadvantaged groups or demographics) in the data. We assume that the outcome of interest is binary, although most of the following metrics can be extended to multi-class and regression problems. A short computational sketch of the first three metrics follows the list.

● Elift — the ratio of positive historical outcomes for the protected group over the full population. The closer the ratio is to 1, the less bias has been detected.

● Impact ratio — the ratio of positive historical outcomes for the protected group over the general group. This is used in US courts, where decisions are deemed discriminatory if the rate of positive outcomes for the protected group is below 0.8 of that for the general group.

● Mean difference — measures the absolute difference between the mean historical outcome values of the protected and general groups. This can be used in regression problems as well as classification problems.

● Situation testing — a systematic research procedure in which pairs of individuals who belong to different demographic groups but are otherwise similar are given model-based outcomes, which are then compared to check for inherent discrimination in the decision-making process. While situation testing focuses on assessing a model’s outcomes, its results can also help reveal biases in the underlying data.
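To make these definitions concrete, here is a minimal sketch (in Python with NumPy) of how elift, the impact ratio, and the mean difference could be computed from historical data. The function name, input layout, and toy data are illustrative assumptions rather than a reference implementation.

```python
import numpy as np

def bias_metrics(outcome, protected):
    """Simple dataset bias metrics for a binary historical outcome.

    outcome   : iterable of 0/1 historical outcomes (1 = positive outcome)
    protected : boolean iterable, True where the record belongs to the protected group
    """
    outcome = np.asarray(outcome, dtype=float)
    protected = np.asarray(protected, dtype=bool)

    p_protected = outcome[protected].mean()    # positive rate in the protected group
    p_general = outcome[~protected].mean()     # positive rate in the general group
    p_population = outcome.mean()              # positive rate in the full population

    return {
        # Elift: protected-group rate over the full-population rate (closer to 1 = less bias)
        "elift": p_protected / p_population,
        # Impact ratio: protected-group rate over the general-group rate
        # (values below the 0.8 threshold described above signal potential discrimination)
        "impact_ratio": p_protected / p_general,
        # Mean difference: absolute gap between group means (also usable for regression outcomes)
        "mean_difference": abs(p_protected - p_general),
    }

# Toy example: 1 = positive historical outcome, 'protected' flags the group of interest
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
protected = [True, True, True, False, True, False, False, True, False, False]
print(bias_metrics(outcomes, protected))
```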

It’s also important to choose which model assessment metrics to use; these measure how fair your algorithm is by comparing historical outcomes with model predictions. There are many, but popular options include ‘demographic parity’, where the probability of a positive model prediction is independent of the group, and ‘equal opportunity’, where the true positive rate is similar across groups. There is also a set of AUC-based metrics, which can be more suitable in classification tasks: they are agnostic to the chosen classification threshold and give a more nuanced view of the different types of bias present in the data, which in turn makes them useful for examining intersectionality.
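As an illustration, the sketch below shows one way the demographic parity and equal opportunity gaps could be computed once model predictions are available. Again, the function and variable names and the toy data are illustrative assumptions, not part of any standard fairness library.

```python
import numpy as np

def model_fairness_metrics(y_true, y_pred, protected):
    """Demographic parity and equal opportunity gaps between two groups.

    y_true    : 0/1 historical outcomes
    y_pred    : 0/1 model predictions
    protected : boolean mask marking the protected group
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected, dtype=bool)

    def positive_rate(mask):
        # P(prediction = 1) within the group
        return y_pred[mask].mean()

    def true_positive_rate(mask):
        # P(prediction = 1 | actual outcome = 1) within the group
        return y_pred[mask & (y_true == 1)].mean()

    return {
        # Demographic parity: gap in positive prediction rates between groups (0 = parity)
        "demographic_parity_gap": abs(positive_rate(protected) - positive_rate(~protected)),
        # Equal opportunity: gap in true positive rates between groups (0 = equal opportunity)
        "equal_opportunity_gap": abs(true_positive_rate(protected) - true_positive_rate(~protected)),
    }

# Toy example comparing model predictions against historical outcomes for two groups
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
protected = [True, True, True, True, False, False, False, False]
print(model_fairness_metrics(y_true, y_pred, protected))
```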

These fairness definitions often conflict, and which one to use should be decided based on the problem at hand. For instance, it would not be desirable for a medical diagnostic tool to achieve demographic parity, as there are diseases that affect one sex more than the other. On the other hand, equal opportunity may be a suitable requirement, as it would imply that the model’s chance of correctly identifying those at risk is consistent across all groups.

It is extremely important that algorithmic fairness is not treated as an afterthought but considered at every stage of the modelling lifecycle. As data practitioners, we are in a fortunate position to break the bias by bringing AI fairness issues to light and working towards solving them. Ultimately, we cannot solve systemic discrimination or bias, but we can mitigate its impact with carefully designed models.

It’s also worth noting that AI, like most technology, often reflects its creators. If a certain demographic is under-represented in building AI, that demographic is more likely to be poorly served by it. With this technology becoming ever more ubiquitous, the need for diverse data teams is paramount. The models governing how our society functions in the future will need to be designed by groups that adequately reflect modern culture, or our society will suffer the consequences.

The next article in the series will discuss how you can start building out your approach to fairness for your specific use case by starting at the problem definition and dataset selection. Keep an eye on our social channels for when this is released.
