
Bias in AI and Machine Learning: Sources and Solutions

“Bias in AI” has long been a critical area of research and concern in machine learning circles, and awareness of it among general consumer audiences has risen over the past couple of years as knowledge of AI has spread. The term describes situations where ML-based data analytics systems discriminate against particular groups of people. These biases usually reflect widespread societal biases about race, gender, biological sex, age, and culture.

There are two types of bias in AI. The first is algorithmic AI bias, or “data bias,” which arises when algorithms are trained on biased data. The second is societal AI bias, which arises when our assumptions and norms as a society create blind spots or skewed expectations in our thinking. Societal bias feeds algorithmic bias by shaping the data that models learn from, and as biased algorithms are deployed at scale, their outputs reinforce those same societal assumptions, bringing things full circle.
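To make “data bias” concrete, here is a minimal, hypothetical sketch in Python (every name and number is illustrative, not drawn from any real system): a classifier trained on data where one group is underrepresented performs noticeably worse on that group, even though the training code itself contains nothing discriminatory.

```python
# Hypothetical illustration of data bias: a model trained mostly on
# "group A" generalizes poorly to the underrepresented "group B".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-class data; `shift` changes the decision boundary per group."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Group A dominates the training set; group B is underrepresented
# and follows a different underlying pattern.
Xa, ya = make_group(5000, shift=0.2)
Xb, yb = make_group(100, shift=1.5)
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])

model = LogisticRegression().fit(X, y)

# Evaluate on fresh samples from each group: accuracy on group B
# comes out markedly lower, because the model effectively learned group A.
for name, shift in [("group A", 0.2), ("group B", 1.5)]:
    Xt, yt = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
```

Nothing in this model is “prejudiced”; the skew lives entirely in the composition of the training data, which is why auditing dataset balance matters as much as auditing the algorithm.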

We often hear the argument that computers are impartial. Unfortunately, that’s not the case. Upbringing, experiences, and culture shape people, and they internalize certain assumptions about the world around them accordingly. AI is the same. It doesn’t exist in a vacuum but is built out of algorithms devised and tweaked by those same people – and it tends to “think” the way it’s been taught.

Take the PortraitAI art generator. You feed in a selfie, and the AI draws on its understanding of Baroque and Renaissance portraits to render you in the manner of the masters. The results are great – if you’re white. The catch is that most well-known paintings of that era depict white Europeans, resulting in a training dataset of primarily white faces and an algorithm that draws on that same dataset when painting your picture. BIPOC people using the app had less than stellar results.

(PortraitAI acknowledges the problem, saying: “Currently, the AI portrait generator has been trained mostly on portraits of people of European ethnicity. We’re planning to expand our dataset and fix this in the future. At the time of conceptualizing this AI, authors were not certain it would turn out to work at all. This generator is close to the state-of-the-art in AI at the moment. Sorry for the bias in the meanwhile. Have fun!”)

It’s not just scrappy startups putting racially biased algorithms out into the world. Twitter recently apologized after users called out its image-cropping algorithm for being racist. When you upload an image or link to Twitter, the algorithm is supposed to crop the image so that the preview centers on a human face.

It turns out that’s only reliably the case if you’re white (and more so if you’re male). If an image contained a Black person and a white person, the white person would be the one centered in the preview. Users tested various photos and image collages to see what the algorithm identified as a “face,” and time after time, the white face was foregrounded. Even animal and cartoon faces received preferential treatment over Black faces.
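Twitter reportedly used a learned saliency-prediction model rather than a simple face detector, but the mechanism is easy to sketch. Below is a hypothetical, simplified stand-in using OpenCV’s stock Haar-cascade face detector (an assumption chosen purely for illustration): whatever faces the detector fails to find, the crop fails to center, which is exactly how detection bias becomes cropping bias.

```python
# Simplified, hypothetical sketch of face-centered preview cropping.
# This is NOT Twitter's pipeline; it stands in an off-the-shelf detector
# to show how a detector's blind spots propagate into the crop.
import cv2

def crop_preview(path, out_w=600, out_h=335):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:
        # Center the crop on the largest detected face.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        cx, cy = x + w // 2, y + h // 2
    else:
        # No face found: fall back to the image center.
        cy, cx = img.shape[0] // 2, img.shape[1] // 2

    # Clamp the crop window to the image bounds.
    x0 = min(max(cx - out_w // 2, 0), max(img.shape[1] - out_w, 0))
    y0 = min(max(cy - out_h // 2, 0), max(img.shape[0] - out_h, 0))
    return img[y0:y0 + out_h, x0:x0 + out_w]

# preview = crop_preview("photo.jpg")  # hypothetical input file
```

Classical detectors like this one have well-documented accuracy gaps across skin tones and lighting conditions, so the “no face found” fallback runs far more often for some users than for others, and the preview quietly centers on someone else.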

(Some extra painful context on this one? The whole discussion started after a white user tweeted about how Zoom’s virtual-background feature kept erasing his Black colleague’s head. When he posted a screengrab to show the issue, Twitter’s photo-cropping bias showed its hand.)

Societal AI bias occurs when an AI behaves in ways that reflect social intolerance or institutional discrimination. At first glance, the algorithms and data themselves may appear unbiased, but their output reinforces societal biases.
