Scientists invented an AI to detect racist people

A team of researchers at the University of Virginia has developed an AI system that attempts to detect and quantify the physiological signs associated with racial bias. In other words, they’re building a wearable device that tries to identify when you’re having racist thoughts.

Up front: Nope. Machines can’t tell if a person is a racist. They also can’t tell if something someone has said or done is racist. And they certainly can’t determine if you’re thinking racist thoughts just by taking your pulse or measuring your O2 saturation levels with an Apple Watch-style device.

That being said, this is fascinating research that could pave the way to a greater understanding of how unconscious bias and systemic racism fit together.

How does it work?

The current standard for identifying implicit racial bias uses something called the Implicit Association Test. Basically, you look at a series of images and words and try to associate them with “light skin,” “dark skin,” “good,” and “bad” as quickly as possible. You can try it yourself here on Harvard’s website.
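If you’re curious how that scoring works in practice, here’s a rough sketch in Python. It’s a simplified take on the standard IAT D-score: responses tend to be faster when a pairing matches an existing mental association, so the test compares reaction times across the two pairings. The real scoring algorithm adds error penalties and per-block pooling, and the numbers below are made up for illustration.

```python
import statistics

def iat_d_score(congruent_rts, incongruent_rts):
    """Simplified IAT D-score from reaction times (in seconds).

    congruent_rts: latencies when the pairing matches the tested association.
    incongruent_rts: latencies when the pairing is reversed.
    A larger positive score means slower responses on incongruent trials,
    which the IAT interprets as a stronger implicit association.
    """
    pooled_sd = statistics.stdev(congruent_rts + incongruent_rts)
    return (statistics.mean(incongruent_rts) -
            statistics.mean(congruent_rts)) / pooled_sd

# Illustrative data: slightly slower responses on incongruent trials
congruent = [0.61, 0.58, 0.66, 0.63, 0.59]
incongruent = [0.72, 0.69, 0.75, 0.70, 0.74]
print(f"D-score: {iat_d_score(congruent, incongruent):.2f}")
```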

There’s also research indicating that learned threat responses to outsiders can often be measured physiologically. In other words, some people have a physical response to people who look different from them, and that response can be measured when it happens.

The UVA team combined these two ideas. They took a group of 76 volunteer students and had them take the Implicit Association Test while measuring their physiological responses with a wearable device.

Finally, the meat of the study involved developing a machine learning system to evaluate the data and make inferences. Can identifying a specific combination of physiological responses really tell us if someone is, for lack of a better way to put it, experiencing involuntary feelings of racism?
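You’d have to read the paper for the exact pipeline, but conceptually the setup looks something like the sketch below: one feature vector of wearable readings per participant, a binary label derived from their IAT result, and subject-wise leave-one-out cross-validation. The feature names, the classifier choice, and the random data here are all illustrative assumptions, not the authors’ actual method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# One row per participant: e.g. mean heart rate, heart-rate variability,
# skin conductance, and skin temperature recorded by the wearable.
# (Random placeholder values -- the real study used recorded signals.)
n_participants = 76
X = rng.normal(size=(n_participants, 4))

# Binary label derived from each participant's IAT result
# (e.g. D-score above or below some threshold).
y = rng.integers(0, 2, size=n_participants)

# Subject-wise leave-one-out cross-validation: train on 75 participants,
# predict the held-out one, and repeat for everyone.
model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {scores.mean():.2f}")
```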

According to the team’s research paper, the system was able to predict participants’ IAT results from their physiological data with 76% accuracy.

But that’s not necessarily the bottom line. 76% accuracy is a modest result by machine learning standards, especially without a simple baseline to compare it against. And flashing images of cartoon faces with different skin colors isn’t a 1:1 analogy for real-world interactions with people of different races.
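To see why raw accuracy needs context: if the labels are imbalanced, a trivial model that always guesses the majority class can already land within striking distance of a reported figure. The 50/26 split below is hypothetical, not from the paper.

```python
from collections import Counter

# Hypothetical split among 76 participants: 50 in one class, 26 in the other.
labels = [1] * 50 + [0] * 26

# A "model" that always predicts the majority class gets this many right:
majority_count = Counter(labels).most_common(1)[0][1]
print(f"Majority-class baseline: {majority_count / len(labels):.2%}")  # ~65.8%
```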

Quick take: Any ideas the general public might have about some kind of wand-style gadget for detecting racists should be dismissed outright. The UVA team’s important work has nothing to do with developing a wearable that pings you every time you or someone around you experiences implicit bias. It’s more about understanding the link between mental associations of dark skin with “badness” and the accompanying physiological manifestations.

In that respect, this novel research has the potential to help illuminate the subconscious thought processes behind, for example, radicalization and paranoia. It also has the potential to finally demonstrate how racism can be the result of unintended implicit bias from people who may even believe themselves to be allies.

You don’t have to feel like you’re being racist to actually be racist, and this system could help researchers better understand and explain these concepts.

But it absolutely doesn’t actually detect bias; it predicts it, and that’s different. And it certainly can’t tell if someone’s a racist. It shines a light on some of the physiological effects associated with implicit bias, much like a diagnostician might initially interpret a cough and a fever as being associated with certain diseases while still requiring further testing to confirm a diagnosis. This AI doesn’t label racism or bias; it just points to some of the associated side effects.

You can check out the whole pre-print paper here on arXiv.
