The Data Daily

People are scared of artificial intelligence for all the wrong reasons

People in Britain are more scared of the artificial intelligence embedded in household devices and self-driving cars than of the systems used for predictive policing or diagnosing diseases. That’s according to a survey commissioned by the Royal Society, billed as the first in-depth look at how the public perceives the risks and benefits of machine learning, a key AI technique.

Participants in the survey were most worried by the notion that a robot, acting on conclusions derived by machine learning, would cause them physical harm. Accordingly, machines in close proximity to their users, such as devices in the home and self-driving cars, were viewed as very risky. Giving AI a physical form in this way is known as “embodiment.” “The applications that involved embodiment… tended to appear as having more risk associated with them, due to concerns about physical harm,” the authors write.

Yet the risks posed by sprawling machine-learning systems are real—and they aren’t just about being run over by a self-driving car gone rogue. As the data scientist Cathy O’Neil has written, algorithms are dangerous when they operate at scale, their workings are kept secret, and they have destructive effects. Predictive policing is one example she offers of dangerous algorithms at work, its pernicious effects compounded by biased data sources.

Another area with potentially far-reaching implications is machine learning in healthcare. Cornell researcher Julia Powles pointed out that survey participants were shown a particularly striking example of machine learning in breast cancer diagnosis, a 2011 Stanford study, to illustrate the technology at work. After seeing that example, participants reported they were confident that misdiagnosis would not occur on a scale that would put society at risk. “This is as yet unproven for the general scenario of [computer-driven diagnoses],” Powles said.

This mismatch between perceived and potential risk is common with new technologies, said Alison Powell, an assistant professor at the London School of Economics who is studying the ethics of connected devices. “This is part of the overall problem of the communication of technological promise: new technologies are so often positioned as ‘personal’ that perception of systemic risk is impeded,” she said.

The Royal Society doesn’t have a quick fix. It recommends that machine-learning students take a class in ethics alongside their technical studies. It suggests the UK government fund public engagement with researchers. It advises against regulation aimed specifically at machine learning, in favor of oversight by the existing regulators for each industry. What’s beyond doubt is that machine learning is already all around us, and will grow in influence. Peter Donnelly, who chaired the group that put the report together, told journalists assembled for the launch of the research: “It can and probably will impact on many, many areas of our personal and leisure activities.”