The Ethical Threat of Artificial Intelligence in Practice

How do clinicians set rules that allow professionals "to make good use of technology to find patterns in complex data" but also "stop companies from extracting unethical value from those data?" asked Raymond Geis, MD.

Geis, from the American College of Radiology (ACR) Data Science Institute, is one of the authors of a joint statement that addresses the potential for unethical use of data, the bias inherent in datasets, and the limits of algorithmic learning. He moderated a session on the topic at the Radiological Society of North America (RSNA) 2019 Annual Meeting in Chicago.

There's a very big grey area between an absolute ethical approach to data use and decisions that are profit-driven, he told Medscape Medical News.

"Sitting on the sainthood side, I can stick to doing only what I see as good for my patients, maybe even taking vows of poverty," he said. "On the extreme other side, I'm doing things that put me in prison." The area in between is "the Goldilocks zone."

For example, "if I build an algorithm to show me which patients are not going to show up for appointments, I could take that information and send them an Uber or some other transportation to make sure they get to the hospital for their appointments. That would be on the ethical side. Everyone would say that's the right thing to do," he explained.

"I could also double-book that slot," he pointed out, but if "both people show up, somebody will have to wait." One could argue that "I'm being potentially more efficient and I don't have to penalize the patient who did not show up," said Geis. Perhaps I could even "charge less because I'm being more efficient with my machine."

But "when patients are vulnerable, it all comes back to trust," he said.

"Radiology's goal should be to derive as much value as possible from the ethical use of AI, yet resist the lure of extra monetary gain from unethical uses of radiology data and AI," the statement authors write.

Everyone — companies, researchers, and governments — is "starved" for the data that algorithms depend on, said Geis.

The problem is that datasets from academic institutions, where data are often collected, are generally very homogeneous.

"Conclusions from research using those data may not reflect results that would be extracted from the general population," he explained. "We see tremendous potential for bias," which is why more datasets are better.

But for patients, privacy is a big issue. A patient can request that their data remain private and not be used to train new algorithms, but this doesn't help in the long run.

"Diverse data are needed to train the algorithm to have relevant results," said Geis. What's more, for machine learning to be truly valuable, a whole lot of data, usually from several sources, is needed, and that means that data need to be "released out into the wild."

"And there's no data scientist willing to guarantee that it's safe once it's out there," he added.

Bias in the data can have damaging consequences, said Neil Tenenholtz, PhD, also from the ACR Data Science Institute.

What algorithms today are really good at doing is predicting trends, he explained. "But if you input data that are biased, your output will also inherently be biased."

Evidence of this was shown in a report — published in Science — that demonstrated racial bias in a public healthcare algorithm used to measure levels of risk in the population. Because less money was spent on black patients than on white patients with the same level of need, the algorithm, which used healthcare costs as a proxy for health needs, falsely concluded that black patients were not as sick as white patients.

"Reformulating the algorithm so that it no longer uses cost as a proxy for need eliminates the racial bias in predicting who needs extra care," the authors conclude.

"It's easy to develop a poor metric and make unintentional errors." said Tenenholz.

The questions might seem simple — Are your metrics fair? Are your data diverse? Were all the data acquired with the same technologies? — but they can have serious ethical implications, he explained.

As companies that use machine-learning technology push to innovate, software developers are under pressure to write algorithms quickly. "They are faced with deadlines and economic pressures," Tenenholtz explained. "It's critical that they know that cutting corners is not an option. By the time errors are caught, there could be repercussions."

"There's not really any one solution to this," but different specialists working together to refine algorithms and machine learning would help, he suggested.

"We need statisticians, machine-learning scientists, and physicians to offer different perspectives on the same problem," he explained. Someone familiar with the clinical workflow, for example, would know that a certain type of scan is only acquired in a certain scenario, and its use elsewhere could introduce bias into the results.

"This is critical," he said. "If we are leveraging single-hospital data, they are only valid for that hospital. If you can get multiple hospitals to collaborate, you're much more likely to ensure appropriate representation"

So now that we have a machine that is autonomously intelligent, Geis asked, "how can we be sure it's doing the right thing?"

Algorithms only understand what we teach them, he explained. "It won't tell you something's not right; it doesn't tell you, 'I don't know'."

For example, "it won't tell you that an x-ray is actually not of a human but of a dog," he said. Although this is an extreme example of the limitations of AI, the nuances are very real.

In radiology, there are a lot of diagnoses that are incredibly rare — "things you only see once a year but you don't want to miss," he said. "As a radiologist, if you see a specific combination, you need to think of that rare thing." But the way algorithms are built today, they are not going to tell you "this is disease x," he said. They will tell you if there's cancer or no cancer.

"All of these algorithms need an "I don't know" category, and we're not there yet, he said. We are already using a lot of these tools, but until we solve some issues, "patients are going to get hurt."

The statement on the ethics of AI — a joint effort of the ACR, the European Society of Radiology, the RSNA, the Society for Imaging Informatics in Medicine, the European Society of Medical Imaging Informatics, the Canadian Association of Radiologists, and the American Association of Physicists in Medicine — "was aspirational," said Geis.

It was developed to alert radiologists to ethical questions related to AI. "As soon as you talk about codes of conduct with all the societies, it becomes a slow process. We consciously did not do that," he explained.

Algorithms are "incredibly easy to build," he said. "When you're moving really quickly, you build something that works, you put it out there, and everybody starts using it."

"But for it to be reliable and robust enough to reuse, it would be better to make sure it's really working well," said Geis. "This is currently a big problem for radiology AI."

The next steps will be to develop codes of conduct and to discuss regulation. Right now, "we don't have the appropriate guardrails defined," he added.
