A.I. Took a Test to Detect Lung Cancer. It Got an A.

Artificial intelligence may help doctors make more accurate readings of CT scans used to screen for lung cancer.
A colored CT scan showing a tumor in the lung. Artificial intelligence was just as good, and sometimes better, than doctors in diagnosing lung tumors in CT scans, a new study indicates. Credit: Voisin/Science Source
By Denise Grady
May 20, 2019
Computers were as good as or better than doctors at detecting tiny lung cancers on CT scans, in a study by researchers from Google and several medical centers.
The technology is a work in progress, not ready for widespread use, but the new report, published Monday in the journal Nature Medicine, offers a glimpse of the future of artificial intelligence in medicine.
One of the most promising areas is recognizing patterns and interpreting images — the same skills that humans use to read microscope slides, X-rays, M.R.I.s and other medical scans.
By feeding huge amounts of data from medical imaging into systems called artificial neural networks, researchers can train computers to recognize patterns linked to a specific condition, like pneumonia, cancer or a wrist fracture that would be hard for a person to see. The system follows an algorithm, or set of instructions, and learns as it goes. The more data it receives, the better it becomes at interpretation.
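As a concrete, purely illustrative sketch of that training loop, the short Python example below uses the PyTorch library to show the basic pattern: a small network makes predictions on labeled examples, measures how wrong it was, and adjusts its internal weights, over and over. The data, network size and settings here are invented placeholders, not the system described in the study.

```python
# Illustrative only: a tiny supervised-learning loop in PyTorch, not the
# system described in the study. The inputs and labels are random
# placeholders; the point is the pattern of predict, measure error,
# adjust weights, repeated over many labeled examples.
import torch
import torch.nn as nn

torch.manual_seed(0)
inputs = torch.randn(200, 64)                      # 200 fake examples, 64 features each
labels = torch.randint(0, 2, (200,)).float()       # known answers: 1 = condition present

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):                            # repeated passes over the data
    optimizer.zero_grad()
    predictions = model(inputs).squeeze(1)         # the network's current guesses
    loss = loss_fn(predictions, labels)            # how far off those guesses are
    loss.backward()                                # work out how to adjust each weight
    optimizer.step()                               # apply the adjustments; the model "learns"
```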
The process, known as deep learning, is already being used in many applications, like enabling computers to understand speech and identify objects so that a self-driving car will recognize a stop sign and distinguish a pedestrian from a telephone pole. In medicine, Google has already created systems to help pathologists read microscope slides to diagnose cancer, and to help ophthalmologists detect eye disease in people with diabetes.
“We have some of the biggest computers in the world,” said Dr. Daniel Tse, a project manager at Google and an author of the journal article. “We started wanting to push the boundaries of basic science to find interesting and cool applications to work on.”
In the new study, the researchers applied artificial intelligence to CT scans used to screen people for lung cancer, which caused 160,000 deaths in the United States last year, and 1.7 million worldwide. The scans are recommended for people at high risk because of a long history of smoking.
Studies have found that screening can reduce the risk of dying from lung cancer. In addition to finding definite cancers, the scans can also identify spots that might later become cancer, so that radiologists can sort patients into risk groups and decide whether they need biopsies or more frequent follow-up scans to keep track of the suspect regions.
But the test has pitfalls: It can miss tumors, or mistake benign spots for malignancies and push patients into invasive, risky procedures like lung biopsies or surgery. And radiologists looking at the same scan may have different opinions about it.
The researchers thought computers might do better. They created a neural network, with multiple layers of processing, and trained it by giving it many CT scans from patients whose diagnoses were known: Some had lung cancer, some did not and some had nodules that later turned cancerous.
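The article does not describe the network's actual design, but a common way to build a multi-layer model for CT volumes is to stack 3-D convolutional layers that end in a single risk score. The sketch below, in PyTorch, is a hypothetical miniature of that idea; the layer counts, sizes and input resolution are placeholders, not Google's architecture.

```python
# A minimal, hypothetical sketch of a multi-layer network for CT volumes,
# written in PyTorch. The study's actual architecture is not given in the
# article; every size below is a placeholder.
import torch
import torch.nn as nn

class TinyCTNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),    # first layer: small 3-D patterns
            nn.ReLU(),
            nn.MaxPool3d(2),                              # shrink the volume
            nn.Conv3d(8, 16, kernel_size=3, padding=1),   # deeper layer: larger structures
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # summarize the whole scan
        )
        self.score = nn.Linear(16, 1)                     # single malignancy score

    def forward(self, volume):                            # volume: (batch, 1, depth, height, width)
        features = self.layers(volume).flatten(1)
        return self.score(features)                       # a logit; sigmoid turns it into a risk

model = TinyCTNet()
fake_scans = torch.randn(2, 1, 32, 64, 64)                # two made-up CT volumes
print(model(fake_scans).shape)                            # torch.Size([2, 1])
```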
Then, they began to test its diagnostic skill.
“The whole experimentation process is like a student in school,” Dr. Tse said. “We’re using a large data set for training, giving it lessons and pop quizzes so it can begin to learn for itself what is cancer, and what will or will not be cancer in the future. We gave it a final exam on data it’s never seen after we spent a lot of time training, and the result we saw on final exam — it got an A.”
Tested against 6,716 cases with known diagnoses, the system was 94 percent accurate. Pitted against six expert radiologists, when no prior scan was available, the deep learning model beat the doctors: It had fewer false positives and false negatives. When an earlier scan was available, the system and the doctors were neck and neck.
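As a rough illustration of how such a head-to-head comparison can be scored, the snippet below computes accuracy along with the false-positive and false-negative rates mentioned above, given a model's calls and the known diagnoses. The eight example cases are invented for demonstration, not data from the study.

```python
# A rough illustration of scoring a model against known diagnoses: accuracy,
# plus the false-positive and false-negative rates the article mentions.
# The eight example cases below are invented for demonstration.
import numpy as np

known = np.array([1, 0, 0, 1, 1, 0, 1, 0])         # 1 = cancer confirmed, 0 = no cancer
calls = np.array([1, 0, 1, 1, 0, 0, 1, 0])         # the model's reading of each scan

accuracy = (calls == known).mean()
false_positives = ((calls == 1) & (known == 0)).sum() / (known == 0).sum()
false_negatives = ((calls == 0) & (known == 1)).sum() / (known == 1).sum()

print(f"accuracy: {accuracy:.0%}")                 # share of cases read correctly
print(f"false-positive rate: {false_positives:.0%}")
print(f"false-negative rate: {false_negatives:.0%}")
```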
The ability to process vast amounts of data may make it possible for artificial intelligence to recognize subtle patterns that humans simply cannot see.
“It may start out as something we can’t see, but that may open up new lines of inquiry,” said Dr. Mozziyar Etemadi, a research assistant professor of anesthesiology at Northwestern University Feinberg School of Medicine, and an author of the study.
Dr. Eric Topol, director of the Scripps Research Translational Institute in La Jolla, Calif., who has written extensively about artificial intelligence in medicine, said, “I’m pretty confident that what they’ve found is going to be useful, but it’s got to be proven.” Dr. Topol was not involved in the study.
Given the high rate of false positives and false negatives on the lung scans as currently performed, he said, “Lung CT for smokers, it’s so bad that it’s hard to make it worse.”
Asked if artificial intelligence would put radiologists out of business, Dr. Topol said, “Gosh, no!”
The idea is to help doctors, not replace them.
“It will make their lives easier,” he said. “Across the board, there’s a 30 percent rate of false negatives, things missed. It shouldn’t be hard to bring that number down.”
There are potential hazards, though. A radiologist who misreads a scan may harm one patient, but a flawed A.I. system in widespread use could injure many, Dr. Topol warned. Before they are unleashed on the public, he said, the systems should be studied rigorously, with the results published in peer-reviewed journals, and tested in the real world to make sure they work as well there as they did in the lab.
And even if they pass those tests, they still have to be monitored to detect hacking or software glitches, he said.
Shravya Shetty, a software engineer at Google and an author of the study, said, “How do you present the results in a way that builds trust with radiologists?” The answer, she said, will be to “show them what’s under the hood.”
Another issue is: If an A.I. system is approved by the F.D.A., and then, as expected, keeps changing with experience and the processing of more data, will its maker need to apply for approval again? If so, how often?
The lung-screening neural network is not ready for the clinic yet.
“We are collaborating with institutions around the world to get a sense of how the technology can be implemented into clinical practice in a productive way,” Dr. Tse said. “We don’t want to get ahead of ourselves.”
Denise Grady has been a science reporter for The Times since 1998. She wrote “Deadly Invaders,” a book about emerging viruses. @nytDeniseGrady