A mobile-optimized artificial intelligence system for gestational age and fetal malpresentation assessment
Abstract
Background
Fetal ultrasound is an important component of antenatal care, but a shortage of adequately trained healthcare workers has limited its adoption in low-to-middle-income countries. This study investigated the use of artificial intelligence for fetal ultrasound in under-resourced settings.
Methods
Blind-sweep ultrasounds, consisting of six freehand ultrasound sweeps, were collected by sonographers in the USA and Zambia, and by novice operators in Zambia. We developed artificial intelligence (AI) models that used blind sweeps to predict gestational age (GA) and fetal malpresentation. AI GA estimates and standard fetal biometry estimates were compared to a previously established ground truth and evaluated for difference in absolute error. Fetal malpresentation (non-cephalic vs cephalic) was compared to sonographer assessment. On-device AI model run-times were benchmarked on Android mobile phones.
Results
Here we show that the GA estimation accuracy of the AI model is non-inferior to standard fetal biometry estimates (error difference −1.4 ± 4.5 days, 95% CI −1.8, −0.9, n = 406). Non-inferiority is maintained when blind sweeps are acquired by novice operators performing only two of six sweep motion types. Fetal malpresentation AUC-ROC is 0.977 (95% CI 0.949, 1.00, n = 613), with sonographers and novices achieving similar AUC-ROCs. Software run-times on mobile phones for both diagnostic models are less than 3 s after completion of a sweep.
Conclusions
The gestational age model is non-inferior to the clinical standard and the fetal malpresentation model has high AUC-ROCs across operators and devices. Our AI models are able to run on-device, without internet connectivity, and provide feedback scores to assist in upleveling the capabilities of lightly trained ultrasound operators in low resource settings.
Plain language summary
Despite considerable progress in maternal healthcare, maternal and perinatal deaths remain high in low-to-middle-income countries. Fetal ultrasound is an important component of antenatal care, but a shortage of adequately trained healthcare workers has limited its adoption. We developed and validated an automated system that enables lightly trained community healthcare providers to conduct ultrasound examinations. Our approach uses artificial intelligence to automatically interpret ultrasound video acquired by sweeping an ultrasound device across the patient’s abdomen, a procedure that can easily be taught to non-experts. Our system consists of a low-cost battery-powered ultrasound device and a smartphone, and can operate without internet connectivity or other infrastructure, making it suitable for deployment in low-resource settings. The accuracy of our method is on par with existing clinical standards. Our approach has the potential to improve access to ultrasound in low-resource settings.
Introduction
Despite considerable progress in maternal healthcare in recent decades, maternal and perinatal deaths remain high, with 295,000 maternal deaths during and following pregnancy and 2.4 million neonatal deaths each year. The majority of these deaths occur in low-to-middle-income countries (LMICs)1,2,3. The lack of antenatal care and limited access to facilities that can provide lifesaving treatment for the mother, fetus, and newborn contribute to inequities in quality of care and outcomes in these regions4,5.
Obstetric ultrasound is an important component of quality antenatal care. The WHO recommends one routine early ultrasound scan for all pregnant women, but up to 50% of women in developing countries receive no ultrasound screening during pregnancy6. Fetal ultrasounds can be used to estimate gestational age (GA), which is critical for scheduling and planning screening tests throughout pregnancy and interventions for pregnancy complications such as preeclampsia and preterm labor. Fetal ultrasounds later in pregnancy can also be used to diagnose fetal malpresentation, which affects 3–4% of pregnancies at term and is associated with trauma-related injury during birth, perinatal mortality, and maternal morbidity7,8,9,10,11.
Though ultrasound devices have traditionally been costly, the recent commercial availability of low-cost, battery-powered handheld devices could greatly expand access12,13,14. However, current ultrasound training programs require months of supervised evaluation as well as indefinite continuing education visits for quality assurance13,14,15,16,17,18,19. GA estimation and diagnosis of fetal malpresentation require expert interpretation of anatomical imagery during the ultrasound acquisition process. GA estimation via clinical standard biometry20 requires expertly locating fetal anatomical structures and manually measuring their physical sizes (head circumference, abdominal circumference, and femur length, among others) in precisely acquired images. To address these barriers, prior studies have introduced a “blind sweep” protocol, in which fetal ultrasounds are acquired by minimally trained operators via six predefined freehand sweeps over the abdomen21,22,23,24,25,26,27. While blind-sweep protocols simplify the ultrasound acquisition process, new methods are required for interpreting the resulting imagery. AI-based interpretation may offer a promising direction for generating automated clinical estimates from blind-sweep video sequences.
In this study, we used two prospectively collected fetal ultrasound datasets to develop AI models that estimate gestational age and detect fetal malpresentation, while demonstrating key considerations for use by novice operators in LMICs: (a) validating that it is possible to build blind-sweep GA and fetal malpresentation models that run in real-time on mobile devices; (b) evaluating generalization of these models to minimally trained ultrasound operators and low-cost ultrasound devices; (c) describing a modified two-sweep blind-sweep protocol that simplifies novice acquisition; (d) adding feedback scores that provide real-time information on sweep quality.
Methods
Blind-sweep procedure
Blind-sweep ultrasounds consisted of a fixed number of predefined freehand ultrasound sweeps over the gravid abdomen. Certified sonographers completed up to 15 sweeps. Novice operators (“novices”), with 8 h of blind-sweep ultrasound acquisition training, completed six sweeps. Evaluation of both sonographers and novices was limited to a set of six sweeps: three vertical and three horizontal (Fig. 1b).
Fig. 1: Development of an artificial intelligence system to acquire and interpret blind-sweep ultrasound for antenatal diagnostics.
a Datasets were curated from sites in Zambia and the USA and include ultrasound acquired by sonographers and midwives. Ground truth for gestational age was derived from the initial exam as part of clinical practice. An artificial intelligence (AI) system was trained to identify gestational age and fetal malpresentation and was evaluated by comparing the accuracy of AI predictions with the accuracy of clinical standard procedures. The AI system was developed using only sonographer blind-sweep data, and its generalization to novice users was tested on midwife data. The design of the AI system considered suitability for deployment in low-to-middle-income countries in three ways: first, the system interprets ultrasound from low-cost portable ultrasound devices; second, near real-time interpretation is available offline on mobile phones; and third, the AI system produces feedback scores that can be shown to users. b Blind-sweep ultrasound acquisition procedure. The procedure can be performed by novices with a few hours of ultrasound training. While the complete protocol involves six sweeps, a set of two sweeps (M and R) was found to be sufficient for maintaining the accuracy of gestational age estimation.
Fetal age machine learning initiative (FAMLI) and novice user study datasets
Data were analyzed from the Fetal Age Machine Learning Initiative (FAMLI) cohort, which collected ultrasound data at study sites in Chapel Hill, NC (USA), and from the Novice User Study, which collected data in Lusaka, Zambia (Fig. 1a)27. The goal of these prospectively collected datasets was to enable the development of technology to estimate gestational age28. Data collection occurred between September 2018 and June 2021. All study participants provided written informed consent, and the research was approved by the UNC institutional review board (IRB #18-1848) and the biomedical research ethics committee at the University of Zambia. Blind-sweep data were collected with standard ultrasound devices (SonoSite M-Turbo or GE Voluson) as well as a low-cost portable ultrasound device (Butterfly iQ). Studies included standard clinical assessments of GA20 and fetal malpresentation performed by a trained sonographer using a standard ultrasound device.
Algorithm development
We developed two deep learning neural network models to predict GA and fetal malpresentation. Our models generated diagnostic predictions directly from ultrasound video: sequences of image pixel values were the input and an estimate of the clinical quantity of interest was the output. The GA model produced an estimate of age, measured in days, for each blind-sweep video sequence. The GA model additionally provided an estimate of its confidence in the estimate for a given video sequence. No intermediate fetal biometric measurements were required during training or generated during inference. The fetal malpresentation model predicted a probability score between 0.0 and 1.0 for whether the fetus is in noncephalic presentation. See Supplementary Materials for a technical discussion and details regarding model development.
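For orientation, the sketch below shows what a video-in, scalar-out model of this kind can look like. It is a minimal illustrative stand-in, not the authors’ architecture: the layer choices, input resolution, and the softplus confidence head are all assumptions, and the actual network is described in the Supplementary Materials.

```python
# Minimal sketch of a video-to-scalar regressor with a confidence head.
# Illustrative only: the paper's actual architecture, input size, and
# training procedure are not reproduced here.
import tensorflow as tf

def build_ga_regressor(frames=16, height=96, width=96):
    video = tf.keras.Input(shape=(frames, height, width, 1))     # grayscale sweep clip
    x = tf.keras.layers.Conv3D(16, 3, activation="relu")(video)  # spatiotemporal features
    x = tf.keras.layers.MaxPool3D(2)(x)
    x = tf.keras.layers.Conv3D(32, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling3D()(x)
    ga_days = tf.keras.layers.Dense(1, name="ga_days")(x)        # GA estimate in days
    confidence = tf.keras.layers.Dense(1, activation="softplus",
                                       name="confidence")(x)     # per-sequence confidence
    return tf.keras.Model(video, [ga_days, confidence])

model = build_ga_regressor()
model.summary()
```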
In the USA, the ground truth GA was determined for each participant based on the “best obstetric estimate,” as part of routine clinical care, using procedures recommended by the American College of Obstetricians and Gynecologists (ACOG)29. The best obstetric estimate combines information from the last menstrual period (LMP), GA derived from assisted reproductive technology (if applicable), and fetal ultrasound anatomic measurements. In Zambia, only the first fetal ultrasound was used to determine the ground truth GA, as the LMP in this setting was considered less reliable because patients often presented for care later in pregnancy.
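Carrying the dating-exam ground truth forward to later visits is simple date arithmetic; the following sketch illustrates the idea (function and variable names are ours, not the study’s):

```python
from datetime import date

def ga_at_visit(ga_days_at_dating_exam: int, dating_exam_date: date,
                visit_date: date) -> int:
    """Project the ground-truth GA established at the dating exam onto a later visit."""
    return ga_days_at_dating_exam + (visit_date - dating_exam_date).days

# Example: GA of 84 days (12 weeks) at a dating exam on 2020-01-15
print(ga_at_visit(84, date(2020, 1, 15), date(2020, 4, 15)))  # 175 days (25 weeks)
```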
The GA model was trained on sonographer-acquired blind sweeps (up to 15 sweeps per patient) as well as sonographer-acquired “fly-to” videos, which capture the five to ten seconds of imaging immediately before the sonographer acquires a standard fetal biometry image. The fetal malpresentation model was trained only on blind sweeps. For each training set case, fetal malpresentation was specified as one of four possible values by a sonographer (cephalic, breech, transverse, oblique) and dichotomized to “cephalic” vs “noncephalic”. This dichotomization is clinically justified since cephalic cases are considered normal while all noncephalic cases require further medical attention.
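The dichotomization amounts to a simple label mapping; a minimal sketch (names are ours):

```python
PRESENTATION_LABELS = {"cephalic", "breech", "transverse", "oblique"}

def dichotomize(presentation: str) -> int:
    """Map the four sonographer labels to the binary target (1 = noncephalic)."""
    if presentation not in PRESENTATION_LABELS:
        raise ValueError(f"unknown presentation: {presentation}")
    return 0 if presentation == "cephalic" else 1
```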
Our analysis cohort included all pregnant women in the FAMLI and Novice User Study datasets who had the necessary ground truth information for gestational age and fetal presentation from September 2018 to January 2021. Study participants were assigned at random to one of three dataset splits: train, tune, or test. We used the following proportions: 60% train/20% tune/20% test for study participants who did not receive novice sweeps, and 10% tune/90% test for participants who received novice sweeps. The tuning set was used for optimizing machine learning training hyperparameters and selecting a classification threshold probability for the fetal malpresentation model. This threshold was chosen to yield equal noncephalic specificity and sensitivity on the tuning set, blinded to the test sets. None of the blind-sweep data collected by the novices were used for training.
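A sketch of the two steps just described: patient-level split assignment with the stated proportions, and selection of the malpresentation threshold where noncephalic sensitivity equals specificity on the tuning set. The deterministic hashing scheme and the use of scikit-learn are our assumptions:

```python
import hashlib
import numpy as np
from sklearn.metrics import roc_curve

def assign_split(patient_id: str, received_novice_sweeps: bool, seed: int = 0) -> str:
    """Deterministic patient-level split with the proportions used in the study."""
    digest = hashlib.sha256(f"{seed}:{patient_id}".encode()).hexdigest()
    u = int(digest[:8], 16) / 0xFFFFFFFF        # uniform value in [0, 1]
    if received_novice_sweeps:                  # 10% tune / 90% test
        return "tune" if u < 0.10 else "test"
    if u < 0.60:                                # 60% train / 20% tune / 20% test
        return "train"
    return "tune" if u < 0.80 else "test"

def equal_error_threshold(y_true, y_score) -> float:
    """Tuning-set threshold where noncephalic sensitivity equals specificity."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    i = np.argmin(np.abs(tpr - (1.0 - fpr)))    # sensitivity = TPR, specificity = 1 - FPR
    return float(thresholds[i])
```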
Cases consisted of multiple blind-sweep videos, and our models generated predictions independently for each video sequence within the case. For the GA model, each blind sweep was divided into multiple video sequences. For the fetal malpresentation model, video sequences corresponded to a single complete blind sweep. We then aggregated the predictions to generate a single case-level estimate for either GA or fetal malpresentation (described further in the Mobile Device Inference section in supplementary materials).
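The exact aggregation rule is given in the Mobile Device Inference section of the supplementary materials; as a plausible sketch only, assuming confidence-weighted averaging for GA and a simple mean over sweeps for malpresentation:

```python
import numpy as np

def aggregate_ga(estimates_days, confidences) -> float:
    """Case-level GA as a confidence-weighted average of per-sequence estimates
    (the weighting scheme here is an assumption, not the paper's published rule)."""
    return float(np.average(np.asarray(estimates_days, dtype=float),
                            weights=np.asarray(confidences, dtype=float)))

def aggregate_malpresentation(sweep_probs) -> float:
    """Case-level noncephalic probability as the mean over complete sweeps
    (also an assumption)."""
    return float(np.mean(sweep_probs))
```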
Evaluation
The evaluation was performed on the FAMLI (sonographer-acquired) and Novice User Study (novice-acquired) datasets. Test sets consisted of patients independent of those used for AI development (Fig. 1a). For our GA model evaluation, the primary FAMLI test set comprised 407 women across 657 study visits in the USA. A second test set, the Novice User Study, included 114 participants across 140 study visits in Zambia. Novice blind-sweep studies were performed exclusively at Zambian sites. Sweeps collected with standard ultrasound devices were available for 406 of 407 participants in the sonographer-acquired test set and 112 of 114 participants in the novice-acquired test set. Sweeps collected with the low-cost device were available for 104 of 407 participants in the sonographer-acquired test set and 56 of 114 participants in the novice-acquired test set. Analyzable data from the low-cost device became available later during the study, and this group of patients is representative of the full patient set. We randomly selected one study visit per patient for each analysis group to avoid combining correlated measurements from the same patient. For our fetal malpresentation model, the test set included 613 patients from the sonographer-acquired and novice-acquired datasets, including 65 instances of noncephalic presentation (10.6%). For each patient, the last study visit of the third trimester was included. Of note, there are more patients in the malpresentation test set because its ground truth does not depend on a prior visit. The disposition of study participants is summarized in STARD diagrams (Supplementary Fig. 1) and Supplementary Table 1.
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Results
Mobile-device-optimized AI gestational age and fetal malpresentation estimation
We calculated the mean difference in absolute error between GA model estimates and gestational age estimates determined by standard fetal biometry measurements using imaging from traditional ultrasound devices operated by sonographers20. The reference ground truth GA was established based on an initial patient visit, as described in Methods. When conducting pairwise statistical comparisons between blind-sweep and standard fetal biometry absolute errors, we established an a priori criterion for non-inferiority, which was confirmed if the blind-sweep mean absolute error (MAE) was less than 1.0 day greater than the standard fetal biometry MAE. Statistical estimates and comparisons were computed after randomly selecting one study visit per patient for each analysis group, to avoid combining correlated measurements from the same patient.
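In code, the paired non-inferiority check might look like the following sketch (the t-based confidence interval is our assumption; the paper’s exact statistical procedure may differ):

```python
import numpy as np
from scipy import stats

def noninferiority_test(abs_err_ai, abs_err_biometry,
                        margin_days=1.0, alpha=0.05):
    """Paired comparison of per-patient absolute GA errors (one visit per patient)."""
    d = np.asarray(abs_err_ai, float) - np.asarray(abs_err_biometry, float)
    mean = d.mean()
    se = d.std(ddof=1) / np.sqrt(d.size)
    lo, hi = stats.t.interval(1 - alpha, df=d.size - 1, loc=mean, scale=se)
    # Non-inferior if the upper CI bound on the error difference stays
    # below the pre-specified 1.0-day margin.
    return mean, (lo, hi), bool(hi < margin_days)
```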
We conducted a supplemental analysis of GA model prediction error with mixed effects regression on all test data, combining sonographer-acquired and novice-acquired test sets. Fixed effect terms accounted for the ground truth GA, the type of ultrasound machine used (standard vs. low cost), and the training level of the ultrasound operator (sonographer vs. novice). All patient studies were included in the analysis, and random effects terms accounted for intra-patient and intra-study effects.
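Such a model could be specified as follows with statsmodels; the column names, the random-intercept structure, and the synthetic data are all our assumptions, included only so the snippet runs on its own:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data so the snippet is self-contained; a real analysis
# would use the study's per-visit prediction errors.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "pred_error": rng.normal(0.0, 4.0, n),                 # signed GA error, days
    "ga_truth_days": rng.integers(100, 280, n),
    "device": rng.choice(["standard", "low_cost"], n),
    "operator": rng.choice(["sonographer", "novice"], n),
    "patient_id": rng.integers(0, 80, n),
    "study_id": rng.integers(0, 3, n),                     # visit index within patient
})

model = smf.mixedlm(
    "pred_error ~ ga_truth_days + C(device) + C(operator)",  # fixed effects
    data=df,
    groups=df["patient_id"],                                 # random intercept per patient
    re_formula="1",
    vc_formula={"study": "0 + C(study_id)"},                 # intra-study variance component
)
print(model.fit().summary())
```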
GA analysis results are summarized in Table 1. The MAE for the GA model estimate with blind sweeps collected by sonographers using standard ultrasound devices was significantly lower than the MAE for the standard fetal biometry estimates (mean difference −1.4 ± 4.5 days, 95% CI −1.8, −0.9 days). There was a trend toward increasing error with gestational week for both the blind-sweep and standard fetal biometry procedures (Fig. 2a).
Table 1 Gestational age estimation.
These authors contributed equally: Ryan G. Gomes, Bellington Vwalika, Chace Lee, Angelica Willis.
These authors jointly supervised this work: Jeffrey S. A. Stringer, Shravya Shetty.
Authors and Affiliations
Google Health, Palo Alto, CA, USA
Ryan G. Gomes, Chace Lee, Angelica Willis, Marcin Sieniek, Christina Chen, James A. Taylor, Scott Mayer McKinney, Charles Lau, Terry Spitz, T. Saensuksopa, Kris Liu, Tiya Tiyasirichokchai, Jonny Wong, Rory Pilgrim, Akib Uddin, Greg Corrado, Lily Peng, Katherine Chou, Daniel Tse & Shravya Shetty
Department of Obstetrics and Gynaecology, University of Zambia School of Medicine, Lusaka, Zambia
Bellington Vwalika, Margaret P. Kasaro & William Goodnight III
Department of Obstetrics and Gynecology, University of North Carolina School of Medicine, Chapel Hill, NC, USA
Bellington Vwalika, Joan T. Price, Elizabeth M. Stringer, Benjamin H. Chi & Jeffrey S. A. Stringer
UNC Global Projects—Zambia, LLC, Lusaka, Zambia
Joan T. Price, Margaret P. Kasaro, Ntazana Sindano, Benjamin H. Chi & Jeffrey S. A. Stringer
Google Research, Mountain View, CA, USA
George E. Dahl & Justin Gilmer
Contributions
R.G.G., C.Lee, A.W., and M.S. developed and evaluated the artificial intelligence models. J.S.A.S., B.V., J.T.P., M.P.K., E.M.S., N.S., W.G. III, and B.H.C. developed and executed the FAMLI ultrasound data collection study. J.A.T. and S.M.M. contributed to the interpretation of model evaluation results. R.G.G. and C.C. drafted the manuscript with contributions from C.Lee, A.W., S.S., J.S.A.S., B.H.C., J.A.T., D.T., A.U., K.C., and S.M.M. G.E.D., J.G., and T.Sp. provided technical advice during the development of the artificial intelligence model. C.Lau provided interpretation of ultrasound imagery during model development. T.Sa. and K.L. conducted sonographer and patient experience research during the FAMLI data collection study. T.T. and T.Sa. created the original image elements used in Fig. 1. J.W. and R.P. coordinated collaboration between Google Inc., University of North Carolina, and the Bill and Melinda Gates Foundation. S.S., J.S.A.S., D.T., A.U., G.C., L.P., and K.C. established the research goals and direction of the study.