AI can now detect COVID-19 in lung ultrasound images

Artificial intelligence can detect COVID-19 in lung ultrasound images in much the same way that facial recognition software can detect a face in a crowd, new research shows.

The findings advance AI-powered medical diagnostics and bring healthcare professionals closer to being able to quickly diagnose patients with COVID-19 and other lung diseases with algorithms that analyze ultrasound images to identify signs of disease.

The findings, recently published in Communications Medicine, are the culmination of an effort that began early in the pandemic, when doctors needed tools to quickly evaluate large numbers of patients in overwhelmed emergency rooms.

“We developed this automated screening tool to assist clinicians in emergency settings with large caseloads of patients who need to be diagnosed quickly and accurately, such as in the early stages of the pandemic,” said senior author Muyinatu Bell, the John C. Malone Associate Professor of Electrical and Computer Engineering, Biomedical Engineering, and Computer Science at Johns Hopkins University. “Potentially, we also want to have wireless devices that patients can use at home to monitor the progression of COVID-19.”

The tool could also inform the development of wearable devices that track diseases such as congestive heart failure, which can cause fluid overload in patients’ lungs, not unlike COVID-19, said co-author Tiffany Fong, an assistant professor of emergency medicine at Johns Hopkins Medicine.

“What we’re doing here with AI tools is the next big frontier for point-of-care,” Fong said. “An ideal use case would be wearable ultrasound patches that monitor fluid buildup and let patients know when they need a medication adjustment or when they should see a doctor.”

The AI analyzes lung ultrasound images to detect features known as B-lines, which appear as bright vertical abnormalities and indicate inflammation in patients with lung complications. The tool was developed using a combination of computer-generated images and real ultrasounds of patients, including some who sought care at Johns Hopkins.
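To make the B-line idea concrete, here is a deliberately crude illustration, not the authors' method: since B-lines appear as bright vertical streaks, a simple screen can flag image columns whose mean brightness is an outlier. The function name, threshold, and synthetic frame below are all assumptions for demonstration only.

```python
import numpy as np

def flag_bright_columns(image: np.ndarray, z_thresh: float = 2.0) -> np.ndarray:
    """Return indices of columns whose mean brightness lies more than
    z_thresh standard deviations above the average column brightness."""
    col_means = image.mean(axis=0)
    z = (col_means - col_means.mean()) / (col_means.std() + 1e-9)
    return np.where(z > z_thresh)[0]

# Synthetic 64x64 "ultrasound frame": background speckle plus one
# artificial bright vertical streak at column 20, standing in for a B-line.
frame = np.random.default_rng(0).normal(0.2, 0.05, size=(64, 64))
frame[:, 20] += 1.0
print(flag_bright_columns(frame))  # -> [20]
```

A real detector must cope with curved probe geometry, speckle noise, and streaks that fade with depth, which is why the team trained a deep network on physics-based simulations rather than relying on hand-tuned rules like this one.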

“We had to model the physics of ultrasound and acoustic wave propagation well enough to get credible simulated images,” Bell said. “We then had to go a step further to train our computer models to use this simulated data to reliably interpret real scans of patients with affected lungs.”

Early in the pandemic, scientists struggled to use artificial intelligence to assess indicators of COVID-19 in lung ultrasound images due to a lack of patient data and because they were just beginning to understand how the disease manifests in the body, Bell said.

Bell’s team developed software that can learn from a combination of real and simulated data and then discern abnormalities in ultrasound scans that indicate a person has contracted COVID-19. The tool is a deep neural network, a type of AI designed to behave like the interconnected neurons that allow the brain to recognize patterns, understand speech, and perform other complex tasks.
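The shape of such a classifier can be sketched in a few lines. This is a minimal toy, not the published network: randomly initialized weights stand in for parameters that training on mixed real and simulated frames would learn, and the layer sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    # Standard nonlinearity between layers of a neural network
    return np.maximum(x, 0.0)

# Toy two-layer network: 64x64 grayscale frame -> hidden layer -> 2 scores
# ("B-lines present" vs "absent"). Weights here are random placeholders.
W1 = rng.normal(0, 0.01, size=(64 * 64, 32))
W2 = rng.normal(0, 0.01, size=(32, 2))

def forward(frames: np.ndarray) -> np.ndarray:
    """frames: (batch, 64, 64) grayscale -> (batch, 2) class scores."""
    h = relu(frames.reshape(len(frames), -1) @ W1)
    return h @ W2

batch = rng.normal(size=(4, 64, 64))  # e.g. a mix of simulated and real frames
scores = forward(batch)
print(scores.shape)  # (4, 2)
```

Training would adjust the weights so the scores separate COVID-positive from negative scans; the team's insight was that simulated frames can supply the volume of labeled examples that real patient data could not early in the pandemic.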

“At the beginning of the pandemic, we did not have enough ultrasound images of COVID-19 patients to develop and test our algorithms, and as a result, our deep neural networks never reached peak performance,” said first author Lingyi Zhao, who developed the software while a postdoctoral fellow in Bell’s lab and now works at Novateur Research Solutions. “Now, we are demonstrating that with computer-generated data sets we can still achieve a high degree of accuracy in assessing and detecting these features of COVID-19.”

The team’s code and data are publicly available here: https://gitlab.com/pulselab/covid19