AI improves value in radiology, but needs more clinical evidence

Radiologists have to agree on what kind of evidence matters, especially in the clinical setting, before they can unleash the power of medical imaging AI in patient care.
By Mélisande Rouger

Ever since the movement surrounding value-based healthcare began, radiologists have understood the potential of demonstrating their contribution to patient care, from disease prediction to follow-up.

“Our goal must be to deliver value to patients and not just decrease cost. A number of companies have done so, with strategies to contain costs by focusing on quality. Better health is less expensive: if we can keep people healthy, we can add value and decrease our costs,” said Charles E Kahn, professor and vice chairman of radiology at the University of Pennsylvania, US, during the Triangulo Meeting in Madrid in January.

Boosting value in procedure selection and protocols, findings and diagnosis

The radiology value chain starts with selecting the most appropriate and cost-effective imaging procedure, one that reduces radiation and contrast use and enables earlier diagnosis. However, radiologists often don’t participate in the decision about which exam should be prescribed.

There are opportunities for AI to improve procedure selection, according to Kahn, who suggested AI systems could pull information from the electronic health record (diagnoses, problems, known allergies and so on) to improve the precision of selection criteria.

“We can use deep learning to look at patterns of previous patients who had those conditions and what procedures they had which were most effective for them and use that information to create algorithms that will help select imaging procedures,” he said. 
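As a minimal sketch of that idea, a classifier could be trained on EHR-derived features to suggest a procedure. Everything below is synthetic, and the feature and label names are invented for illustration; a real system would draw on far richer records and validated outcome labels.

```python
# Hypothetical sketch: learning imaging-procedure selection from EHR features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented EHR-derived features: [age, renal_impairment, contrast_allergy, suspected_stroke]
X = rng.random((500, 4))
# Invented labels for the procedure that proved most effective:
# 0 = non-contrast CT, 1 = contrast-enhanced CT, 2 = MRI
y = rng.integers(0, 3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```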

Deep learning (DL) could also replace rule-based approaches to selecting exam protocols that involve contrast. DL systems could help decide whether contrast should be administered intravenously or orally, and determine scan parameters, to maximise the information available to answer clinical questions.

There’s also an opportunity for AI to improve the display protocols that define how radiologists view studies in their PACS, for example when opening an MRI exam. DL could automatically arrange the image display by drawing on previous patterns and identifying the image series most likely to be useful, based on the reader’s preferences. “Some PACS vendors are developing intelligent ways of mapping images that watch what you do when you select the images,” he said.
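A toy sketch of that preference-learning behaviour, assuming nothing about any vendor’s actual implementation: record which series a radiologist opens first for each study type, then propose the most frequent choice for new studies.

```python
# Hypothetical frequency-based "hanging protocol" recommender.
from collections import Counter, defaultdict

history = defaultdict(Counter)

def record_viewing(study_type, first_series):
    # Log which series the radiologist chose to open first.
    history[study_type][first_series] += 1

def suggest_first_series(study_type):
    # Propose the historically most common first series, if any.
    counts = history[study_type]
    return counts.most_common(1)[0][0] if counts else None

record_viewing("MRI brain", "T2 FLAIR axial")
record_viewing("MRI brain", "T2 FLAIR axial")
record_viewing("MRI brain", "T1 post-contrast")
print(suggest_first_series("MRI brain"))   # -> T2 FLAIR axial
```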

Findings are what most people have associated with AI technology. A lot of work has been done over the past ten years to advance image segmentation, for example segmenting brain tumors to measure the edema surrounding a lesion and assess therapy response. There has also been some progress towards fully automated AI interpretation of abdominal CT.

Researchers have developed and tested AI systems based on deep convolutional neural networks (CNNs) for automated, real-time triage of adult chest radiographs based on the urgency of their imaging appearances.

In the UK, one such system has helped triage patients’ chest radiographs (CXR), classifying 8% as critical, 40% as urgent, 26% as non-urgent and 26% as normal. The average reporting delay for critical imaging findings was reduced from 11.2 days to 2.7 days.*
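The published triage model is far larger, but the shape of the task can be shown with a small PyTorch sketch; the architecture below is illustrative, not the system evaluated in the study.

```python
# Minimal four-class urgency classifier for chest radiographs (illustrative).
import torch
import torch.nn as nn

CLASSES = ["critical", "urgent", "non-urgent", "normal"]

class TriageCNN(nn.Module):
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, H, W) grayscale radiographs
        return self.head(self.features(x))

model = TriageCNN()
logits = model(torch.randn(2, 1, 224, 224))  # two dummy radiographs
print([CLASSES[int(i)] for i in logits.argmax(dim=1)])
```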

Segmentation can be used to determine the extent of disease, support diagnosis, staging and imaging phenotyping, and then monitor the disease over time.

AI segmentation tools could also help radiologists on a daily basis. “Many of our CT scans are on cancer patients we follow up every three months, and we need to track lesions and measure them to adjust their therapies. Not only is this tedious work, but also a real opportunity to improve all of these measurements with AI,” Kahn said.
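Once a lesion mask exists, the follow-up measurements themselves are straightforward. Here is a sketch with a synthetic mask; a real pipeline would take the mask from a segmentation model and the voxel spacing from the DICOM header.

```python
# Deriving lesion volume and per-axis extent from a binary segmentation mask.
import numpy as np

mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:30, 25:40, 30:35] = True          # synthetic lesion
spacing = np.array([1.0, 0.8, 0.8])       # voxel size in mm (z, y, x)

volume_ml = mask.sum() * spacing.prod() / 1000.0
extent_mm = [float((idx.max() - idx.min() + 1) * s)
             for idx, s in zip(np.nonzero(mask), spacing)]

print(f"lesion volume: {volume_ml:.2f} ml, axis extents (mm): {extent_mm}")
```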

Opportunities in reporting and prediction 

Large amounts of text-based information are available in most hospitals’ electronic systems. This information can be unlocked, extracted and used to help train AI systems.

At Penn Medicine, machine learning (ML) and natural language processing (NLP) are combined to categorise tumor response in radiology reports. Radiologists there have a policy of including a code in every study’s report to indicate tumor growth or regression, and with this code the medical team can extract the information from radiology reports that is relevant for patient management. “An ideal system would link pathology results with radiology procedures so that radiologists know the outcomes of their biopsies,” Kahn suggested.
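The article does not describe how Penn encodes response, so as a hedged illustration, assume a report carries a hypothetical “TR:” code; extracting it is then a one-line pattern match, and the same idea scales up to full NLP models over free-text reports.

```python
# Hypothetical extraction of a tumour-response code from report text.
import re

RESPONSE_CODES = {"PD": "progression", "SD": "stable disease",
                  "PR": "partial response", "CR": "complete response"}

def extract_response(report):
    # The "TR:" code format is invented for illustration.
    match = re.search(r"\bTR:\s*(PD|SD|PR|CR)\b", report)
    return RESPONSE_CODES[match.group(1)] if match else None

print(extract_response("Target lesion decreased from 32 mm to 18 mm. TR: PR."))
# -> partial response
```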

As for predicting disease, DL models can estimate a patient’s risk of breast cancer. They can also help take measurements during opportunistic screening, i.e. when images are routinely searched for findings that suggest a health risk.

For example, AI can measure coronary artery calcification on chest CT to assess a patient’s risk of heart disease. “Having the information that early means being able to provide better patient prognosis,” he said.
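As a rough illustration of calcium scoring, the sketch below applies an Agatston-style rule to one synthetic CT slice: pixels at or above 130 HU are summed by area and weighted by peak density. Real scoring also involves connected-component filtering, ECG-gated acquisition and per-lesion weighting.

```python
# Simplified, Agatston-style calcium score on a single synthetic CT slice.
import numpy as np

def density_weight(peak_hu):
    # Standard Agatston density factors keyed to peak attenuation (HU).
    for threshold, weight in [(400, 4), (300, 3), (200, 2), (130, 1)]:
        if peak_hu >= threshold:
            return weight
    return 0

slice_hu = np.full((128, 128), 40.0)   # synthetic soft-tissue background
slice_hu[60:64, 60:64] = 350.0         # synthetic calcified plaque
pixel_area_mm2 = 0.5 * 0.5             # assumed in-plane pixel size

plaque = slice_hu >= 130               # conventional calcium threshold
score = plaque.sum() * pixel_area_mm2 * density_weight(slice_hu[plaque].max())
print(f"Agatston-style score for this slice: {score:.1f}")  # 16 px -> 12.0
```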

AI-boosted opportunistic screening may also prove useful in osteoporosis, abdominal aortic aneurysm, atherosclerosis, emphysema and cirrhosis.

What kind of clinical evidence is needed?

There is a lot of discussion about which technique is best for training the models, such as supervised versus unsupervised learning, and about the challenges inherent to each.

Once the algorithm has been trained, a lot of work still remains. “There is a need for testing and fine-tuning it to further improve its overall accuracy. After that, a large external validation phase is mandatory,” Luis Martí-Bonmatí, professor of radiology at La Fe University Hospital in Valencia, Spain, said after the meeting.

A recent study showed that this requirement is not always fulfilled. Researchers from South Korea found that only 31 (6%) of 516 eligible published studies of diagnostic DL systems performed validation on external testing data, and none of these studies adopted all three design features recommended for external validation: diagnostic cohort design, inclusion of multiple institutions and prospective data collection.
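The step those studies skipped can be sketched in a few lines: freeze the trained model and score it both on an internal hold-out set and on a cohort from a different site. The data below is synthetic; the `shift` parameter is an invented stand-in for scanner and population differences.

```python
# Sketch of internal vs. external validation of a frozen model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_cohort(n, shift=0.0):
    # Synthetic cohort; `shift` mimics a different scanner/site distribution.
    X = rng.normal(shift, 1.0, size=(n, 5))
    y = (X[:, 0] + rng.normal(0, 1.0, size=n) > 0).astype(int)
    return X, y

X_int, y_int = make_cohort(400)              # developing institution
X_ext, y_ext = make_cohort(200, shift=0.5)   # external institution

model = LogisticRegression().fit(X_int[:300], y_int[:300])
print("internal AUC:", roc_auc_score(y_int[300:], model.predict_proba(X_int[300:])[:, 1]))
print("external AUC:", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))
```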

For AI to become mainstream, clinical evidence must be as strong as for any other area of science. “Real world evidence for AI needs the same standards as any other scientific research study regarding evidence level and recommendation,” Martí-Bonmatí said.

Transfer learning is a must in healthcare AI, especially since data is scarce. But it’s not clear whether every model generalises from one patient population to another. Obtaining heterogeneous data, i.e. making sure that the data comes from different hospitals and patient sets, is certainly a challenge in training AI models.
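A minimal transfer-learning sketch, assuming an ImageNet-pretrained backbone from torchvision: the pretrained features are frozen and only a new, small classification head is trained on the scarce medical data. The two-class head is illustrative.

```python
# Freeze a pretrained backbone and retrain only a new classification head.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False              # keep the pretrained features fixed

backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new trainable head
trainable = [n for n, p in backbone.named_parameters() if p.requires_grad]
print(trainable)                         # -> ['fc.weight', 'fc.bias']
```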

But there are voices in favor of using homogeneous data too. When a team trains an algorithm in a hospital, it uses local equipment. Since scanners differ from one institution to another, the patterns the AI learns may change as well.

It’s also important to train the algorithm with a mix of cases that is representative of the population being studied, so that it learns patterns relevant for that population. A major issue remains that much of the work on the technology is not done to answer clinical questions.

Radiologists still have to decide what they expect of AI. A new technology doesn’t necessarily need to be better than what is currently available, Kahn explained. “AI need not be Superhuman (...) we still have to fully understand how we use AI,” he concluded.

This article was first published in the latest edition of HIMSS Insights, Data Meets Privacy. Healthcare IT News and HIMSS Insights are HIMSS Media publications.
