First quantitative AI tools for medical imaging receive clearance
Quantib, a Rotterdam-based company that develops machine learning software for radiologists, has received FDA clearance for Quantib™ Neurodegenerative (ND), a tool that supports radiologists in reading MRI brain scans.
The software measures brain atrophy (shrinkage) and detects white matter hyperintensities (WMHs), which are changes in the brain related to, for example, ageing, dementia and multiple sclerosis (MS).
The software includes fully automatic segmentation of the brain lobes and hippocampus to objectively assess atrophy development. It also comprises white matter hyperintensity segmentation for easy monitoring of the neurological changes occurring in, for example, dementia and MS patients.
Reference centile curves, derived from the population-based Rotterdam Scan Study, provide an intuitive tool to compare the patient’s brain volume to the average of an unbiased population, quite similar to how growth curves for children are used to track their development.
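As a rough illustration of how such a centile comparison could work, the sketch below looks up a patient's brain volume against a hypothetical age-matched normative table. The ages, volumes and function names are illustrative assumptions, not values or code from the Rotterdam Scan Study or Quantib's product.

```python
# A minimal sketch of a normative centile lookup. The volumes below are
# hypothetical placeholders, not data from the Rotterdam Scan Study.
import numpy as np

# Hypothetical normative table: brain volume (mL) centiles by age (years).
ages = np.array([50, 60, 70, 80, 90])
p50_volume = np.array([1180, 1150, 1110, 1060, 1010])  # median volume per age
p5_volume = np.array([1080, 1050, 1000, 950, 900])     # 5th centile per age

def centile_flags(age, volume_ml):
    """Compare a patient's brain volume to age-matched reference centiles."""
    median = np.interp(age, ages, p50_volume)
    fifth = np.interp(age, ages, p5_volume)
    return {
        "age_matched_median_ml": float(median),
        "below_5th_centile": volume_ml < fifth,
    }

print(centile_flags(age=72, volume_ml=990))
```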
First line of products receives certification
The clearance comes amid a first wave of approvals from regulatory bodies for products that integrate machine learning and deep learning using data from other studies, an encouraging sign for the future of healthcare, according to Wiro Niessen, Quantib's chief scientific officer.
“For the first time, we’re seeing FDA approvals and CE marks for products in which you do objective quantification, using data from other studies. That’s the first step towards the dot on the horizon, in which a patient is now treated with all the knowledge from previously ill patients,” he said.
Exciting tools are emerging on the market and from academia, which is increasingly collaborating with industry, according to Niessen, who is also a professor of biomedical image analysis at Erasmus MC in Rotterdam and Delft University of Technology.
Quantib has multiple machine learning products that are FDA-cleared and CE-marked, with others in the pipeline. The company is backed by Erasmus MC Rotterdam and UMC Utrecht, ensuring access to large-scale structured imaging data.
FAIR data and the Personal Health Train
Appropriate data must be used to develop algorithms that can further advance medical imaging and support diagnostics and precision medicine, Niessen explained.
“In imaging and health data science, the concept of FAIR data, i.e. data that are Findable, Accessible, Interoperable and Reusable, must prevail. You have to know, if you have a certain question, whether these data exist somewhere. The concept of FAIR means you’re able to find the data in your national or regional healthcare system, and to put them into an algorithm to train,” he said.
Imaging data, genomic data and data generated in clinical practice and held in the electronic patient record (EPR) should be made FAIR, so that they can be used to build a prognostic classifier based on that information.
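To make the idea concrete, a FAIR-style metadata record for an imaging dataset might look something like the sketch below; the fields and values are purely illustrative assumptions, not a prescribed FAIR or Personal Health Train schema.

```python
# A minimal, illustrative FAIR-style metadata record for an imaging dataset.
fair_record = {
    "identifier": "example-dataset-001",            # Findable: persistent identifier
    "access_protocol": "https",                     # Accessible: standard, open protocol
    "format": "NIfTI",                              # Interoperable: open imaging format
    "vocabulary": "SNOMED CT",                      # Interoperable: shared terminology
    "license": "research use, pseudonymised",       # Reusable: clear usage terms
    "provenance": "3T MRI, T1-weighted",            # Reusable: how the data were produced
}

print(fair_record["identifier"], fair_record["format"])
```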
The concept of FAIR data has been widely adopted in the Netherlands, where it is the basis for the Personal Health Train (PHT), an initiative that aims to connect distributed health data and to increase the use of existing health data for citizens, healthcare and scientific research.
The key idea behind the PHT is to bring algorithms to the data where they are located, rather than bringing all data to a central place, by creating FAIR data stations.
“As a hospital or organisation that has relevant data, you want to deal with your data in order to ensure that they are FAIR. Using federated learning, you want to be able to bring your software to these different places and analyse the data, bring the results back and learn from all the data,” he added.
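A minimal sketch of this federated pattern, under the assumption of a simple shared model and synthetic data at each station, might look as follows; it is illustrative only and does not represent Quantib's or the Personal Health Train's actual implementation.

```python
# A minimal federated-averaging sketch of the "bring the algorithm to the data"
# idea: the model travels to each data station, trains locally, and only the
# updated parameters are returned and averaged. Stations and model are assumptions.
import numpy as np

def local_update(weights, X, y, lr=0.01):
    """One gradient step of linear regression on a station's local data."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, stations):
    """Train locally at every station, then average the returned parameters."""
    updates = [local_update(weights, X, y) for X, y in stations]
    return np.mean(updates, axis=0)  # only model parameters leave the stations

# Three hypothetical data stations with synthetic, locally held data.
rng = np.random.default_rng(0)
stations = [(rng.normal(size=(100, 3)), rng.normal(size=100)) for _ in range(3)]

weights = np.zeros(3)
for _ in range(50):
    weights = federated_round(weights, stations)
print(weights)
```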