The potential impact of AI on the diagnosis and treatment of neurological diseases is significant. In August last year, a study demonstrated that AI could detect markers of Parkinson’s disease up to seven years before any symptoms appeared. This is only the tip of the iceberg, and in our podcast interview with Annemie Ribbens, VP Science, Evidence and Trials at icometrix, we delved into how AI is being used in the neurology space and the developments we can expect in the near future.
At icometrix, a medical technology company that offers a portfolio of AI solutions to help healthcare providers with challenges in neurological disorders such as MS, epilepsy, dementia, and Alzheimer's disease, Annemie focuses on developing and applying automated tools for MRI quantification to evaluate drug efficacy in clinical trials and real-world studies. Before joining icometrix, she completed a master’s degree in applied mathematics and a PhD in electrical engineering at the University of Leuven, where her doctoral research focused on population analysis of brain magnetic resonance images in dementia; she also worked as a research fellow at UCL.
From the start of our conversation with Annemie, it was clear that her role at icometrix aligns very closely with her personal interest in the intersection between data science and healthcare, particularly neurology. Her passion for the topics we discussed makes this podcast an interesting listen, but if you just want the highlights, here they are:
An overview of icometrix (3:28): Annemie outlines the aim of icometrix as “[to] significantly improve the daily lives of people with brain disorders through AI-driven precision medicine.” The company is approaching this by developing AI tools focused on brain MRI. Icometrix quantifies subtle brain changes that cannot be assessed visually, allowing earlier detection of disease, monitoring of disease progression, and earlier detection of treatment side effects. The tools are combined into a care management platform for healthcare providers managing neurological disorders, including MS and dementia.
What type of ML is being used? (15:34): When asked which type of ML enhancements icometrix is pursuing, Annemie explained that when the company started, the focus was on intensity-based image modeling combined with prior information. This has shifted as deep learning models entered the mainstream, and the company now concentrates entirely on convolutional neural networks, building algorithms around the specific image-based biomarkers that care providers use to diagnose and treat neurological disorders.
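To make that shift concrete, here is a minimal sketch of what a convolutional network for voxel-wise segmentation of 3D brain MRI can look like, the kind of building block behind image-based biomarkers such as lesion or volume quantification. This is an illustration only, not icometrix’s actual architecture; every layer size and tensor shape here is an assumption.

```python
# A minimal, illustrative 3D CNN for voxel-wise brain MRI segmentation.
# This is NOT icometrix's model; shapes and layers are assumptions.
import torch
import torch.nn as nn

class TinySegNet3D(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # A 1x1x1 convolution maps features to per-voxel class scores.
        self.head = nn.Conv3d(32, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))

# One single-channel MRI volume of 64^3 voxels: (batch, channel, D, H, W).
volume = torch.randn(1, 1, 64, 64, 64)
logits = TinySegNet3D()(volume)  # -> (1, 2, 64, 64, 64) per-voxel scores
```

Production systems typically use much deeper encoder-decoder architectures (e.g., U-Nets), but the per-voxel classification idea is the same.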
Measuring neurological disorders with biomarkers (22:28): When looking at how ML can help treat patients early, Annemie spoke about the need for biomarkers and prediction models that can identify these patients in the early stages of disease. The focus needs to be on finding the optimal combination of biomarkers that identifies patients as early as possible in the progression of a particular disease.
The challenges of data quality (31:47): One of the most significant issues Annemie encounters in her work is data quality. Although there are guidelines on what type of imaging to acquire and what the quality should be, only 5% of sites adhere to them. This has a significant impact: for example, if a small lesion falls between the slices of a scan, it cannot be identified. Data quality is where the biggest improvement is needed to allow the type of technology offered by icometrix to scale. As a result, icometrix devotes significant resources to automatically detecting quality deviations from providers who use its imaging platform.
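As a simple illustration of what such an automated check might look like (a hypothetical sketch, not icometrix’s implementation), the snippet below uses nibabel to flag scans whose slice spacing is coarse enough that a small lesion could fall between slices. The 3 mm threshold is an assumed protocol limit for illustration.

```python
# Illustrative sketch: flag MRI volumes whose slice spacing could hide small
# lesions. Uses nibabel for NIfTI files; the 3 mm threshold is a hypothetical
# protocol limit, not an icometrix rule.
import nibabel as nib

MAX_SLICE_SPACING_MM = 3.0  # hypothetical guideline value

def check_slice_spacing(path: str) -> list[str]:
    """Return a list of quality warnings for one scan."""
    img = nib.load(path)
    # get_zooms() returns voxel sizes (mm) along each axis; the third
    # axis is typically the through-plane (slice) direction.
    x, y, z = img.header.get_zooms()[:3]
    warnings = []
    if z > MAX_SLICE_SPACING_MM:
        warnings.append(
            f"slice spacing {z:.1f} mm exceeds {MAX_SLICE_SPACING_MM} mm; "
            "small lesions may fall between slices"
        )
    if max(x, y) > 2.0 * min(x, y):
        warnings.append("strongly anisotropic in-plane resolution")
    return warnings
```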
Precision medicine in neurology (45:35): We asked how Annemie sees the industry evolving in the next 3-5 years. She is keen to see the precision medicine being explored in oncology make progress in the field of neurological disease. There is still a lack of information about disease progression and how disease-modifying treatments (DMTs) impact the course of disease. A big piece of moving toward that vision is better measurement, which allows us to learn more about a disease and apply precision medicine to treat it more effectively.
Further Reading: If you want to learn more about what icometrix is doing in the neurological disease space using machine learning, the icometrix website is a great starting point. The company also regularly attends and speaks at neurology and radiology conferences, so there is a chance to hear more and talk to the team there.
Here is where we dig a little deeper into one of the themes: we spoke on the podcast about the challenges of poor data quality, but didn’t get much of a chance to explore ways to improve it.
One aspect that will encourage organizations to concentrate on improving data quality will be seeing the results of what can be achieved with high-quality data. Companies like icometrix are playing a vital role in educating the industry on the limitations of poor-quality medical imaging. But what else can be done?
Part of this falls on the shoulders of data science and ML specialists, who have a responsibility to ensure data quality across the organizations they work with. Before implementing an ML model that will be hamstrung by insufficient, poor-quality data, they need to work on a foundational project: evaluating and benchmarking the quality of the data being used, which is as important as the quality of the predictions produced by models trained on that data, and essential to improving the output of the ML models they later implement. At CorrDyn, we focus on helping our clients benchmark and achieve the data quality needed to set them up for success.
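What might such a benchmark look like in practice? The sketch below computes a few dataset-level quality metrics for a tabular training set. The function, columns, and metric choices are illustrative assumptions, not a prescribed CorrDyn methodology.

```python
# Illustrative data quality benchmark for a tabular training set.
# The function name, columns, and thresholds are hypothetical.
import pandas as pd

def benchmark_quality(df: pd.DataFrame, label_col: str) -> dict:
    """Compute simple dataset-level quality metrics worth tracking
    before any modeling work begins."""
    return {
        "rows": len(df),
        "duplicate_fraction": df.duplicated().mean(),
        "missing_fraction_by_column": df.isna().mean().to_dict(),
        # Heavy class imbalance can cause a model to ignore relevant features.
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
    }

# Example usage: report = benchmark_quality(training_df, label_col="diagnosis")
```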
Every modeling project starts with a baseline model to determine the extent to which the target output can be predicted at the required degree of accuracy, within the required latency, using the available training data. This baseline model also enables the team to begin analyzing errors in the data. Errors in a baseline model (and any model) typically result from one of the following (a worked sketch follows the list):
1. Mislabeled data
2. Data of differing quality
3. Inadequate featurization to enable the model to pick up on the target
4. Insufficient examples of certain types of input or output
5. A model that is not tuned adequately to infer based on the available data
6. Imbalanced data that causes the model to ignore relevant features
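Here is a minimal sketch of that baseline-plus-error-analysis workflow using scikit-learn on synthetic data; in a real project the dataset, model, and metrics would of course be your own.

```python
# Minimal sketch of a baseline model plus error analysis with scikit-learn.
# The data here is synthetic; in practice you would use your training set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Imbalanced synthetic data (90/10 split) to mirror a realistic scenario.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Naive baseline: predicting the majority class shows what "no signal" looks like.
naive = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("naive accuracy:", accuracy_score(y_te, naive.predict(X_te)))
print("baseline accuracy:", accuracy_score(y_te, model.predict(X_te)))

# Error analysis: pull the mispredicted examples and inspect them for the
# failure categories above (mislabeled, low-quality, under-represented, ...).
errors = np.where(model.predict(X_te) != y_te)[0]
print(f"{len(errors)} mispredicted test examples to review")
```

Comparing the trained baseline against the naive majority-class predictor shows how much signal the data actually carries, and the mispredicted examples become the raw material for diagnosing the failure categories above.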
From the beginning and throughout the modeling process, exploring the training examples the model mis-predicts will tell a story about where the data and model can be improved and which categories of data quality issues can be addressed. This leads naturally to a discussion of how data quality issues can be identified in advance, during data collection, so that the models only ingest data that meets quality standards. In situations like that of icometrix, where the data producers are also model consumers with a vested interest in accurate predictions, companies can create a virtuous feedback loop that encourages data producers to take ownership of data quality and gives them guidance on how to improve it.
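One way to make "models only ingest data that meets quality standards" concrete is a validation gate at collection time that rejects records with actionable feedback for the producer. The sketch below is hypothetical; the field names and thresholds are assumptions for illustration.

```python
# Illustrative quality gate at data collection time: records that fail are
# rejected with actionable feedback for the data producer rather than being
# silently ingested. Field names and rules are hypothetical.
from dataclasses import dataclass, field

@dataclass
class GateResult:
    accepted: bool
    feedback: list[str] = field(default_factory=list)

def quality_gate(record: dict) -> GateResult:
    feedback = []
    if record.get("patient_id") in (None, ""):
        feedback.append("missing patient_id")
    if record.get("scan_quality_score", 0.0) < 0.8:  # hypothetical threshold
        feedback.append("scan quality below 0.8; please re-acquire")
    return GateResult(accepted=not feedback, feedback=feedback)

result = quality_gate({"patient_id": "p-001", "scan_quality_score": 0.65})
if not result.accepted:
    # In a real pipeline this would be routed back to the data producer.
    print("rejected:", "; ".join(result.feedback))
```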
In situations where data producers are not model consumers, more negotiation and stakeholder management is needed to ensure that data quality meets the standards of model consumers without placing undue burdens on those who produce the data. Data producers have to see the value of producing quality data, and the guardrails have to be simple enough to implement that producers accept what is expected of them in advance.
CorrDyn is adept at discovering data quality issues, identifying opportunities to validate data quality throughout the data collection pipeline, implementing automated data quality measures, and ensuring that the feedback loop delivers value to the bottom line, all while weighing the cost and organizational burden of additional process against the ROI of the modeling initiative. We often find that our clients benefit from a third party who can navigate stakeholder incentives without the burden of history, ensuring that everyone gets what they want: a machine learning pipeline fed by high-quality data that delivers on organizational expectations for cost and return.
If you are interested in seeing how CorrDyn can help improve data quality and implement the most effective ML and AI approaches for your organization, get in touch with CorrDyn for a free SWOT analysis.
Want to listen to the full podcast? Listen here: