As a research scientist in healthcare, I know that the reality of clinical research in AI is a little more complex than a simple formula like data + AI = insights. In this blog, I’d like to take a look at why that is, together with the approach we’re taking to overcome some of the main roadblocks, which I hope will enable faster and more reliable work.
Take the hospital as an example, where so much groundbreaking research happens. Yet this very explosion of new information -- in the form of large datasets of medical imaging and non-image data such as electronic medical records or clinical and genomic data -- presents its own challenges. Approximately 75% of healthcare data is unstructured. Medical notes or images can vary hugely depending on the doctor, machine, or hospital (each of which has different protocols). All this data needs to be captured and structured, then quality controlled. And even when you have your AI model or algorithm, the lack of interoperability between different hospital IT systems may mean that the clinicians you built it for can’t use it. Research can be slowed down, or miss out altogether on valuable input from experts, just because that person happens to be in another building.
Traditionally, research using machine learning techniques has mainly been carried out on datasets that were explicitly created for study purposes. On the plus side, this ensures uniformity of the data and reproducibility of results for patients from the same cohort. But on the down side, it means we miss the opportunity to learn from the rich, heterogeneous new data that hospitals have.
I think we can make things easier for ourselves. We’ve developed a research infrastructure, or toolset, which supports the end-to-end creation and evaluation of machine learning algorithms and models. The set, which is built on open standards to support interoperability, comprises two platforms: a front end for data visualization and annotation, and a back end for developing and deploying algorithms. Researchers can use the two iteratively: annotate with one, build AI algorithms with the other, visualize the results with the first, improve with the second, and so on.

Rather than explaining how to use the toolset in the abstract, it might help to look at a specific example where we applied this infrastructure concept at a university hospital. I was recently part of a multidisciplinary team that built and tested a deep learning model to automatically detect, localize and segment brain tumors using a combination of functional and morphologic magnetic resonance imaging (MRI), which shows not only the size and shape of the tumor and the surrounding edema, but also blood flow to the affected area.

Glioblastoma (GB) is the most aggressive type of brain cancer, killing patients on average in just over a year. One of the reasons for this is that GB tumors vary a great deal in biology, which makes them hard to treat. Detecting the location and extent of the different tumors, called segmentation, is important for surgery planning and in-depth analysis of a tumor’s aggressiveness. But it requires radiologists to use hand-drawing tools on the MRI images, which takes around 10 to 30 minutes per patient and can be highly variable depending on the level of expertise. Due to lack of time, this manual segmentation is often not carried out.

Our team, made up of research scientists from Philips and clinical radiologists from a university hospital, took MRI scans from 64 patients diagnosed with GB, from 15 different institutions with varying protocols.
We used the toolset of research platforms for the front-end data visualization and annotation, and the back-end development and deployment of the deep learning model. Although manual segmentation is still the gold standard, I believe that the model could potentially help to save valuable time for radiologists in practice. My research partner Jan Borggrefe, Executive Senior Radiologist, University Hospital Cologne, told me recently why he thinks the results are significant: “Artificial intelligence is changing the way that radiologists look at brain tumors. The results are clinically important but also relevant for further AI research in this.”
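The blog doesn’t go into how such a model is compared against the radiologists’ manual segmentations, but a common overlap metric in tumor segmentation studies is the Dice similarity coefficient. Here is a minimal illustrative sketch of my own (not code from the study), treating masks as flat lists of 0/1 voxel labels:

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks given as flat 0/1 lists.

    Ranges from 0.0 (no overlap) to 1.0 (identical masks).
    """
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: model and radiologist agree on 3 of the marked voxels
model_mask  = [0, 1, 1, 1, 0, 1]
manual_mask = [0, 1, 1, 1, 1, 0]
print(dice_coefficient(model_mask, manual_mask))  # prints 0.75
```

In a real study the masks would be 3-D MRI volumes rather than toy lists, but the metric is computed the same way over all voxels.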
The final -- and perhaps most crucial -- part of this blog is about what these platforms cannot do. I believe that this toolset is an excellent environment for creating clinical algorithms or models. But we need to bear in mind that the clinician must always make the final decision. Why? Because I believe the complexity of healthcare is underestimated. For me, clinical decisions require combining a scientific understanding of the problem with intuition and the clinician’s own experience as a person. It’s about understanding each patient and what care she needs. How does she feel? How should we communicate our recommendations to her? How does that change over time?
This is why my work right now focuses on taking time-consuming tasks, such as manual segmentation, off radiologists’ hands -- through tools like this automatic segmentation model -- so that they can spend more time with their patients.
The GB study was based on pre-operative patient data. Right now, we’re looking into how we can study patients over time, using the research toolset to add longitudinal data, automatically analyze all new GB examinations, and include clinical and genomic data in our extended machine learning models. If I think back to the formula from the start of this blog, I’m reminded that clinical research isn’t as simple as: data + AI = insights. It’s more like: data + AI platforms + human expertise = insights. It doesn’t exactly trip off the tongue, but it should help to make clinical research just that little bit easier. If you’d like to find out more about this research, watch the five-minute presentation by my colleague Michael Perkuhn, which won an award for the best scientific paper at the European Congress for Radiology in 2018, or read our paper.
Georgy Shakirin
Senior Research Scientist at Philips Research
Georgy is an experienced scientist and project manager with a history of working in the hospital and healthcare industry. He currently focuses his work on AI in radiology.