I think we can make things easier for ourselves.
We’ve developed a research infrastructure, or toolset, which supports the end-to-end creation and evaluation of machine learning algorithms and models. The set, which is built on open standards to support interoperability, comprises two parts:
- An annotation and visualization platform to collect, prepare (visualize and annotate), and interpret the data in order to speed up workflow and model development (Philips IntelliSpace Discovery).
- A data science platform to develop the AI model (we used HealthSuite Insights, a platform for the development and deployment of AI algorithms and models in healthcare).
Researchers can use these two platforms iteratively: annotating data with one, building AI algorithms with the other, visualizing the results with the first, refining the model with the second, and so on.
Perhaps rather than explaining how to use the toolset in the abstract, it might help to look at a specific example where we applied this infrastructure at a university hospital:
I was recently part of a multidisciplinary team that built and tested a deep learning model to automatically detect, localize, and segment brain tumors using a combination of functional and morphologic magnetic resonance imaging (MRI). Together, these scans show not only the size and shape of the tumor and the surrounding edema, but also blood flow to the affected area.
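In practice, combining functional and morphologic MRI as input to a deep learning model typically means stacking the co-registered sequences as channels of a single array. Here is a minimal sketch of that idea; the sequence names, volume dimensions, and normalization are illustrative assumptions, not the actual preprocessing used in our study:

```python
import numpy as np

# Hypothetical co-registered MRI volumes for one patient
# (depth x height x width); real data would be loaded from DICOM or NIfTI files.
shape = (155, 240, 240)
t1_post = np.random.rand(*shape).astype(np.float32)    # morphologic: contrast-enhanced T1
flair = np.random.rand(*shape).astype(np.float32)      # morphologic: FLAIR (shows edema)
perfusion = np.random.rand(*shape).astype(np.float32)  # functional: blood-flow map

def z_normalize(vol):
    """Normalize each sequence independently so intensities are comparable."""
    return (vol - vol.mean()) / (vol.std() + 1e-8)

# Stack the sequences as channels: the network sees every modality at every voxel.
model_input = np.stack([z_normalize(v) for v in (t1_post, flair, perfusion)], axis=0)
print(model_input.shape)  # (3, 155, 240, 240)
```

The key design point is co-registration: the channels only make sense to a convolutional network if the same voxel index refers to the same anatomical location in every sequence.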
Glioblastoma (GB) is the most aggressive type of brain cancer; on average, patients survive just over a year after diagnosis. One reason is that GB tumors vary widely in their biology, which makes them hard to treat.
Delineating the location and extent of each tumor, called segmentation, is important for surgery planning and for in-depth analysis of the tumor's aggressiveness. But it requires radiologists to outline the tumor by hand on the MRI images, which takes around 10 to 30 minutes per patient and can be highly variable depending on the reader's level of expertise. Due to lack of time, this manual segmentation is often not carried out.
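That reader-to-reader variability can be quantified: segmentation agreement, whether between two radiologists or between a radiologist and a model, is commonly scored with the Dice overlap coefficient. A minimal sketch with toy 2D masks standing in for one MRI slice (illustrative only, not our study's evaluation code):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks (1.0 = perfect agreement)."""
    mask_a = mask_a.astype(bool)
    mask_b = mask_b.astype(bool)
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Two 4x4 tumor outlines on a 10x10 slice, offset by one voxel in each direction
manual = np.zeros((10, 10), dtype=np.uint8)
manual[2:6, 2:6] = 1
automatic = np.zeros((10, 10), dtype=np.uint8)
automatic[3:7, 3:7] = 1

print(dice_coefficient(manual, automatic))  # 0.5625
```

A Dice score of 1.0 means voxel-perfect agreement; published inter-reader scores for manual glioma segmentation are typically well below that, which is part of what makes a consistent automated method attractive.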
Our team, made up of research scientists from Philips and clinical radiologists from a university hospital, took MRI scans from 64 patients diagnosed with GB, drawn from 15 different institutions with varying protocols.
We used the toolset of research platforms for the front-end data visualization and annotation, and the back-end development and deployment of the deep learning model. Although manual segmentation is still the gold standard, I believe that the model could potentially help to save valuable time for radiologists in practice.
My research partner Jan Borggrefe, Executive Senior Radiologist, University Hospital Cologne, told me recently why he thinks the results are significant: “Artificial intelligence is changing the way that radiologists look at brain tumors. The results are clinically important but also relevant for further AI research in this area.”