
Oct 02, 2018

AI is not a cure-all, but it can make clinical research (a little bit) easier

By Georgy Shakirin
Senior Research Scientist at Philips Research

Estimated reading time: 7-9 minutes

Artificial intelligence (AI) is not the proverbial Holy Grail for healthcare. It’s not as simple as data + AI = insights that can help us live longer, make the lives of clinicians easier, and cost health systems a lot less money.

As a research scientist in healthcare, I know that the reality of clinical research in AI is a little more complex.


In this blog, I’d like to take a look at why that is, together with the approach we’re taking to overcome some of the main roadblocks, which I hope will enable faster and more reliable work.   

Tackling the roadblocks


Take the hospital as an example, where so much groundbreaking research happens.


Traditionally, research using machine learning techniques has mainly been carried out on datasets that were explicitly created for study purposes. On the plus side, this ensures uniformity of the data and reproducibility of results for patients from the same cohort. But on the downside, it means we miss the opportunity to learn from the rich new (and diverse, or heterogeneous) data that hospitals already hold.


Yet this very explosion of new information – in the form of large datasets of medical imaging and non-image data such as electronic medical records or clinical and genomic data – presents its own challenges. Approximately 75% of healthcare data is unstructured. Medical notes or images can vary hugely depending on the doctor, machine, or hospital (each of which has different protocols). All this data needs to be captured and structured, then quality controlled.


And even when you have your AI model or algorithm, the lack of interoperability between different hospital IT systems may mean that the clinicians you built it for can’t use it. Research can be slowed down, or miss out altogether on valuable input, just because an expert happens to be in another building.

Healthcare data can vary hugely depending on the doctor, machine or hospital that created it – all this data needs to be captured and structured before it can be used to build an algorithm.


The AI toolset

I think we can make things easier for ourselves. We’ve developed a research infrastructure, or toolset, which supports the end-to-end creation and evaluation of machine learning algorithms and models. The set, which is built on open standards to support interoperability, comprises two parts:


  • An annotation and visualization platform to collect, prepare (visualize and annotate), and interpret the data in order to speed up workflow and model development (Philips IntelliSpace Discovery).
  • A data science platform to develop the AI model (we used HealthSuite Insights, a platform for the development and deployment of AI algorithms and models in healthcare).

Researchers can use these two platforms iteratively: annotate with one, build AI algorithms with the other, then visualize the results with the first, improve with the second, and so on.
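To make the iteration more concrete, here is a minimal sketch of the annotate → train → review loop in Python. All function names are illustrative stand-ins; they do not correspond to the actual IntelliSpace Discovery or HealthSuite Insights APIs.

```python
def annotate(images):
    """Stand-in for manual annotation on the visualization platform.
    Here we fake a binary label for each 'image' (an integer)."""
    return [{"image": img, "label": img % 2} for img in images]


def train_model(annotated):
    """Stand-in for training on the data science platform:
    a trivial majority-class 'model'."""
    labels = [a["label"] for a in annotated]
    majority = max(set(labels), key=labels.count)
    return lambda img: majority


def review(model, annotated):
    """Stand-in for visually reviewing predictions: returns accuracy."""
    correct = sum(model(a["image"]) == a["label"] for a in annotated)
    return correct / len(annotated)


def iterate(images, rounds=3):
    """One pass of the loop described above: annotate once, then
    repeatedly train and review. In practice, reviewers would correct
    annotations between rounds, so the data would improve each time."""
    data = annotate(images)
    history = []
    for _ in range(rounds):
        model = train_model(data)
        history.append(review(model, data))
    return history
```

The point of the sketch is the shape of the workflow, not the model: each round produces a reviewable result that feeds back into the next round's annotations.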


Perhaps rather than explaining how to use the set in the abstract, it might help to look at a specific example where we applied this infrastructure concept at a university hospital:


I was recently part of a multidisciplinary team that built and tested a deep learning model to automatically detect, localize and segment brain tumors using a combination of functional and morphologic magnetic resonance imaging (MRI), which show not only the size and shape of the tumor and the surrounding edema, but also blood flow to the affected area.


Glioblastoma (GB) is the most aggressive type of brain cancer, killing patients on average in just over a year. One of the reasons for this is that GB tumors vary a great deal in biology, which makes them hard to treat.


Detecting the location and extent of the different tumors – called segmentation – is important for surgery planning and for in-depth analysis of the tumor’s aggressiveness. But it requires radiologists to use hand-drawing tools on the MRI images, which takes around 10 to 30 minutes per patient and can vary greatly depending on the level of expertise. Due to lack of time, this manual segmentation is often not carried out.


Our team, made up of research scientists from Philips and clinical radiologists from a university hospital, took MRI scans from 64 patients diagnosed with GB, from 15 different institutions with varying protocols.


We used the toolset of research platforms for the front-end data visualization and annotation, and the back-end development and deployment of the deep learning model. Although manual segmentation is still the gold standard, I believe that the model could potentially help to save valuable time for radiologists in practice.
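The blog doesn't state how the model was scored against the manual gold standard, but a common metric for comparing an automatic segmentation to a radiologist's mask is the Dice coefficient. Below is a small, self-contained sketch (my own illustration, not the study's actual evaluation code):

```python
import numpy as np


def dice_coefficient(pred, truth):
    """Dice overlap between a predicted and a reference binary mask:
    1.0 means perfect overlap, 0.0 means no overlap at all."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        # Both masks empty: treat as perfect agreement.
        return 1.0
    return 2.0 * intersection / total


# Toy example: a 4x4 slice where the manual mask covers 4 voxels
# and the automatic mask covers 2 of them.
manual = np.zeros((4, 4), dtype=bool)
manual[1:3, 1:3] = True            # 4 reference voxels
auto = np.zeros((4, 4), dtype=bool)
auto[1:3, 1:2] = True              # 2 predicted voxels, both correct
score = dice_coefficient(auto, manual)  # 2*2 / (2+4) = 0.666...
```

In a study like the one described, this score would be computed per patient and per tumor compartment (e.g. enhancing tumor vs. edema), then averaged across the cohort.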


My research partner Jan Borggrefe, Executive Senior Radiologist, University Hospital Cologne, told me recently why he thinks the results are significant: “Artificial intelligence is changing the way that radiologists look at brain tumors. The results are clinically important but also relevant for further AI research in this area.”


Clinicians have the final word


The final – and perhaps most crucial – part of this blog is about what these platforms cannot do. I believe that this toolset is an excellent environment for creating clinical algorithms or models. But we need to bear in mind that the clinician must always make the final decision.


Why? Because I believe the complexity of healthcare is underestimated. For me, clinical decisions require combining a scientific understanding of the problem with a reliance on intuition and the clinician’s experience as a person. It’s about understanding the patients and which ones need which care. How does she feel? How should we communicate our recommendations to her? How does that change over time?

We need to bear in mind that clinicians must always make the final decision.

This is why my work right now focuses on taking time-consuming tasks – such as this segmentation, which can be automated – away from radiologists, so that they can spend more time with their patients.

So where do we go from here?


The GB study was based on pre-operative patient data. Right now, we’re looking into how we can study patients over time: using the research toolset to add longitudinal data, automatically analyzing all new GB examinations, and including clinical and genomic data in our extended machine learning models.


If I think back to the formula from the start of this blog, I’m reminded that clinical research isn’t as simple as: data + AI = insights. It’s more like: data + AI platforms + human expertise = insights. It doesn’t exactly trip off the tongue, but it should help to make clinical research just that little bit easier.


Georgy Shakirin

Senior Research Scientist at Philips Research

Georgy is an experienced scientist and project manager with a history of working in the hospital and healthcare industry. He currently focuses his work on AI in radiology.

