AI in healthcare: a six-step framework for getting the most out of your data
Healthcare data without analytics is just a bunch of bits and bytes. Analytics without the right data doesn’t work wonders either. And even with the right data and the right analytical tools, it’s still the clinical expert who needs to interpret and validate the output.
New algorithms are making headlines every week – but ultimately, it’s the value for the healthcare provider and for the patient that counts. We need to marry data science with clinical expertise, ensuring the right data is used in the right way to achieve the right outcomes.
What does this look like in practice?
In this second and final post, I will lay out six steps that will help you think about AI in a more holistic way: as an end-to-end process that covers the whole spectrum of data processing, building on a close collaboration between data scientists, IT, and clinical domain experts. Healthcare institutions that adopt this framework will be the first to fully benefit from the transformative power of AI.
A six-step framework for the development and deployment of AI-enabled solutions
1. Collecting the right data
AI methods like deep learning require large datasets to train on before they can produce meaningful results. This poses a challenge in healthcare, where the right data is often not readily available.
Freeing data from silos should be a top priority. You can’t ask AI to suggest a care plan for a patient with congestive heart failure based on radiology data alone, or on cardiology data alone. For AI to provide the most value, it needs to be able to draw from a wide range of data – including ambulatory care or home monitoring programs. Making systems interoperable has never been more important.
To leverage the predictive power of AI, we also need to start collecting more longitudinal data about the efficacy of care plans and pathways. This data is mostly lacking today, putting a constraint on what AI can help to predict in terms of patient outcomes. When efficacy data is available, it is often tied to an episode of care (did the surgery go well?), rather than to the full care journey (what was the patient’s condition one year later?).
Value in healthcare is created over time. We should start measuring accordingly. Only then will AI truly bring us closer to the holy grail of evidence-based, individualized care plans. If we can’t tell AI how different types of treatment resulted in different outcomes for patients with different characteristics, it will have no way of coming up with relevant care plan recommendations for a particular patient.
To leverage the predictive power of AI, we need to start collecting more longitudinal data about the efficacy of care plans and pathways.
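To make this concrete, here is a minimal, purely illustrative sketch in Python. Every field name and record below is invented; the point is simply what it means to link a care episode to a longitudinal outcome so that an algorithm has something to learn from.

```python
# Hypothetical sketch: joining episode records with one-year follow-up
# outcomes to form longitudinal training examples (all data is made up).

episodes = [
    {"patient_id": 1, "age": 71, "ef_percent": 35, "treatment": "ace_inhibitor"},
    {"patient_id": 2, "age": 64, "ef_percent": 45, "treatment": "beta_blocker"},
]

# One-year follow-up outcomes, collected separately -- the longitudinal
# data that is mostly lacking today.
followups = {
    1: {"readmitted_within_1y": True},
    2: {"readmitted_within_1y": False},
}

def build_training_rows(episodes, followups):
    """Join each care episode with its one-year outcome."""
    rows = []
    for ep in episodes:
        outcome = followups.get(ep["patient_id"])
        if outcome is None:
            continue  # no follow-up data -> nothing for AI to learn from
        rows.append({**ep, **outcome})
    return rows

rows = build_training_rows(episodes, followups)
print(len(rows))  # 2 linked examples
```

Only once rows like these exist, relating patient characteristics and treatment to an outcome measured over time, does a recommendation model have anything to generalize from.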
2. Getting the data into good shape
Before data can be fed into a model or algorithm, it needs to be cleansed. This is usually one of the most cumbersome steps in the process – it can take up to 80% of a data scientist’s time. It is a crucial step, however. Make sure to reserve enough resources for it, or work with a partner that can help.
“Garbage in, garbage out” is an age-old principle of computer science that holds true for AI as well. The quality of your AI is only as good as the quality of the data you feed into it.
Imagine teaching your child to speak English. Suppose you make up every fifth word, and you mispronounce every tenth word. Your child will certainly learn a creative variation of English. But that variation probably won’t get them very far when talking to classmates in the schoolyard!
The same goes for healthcare data, which is often extremely noisy. For example, two physicians might annotate the same tumor in different ways. This variability can introduce massive error when training an algorithm to detect tumors. Similar inconsistencies exist in the way information is recorded in EMRs, leading some authors to speak of an electronic Tower of Babel.
Removing this noise and establishing a so-called “ground truth” is critical before training a model or algorithm. If you take data from different systems, you need to normalize the data first – to ensure that the same annotations actually refer to the same thing. The input that goes into the model or algorithm needs to be consistent, otherwise its output will be unpredictable.
The quality of your AI is only as good as the quality of the data you feed into it.
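As a toy illustration of the normalization step, consider mapping free-text annotations from different systems to one canonical term. The vocabulary and mappings below are entirely made up; real normalization would rely on standard terminologies.

```python
# Hypothetical sketch: mapping annotation variants from different systems
# to one canonical term before training (the vocabulary is made up).

NORMALIZATION_MAP = {
    "nsclc": "non_small_cell_lung_carcinoma",
    "non-small cell lung cancer": "non_small_cell_lung_carcinoma",
    "non small cell lung ca": "non_small_cell_lung_carcinoma",
}

def canonical(label):
    """Return the canonical term for a raw annotation, if we know one."""
    key = label.strip().lower().rstrip(".")
    return NORMALIZATION_MAP.get(key, key)

# Two physicians, two spellings, one ground-truth label:
print(canonical("NSCLC"))                    # non_small_cell_lung_carcinoma
print(canonical("Non Small Cell Lung Ca."))  # non_small_cell_lung_carcinoma
```

Trivial as it looks, this is the essence of establishing a ground truth: the same clinical concept must carry the same label everywhere before it enters the model.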
3. Deriving a model from your data
The next step is to derive a model or algorithm from your data. The ubiquitous use of the term ‘AI’ suggests it’s a monolithic technology, but this could not be further from the truth. There are hundreds of different analytical methods, and the choice of method heavily impacts the result.
For example, if I use the same dataset with four different deep learning programs, I could easily get four different models, with four very different results.
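A toy sketch makes the point. The two "methods" below are deliberately trivial and the data is invented, but the lesson carries over to real deep learning: the same training data, fed to two different methods, can yield different answers for the same patient.

```python
# Hypothetical sketch: same data, two methods, two different predictions
# for the same new patient (everything here is made up).

train = [(60, 0), (70, 1), (80, 1)]  # (age, outcome) pairs

def nearest_neighbour(train, age):
    """Predict the outcome of the closest training example."""
    return min(train, key=lambda pair: abs(pair[0] - age))[1]

def majority_class(train, age):
    """Ignore the input; predict the most common training outcome."""
    outcomes = [y for _, y in train]
    return max(set(outcomes), key=outcomes.count)

age = 62
print(nearest_neighbour(train, age))  # 0
print(majority_class(train, age))     # 1
```

With real deep learning the divergence comes from choices like network architecture, loss function, and hyperparameters rather than from two toy rules, but the conclusion is the same: the method shapes the result, and someone has to understand why.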
It takes a data scientist to understand the nuances of what each method does, and how they should be configured. This is an expertise you can either start building in-house – as I have seen many larger hospitals do over the last few years – or outsource to external experts. A combination of these two approaches is also possible.
It is important to keep in mind, though, that a data scientist won’t be able to judge whether the resulting model or algorithm makes clinical sense. That’s why you need a tight marriage between people with an understanding of data science and people with clinical domain knowledge. It’s up to the clinical expert to determine whether the result is correct and useful.
You need a tight marriage between people with an understanding of data science, and people with clinical domain knowledge.
4. Clinically validating your model
Why is clinical validation of a model or algorithm so important?
As we saw in the first post of this series, there are several reasons for this. The model or algorithm could reveal a pattern in the data that is practically meaningless or even misleading. It is also important to be aware of limitations in the applicability of the model, for example because it was trained on a limited data set.
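One small, concrete part of that validation is quantitative: checking the model’s output against a clinician-labelled holdout set. Here is a purely illustrative sketch (all labels and predictions are invented) computing sensitivity and specificity.

```python
# Hypothetical sketch: comparing model predictions with clinician labels
# on a holdout set (all numbers are made up; 1 = condition present).

clinician_labels  = [1, 1, 1, 0, 0, 0, 0, 0]
model_predictions = [1, 1, 0, 0, 0, 0, 0, 1]

def sensitivity_specificity(truth, pred):
    """Fraction of true positives found, and true negatives found."""
    tp = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(clinician_labels, model_predictions)
print(round(sens, 2), round(spec, 2))  # 0.67 0.8
```

Numbers like these tell you how often the model is wrong; only a clinical expert can judge whether those errors are tolerable in practice, and whether the patterns the model has picked up are meaningful in the first place.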
AI can augment the healthcare provider, but it is useless without their expertise. It’s best to think of AI-enabled solutions as an extension of human capabilities, rather than as a replacement (which is why, at Philips, we like to speak of adaptive intelligence).
It’s best to think of AI-enabled solutions as an extension of human capabilities, rather than as a replacement.
5. Operationalizing the model in a clinical environment
Designing a model or algorithm in a computer lab is one thing. Getting it to work in an operational context is another.
The challenge here, again, is that we have no common representations of clinical pathways or workflows in healthcare. Different institutions and different departments use different systems. You cannot simply plug an AI-enabled solution into every system. It must be designed with a particular workflow in mind, which is another reason why clinical experts need to be involved from the start – as well as IT.
This also raises the question: how can we deploy AI-enabled solutions in healthcare more widely, and more efficiently? I strongly believe that the adoption of open standards in workflow technology is a big part of the answer. When, at Philips, we built HealthSuite Insights – an end-to-end platform for the development and deployment of AI-enabled solutions in healthcare – we spoke to a lot of people in the data science community, as well as clinical domain experts and hospital executives. They all stressed the need for interoperability, which is why we incorporated open standards from the get-go.
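To give a flavour of what open standards buy you in practice, here is a minimal FHIR-style Observation resource sketched as a Python dictionary. The identifiers and values are illustrative, not taken from any Philips product; the point is that any consumer that speaks the standard can read the same fields, regardless of which vendor’s system produced the record.

```python
# Hypothetical sketch: a minimal FHIR-style Observation payload, the kind
# of open-standard record an AI-enabled solution could consume from any
# compliant system (identifiers and values are made up).

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",        # LOINC code for heart rate
            "display": "Heart rate",
        }]
    },
    "subject": {"reference": "Patient/example-123"},
    "valueQuantity": {"value": 72, "unit": "beats/minute"},
}

print(observation["valueQuantity"]["value"])  # 72
```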
AI-enabled solutions need to be designed with clinical workflows in mind.
6. Creating a larger ecosystem to share and scale solutions
Open standards also allow different actors in the healthcare industry – hospitals, larger established companies, start-ups – to start sharing AI assets. No single vendor is going to come up with all-encompassing answers for today’s and tomorrow’s challenges in healthcare. Co-creation and partnerships are the way forward.
Let’s start with the foundation
As an industry, let’s be bold in our ambitions with AI. We should not let the operational complexities in healthcare discourage us from pursuing progress. Nor should we gloss over them in the hope that AI will magically solve all our problems. Rather, let’s address these complexities head-on.
Let’s focus first on getting the basics right: how we unlock and exchange data in healthcare, how we bring clinical experts and data scientists together, how we embed solutions into workflows at scale.
Healthcare institutions that make the biggest strides in AI will be the ones that invest in data quality, data exchange, and data management – laying a solid foundation for a future in which we will be able to predict patient outcomes and personalize care plans with ever more precision.
About Innovation Matters
Innovation Matters delivers news, opinions and features about healthcare, and is focused on the professionals who work within the industry, as well as Philips as a cutting-edge health technology organization. From interviews with industry giants to how-to guides and features powered by Philips data, our goal is to deliver interesting, educational and entertaining content to empower and inspire all those who work in healthcare or related industries.
Chief Scientific Officer, Data Science and Artificial Intelligence, Philips
John Huffman currently leads the Data Science and Artificial Intelligence business within Philips – HealthSuite Insights. He has worked in and around artificial intelligence for over 40 years, and has been in healthcare for over 30 years.