AI in healthcare: are we addressing the elephant in the room?
The holy grail of healthcare is having a customized care plan for every individual, based on a holistic understanding of their data, spanning their entire health journey.
Could AI be the answer?
Popular discourse would have you believe that AI is a new and mysterious technology that has suddenly burst onto the scene, with predictive superpowers that will uncover patterns in even the largest of data sets – putting the holy grail of healthcare within arm's reach.
It’s an appealing promise.
But it’s a terribly oversimplified one.
That's because it's simply not true: AI has been around for decades. And, more importantly, it fails to acknowledge the elephant in the room: healthcare's relationship with data is fraught with difficulties.
In a series of two posts, I will explore these difficulties, and how we can overcome them to fully reap the benefits of AI in healthcare. In this first post, let's start by dissecting some of the biggest data challenges in relation to AI – to understand how we can address them most effectively. The second post will then introduce a more detailed framework for dealing with these challenges in a systematic way.
The 95% of the story that gets glossed over
First, it’s important to understand that AI is merely a set of tools. Powerful tools, for sure, but no more than tools. They are not new, either. Methods like machine learning and deep learning were already being taught when I was in college in the 1970s. I have worked in AI and data science ever since, and it’s not so much the methods that have changed over time – rather, we now have the computational bandwidth to apply these methods in a practical setting. But this requires more than AI alone.
Positioning AI as a solution to a complex problem is like saying that an engine will fly you from London to Hong Kong – when in fact you need a whole aircraft, reliable flight plan data, carefully coordinated logistics, and highly skilled professionals both on the ground and in the air.
In 2015, Google scientists published a paper in which they estimated that only about 5% of the code in a real-world machine learning system is machine learning code. The surrounding infrastructure makes up a far bigger part of these solutions. Things like:
Data collection and verification
Feature extraction
Machine resource management
Configuration
Serving infrastructure
Monitoring
So far, the debate on AI in healthcare has focused on new models and algorithms – the 5% of the story that makes for captivating headlines. It’s time to start talking about the other 95% as well – the messy, difficult part, without which no model or algorithm in healthcare will work.
One of the biggest challenges in healthcare is to capture data in a meaningful form so that you can analyze it – for example, in order to create a model or algorithm that can help you predict a disease or select the best possible care plan.
Approximately 75% of healthcare data is unstructured, captured in medical notes of various kinds. On top of that, data resides in different sources, from EMRs to Excel sheets.
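To make this concrete, here is a minimal sketch of the kind of work that precedes any modeling: pulling one structured value out of free-text notes with a regular expression. The note format, field names, and pattern below are hypothetical, and real clinical notes are far messier than this.

```python
import re

# Hypothetical free-text clinical notes; real notes are far messier.
notes = [
    "Pt presents with dyspnea. EF 35%. Started on lisinopril.",
    "Follow-up visit. Ejection fraction 50 %. No new complaints.",
    "EF: 42%. Continue current care plan.",
]

# One pattern rarely covers all the ways clinicians record the same fact.
EF_PATTERN = re.compile(r"(?:EF|Ejection fraction)\s*:?\s*(\d{1,2})\s*%", re.IGNORECASE)

def extract_ejection_fraction(note):
    """Return the ejection fraction as an int, or None if not found."""
    match = EF_PATTERN.search(note)
    return int(match.group(1)) if match else None

structured = [extract_ejection_fraction(n) for n in notes]
print(structured)  # [35, 50, 42]
```

Even this toy extractor needs three spelling variants for a single measurement; multiply that across thousands of clinical concepts and dozens of source systems, and the scale of the "other 95%" becomes apparent.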
And the challenges don’t end there. Once you get to a point where you have all the data and a working model or algorithm, clinical experts need to be able to work with it. Again, no easy feat, given the complexity of most hospital IT systems and the lack of interoperability between them.
Data does not generate insights by itself
The challenges lie not just in the quality of the data and the lack of interoperability. I see an even more fundamental problem in the popular discourse around AI in healthcare, and that’s the widespread notion of AI “generating insights” based on data.
There is no such thing as a data-generated insight.
Now, don’t get me wrong: I, for one, am a firm believer in the value of data to support decision-making.
But the suggestion that analyzing a large enough data set will hand you insights on a silver platter is dangerously misleading. Particularly in healthcare.
Let me elaborate.
AI methods like deep learning are geared towards finding patterns in data. That means they could reveal patterns that are statistically correct – but practically meaningless, and potentially misleading.
For example, a deep learning program may suggest that eating potatoes is lethal, because everyone who has ever eaten a potato is either dead or will die. The correlation is real, but it carries no causal information, and is therefore meaningless. Absurd as this example may seem, you can imagine a more complex variation where it takes an experienced clinician to ascertain the practical usefulness of a pattern.
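The statistical version of the potato problem can be shown in a few lines: search enough variables and a "pattern" will emerge from pure noise. The sketch below (with entirely made-up data) correlates a random outcome against a thousand random features and still finds a seemingly strong association.

```python
import random

random.seed(0)

n_patients = 100
n_features = 1000  # e.g., many unrelated lab values, habits, exposures

# Purely random outcome and purely random features: no real relationship exists.
outcome = [random.gauss(0, 1) for _ in range(n_patients)]
features = [[random.gauss(0, 1) for _ in range(n_patients)] for _ in range(n_features)]

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

best = max(abs(correlation(f, outcome)) for f in features)
print(f"Strongest 'pattern' found in pure noise: r = {best:.2f}")
```

The "best" correlation is statistically detectable yet means nothing at all, which is exactly why a pattern surfaced by an algorithm is a starting point for clinical judgment, not a conclusion.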
AI points to patterns, but it doesn’t understand cause and effect, like a person does. Data does not “generate insights” by itself. It’s still up to a clinical expert to form and validate insights.
Addressing the risk of data bias
It also takes a clinical expert to understand the limitations of a model or algorithm, and to see how bias could inadvertently creep in.
A ‘data-driven’ approach has the connotation of being objective. This may blind us to the fact that the output of a deep learning program depends on the data it was trained on, and the input it is given.
For example, if I train a deep learning program to suggest a care plan for patients with congestive heart failure, the model will only work for the particular type of patients it was trained on. If I apply the same model to a patient who has a related but different heart condition, like hypertrophic cardiomyopathy, the model will still give me a recommendation – but it won’t be a reliable one.
You can see how bias could easily creep in if your training data doesn't fully reflect the target population. As an industry, we need to keep this in mind as we scale AI-enabled solutions across the globe. For example, a deep learning program trained on data from patients in Beijing could lead to erroneous conclusions when applied to patients in New York, and vice versa. Of course, the program can be retrained with local data, or other corrections can be applied. But again, it takes a clinical expert to understand contextual nuances like this, and to deal with them wisely.
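This failure mode can be demonstrated with the simplest possible "model". In the sketch below (simulated, one-dimensional data with made-up cohort names), a threshold classifier is trained on one population and then applied to a second population whose baseline is shifted. It keeps producing predictions for every patient; it just quietly stops being reliable.

```python
import random

random.seed(42)

def make_cohort(n, healthy_mean, sick_mean):
    """Simulated 1-D biomarker; label 1 means the condition is present."""
    data = [(random.gauss(healthy_mean, 1.0), 0) for _ in range(n)]
    data += [(random.gauss(sick_mean, 1.0), 1) for _ in range(n)]
    return data

def fit_threshold(cohort):
    """'Train' the simplest possible model: midpoint between class means."""
    mean0 = sum(x for x, y in cohort if y == 0) / sum(1 for _, y in cohort if y == 0)
    mean1 = sum(x for x, y in cohort if y == 1) / sum(1 for _, y in cohort if y == 1)
    return (mean0 + mean1) / 2

def accuracy(threshold, cohort):
    return sum((x > threshold) == (y == 1) for x, y in cohort) / len(cohort)

# Train on cohort A; cohort B has a shifted baseline (a different population).
cohort_a = make_cohort(500, healthy_mean=0.0, sick_mean=3.0)
cohort_b = make_cohort(500, healthy_mean=2.0, sick_mean=5.0)  # same gap, shifted

threshold = fit_threshold(cohort_a)
print(f"Accuracy on cohort A: {accuracy(threshold, cohort_a):.0%}")  # high
print(f"Accuracy on cohort B: {accuracy(threshold, cohort_b):.0%}")  # degraded
# The model never refuses to answer; it simply becomes unreliable.
```

Notice that nothing in the output warns you about the shift. Detecting it takes either deliberate monitoring or a clinical expert who knows the two populations differ.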
What these examples also highlight is that current-day applications of AI are inherently narrow: they perform a specific task, which can be extremely useful, but they can't reason beyond it (as opposed to general AI, which emulates human thinking, and which today exists only in sci-fi movies).
If I describe my car to a deep learning program that was designed to detect measles, and if I tell it that my car has rust spots and tends to overheat, the best the program can do is tell me that my car has the measles!
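A sketch of why the measles detector behaves this way: a classifier trained to choose among fixed classes has no built-in notion of "this input is outside my world". A softmax output always sums to 1 over the classes the model knows, so one of them always wins. The class names and scores below are made up for illustration.

```python
import math

def softmax(scores):
    """Convert raw model scores into probabilities over the known classes."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["measles", "no measles"]  # the only answers this model can ever give

# Hypothetical raw scores for an input the model was never meant to see
# (a rusty, overheating car). The numbers are invented for illustration.
out_of_scope_scores = [1.3, 0.2]

probs = softmax(out_of_scope_scores)
verdict = classes[probs.index(max(probs))]
print(dict(zip(classes, (round(p, 2) for p in probs))), "->", verdict)
# The probabilities still sum to 1, so the model confidently answers "measles".
```

Unless a system is explicitly designed to abstain or flag out-of-scope inputs, it will keep producing confident-looking answers to questions it was never built to address.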
Of course, this is another absurd example, but it shows why data science and clinical domain knowledge need to go hand in hand.
It’s time for a broader perspective
My goal with this post was not to paint a bleak picture for the future of AI in healthcare. On the contrary: I have made this field my life’s work precisely because of the promise it holds for better care.
However, I think we should be taking a much broader view on AI – a view that covers the full spectrum of data processing, addressing data quality and interoperability as well. Perhaps more importantly, we need to realize that the interaction between clinical experts and AI is far more important than what AI can do by itself.
In the follow-up to this post, I will introduce a framework which aims to achieve precisely this – covering not just the much-hyped 5% of creating a model or algorithm, but the full 100% of the process that will truly help healthcare providers achieve better outcomes with AI.
About Innovation Matters
Innovation Matters delivers news, opinions and features about healthcare, and is focused on the professionals who work within the industry, as well as Philips as a cutting-edge health technology organization. From interviews with industry giants to how-to guides and features powered by Philips data, our goal is to deliver interesting, educational and entertaining content to empower and inspire all those who work in healthcare or related industries.
Chief Scientific Officer, Data Science and Artificial Intelligence, Philips
John Huffman currently leads the Data Science and Artificial Intelligence business within Philips – HealthSuite Insights. He has worked in and around artificial intelligence for over 40 years, and has been in healthcare for over 30 years.