Why AI in healthcare needs a focus on people, not technology
Estimated reading time: 10-12 minutes
As with any technological trend, there’s a tendency in the debate about AI to focus on technology and what it can do. In healthcare, much has already been written about how AI can identify patterns across millions of data points – enabling more precise diagnosis, allowing for more personalized treatment, and improving time and cost efficiencies.
Opportunities such as these have captured the imagination of people across the industry, and with good reason: healthcare providers are drowning in data and starving for insights. AI has the potential to lend them a helping hand, alleviating the burden on overstretched healthcare systems and empowering them to achieve better outcomes at lower costs. I am as excited about this as anyone.
But while you may expect a Chief Technology Officer to focus on the technological aspects of AI, I would like to argue that the human aspects are just as important – maybe more than is commonly acknowledged.
As with any change driven by technology, the hardest part is not the technology itself. The hardest part is getting that technology to work in a way that is accepted, trusted, and embraced by people. This is commonly referred to as the ‘last mile’ challenge of AI in healthcare. It involves questions such as:
How can we get algorithms to produce reliable and actionable insights that truly add value to clinical staff, fitting their daily workflow?
How can we get clinicians and patients to trust and act on AI-based recommendations?
How can we spark a spirit of AI augmenting what professionals can do, allowing them to focus on the patient, rather than fueling fears of automation and job loss?
How can we use AI to empower patients to participate in healthcare decision-making, and to help consumers adopt a healthier lifestyle?
If we are to make real strides with AI in healthcare, I believe these are the questions we should be focusing more on as an industry. Technology can save lives – but it is an enabler, not a solution by itself. True innovation is born in the interplay between people and technology. I believe this requires a thoroughly human-centric approach to AI. We should be addressing the last mile first, not last.
Maybe, in healthcare, we should even move away from the term ‘artificial intelligence’ altogether.
Let me explain why.
AI should adapt to people, not the other way round
If I look at healthcare today, I cannot help but feel amazed by the technological progress we have made over the last few decades, and how this has improved people’s lives around the world. But we also need to be self-critical: in some cases, technology may have created more challenges than it has solved.
Burdened by clerical work and inefficient systems, clinicians now spend more time with machines and with reporting than directly with their patients.[i] People with health trackers often don’t know what to do with the numbers they’re given.[ii]
Looking forward, I think these examples serve as a reminder that as an industry, we should design solutions around people’s needs, not around what’s technologically possible.
Clinicians and nurses want to focus their time and energy on patients. Patients with chronic diseases and healthy people alike want to be empowered to take control of their own health. Only if we design for those needs will people fully embrace AI-enabled solutions in their daily work and lives.
I think this is a very fundamental notion, which is also deeply ingrained in our way of working at Philips. We want to help create a world where technology adapts to people, not the other way around, and certainly not a world where care is delegated to machines. That is why we prefer to talk about adaptive intelligence, rather than artificial intelligence. It is a subtle difference. But I believe a crucial one.
AI should be in the background, not in the foreground
So what does adaptive intelligence mean? First, any AI-enabled solution should be designed as a natural and helpful extension for people, like a car navigation system that supports you in finding the best route to your destination. Similarly, the most impactful applications of AI in healthcare will be created by integrating AI deep into the user interfaces and workflows of hospitals, and by embedding it almost invisibly into solutions for the consumer environment. AI will augment healthcare providers, patients, and healthy people alike.
The concept of adaptive intelligence takes this a step further by adding contextual awareness. This means a system can learn and adapt to the skills and preferences of the person using it, and to the situation they are in. The analogy with car navigation comes to mind again: the system may offer you personalized advice, depending on your preferred routes. In a similar way, a solution like Philips Illumeo helps radiologists speed up their workflows by recording and reproducing their hanging protocols in a consistent manner.
AI needs more than data for it to work
Crucially, clinical staff and patients or consumers need to be involved from the start in the development of such solutions. Clinicians can ask the right questions, validate machine-based recommendations, and interpret them in a clinical context where people’s health is at stake.
It is often said that “data is the new gold”. But that is only partly true. Value comes from actionable insights that are deployed wisely, and that lead to better outcomes at lower cost. That is the real gold, and we need human knowledge to mine it. At all times, clinicians need to be the ones taking the final decisions, in order to drive the best patient outcomes. Approaches that combine data and human knowledge will be the most powerful.
From a patient or consumer perspective, impactful AI-enabled solutions will similarly be about much more than just presenting data to people. For example, if I am looking for help in managing my chronic disease, I don’t just want a device to give me insight into my health data. I want advice on what to do, and I want that advice to be tailored to my unique personality and circumstances.
AI allows us to develop solutions that adapt to such needs, providing personalized coaching to help people adhere to treatment plans or live healthy lives. A smart algorithm by itself won’t do the trick, however. Understanding human psychology is just as important. We all know how difficult it is to change behavior! Again, this is where the expertise of healthcare professionals comes in. By combining data science with behavioral science, we can support people to take control of their own health.
AI needs to be designed for trust and transparency
There is another reason why I strongly believe in co-creation when it comes to developing AI-enabled solutions: it is the only way we can build trust in these solutions. This is arguably the most crucial aspect of the ‘last mile’ challenge I introduced in the beginning of this post. Involving healthcare professionals in the design and implementation of solutions is not only essential for their effectiveness, it also instills trust from the start, and paves the way for adoption.
In addition, to build trust, we need to make sure that AI-based predictions or recommendations are not presented like a black box. Healthcare providers need to be able to understand why a certain prediction or recommendation was made. They need to be able to explain it to a patient too. With transparency comes trust. Again, that is why we need to combine data and machine learning approaches with sound scientific models and clinical knowledge.
Obviously, ensuring data security and privacy is equally important here. This should go well beyond regulatory compliance. At Philips, we strongly believe in ‘privacy by design’. This approach aims to embed privacy and data protection controls throughout the entire data lifecycle, from the early design stage to deployment, collection, use and ultimate data disposition and disposal. Privacy should never be an afterthought – it is the very foundation of trust.
Helping the human touch to triumph
Getting AI to work effectively truly is a collaborative effort, and that is what makes it so exciting. More and more, I see data scientists, designers, and clinicians in hospitals working on innovations together. Over the last few years, we have also been stepping up our own AI competencies within Philips, building on our long heritage of digital innovation. Today, 60% of our research and development professionals work in data science and software. We have found that a combination of hiring specialists in these areas, and having them work together with engineers and scientists with deep domain knowledge, is particularly powerful.
We can only make these efforts succeed if we center them around people. Coupled with the emergence of connected medical and personal health devices, AI and data science offer amazing capabilities. We have an unprecedented opportunity to help solve one of the biggest challenges in the world: providing high-quality care and a good quality of life to all, at an affordable cost. By taking a people-centric approach, we can put advanced technology to wonderful use – helping the human touch to triumph across the health continuum.
Executive Vice President, Chief Technology Officer, Royal Philips
Henk van Houten joined Philips Research in 1985, where he investigated quantum transport phenomena in semiconductor nanostructures – work awarded with the Royal Dutch Shell prize.
In 2016, he became Chief Technology Officer of Royal Philips. In this role, he has global responsibility for Research, Innovation Services, the Philips HealthWorks, Philips Innovation Campus Bangalore, Group Technology Start Ups, Technology and R&D Management, and the Idea to Market Excellence Program. He is Vice Chair of the Group Innovation Board.