As with any technological trend, there’s a tendency in the debate about AI to focus on technology and what it can do. In healthcare, much has already been written about how AI can identify patterns across millions of data points – enabling more precise diagnosis, allowing for more personalized treatment, and improving time and cost efficiencies. Opportunities such as these have captured the imagination of people across the industry, and with good reason: healthcare providers are drowning in data and starving for insights. AI has the potential to lend them a helping hand, alleviating the burden on overstretched healthcare systems and empowering them to achieve better outcomes at lower costs. I am as excited about this as anyone. But while you may expect a Chief Technology Officer to focus on the technological aspects of AI, I would like to argue that the human aspects are just as important – maybe more than is commonly acknowledged.
As with any change driven by technology, the hardest part is not the technology itself. The hardest part is getting that technology to work in a way that is accepted, trusted, and embraced by people. This is commonly referred to as the ‘last mile’ challenge of AI in healthcare, and it raises questions of acceptance, trust, and adoption. If we are to make real strides with AI in healthcare, I believe these are the questions we should be focusing on more as an industry. Technology can save lives – but it is an enabler, not a solution by itself. True innovation is born in the interplay between people and technology. I believe this requires a thoroughly human-centric approach to AI: we should be addressing the last mile first, not last. Maybe, in healthcare, we should even move away from the term ‘artificial intelligence’ altogether. Let me explain why.
If I look at healthcare today, I cannot help but feel amazed by the technological progress we have made over the last few decades, and how this has improved people’s lives around the world. But we also need to be self-critical: in some cases, technology may have created more challenges than it has solved. Burdened by clerical work and inefficient systems, clinicians now spend more time with machines and with reporting than directly with their patients.i People with health trackers often don’t know what to do with the numbers they’re given.ii Looking forward, I think these examples serve as a reminder that, as an industry, we should design solutions around people’s needs, not around what’s technologically possible.
Clinicians and nurses want to focus their time and energy on patients. Patients with chronic diseases and healthy people alike want to be empowered to take control of their own health. Only if we design for those needs will people fully embrace AI-enabled solutions in their daily work and lives. I think this is a very fundamental notion, and one that is deeply ingrained in our way of working at Philips. We want to help create a world where technology adapts to people, not the other way around, and certainly not a world where care is delegated to machines. That is why we prefer to talk about adaptive intelligence, rather than artificial intelligence. It is a subtle difference. But I believe a crucial one.
So what does adaptive intelligence mean? First, any AI-enabled solution should be designed as a natural and helpful extension of people, like a car navigation system that supports you in finding the best route to your destination. Similarly, the most impactful applications of AI in healthcare will be created by integrating AI deep into the user interfaces and workflows of hospitals, and by embedding it almost invisibly into solutions for the consumer environment. AI will augment healthcare providers, patients, and healthy people alike. The concept of adaptive intelligence takes this a step further, by adding contextual awareness. This means a system can learn and adapt to the skills and preferences of the person who uses it, and to the situation he or she is in. The analogy with car navigation comes to mind again: the system may offer you personalized advice, depending on your preferred routes. In a similar way, a solution like Philips Illumeo helps radiologists speed up their workflows by recording and reproducing their hanging protocols in a consistent manner.
Crucially, clinical staff and patients or consumers need to be involved from the start in the development of such solutions. Clinicians can ask the right questions, validate machine-based recommendations, and interpret them in a clinical context where people’s health is at stake. It is often said that “data is the new gold”. But that is only partly true. Value comes from actionable insights that are deployed wisely, and that lead to better outcomes at lower cost. That is the real gold, and we need human knowledge to mine it. At all times, clinicians need to be the ones making the final decisions, in order to drive the best patient outcomes. Approaches that combine data and human knowledge will be the most powerful.
From a patient or consumer perspective, impactful AI-enabled solutions will similarly be about much more than just presenting data to people. For example, if I am looking for help in managing my chronic disease, I don’t just want a device to give me insight into my health data. I want advice on what to do, and I want that advice to be tailored to my unique personality and circumstances. AI allows us to develop solutions that adapt to such needs, providing personalized coaching to help people adhere to treatment plans or live healthy lives. A smart algorithm by itself won’t do the trick, however. Understanding human psychology is just as important. We all know how difficult it is to change behavior! Again, this is where the expertise of healthcare professionals comes in. By combining data science with behavioral science, we can support people in taking control of their own health.
There is another reason why I strongly believe in co-creation when it comes to developing AI-enabled solutions: it is the only way we can build trust in these solutions. This is arguably the most crucial aspect of the ‘last mile’ challenge I introduced at the beginning of this post. Involving healthcare professionals in the design and implementation of solutions is not only essential for their effectiveness; it also instills trust from the start and paves the way for adoption. In addition, to build trust, we need to make sure that AI-based predictions or recommendations are not presented as a black box. Healthcare providers need to be able to understand why a certain prediction or recommendation was made. They need to be able to explain it to a patient too. With transparency comes trust. Again, that is why we need to combine data and machine learning approaches with sound scientific models and clinical knowledge.
Obviously, ensuring data security and privacy is equally important here. This should go well beyond regulatory compliance. At Philips, we strongly believe in ‘privacy by design’. This approach aims to embed privacy and data protection controls throughout the entire data lifecycle, from the early design stage to deployment, collection, use and ultimate data disposition and disposal. Privacy should never be an afterthought – it is the very foundation of trust.
Getting AI to work effectively truly is a collaborative effort, and that is what makes it so exciting. More and more, I see data scientists, designers, and clinicians in hospitals working on innovations together. Over the last few years we have also been stepping up our own AI competencies within Philips, building on our long heritage of digital innovation. Sixty percent of our research and development professionals now work in data science and software. We have found that a combination of hiring specialists in these areas, and having them work together with engineers and scientists with deep domain knowledge, is particularly powerful. But we can only make these efforts succeed if we center them around people. Coupled with the emergence of connected medical and personal health devices, AI and data science offer amazing capabilities. We have an unprecedented opportunity to help solve one of the biggest challenges in the world: providing high-quality care and a good quality of life to all, at an affordable cost. By taking a people-centric approach, we can put advanced technology to a wonderful use – helping the human touch to triumph across the health continuum.
i https://www.advisory.com/daily-briefing/2016/09/08/documentation-time
Former Chief Technology Officer at Royal Philips from 2016 to 2022