Hype and hurdles: Navigating the path of AI in healthcare


In 2023, dictionary publisher Collins named ‘AI’ its most notable word of the year, reflecting how rapidly the technology has advanced and how thoroughly it has come to dominate conversation. The fact that the acronym alone made the list is telling of its impact.

AI is making waves across nearly every industry, evoking a mix of excitement and apprehension. In healthcare, where precision and trust are non-negotiable, these emotions are especially pronounced. The potential for AI to transform patient outcomes is immense, but the path to realising this potential must be trodden with care. This article explores some of the hurdles that tech companies face in their quest to responsibly and effectively integrate AI into clinical practice. 

Addressing inbuilt biases 

AI systems are only as good as the data we feed them. If that data lacks diversity, the technology may produce inaccurate or harmful recommendations for certain groups, exacerbating existing health disparities. This risk is not unique to AI. For example, the COVID-19 pandemic highlighted that pulse oximeters, which have been widely used in healthcare for decades, may not be as accurate for patients with darker skin tones.

The Medicines and Healthcare products Regulatory Agency (MHRA) is taking action to tackle ethnic and other biases in medical devices and now requires all new applications, including those for AI-enabled devices, to describe how they will address bias. Developing bias-free AI technology that is equitable for all relies on incorporating patient voices throughout the product’s lifecycle and on validating the technology across diverse demographics.

Gaining patient and staff buy-in 

Although computers have been a staple in hospitals for decades, the rapid advancement and complexity of AI technology can introduce uncertainties for patients and professionals alike. Patients may have concerns about privacy and the perceived impersonal nature of AI-driven care. Meanwhile, staff might worry that AI will replace them, undermine their clinical expertise or disrupt their established workflows. 

Clear and open communication is crucial to ensure that patients understand how, when and why AI is being used in their care. It’s also essential to reassure patients and staff that AI is designed to augment, not replace, human capabilities: by automating routine tasks, AI can free up clinicians to focus on more complex and nuanced aspects of patient care. As noted by Dr Fei-Fei Li, Co-director of the Stanford Institute for Human-Centered AI, "The future of AI is not about man versus machine, but rather man with machine." 

Training programmes can help staff become comfortable and proficient with AI devices, and involving staff in the development of new protocols ensures their knowledge shapes how the technology is integrated, fostering a sense of ownership and acceptance.

Generating evidence in the real world

Evidence generation for AI in healthcare is a catch-22. Healthcare providers are hesitant to adopt AI tools without solid evidence that they work, while tech companies find it difficult to gather that evidence without widespread adoption. And adoption isn’t the only barrier to robust research: randomised controlled trials are typically considered the gold standard, but they are slow and costly and can’t always keep pace with the speed of AI innovation.

To bridge the gap, AI companies can leverage peer-to-peer advocacy and anecdotal evidence. Positive experiences shared by early adopters can often reveal benefits that go beyond initial hypotheses. A combination of rigorous evidence and practical, on-the-ground insights can provide a comprehensive understanding of AI's true impact in healthcare settings. 

Looking ahead

The possibilities of AI in healthcare are tremendously exciting, but its success requires tackling a wide range of challenges. As technology advances, so must our efforts to use it responsibly, alongside our commitment to ensuring it enhances clinical practice and drives meaningful change for patients. 

“AI will change the world, but the nature of these changes must be determined by society. Creating visionary, but also safe and equitable technology is right at the centre of Oxehealth's platform development,” says Julian Dixon, Director of Engineering.