A white paper recommends steps to ensure that AI is an asset to clinicians rather than a liability.
A new white paper warns that the potential benefits of AI to patients may be overlooked if steps are not taken to ensure that the technologies work for the clinicians using them.
In January, as Healthcare Today reported, the government launched a £150 million procurement drive for AI tools to make the NHS more efficient and save money.
The paper – a collaboration between the Centre for Assuring Autonomy at the University of York, the MPS Foundation and the Improvement Academy hosted at the Bradford Institute for Health Research – argues that the greatest threat to AI uptake in healthcare is the off-switch: frontline clinicians abandoning technology they find burdensome.
The paper’s central concern is that clinicians risk becoming what it calls “liability sinks”, absorbing all legal responsibility for AI-influenced decisions even when the AI system itself may be flawed.
The paper builds on results from the Shared Care AI Role Evaluation (CAIRE) research project, which ran in partnership with the Centre for Assuring Autonomy. This examined the impact of different AI decision-support tools on clinicians, ranging from tools that simply provide information to those that make direct recommendations.
From aspiration to reality
“AI in healthcare is rapidly moving from aspiration to reality, and the sheer pace means we risk ending up with technologies that work more for the developers than clinicians and patients,” said Tom Lawton, a consultant in critical care and anaesthetics at Bradford Teaching Hospitals NHS Trust as well as clinical and AI lead on Shared CAIRE.
The paper makes a series of recommendations to guide clinicians on safe AI use.
Clinicians should regard the input from an AI tool as one part of a wider picture concerning the patient, rather than the most important input into the decision-making process, the report recommends. They should ask for training on any AI tool they are expected to use. This should cover the AI tool’s scope, limitations and decision thresholds, as well as how the model was trained and how it reaches its outputs.
Clinicians should disclose the use of an AI tool to a patient, and should engage with healthcare AI developers, wherever possible, to ensure that AI tools are user-focused and fit for purpose in their intended contexts.