Global healthtech leader Konrad Dobschuetz, the founder director of Leap Advisory & Intelligence, argues that we can never eliminate bias in healthcare. 

Let me get one thing out of the way straight away. There is currently no true artificial intelligence (AI) comparable to human reasoning in play anywhere on this planet, and there will not be for the near future. I might eat my own words, but luckily they are not solely mine: they belong to none other than Apple’s very own large language model (LLM) researchers. 

In their paper published in October 2024, they concluded that they “found no evidence of formal reasoning in language models… Their behaviour is better explained by sophisticated pattern matching – so fragile, in fact, that changing names can alter results by ~10%!” 

Thus, regardless of sector, we will look at this topic with a machine learning mindset, not an “intelligence” one. 

For this article, we will therefore talk about machine learning (ML) or algorithms, not AI. Hype aside, we have indeed entered a new era and will see more applications appearing, but true AI is some way off. 


Is there bias in algorithms?

This clarity is required before looking at our initial premise. Is there bias in algorithms in healthcare and, if there is, how can we avoid it? Do we need some sort of MOT for these systems, and are we running the risk of health inequalities on steroids?

Currently there are only a few, largely static, views on these healthtech interventions, such as the European Union’s AI Act in conjunction with its Medical Device Review mechanism. It is a look-once-and-never-look-again approach.

Furthermore, there is not much detailed consideration of bias in AI in the UK worth speaking of, although a view does exist within at least one of the key watchdogs for patient safety, the Medicines and Healthcare products Regulatory Agency (MHRA). According to Laura Squire, medtech regulatory reform lead and chief officer there, bias “is an issue which the agency is acutely aware of from many perspectives. AI has the potential to significantly transform health, but there are also significant risks of it exacerbating health inequalities or worse, if bias in data is not handled thoughtfully.” There are some pieces in the works to look closer into this, albeit localised, such as the agency’s work with the University of Birmingham called Standing Together. The question remains: is this enough?

The Food and Drug Administration (FDA) in the US does look again, but only if significant changes in datasets are at play. Surely that cannot be good enough. In the conversations for this piece, I spoke to several leaders in this field, all people who have been there and done it: implemented ML in clinical settings, or are researching it.

One of the first questions we need to answer is what algorithmic bias is and how it can affect healthcare outcomes. According to an excellent article on Codecademy, it arises “when an algorithm produces systematic and repeatable errors that lead to unfair outcomes, such as privileging one group over another. Algorithmic bias can be initiated through selection bias and then reinforced and perpetuated by other bias types”.
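To make that definition a little more concrete, here is a minimal, hypothetical sketch in Python using synthetic data and scikit-learn, not any real clinical system: a model trained on data in which one patient group is barely represented ends up performing noticeably worse for that group.

```python
# A minimal, hypothetical sketch of selection bias: the model is trained on data
# in which one patient group is heavily under-represented, then evaluated per group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic "patients": two clinical features plus a binary outcome.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented (selection bias).
Xa_train, ya_train = make_group(2000, shift=0.0)
Xb_train, yb_train = make_group(50, shift=1.5)
X_train = np.vstack([Xa_train, Xb_train])
y_train = np.concatenate([ya_train, yb_train])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced held-out samples from each group:
# the under-represented group typically sees markedly lower accuracy.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))
```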

We are working off the premise that any algorithm currently used in healthcare settings is being trained with data spanning at least 40 years. This data is riddled with bias; human bias, that is. Would it not be fair to assume that any risk stratification, bed occupancy forecast, or pathway automation software is full of it too?

Richard Stones, president of UK-based C2-AI (formerly known as Copeland Clinical AI), which has a long track record in developing AI systems that help hospitals and health systems worldwide, has a clear view on this. 

“We should measure outcomes first as Social Determinants of Health (SDOH) determine your health. Rather than artificially trying to adjust based on postcodes, we need to measure outcomes precisely and then understand the way these differ across SDOH/EDI lines. That highlights issues of bias and inequity effectively. The second element is training algorithms with sufficiently large, representative datasets to eliminate bias,” he says. 
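As an illustration of that “measure outcomes first” idea, the sketch below, with entirely made-up column names and figures, simply tabulates an observed outcome rate per deprivation and ethnicity grouping before anything in a model is adjusted.

```python
# A minimal sketch of "measure outcomes first": compare observed outcome rates
# across SDOH/EDI groupings before adjusting anything in the model itself.
# Column names and values are hypothetical, for illustration only.
import pandas as pd

# Hypothetical outcomes table: one row per patient episode.
episodes = pd.DataFrame({
    "deprivation_quintile": [1, 1, 2, 3, 3, 4, 5, 5, 5, 2],
    "ethnicity_group":      ["A", "B", "A", "B", "A", "A", "B", "B", "A", "B"],
    "readmitted_30d":       [0, 1, 0, 1, 0, 0, 1, 1, 0, 1],
})

# Outcome rate per group, with counts so small groups are visible rather than hidden.
summary = (
    episodes
    .groupby(["deprivation_quintile", "ethnicity_group"])["readmitted_30d"]
    .agg(rate="mean", n="count")
    .reset_index()
)
print(summary)

# A simple disparity signal: the spread between the best and worst observed rates.
print("max rate gap:", summary["rate"].max() - summary["rate"].min())
```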

Does more data eliminate bias?

Then comes the question of whether we need more data to avoid bias, or less, so that we do not introduce it in the first place. One answer here potentially lies in the black box issue that algorithms carry. 

Andreas Haimboeck-Tichy, managing director for healthcare at Accenture UK, who has worked on many related projects over the years with the MHRA and various UK NHS entities, comments that it is important to be clear about “who trains it [the algorithm] and how can its decisions be explained”. This is an interesting take on the topic. Can we avoid bias altogether if we open the lid and make it all open source? Andreas raised another interesting question: “What is the bias in the people using ML solutions in healthcare?” Is there indeed a connection where a double whammy of distortion comes in? Imagine a scenario where a well-used but misaligned algorithm provides suggestions for risk stratification and a user feels a manual adjustment is needed. Might we as well go back to pen and paper and the good old days of multidisciplinary teams (MDTs) huddling together in a badly lit room for a decision?
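One modest way of “opening the lid”, whether or not the code is ever open sourced, is to ask which inputs a trained model actually leans on when it makes a decision. The sketch below uses scikit-learn’s permutation importance on a synthetic cohort; the feature names are placeholders, not a real clinical dataset.

```python
# A minimal sketch of explaining a model's decisions: permutation importance
# shows which inputs the trained model actually relies on. The features and
# data here are synthetic placeholders, not a real clinical model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["age", "frailty_score", "prior_admissions", "postcode_index"]

# Synthetic cohort: the outcome is driven by the first three features only.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the model's score drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>18}: {score:.3f}")
```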


A garden with 1,000 flowers

James Teo, professor of neurology and joint director for data science and artificial intelligence at King’s College Hospital NHS Foundation Trust and Guy’s & St Thomas’ NHS Foundation Trust, opened another can of worms, and rightfully so. 

One of my pet peeves has also made it into the world of clinical algorithms, so-called “pilotitis”. According to him, we currently have a “1,000 flowers blooming scenario in which you need to weed many of them”. It is a tantalising view on the subject as it wraps around the bias issue. 

What do you do with thousands of solutions that have been trained on limited sets of data, as alluded to previously, and are riddled with flaws? How do you control their spread and, most importantly, their misuse? After all, we are talking about patient safety.

In this exact vein comes the comment from Helga Brogger, who works as a senior researcher in AI at Oslo-based risk management firm DNV. She has highlighted that bias in healthcare is a multi-stakeholder issue and that we “need acknowledgement that it [bias] is a great risk”. A board member of the Norwegian Council for Digital Ethics, she has worked on machine learning and algorithms since 2017 across a variety of sectors, not just healthcare. 

Let us close with an eye-opening admission from Teo, backed up by his experience. To the question of whether we can ever eliminate bias in healthcare, he is quite clear that we cannot, as “not all bias is undesirable, and bias is inherent to reality”, so there will always need to be “human monitoring”. Be that in how we collect patient data or in how we interpret the suggestions and predictions of an algorithm. In the end, it is still us who decide, not a machine. 

And would we not want it to be that way?