Louise Kitchingham, senior vice-president at Clarity Global, explains who’s most at risk from misinformation, why it spreads and what we need to do next.
From viral TikToks to political podiums, misleading health claims are circulating with unprecedented speed. In today’s UK, a blend of low health literacy, social media incentives and polarised politics has created the ideal environment for medical misinformation to flourish. This is more than just a digital nuisance. It’s a real-world risk that undermines trust and public health, and can ultimately cost lives.
More information, less understanding
Despite the end of the COVID-19 pandemic and its lockdowns, misinformation hasn’t faded; it has adapted. Ofcom figures indicate that four in ten UK adults encounter misinformation or deepfakes every month, with a quarter exposed to health-related falsehoods. Making matters worse, only a minority feel confident in identifying AI-generated content, and, as the House of Lords recently highlighted, the UK continues to fall behind internationally on media literacy.
At the centre of the issue is health literacy. Nearly half of working-age adults in England struggle with written health information, and the challenge is even greater when numbers are involved. For younger generations, for whom social media is now the main channel for news and advice, the result is a population bombarded with opinions, personal stories and sensational claims, but without the tools to separate fact from fiction.
Why misinformation spreads so easily
The rapid spread of misinformation is not accidental. Social media algorithms and influencer culture reward engagement over accuracy. False claims, especially those packaged as personal testimony or so-called “natural” solutions, can go global in minutes. Recent headlines show how quickly unfounded claims gain traction. For example, when the White House linked paracetamol in pregnancy to autism, UK health authorities rushed to dispel the myth, but not before it had spread widely online.
On the demand side, low health and digital literacy, cognitive overload, and a natural human bias towards confident, simple answers all play a role. When political figures or celebrities amplify these messages (sometimes for attention, sometimes for ideology), the confusion deepens. The UK’s media literacy deficit is being exploited by those who understand how to shape narratives and influence behaviour.
The consequences are visible and growing. As vaccine misinformation persists, vaccine uptake remains below target in several parts of England, and outbreaks such as the resurgence of measles show that trust in expertise has not fully translated into protective behaviour.
Regulation is still playing catch-up
The regulatory response is evolving, but recent developments show it is lagging further behind than many hoped. While the UK’s Online Safety Act (OSA) has entered the enforcement stage and Ofcom’s codes are now in force, the law only tackles illegal medical misinformation, leaving vast grey areas. Lawful but harmful content, the monetisation of questionable wellness cures, and the viral spread of unproven claims remain largely unaddressed.
The government’s and Ofcom’s latest responses to the Science, Innovation and Technology Committee’s report confirm this patchwork approach. Despite the committee urging a tougher stance on algorithmic amplification and the risks of generative AI, the government declined to extend legislation specifically to cover generative AI platforms, even as Ofcom itself admitted the legal status of such technology is “not entirely clear”.
Moreover, while the government acknowledged the role of digital advertising in incentivising the spread of harmful content, it stopped short of any real commitment to change, stating only that it would keep the issue “under review”.
In practice, this means that both the technological drivers and financial incentives for medical misinformation remain largely unchecked, and the public continues to be exposed to sophisticated new threats.

Turning the tide on misinformation
It’s clear that solutions must move as quickly as the misinformation itself. Government and policy must focus on bridging the divide between illegal and harmful content, providing clearer guidance for platforms, and investing robustly in enforcement. Beyond reactive measures, Big Tech must embed transparency and friction into their systems, actively demoting dubious claims and safeguarding users from unverified advice.
The NHS and public bodies can help close the health literacy gap by using plain, accessible language and by putting trusted clinicians directly onto the platforms where people are searching for answers.
Addressing the problem doesn’t stop at regulation. There is a crucial role for schools, families, and communities. Critical thinking and double-checking sources should become the norm in classrooms and at home. The public needs practical tools to evaluate information, such as checking the credibility of sources, looking for scientific consensus, and being cautious of sensational “cure-all” claims or anecdotal evidence. If a claim can’t be explained in simple terms or traced to solid evidence, it’s worth pausing before sharing or acting on it.
The stakes for trust and public health
Trust in UK doctors remains high, but this trust is under pressure. Experts need to be heard, their guidance must be easy to understand, and their voices should be present in the same digital spaces where misinformation spreads. The next public health crisis may well be one of trust and truth, rather than disease.
The stakes are high, but there is a real opportunity to build a healthier, better-informed UK. This demands coordinated action from government, tech platforms, the NHS, educators, and parents. We all have a role to play: scrutinise what you see, share responsibly, and actively support initiatives that champion health literacy and truth online.