Neil Daly, chief executive and founder of Skin Analytics, argues that AI isn’t a threat. It is the infrastructure for faster and safer care.
I’ve spent more than a decade working with the NHS to bring safe, effective AI into the hands of dermatology teams. In that time, I’ve watched waiting lists grow despite those teams’ best efforts, and I’ve heard the heartbreaking stories of patients for whom the wait was too long.
The tools to solve this problem are already at hand. They have been in use within parts of the NHS since 2021, and have now seen more than 225,000 patients and found 15,500 cancers. Despite this, they are still seeing only 10% of urgent skin cancer referrals in England.
Skin cancer is the most common cancer-related GP referral in the UK. But fewer than one in ten of those patients have high-risk skin cancer. This mismatch overwhelms dermatology departments, leaving patients with aggressive melanomas or severe skin conditions stuck in the same queues as those with benign concerns.
This isn’t just a problem. It’s a systemic failure.
We know the stats: melanoma survival rates are over 95% when the disease is caught early, yet fall to 10% when it is caught late. So why are we still accepting delays? It’s not a matter of resourcing. It’s a matter of decision-making.
We have an outdated perception of risk.
We overlook the real risk posed by the thousands of high-risk cancers still waiting to be detected by dermatologists. Yet we fixate on the perceived risk of using AI, even though its performance can be assessed across many times more cases than a dermatologist sees in a lifetime.
It’s understandable that new technologies like AI are met with hesitation. But these tools are no longer experimental and haven’t been for a number of years. They are regulated, often recommended by national public healthcare bodies, deployed in NHS settings, and changing lives.
Why AI isn’t the risk
The perception persists that AI in cancer diagnosis is still new and needs more research. But in reality, AI-led tools have now cleared the regulatory bar. They are being deployed safely within NHS pathways and have already proven their value.
In skin cancer pathways where our AI is deployed and embedded, we’ve reduced urgent face-to-face appointments for suspected skin cancer by up to 95%. That means faster answers, fewer anxious waits, and earlier treatment for those who need it.
But this didn’t happen overnight. Turning complex mathematical models into a regulated medical device and deploying it successfully into real-world healthcare pathways is a gargantuan, interdisciplinary project.
We are acutely aware that our technology is focused on cancer and needs to work exceptionally well. We took our time, spending the first eight years on research and development, starting in 2012. Building safe clinical AI is a long journey: it encompasses clinical validation, external scrutiny, support from NHS England and the UK government, and the regulatory commitment required to secure and maintain the only AI cleared to make clinical decisions in the cancer space.
Still, the biggest challenge we face isn’t performance – it’s adoption.
The NHS’s 10 Year Health Plan for cancer care sets ambitious targets for faster diagnosis and improved outcomes. But without scalable, proven tools like AI-led diagnostics, these goals risk becoming little more than a wish list. Technology must be treated not as an enhancement, but as the infrastructure that makes these ambitions feasible.
A missed or delayed diagnosis isn’t just a statistic; it can determine whether a patient survives the disease. We talk about innovation as a nice-to-have. But when that innovation can be a matter of life or death, ensuring patients have access is not negotiable.
Delayed adoption based on an incorrect perception of risk is unacceptable.
Patients are ready
Given the option, most patients would prefer a safe AI assessment in 48 hours over waiting 12 weeks to see a consultant dermatologist. Once they experience AI-led diagnostics, the vast majority say they would choose it again.
They’re not afraid of technology; they’re tired of waiting. When the process is transparent, regulated, and clinically robust, patients trust it. We need to start trusting it too, and reframe how we assess risk.
We hold AI to the highest standards. And we have to. But let’s also hold the current system to the same standard: a system in which thousands of patients face delays not because of complexity, but because of inertia.
If we’re serious about delivering the NHS’s 10 Year Health Plan, we need to stop treating AI as the saviour of the future and start treating it as the infrastructure for faster, fairer, safer care, right now.
In that context, AI isn’t the radical option. It’s the responsible one.
The safest thing we can do for patients isn’t to keep them waiting for a consultant. It’s to get them seen faster, through trusted, regulated, and clinically sound technologies.
Together, regulated AI and clinical expertise can clear waiting lists over the next few years.