After the uncertainty created in the tech market by the launch of DeepSeek, Alex Fairweather examines what this means for AI in healthcare. 

“As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of,” noted Satya Nadella. Microsoft’s chief executive was right. Just when we thought the hype cycle of Generative AI was peaking, the launch of DeepSeek R1 has pumped it back through the roof. Not only that, but this seems to be a legitimate inflexion point in its potential for adoption and commoditisation. 

To cut a long and highly technical story short, DeepSeek R1 was developed using existing large language models (LLMs) to train a smaller, more agile model that uses a more human-like way of thinking. To quote Google Gemini: “‘Inference’ refers to the act of drawing conclusions or making predictions based on available data, while ‘reasoning’ is the broader process of analysing information, identifying patterns, and applying logical rules to reach those conclusions.” In other words, reasoning is the process and inference is its outcome: the system applies the knowledge it has derived to make decisions about new data.
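
To make the idea of using a larger model to train a smaller one more concrete, here is a minimal, illustrative sketch of knowledge distillation in PyTorch. The networks, sizes and training data below are toy placeholders, and this is not DeepSeek’s actual training pipeline (which also relied on other techniques such as reinforcement learning), but it captures the core idea of a small “student” learning to mimic a large, frozen “teacher”.

# Toy knowledge distillation: a small student network learns to mimic
# the output distribution of a larger, already-trained teacher network.
# Illustrative only; not DeepSeek's actual training recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(64, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

teacher.eval()  # the teacher is frozen; only the student is trained
optimiser = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's probabilities so more signal transfers

for step in range(1000):
    x = torch.randn(32, 64)  # stand-in for real training data
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    # KL divergence pulls the student's predictions towards the teacher's
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()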

The advantages of a reasoning model 

DeepSeek R1 uses a reasoning model that produces a more conversational thought process. Its model is open source, and its claimed performance relative to cost is higher than that of existing applications such as ChatGPT. The wow factor isn’t necessarily in the performance, though, but in what it means for the next stage of AI development.

A key takeaway is that DeepSeek has been created at a much lower cost (or so it is claimed!), which lowers the barriers to entry for building new, bespoke LLMs from existing larger models with less compute, energy and expense. This opens up the possibility of developing applications on office-based hardware rather than relying on large data centres and IT infrastructure comprising thousands of chips. The concept isn’t new, but the launch has shown that it could be achievable much sooner than expected.
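
As a rough, back-of-envelope illustration of why smaller models change the hardware equation, the sketch below counts only the memory needed to hold a model’s weights; the figures are my own approximations, and real deployments also need room for activations and caching, so treat them as lower bounds.

# Approximate memory footprint of a model's weights:
# parameters x bytes per parameter. Real usage is higher once
# activations and the key/value cache are included.
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1e9

print(weight_memory_gb(70, 2))   # ~140 GB: a 70B-parameter model in 16-bit, data-centre territory
print(weight_memory_gb(7, 2))    # ~14 GB: a 7B-parameter model in 16-bit, a single workstation GPU
print(weight_memory_gb(7, 0.5))  # ~3.5 GB: the same 7B model quantised to 4-bit, laptop territory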

I’ve read some comments comparing this to the advent of modern PCs replacing mainframe computers in the 1980s and 1990s. If the comparison holds, then, as happened back then, the change won’t come all at once, but it will eventually disrupt and transform the industry.

The devil is in the detail with any online LLM. By default, anything you upload and anything the model produces is usually retained by the company that operates it. In the case of DeepSeek, whatever data you input is handed to a Chinese company and, by extension, the Chinese government. It’s against most companies’ IT policies to use any LLM on company hardware or networks because of the risk of IP theft. Similarly, no one in their right mind should upload company documents or patient notes to an LLM unless it’s strictly used offline. This goes for all applications, not just DeepSeek, so in short: personal use only, and never on work devices!

When I tested DeepSeek myself, I found the responses easier to read and subjectively more pleasant; the more conversational nature of the model made for a generally better user experience.

The model seems to provide a deeper response to questions. When asked about treating exercise-induced asthma, for example, both DeepSeek and ChatGPT suggested broader holistic options, though each missed some information the other included. DeepSeek produced a holistic summary with a suggested approach, whereas ChatGPT listed its three best options.

As previously mentioned, the real upside is DeepSeek’s efficiency: it places lower demands on energy and compute. Because the model is freely available, developers can create a local (offline) version, provided it meets their country’s data security requirements. This is a real bonus and the area of greatest potential for Generative AI in healthcare.
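
For illustration, this is roughly what running one of the openly released distilled checkpoints locally might look like using the Hugging Face transformers library. The checkpoint name, prompt and settings are assumptions for the sake of the sketch, and anything touching real patient data would still need to pass your organisation’s information-governance checks.

# Minimal sketch of local inference with Hugging Face transformers.
# The checkpoint name is an assumption; substitute whichever openly
# licensed model your organisation has approved for offline use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed distilled checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarise the first-line options for exercise-induced asthma."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Once the weights are downloaded, nothing in this sketch needs to leave the local machine, which is precisely the property that matters for healthcare data.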

In summary, the hype isn’t about what DeepSeek can do now, but about what it will change in the future across the ecosystem of Generative AI.

What’s the bigger picture for healthcare? 

As I’ve written previously, and as has been much discussed, we can’t become over-reliant on artificial intelligence. AI is a great enabler that can augment human performance and productivity, but it is not a replacement.

Interpreting complex results and incorporating qualitative feedback may have taken a step forward over the past week, but this is still where humans shine compared with AI. In live testing, none of the LLMs comes close to a highly trained, intuitive expert in the field.

AI can help healthcare professionals connect siloed information to provide holistic care. Reasoning models such as DeepSeek appear to facilitate this better than pure inference models. Combined with the improved economics they seem to demonstrate, and the potential for AI applications to become more bespoke, this could empower faster adoption across the life sciences through greater accessibility and lower barriers to entry.

Thomas Wolf, open-source advocate and co-founder and chief science officer of Hugging Face, has said: “In the internet revolution, we’re moving from building websites as the main business to actually building internet-native companies – so, the Airbnb of AI, the Stripe of AI, they are not about the model. They are about the system and how you make the model useful for tasks.” 

This nicely summarises my take: DeepSeek has helped lower barriers to entry and opened the door for new AI applications to be built. It is growing the ecosystem necessary for mass adoption.

It’s still very early days, but the signs are promising: we can expect the democratisation and adoption of bespoke AI applications sooner than we believed possible a week ago.