Neil Rowe, senior in-house counsel at THEMIS Clinical Defence, considers the potential pitfalls of relying on transcription tech.

Artificial intelligence (AI) is rapidly transforming healthcare delivery, particularly in administrative and documentation tasks.

One area seeing accelerated adoption is AI-enabled ambient scribing – technology that captures and transcribes conversations between clinicians and patients, often in real time.

While the promise of these tools is substantial, the risks are equally critical, particularly regarding patient safety, data protection, regulatory compliance, and legal liability.

AI transcription tools and how they’re used

AI transcription tools such as Dragon Medical One (Nuance/Microsoft), Amazon Transcribe Medical, Suki Assistant, Tali AI, and Dictate.IT are now embedded in clinical workflows across GP practices, hospitals, and outpatient clinics. These systems often combine speech recognition, natural language processing (NLP), and generative AI to produce structured clinical notes or patient letters from spoken consultations.
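
To make the workflow concrete, here is a minimal sketch of submitting a recorded consultation to one of the named services, Amazon Transcribe Medical, through its public API. The bucket, file, and job names are hypothetical placeholders; a production deployment would add error handling and appropriate data governance controls.

```python
# Minimal sketch: submitting a recorded consultation to Amazon Transcribe Medical.
# Bucket, file, and job names below are illustrative placeholders.
import time
import boto3

transcribe = boto3.client("transcribe")

transcribe.start_medical_transcription_job(
    MedicalTranscriptionJobName="consultation-2025-001",  # hypothetical job name
    LanguageCode="en-US",            # Transcribe Medical supports US English
    MediaFormat="wav",
    Media={"MediaFileUri": "s3://example-bucket/consultation.wav"},  # hypothetical audio
    OutputBucketName="example-output-bucket",  # hypothetical results bucket
    Specialty="PRIMARYCARE",
    Type="CONVERSATION",             # clinician-patient dialogue rather than dictation
)

# Poll until the job completes; the transcript URI points at a JSON file in S3.
while True:
    job = transcribe.get_medical_transcription_job(
        MedicalTranscriptionJobName="consultation-2025-001"
    )["MedicalTranscriptionJob"]
    if job["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)

if job["TranscriptionJobStatus"] == "COMPLETED":
    print(job["Transcript"]["TranscriptFileUri"])
```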

On 27 April 2025, NHS England released updated guidance on the deployment of these technologies – “Guidance on the use of AI-Enabled Ambient Scribing Products in Health and Care Settings” – which builds upon earlier initiatives to reduce documentation burden. The guidance supports innovation while reinforcing the principles of safety, transparency, professional oversight, and data privacy. Its principles apply equally to independent private healthcare providers. 

Uses include:

  • Generating outpatient letters automatically post-consultation
  • Real-time transcription and summarisation of consultations
  • Discharge summaries and follow-up documentation
  • Integration with electronic patient record (EPR) systems

The new guidance stresses that ambient scribing tools must not replace clinician judgement, and that outputs should always be clinically reviewed and verified before being finalised.

Benefits of AI transcription in healthcare

  • Efficiency gains: Clinicians save time previously spent on manual note-taking and typing, delivering overall cost savings.
  • Enhanced patient interaction: Less screen time means more eye contact and better communication.
  • Standardised documentation: Structured, templated output improves record clarity and auditability, and more accurate records enhance patient care.
  • Real-time availability: Notes can be produced instantly, aiding continuity of care.
  • Staff wellbeing: Reduced administrative workload contributes to lowering burnout rates.
  • Scalability: Supports practices experiencing workforce shortages or high demand.

These benefits support the NHS’s broader digital transformation priorities publicised at the start of this month under the 10 Year Health Plan. However, the 2025 guidance makes clear that these tools are assistive, not autonomous.

Key risks and how things can go wrong

Transcription errors: Speech-to-text engines, while improving, still struggle with homophones, medical jargon, and accents, and background noise, overlapping speech, or low-quality audio compound the problem. A misheard phrase can drastically change the clinical meaning. For example, a GP’s comment “No chest pain today” was transcribed as “Chest pain today” in the letter. The letter was not reviewed before sending, and the error led to an unnecessary cardiology referral.
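
The negation flip in that example is exactly the kind of error an automated safeguard can surface for mandatory human checking, though such checks are no substitute for clinician review. A minimal, hypothetical sketch in Python (the term list and negation cues are illustrative, not a validated clinical vocabulary):

```python
import re

# Hypothetical safeguard: flag terms whose negation status differs between
# the raw transcript and the drafted letter, so a human must check them.
NEGATIONS = re.compile(r"\b(no|not|denies|without|never)\b", re.IGNORECASE)
TERMS = ["chest pain", "shortness of breath", "allergy"]  # illustrative list

def negated_terms(text: str) -> set[str]:
    """Return the terms that appear in a negated sentence of `text`."""
    found = set()
    for sentence in re.split(r"[.!?]", text):
        for term in TERMS:
            if term in sentence.lower() and NEGATIONS.search(sentence):
                found.add(term)
    return found

def review_flags(transcript: str, draft: str) -> set[str]:
    """Terms whose negation status differs between transcript and draft."""
    return negated_terms(transcript) ^ negated_terms(draft)

# The misheard example from the text: the negation was dropped in the letter.
print(review_flags("No chest pain today.", "Chest pain today."))
# -> {'chest pain'}: escalate to the clinician before the letter is sent
```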

Clinician over-reliance or user error: Guidance warns against passive acceptance of AI-generated documentation. Over-trusting the output risks embedding serious errors into the patient record. Clinicians must also take care not to use the product in ways that generate output beyond its intended scope.

Invisibility of errors: AI-generated text often reads fluently and convincingly – what experts call “authoritative but incorrect”. Sometimes the AI invents content outright, known as “hallucination”. Subtle clinical misstatements may go unnoticed unless carefully reviewed, potentially leading to errors such as misdiagnosis.

Loss of clinical context: Ambient scribing tools may fail to capture key nuances such as facial expressions, body language, or hesitation, which are often crucial for safeguarding or differential diagnosis. Clinician observation and recording therefore remain critical.

Confidentiality and cybersecurity risks: Ambient tools often store data temporarily or use cloud-based servers for processing. The 2025 NHS guidance mandates data minimisation, encryption, and compliance with UK GDPR, yet any lapse in supplier controls could result in a breach. For example, a transcription company contracted by a Trust was later found to have used subcontractors abroad without informing the data controller, raising serious compliance concerns.
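
To illustrate the data-minimisation and encryption requirements in concrete terms, here is a hypothetical sketch of redacting a direct identifier and encrypting a transcript at rest before any further processing. The redaction pattern and inline key handling are deliberately simplified placeholders, not a compliant implementation.

```python
import re
from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative data minimisation: strip an obvious direct identifier before
# storage. Real deployments would use validated de-identification tooling,
# not a single regex.
NHS_NUMBER = re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b")

def minimise(transcript: str) -> str:
    return NHS_NUMBER.sub("[REDACTED NHS NUMBER]", transcript)

# Illustrative encryption at rest. In practice key management is the hard
# part and would sit in an HSM or managed key service, never inline.
key = Fernet.generate_key()
cipher = Fernet(key)

transcript = "Patient 943 476 5919 reports no chest pain today."
stored_blob = cipher.encrypt(minimise(transcript).encode("utf-8"))

# Only holders of the key can recover the minimised transcript.
print(cipher.decrypt(stored_blob).decode("utf-8"))
```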

Interoperability and record fragmentation: Without tight EPR integration, AI-generated documentation can result in duplicate or inconsistent records, undermining clinical handover and coordination.

Legal and regulatory liability

Responsibility for errors depends on the origin and nature of the failure:

  • Clinicians: Remain personally accountable for checking the accuracy, completeness and proportionality of any record that bears their name. This is reinforced by GMC Good Medical Practice 2024.
  • Healthcare organisations: Have a duty to ensure proper procurement, governance, training, and oversight of AI tools. Failure to do so could give rise to corporate liability.
  • Vendors and developers: May be liable under product liability law, especially if faults arise from defective algorithms or insufficient warning about limitations.
  • Data processors: Third-party service providers who mishandle or unlawfully process patient data could face ICO action, civil claims, and reputational damage.

Claims could arise in negligence, contract, breach of statutory duty, breach of confidence, or under data protection laws, depending on the facts.

Practical risk mitigation strategies

The NHS England 2025 guidance recommends a governance-led approach to safe deployment. Practical steps for clinicians and managers include:

  • Mandatory human review: No AI-generated document should be approved or sent without clinician verification.
  • Ongoing training: Clinicians must be trained to understand how AI tools work, including common pitfalls and workarounds.
  • Robust data governance: Ensure clear DPIAs, data flow maps, and supplier compliance with NHS DTAC (Digital Technology Assessment Criteria).
  • Audit and oversight: Establish internal audit mechanisms to track discrepancies between AI drafts and finalised notes (a minimal sketch follows this list).
  • Incident reporting: Errors should be captured through local risk systems and reported via the Patient Safety Incident Response Framework (PSIRF).
  • Clear accountability: Local policies should specify responsibilities at each stage – from capture to verification to storage.
  • Robust contracting: Providers should ensure contracts with suppliers are clear on roles, responsibilities and liabilities. 
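
As a simple illustration of the audit point above, a hypothetical discrepancy tracker might record how much each AI draft was edited before clinician sign-off, giving governance teams a reviewable metric. The 20% threshold below is an illustrative policy choice, not a recommended standard.

```python
import difflib

def edit_ratio(ai_draft: str, final_note: str) -> float:
    """Fraction of the AI draft changed during clinician review (0.0 = untouched)."""
    matcher = difflib.SequenceMatcher(None, ai_draft, final_note)
    return 1.0 - matcher.ratio()

def audit_record(ai_draft: str, final_note: str, threshold: float = 0.2) -> dict:
    """Hypothetical audit entry: heavily edited drafts are flagged for review."""
    ratio = edit_ratio(ai_draft, final_note)
    return {
        "edit_ratio": round(ratio, 3),
        "flag_for_review": ratio > threshold,  # illustrative policy threshold
    }

draft = "Patient reports chest pain today. Referred to cardiology."
final = "Patient reports no chest pain today. No referral required."
print(audit_record(draft, final))
```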

The road ahead

The use of AI in clinical documentation is set to expand further, especially as large language models (LLMs) become more embedded in NHS infrastructure. Emerging tools may not only transcribe but triage, code, and even generate clinical summaries or follow-up plans using broader patient data.

NHS England’s 2025 guidance will be reviewed again in six months as the picture evolves, and it represents a timely intervention to shape that evolution responsibly. Regulators including the CQC, MHRA, and GMC are expected to increase scrutiny of how these tools affect clinical decision-making, patient experience, and service delivery.

The road ahead will likely be bumpy. Only three weeks ago, Sky News reported that NHS England had written to hospitals in June, warning them to stop using AI transcribers from suppliers that fail to meet minimum standards, putting clinical quality, patient safety, data protection, and the wider digital strategy at risk.

Conclusion

AI-enabled ambient scribing tools offer immense potential to reduce the documentation burden on healthcare staff, improve standardisation, and unlock new efficiencies in clinical care. However, their use introduces complex risks that can impact patient safety, clinician accountability, and institutional liability.

NHS England’s 2025 guidance serves as both a roadmap and a warning: use these tools well, but use them wisely. For healthcare providers, indemnifiers, and legal teams, this is a pivotal opportunity to embed safety, governance, appropriate cover and legal defensibility into how AI transcription is adopted across the entire sector.