AI in the NHS: From ambition to assurance

Why trust, governance and readiness, not technology alone, will determine success

Artificial intelligence (AI) is no longer a theoretical prospect for the NHS. It is already embedded in administrative workflows, diagnostics, screening programmes and clinical decision support tools. The government’s ambition to make England’s NHS “the most AI-enabled health system in the world” over the next decade signals a decisive shift from experimentation to scale. Yet ambition alone is insufficient.

As the House of Lords Library’s In Focus paper AI in the NHS (December 2025) makes clear, the real challenge is not whether AI can be deployed, but whether it can be governed, trusted and used safely within one of the world’s most complex health systems. Public confidence is mixed, clinicians remain cautious, and regulators are still adapting to technologies that learn, evolve and operate at scale.

In this blog, Dr Richard Dune calls for a more mature conversation: one that moves beyond innovation rhetoric and focuses on assurance, accountability and system readiness.

The promise: Productivity, capacity and earlier intervention

AI’s potential contribution to the NHS is undeniable. Administrative automation alone offers substantial gains. Recent large-scale trials, including the Microsoft Copilot deployment across NHS organisations, have demonstrated meaningful time savings, releasing clinicians from repetitive documentation and administrative tasks that currently dominate working hours.

Clinical applications are also expanding. AI tools are now supporting:

  • Early cancer detection through imaging analysis
  • Identification of blood disorders from pathology data
  • Risk stratification and triage in high-volume services.

These developments matter because the NHS faces structural pressures that cannot be solved through workforce growth alone: rising demand, an ageing population, constrained finances and widening health inequalities. In this context, AI is increasingly framed not as optional, but as essential infrastructure. However, necessity does not remove responsibility.

Public trust: Conditional, cautious and fragile

The House of Lords paper highlights a critical reality: public support for AI in healthcare is conditional. Surveys consistently show higher acceptance for administrative uses than for clinical decision-making. This distinction is telling.

People are generally comfortable with AI helping to organise appointments or transcribe notes. They are far less comfortable when algorithms influence diagnoses, treatment plans or prioritisation decisions, particularly when those systems are opaque. This reflects three interconnected concerns.

1. Accuracy and safety

AI systems can and do make errors. While human clinicians also err, public tolerance for mistakes is lower when decisions are made by machines, especially when the rationale cannot be clearly explained.

2. Bias and inequality

Evidence that AI performs less well for underrepresented groups, such as poorer diagnostic accuracy for people with darker skin tones, has reinforced fears that AI could entrench, rather than reduce, health inequalities if poorly designed or governed.

3. Loss of human care

Patients and staff worry that AI could make healthcare more transactional, impersonal or mechanistic, particularly where chatbots or automated triage replace human interaction.

Trust, once lost, is difficult to rebuild. This places governance, transparency and engagement at the centre of any credible AI strategy.

The “black box” problem and clinical accountability

One of the most challenging issues raised by the House of Lords briefing is accountability. Many AI systems, particularly those based on deep learning, do not provide clear, interpretable explanations for their outputs. This creates a fundamental tension in clinical environments that rely on professional judgement, ethical responsibility and legal accountability.

Key questions remain unresolved:

  • Who is responsible if an AI-supported decision causes harm?
  • Is liability shared between the clinician, the organisation and the developer?
  • How should clinicians challenge or override AI recommendations in practice?
  • How do we prevent automation bias, where human users defer too readily to algorithmic outputs?

The Law Commission’s exploration of AI legal personality may seem radical, but it reflects the scale of the governance challenge ahead. Current medico-legal frameworks were not designed for adaptive systems that learn over time.

Until these issues are clarified, AI adoption in higher-risk clinical settings will remain cautious, and rightly so.

Data: The foundation and the fault line

AI in healthcare is only as good as the data that underpins it. The NHS holds one of the richest health datasets in the world, creating immense opportunities for innovation but also significant risks.

Public trust in technology companies handling patient data remains low, and high-profile cyber incidents, such as the Synnovis breach, have reinforced concerns around data security and misuse. While anonymisation and pseudonymisation offer safeguards, the risk of re-identification persists if governance is weak.
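
To make the safeguard, and its limits, concrete, here is a minimal Python sketch of pseudonymisation using a keyed hash. Everything in it (the key, the field names, the example values) is hypothetical and for illustration only; it is not an NHS-approved method.

```python
import hashlib
import hmac

# Hypothetical secret key for illustration; in practice it would sit under
# strict key management, since anyone holding it can re-link records.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(nhs_number: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

record = {
    "nhs_number": "943 476 5919",   # direct identifier (illustrative number)
    "postcode": "SW1A 1AA",         # quasi-identifier
    "date_of_birth": "1954-03-02",  # quasi-identifier
    "diagnosis": "type 2 diabetes",
}

pseudonymised = {**record, "nhs_number": pseudonymise(record["nhs_number"])}

# The direct identifier is now a hash, but postcode and date of birth taken
# together may still single out one person: pseudonymisation alone is not
# anonymisation if governance of keys and residual fields is weak.
print(pseudonymised)
```

The sketch makes the governance point plainly: removing the direct identifier is easy, but the remaining quasi-identifiers can still re-identify a person unless access, linkage and key management are controlled.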

Regulators, therefore, face a delicate balancing act:

  • Enabling access to high-quality data for research, development and validation
  • Maintaining robust protections, transparency and public consent.

Without clear, enforceable standards for data governance, AI risks losing its social licence to operate within the NHS.

Regulation: From static approval to continuous assurance

Perhaps the most important signal in the House of Lords paper is the recognition that traditional regulatory models are no longer sufficient.

AI systems are not static medical devices. They evolve, retrain and change behaviour as new data is introduced. A one-off approval process cannot adequately manage this risk.

The MHRA-led National Commission into the Regulation of AI in Healthcare, due to report in 2026, represents a critical opportunity to rethink assurance. Early indications suggest a shift towards:

  • Lifecycle-based regulation
  • Continuous monitoring and post-deployment evaluation
  • Greater emphasis on real-world performance, not just pre-market testing.
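
As a hypothetical illustration of what lifecycle-based, continuous assurance could look like in code, the Python sketch below tracks a deployed tool's rolling real-world accuracy against its pre-market baseline and flags it for human review when performance drifts. The class name, thresholds and window size are assumptions made for this example, not values from the MHRA or the briefing.

```python
from collections import deque

class PostDeploymentMonitor:
    """Minimal sketch of lifecycle assurance: compare a deployed tool's
    rolling real-world accuracy against its pre-market baseline and flag
    drift for human review. Thresholds and window size are illustrative
    assumptions, not regulatory values."""

    def __init__(self, baseline_accuracy: float,
                 tolerance: float = 0.05, window: int = 500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # most recent confirmed cases

    def record(self, prediction_correct: bool) -> None:
        """Log whether the tool's output was later confirmed correct."""
        self.outcomes.append(prediction_correct)

    def needs_review(self) -> bool:
        """True when real-world performance drifts below the agreed floor."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # insufficient post-deployment evidence so far
        rolling_accuracy = sum(self.outcomes) / len(self.outcomes)
        return rolling_accuracy < self.baseline - self.tolerance

# Usage: seed the monitor with the pre-market figure, then feed back outcomes.
monitor = PostDeploymentMonitor(baseline_accuracy=0.94)
```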

This approach aligns with WHO guidance and international best practice. However, it will require closer coordination between regulators, commissioners, providers and system leaders.

Workforce readiness: The missing link

One area that receives less attention but is arguably decisive is workforce readiness. Freeing up time through automation does not automatically improve care. Benefits are only realised if:

  • Staff understand how AI works and where its limitations lie
  • Organisations invest in AI literacy and digital competence
  • Professional standards evolve to reflect AI-supported practice.

Without this, new risks emerge: over-reliance on algorithms, under-challenging of outputs, and disengagement from clinical reasoning.

AI, therefore, needs to be treated not just as a technology project, but as a workforce and governance transformation programme.

From innovation to infrastructure: What must change

If AI is to move from promising pilots to trusted infrastructure, several shifts are required:

  • Governance before scale - AI must be embedded within organisational governance frameworks, not bolted on. This includes clear accountability, risk management, auditability and board-level oversight.
  • Ethics with enforcement - Ethical principles must translate into procurement standards, assurance processes and inspection criteria, not remain voluntary guidance.
  • Transparency by design - Explainability, audit trails and performance reporting should be treated as core safety features, not optional extras (see the sketch after this list).
  • Continuous assurance - Regulators and providers must adopt lifecycle oversight models that recognise AI’s evolving nature.
  • Public and professional engagement - Trust is built through openness, dialogue and inclusion, particularly with communities most at risk of exclusion or bias.
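
To show what "transparency by design" might mean at the level of a single record, the Python sketch below defines a hypothetical audit-trail entry for an AI-supported clinical decision. The field names and example values are illustrative assumptions, not a mandated NHS schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionAuditRecord:
    """Hypothetical audit-trail entry for an AI-supported clinical decision.
    All field names are illustrative, not a mandated NHS schema."""
    model_name: str
    model_version: str    # which model produced the output
    input_reference: str  # pointer to the (pseudonymised) input data
    ai_output: str        # what the tool recommended
    clinician_id: str     # who reviewed the recommendation
    clinician_action: str # "accepted", "overridden" or "modified"
    rationale: str        # free-text reason, essential when overriding
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Usage: every AI-supported decision appends a record like this one.
entry = AIDecisionAuditRecord(
    model_name="chest-xray-triage",
    model_version="2.3.1",
    input_reference="study-ref-00042",
    ai_output="flagged: possible pneumothorax",
    clinician_id="gmc-1234567",
    clinician_action="accepted",
    rationale="Findings consistent with image review",
)
```

A trail of records like this is what lets an organisation answer, after the fact, who decided what, on which model version, and why, which is exactly the accountability question the "black box" problem raises.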

Conclusion: Readiness is the real differentiator

AI has the potential to transform the NHS, but only if its deployment is shaped by trust, governance and readiness, rather than speed alone.

The House of Lords Library briefing usefully reframes the national conversation. The question is no longer whether AI belongs in healthcare, but whether the system is ready to use it responsibly.

Those organisations that succeed will not be those that adopt AI fastest, but those that embed it most thoughtfully, within robust governance structures, skilled workforces and transparent assurance frameworks.

In the next phase of NHS digital transformation, assurance will matter as much as innovation. The future of AI in healthcare will be determined not by what the technology can do, but by how wisely we choose to govern it.

Take the next step from AI ambition to assurance

With LearnPac Systems, organisations can move from digital ambition to regulatory assurance, helping leaders embed governance, competence, and confidence in the use of emerging technologies across health and social care. ComplyPlus™ integrates learning management, training oversight, policies, procedures, and compliance evidence into a single, connected environment, supporting inspection readiness, accountability, and effective governance as new technologies are introduced.

References

House of Lords Library (2025) AI in the NHS. In Focus. UK Parliament.

Author

Dr Richard Dune

Founder & CEO, LearnPac Systems

Published

18/12/2025