How the AI race, political pressure, and productivity promises distract from the real work of building governance, capability, and safe adoption in health systems
Across healthcare policy, regulation, and technology commentary, one phrase keeps resurfacing: ‘fast access to AI’.
The pressure is not subtle. Governments are competing to position themselves as leaders in artificial intelligence. Health systems are framed as prime beneficiaries of this race. Even the UK Prime Minister, Keir Starmer, has spoken openly about the need for healthcare to adopt AI at pace as part of wider productivity and reform ambitions.
In that context, speed begins to feel like a virtue. But it is the wrong starting point. The real challenge in health and social care is not whether organisations can access AI. It is whether systems are ready to safely absorb it, use it well, and govern it responsibly in real-world conditions. Healthcare rarely fails at the point of invention. It fails at the point of implementation.
In this blog, Dr Richard Dune argues that the real risk in healthcare AI is not lack of access, but lack of readiness. He shows why governance, capability, and system maturity matter more than speed in achieving safe, meaningful impact.
The AI race and the pressure to move fast
The global AI race matters. Countries worry about falling behind economically. Policymakers worry about missing opportunities to modernise public services. Health leaders worry about workforce shortages, rising demand, and financial constraints.
Against that backdrop, calls for faster deployment of AI in healthcare are understandable. AI is frequently presented as a lever for productivity, efficiency, and transformation. But urgency carries risk.
When speed becomes the organising principle, it quietly reframes what success looks like. Deployment replaces outcomes. Availability substitutes for value. Systems are judged by how quickly tools are introduced, not by whether they actually improve care, safety, or professional judgement. This is how innovation theatre begins.
Access is not the bottleneck
Most health and care organisations already have access to AI in some form. Clinical decision support tools. Triage algorithms. Documentation and transcription systems. Workforce scheduling. Risk stratification and population analytics. These technologies are no longer experimental.
What varies dramatically is not availability, but impact. Some teams report reduced administrative burden or improved insight. Others experience increased workload, confusion about accountability, or quiet abandonment once the initial enthusiasm fades. The same tool can reduce friction in one setting and increase risk in another. That variation tells us something important: Access is not the limiting factor.
Diffusion is a systems challenge, not a technology one
The diffusion of innovations theory has long shown that adoption depends on far more than technical performance. Innovations spread when they are:
- Compatible with existing practice
- Trusted by users
- Supported by leadership
- Reinforced by organisational culture.
Healthcare struggles with all four. AI is often introduced into systems already under strain:
- Limited time for training and reflection
- Fragmented digital infrastructure
- Unclear ownership when outputs are contested
- Overlapping regulatory and performance pressures.
In that environment, faster deployment does not accelerate the benefit. It amplifies variation. Early adopters may make tools work through personal expertise and informal workarounds. But when scaled, those same tools expose gaps in governance, capability, and integration. This is where many promising AI initiatives stall.
The hidden cost of speed: Implementation debt
Every AI system creates implementation debt. That debt shows up as:
- Training and retraining requirements
- Workflow disruption and redesign
- New safety, assurance, and liability questions
- Monitoring for bias, drift, and misuse
- Cognitive load on already stretched staff.
When innovation moves faster than organisations can service that debt, the cost is paid elsewhere, most often by frontline teams. This is why promised productivity gains from AI are so often assumed rather than realised. The system absorbs the burden, not the benefit.
From a governance perspective, this is a predictable failure mode. From an operational perspective, it is deeply destabilising.
Why regulation keeps circling back to speed
Regulatory debates frequently frame the challenge as a balancing act between innovation and safety. Faster access is positioned as essential to relevance and competitiveness.
The current Call for Evidence led by the Medicines and Healthcare products Regulatory Agency reflects this tension. The intent is sound: enable innovation without compromising patient safety.
But speed is not neutral. It encodes priorities. When speed dominates regulatory framing, other questions quietly fade into the background:
- Who is accountable when AI influences decisions?
- How are unintended consequences detected and addressed?
- What happens when performance varies across populations or services?
- How is learning captured once systems are live?
Regulation is not a brake on innovation. Done well, it is a stability mechanism that creates the conditions for useful innovations to move beyond pilots and survive contact with reality.
Readiness is the missing conversation
If fast access is the wrong question, what should leaders be asking instead? A better starting point is readiness. That means asking:
- Do we have the workforce capability to use AI critically, not passively?
- Is this tool integrated into existing systems or layered on top of them?
- Are roles and responsibilities clear when outputs are disputed or wrong?
- How will performance be monitored after deployment, not just at approval?
- What behaviours does this system encourage under pressure?
These are not technical questions. They are governance and leadership questions. They determine whether AI becomes a support, a distraction, or a risk multiplier.
Digital maturity: The missing middle
One of the most consistent problems LearnPac Systems sees across regulated environments is a mismatch between digital ambition and digital maturity. Organisations are asked to adopt advanced technologies while still struggling with:
- Inconsistent data quality
- Fragmented systems
- Unclear document control
- Weak assurance processes
- Variable workforce capability.
In that context, AI does not fix structural issues. It exposes them.
Digital maturity is not about buying more tools. It is about:
- Integrated systems
- Clear governance
- Competent, confident users
- Continuous learning and improvement.
Without this foundation, AI adoption becomes fragile and risky.
Workforce capability matters more than tool capability
AI does not replace professional judgement. It reshapes it. Whether that reshaping is helpful or harmful depends on how well staff understand:
- What the system does
- What it does not do
- When to trust it
- When to challenge it.
This requires training, supervision, and a culture that rewards reflection rather than blind compliance. In regulated sectors, capability gaps quickly become compliance risks. And compliance failures quickly become safety issues.
Speed helps headlines. Readiness helps outcomes.
The political and economic pressure to adopt AI quickly is real. But healthcare history is full of innovations that promised transformation and delivered uneven results because adoption raced ahead of capability.
AI is not different in kind, only in scale and opacity. Fast access may satisfy the logic of the AI race. Sustainable value depends on slower, less visible work:
- Building trust
- Strengthening governance
- Integrating systems
- Developing workforce capability
- Establishing feedback and learning loops.
That work does not make headlines. But it is where real improvement happens.
A better question for leaders
The question is not: “How quickly can we deploy AI?” It is: “What needs to be true in our system for this to reduce burden, improve judgement, and make care safer?”
Access is easy. Absorption is hard. And in health and social care, that difference matters.
Conclusion: Readiness over speed
The AI race in healthcare is often framed as a test of speed, but speed alone does not deliver safer care, better judgement, or sustainable impact. Access to AI is no longer the challenge. Readiness is. Without strong governance, capable workforces, and mature systems, faster adoption simply amplifies risk. In health and social care, meaningful innovation happens when systems are prepared to absorb change, not when they are pushed to move faster than they can safely manage.
How LearnPac Systems can help build system readiness
At LearnPac Systems, we work with health and social care organisations to move beyond innovation theatre and build the system readiness required for safe, scalable adoption.
Our core services support organisations to:
- Strengthen governance and assurance through integrated compliance systems
- Develop workforce capability via accredited learning, CPD, and leadership development
- Improve digital maturity by aligning systems, data, and processes
- Embed continuous oversight across policies, procedures, training, and risk management.
ComplyPlus™ supports this system-led approach by providing a single regulatory compliance management platform that brings training, policies, governance, risk, and assurance together in one inspection-ready environment.
Whether organisations are considering AI adoption or struggling with the unintended consequences of rapid digital change, the solution is rarely another tool. It is a stronger system. Innovation works when governance, capability, and systems mature together.
If you want to explore how LearnPac Systems can support your organisation to scale innovation safely in regulated environments, we’d welcome the conversation.
