Access to AI doesn’t guarantee better thinking in healthcare

Why capability, governance, and metacognitive skills determine whether AI strengthens professional judgement or quietly erodes thinking, accountability, and patient safety

Artificial Intelligence (AI) is now embedded across health and social care. From documentation support and triage tools to analytics and decision support, AI is no longer novel. In many organisations, it is already part of everyday work.

Yet a growing body of evidence suggests something uncomfortable: simply giving people access to AI does not reliably improve thinking, creativity, or decision quality.

That insight sits at the heart of recent research by Shuhua Sun and colleagues, published in the Harvard Business Review (HBR). Their article, Why AI boosts creativity for some employees but not others, shows that the benefits of AI are highly uneven. Some teams produce better ideas. Others stagnate or regress.

The difference is not the technology. It is how people think with it. For health and social care leaders, this finding should ring alarm bells.

In this blog, Dr Richard Dune examines why AI access alone does not improve judgement in health and social care, and how weak thinking habits, governance gaps, and unclear accountability can allow AI to erode professional decision-making rather than strengthen it.

AI does not fail because it is inaccurate

AI fails because of how humans use it. The HBR research identifies metacognition as the critical differentiator. Metacognition is the ability to reflect on, monitor, and refine one’s own thinking.

In practice, this means:

  • Questioning outputs rather than accepting them
  • Understanding limitations and uncertainty
  • Iterating, combining, and reshaping ideas
  • Retaining professional judgement. 

Where these habits are present, AI can enhance creativity and insight. Where they are absent, AI encourages passivity. This is not a new risk.

Over a decade ago, while working on research into Clinical Decision Support Systems in the NHS, I interviewed a consultant cardiologist who articulated the problem clearly. He was not anti-technology. His concern was what he called “pedagogical integrity”.

If systems automatically flagged drug interactions, he worried that junior doctors would stop thinking through pharmacology for themselves. They would become button-pushers. When the system was absent, their reasoning would be weaker, not stronger.

What he described then is precisely the risk we face now, only at much greater scale. When algorithmic output is treated as the final answer, both creativity and professional judgement erode.

What NHS leaders are telling us now

In conversations with senior leaders across NHS organisations, a consistent theme emerges. Very little is being done to prepare users for AI. There is a widespread assumption that:

  • Staff will “work it out”
  • Exposure will equal competence
  • Confidence will develop organically. 

In practice, this rarely happens. Instead, we see:

  • Uneven usage
  • Over-reliance on outputs
  • Silent workarounds
  • Growing anxiety about accountability
  • Reluctance to challenge system recommendations. 

In regulated environments, this is not just inefficient. It is dangerous.

Why this is a governance issue, not a technical one

AI is often introduced as a productivity or digital innovation issue. But the risks clinicians and managers describe are not technical failures. They are governance failures.

If AI is allowed to:

  • Shape decisions without clear oversight
  • Influence care without explicit accountability
  • Replace reflection with automation. 

Then organisations are not innovating. They are ceding judgement.

In UK healthcare, where statutory duties, professional accountability, and patient safety are non-negotiable, this matters deeply. AI cannot and must not replace people. It is an enabler, not a decision-maker.

The myth of “AI literacy by osmosis”

One of the most persistent assumptions in NHS digital programmes is that people will become competent users of AI simply by being exposed to it.

This is the myth of learning by osmosis. The HBR research shows this assumption is false. Without deliberate development of metacognitive habits, AI often:

  • Narrows thinking rather than broadening it
  • Reinforces first answers
  • Reduces challenge and iteration
  • Creates false confidence. 

In healthcare, false confidence is especially risky.

A tool that sounds authoritative can silence dissent. A system that produces fluent output can discourage questioning. And under time pressure, staff will default to the path of least resistance. This is how AI quietly acquires the final say, not by design, but by neglect.

AI should make us better thinkers, not lazier ones

The question for leaders is not whether AI is being adopted. That has already happened. The question is whether organisations are building the capability to think with AI, rather than deferring to it.

That requires four deliberate actions.

1. Treat AI output as a first draft, always

AI outputs should be framed explicitly as starting points, not conclusions. Staff should be encouraged and expected to ask:

  • What’s missing?
  • What assumptions sit behind this output?
  • How might this differ in another context?
  • What does my professional judgement add?

This mirrors good clinical practice. No care plan, diagnosis, or policy would be accepted without scrutiny. AI outputs deserve the same rigour.

2. Use AI to expand thinking, not outsource it

AI is most valuable when used to:

  • Explore diverse inputs
  • Surface alternatives
  • Reduce administrative burden
  • Synthesise large volumes of information. 

It should not be used to avoid thinking. The aim is capacity-building, not cognitive offloading.

3. Train metacognitive habits deliberately

Critical thinking does not emerge spontaneously. Organisations should build:

  • Reflection prompts into workflows
  • Peer review of AI-supported outputs
  • Structured challenge processes
  • Supervision that includes discussion of AI use. 

Over time, these practices turn passive users into strategic thinkers.

4. Design workflows that reward iteration

If speed is rewarded above all else, AI will be used to produce quick answers. If quality, reflection, and learning are rewarded, AI becomes a collaborator rather than an authority. Workflow design sends powerful signals about what matters.

Why this matters more in the UK healthcare system

The UK healthcare system operates under intense pressure:

  • Workforce shortages
  • Rising demand
  • Regulatory scrutiny
  • Financial constraint. 

In this context, the temptation to let AI “take the strain” is understandable. But this is precisely where risk multiplies.

If AI:

  • Substitutes for judgement
  • Obscures accountability
  • Weakens professional reasoning. 

Then it undermines the very foundations of safe care. The UK system already struggles with variation. Introducing AI without workforce capability simply amplifies that variation.

AI, accountability, and professional responsibility

One of the most concerning signals from frontline conversations is uncertainty about accountability. When AI influences a decision:

  • Who is responsible if it goes wrong?
  • Is it the clinician?
  • The organisation?
  • The system supplier?

If staff are unclear, they will either:

  • Over-rely on AI to protect themselves, or
  • Avoid it entirely. 

Neither outcome improves care. Clear governance, role clarity, and escalation routes are essential. AI should support accountable professionals, not blur responsibility.

This is not an argument against AI

It is an argument against complacency. AI has enormous potential in health and social care. But potential is not impact. Impact depends on:

  • Capability
  • Culture
  • Governance
  • Learning systems. 

Without these, AI becomes a brittle intervention, impressive in pilots, fragile in practice. Leaving AI to have the final say is a recipe for failure. Allowing it to make us lazy in thinking, lazy in deciding, or lazy in critiquing is worse.

What leaders should be asking now

For NHS boards, executives, and regulated providers, the key questions are no longer technical. They are human and organisational:

  • Are our people trained to challenge AI outputs?
  • Do our systems reinforce reflection or speed alone?
  • Have we built explicit oversight into AI-enabled processes?
  • Are we strengthening professional judgement, or eroding it?

These questions belong squarely in governance discussions, not IT workstreams.

From access to capability: The real transformation

Access to AI is now widespread. Capability is not. The organisations that benefit from AI will not be those that deploy fastest. They will be those that:

  • Invest in workforce thinking skills
  • Embed oversight and accountability
  • Treat AI as a learning system, not an answer machine
  • Align innovation with professional standards. 

Good compliance and good care go hand in hand. AI does not change that. It reinforces it.

Conclusion: From access to capability

AI is now part of everyday healthcare, but access alone does not improve judgement or safety. What matters is whether organisations have the governance, accountability, and thinking skills to use it critically rather than defer to it. Without these foundations, AI risks becoming an unchallenged authority instead of a clinical support. In regulated systems like the NHS, that means greater risk, weaker professional judgement, and blurred responsibility. The organisations that benefit from AI will be those that invest in people, not just technology.

How LearnPac Systems supports safer, smarter AI adoption

At LearnPac Systems, we work with health and social care organisations to strengthen the human and governance foundations that underpin safe innovation, supported by ComplyPlus™, our regulatory compliance management software.

Our services support organisations to:

  • Develop workforce capability through accredited training, CPD, and leadership development
  • Embed governance and accountability across digital and clinical systems
  • Align AI adoption with regulatory requirements, policies, and assurance frameworks
  • Build digital maturity, not just digital tools. 

AI should enhance judgment, not replace it. Innovation should strengthen systems, not bypass them.

If your organisation is adopting AI, or already living with its unintended consequences, we can help you build the capability, governance, and confidence required to use it safely and effectively. Because access to AI doesn’t guarantee better ideas. People do.


Author

Dr Richard Dune

Founder & CEO, LearnPac Systems

Date Published

22/01/2026