
AI bubble or transformation? What it means for health leaders


Why health and social care leaders must strengthen governance, workforce readiness and digital resilience as AI growth accelerates and investment cycles become unstable

Artificial intelligence has risen faster than almost any technology in recent memory. Over the past two years alone, investment has soared, valuations have surged, and nations have scrambled to position themselves as global AI leaders.

But in a candid interview this week with the BBC, Google and Alphabet CEO Sundar Pichai issued a warning that cuts through the hype:

"No company is going to be immune if the AI bubble bursts, including us."

Coming from one of the world's most influential technology leaders, this is not a throwaway comment. It is a call to pause, reflect, and prepare, particularly for health and social care leaders, as AI is now becoming integral to clinical practice, safeguarding, risk management, workforce planning, governance, and operational efficiency.

In this blog, we explore what Pichai's remarks mean for the NHS, local authorities, social care providers, and the wider ecosystem of health and care organisations that rely on digital systems, regulated workflows and statutory responsibilities.

The AI boom, and signs of a bubble

Pichai described the current AI boom as "an extraordinary moment, even by Silicon Valley standards", but he also warned that irrationality is creeping into the investment cycle.

He compared today's environment to the dot-com bubble, when investment surged far ahead of capability and sustainability.

Just as the internet ultimately proved transformative, despite the crash of early companies, Pichai expects AI to follow a similar trajectory. But he cautions that organisations should expect volatility, consolidation and turbulence as the market corrects.

Why this matters for health and social care

Across the sector, AI is being deployed for:

  • Clinical decision support
  • Diagnostics and imaging
  • Safeguarding triage and risk detection
  • Workforce rostering
  • Predictive analytics
  • Case management
  • Medication support
  • Virtual wards and remote monitoring.

If the AI bubble wobbles or bursts, we could see:

  • Vendor instability - Many AI companies are start-ups with uncertain finances
  • Orphaned technologies - Discontinued support for AI modules embedded in EHRs or care management systems
  • Disruption to clinical workflows - Sudden removal or downgrading of tools relied upon for triage, risk scoring or prediction
  • Procurement risk - Organisations may face sunk costs, integration failures or compliance issues.

Digital transformation in care settings must therefore be underpinned by robust governance, business continuity planning and multi-vendor strategies, not assumptions of AI-sector stability.

Don't blindly trust AI: Safety, accuracy and accountability

Another key warning from Pichai was unequivocal:

"AI models, even state-of-the-art ones, should not be blindly trusted."

He noted that large models still produce errors, hallucinations and misleading outputs. Even frontier-level systems must be used alongside other sources of evidence, not as stand-alone sources of truth.

Why this is critical for patient safety

AI tools now influence decisions that directly impact lives:

  • Clinical risk scores
  • Safeguarding escalations
  • Medication suggestions
  • Mental health triage
  • Falls and deterioration predictions
  • Workforce and caseload modelling.

If inaccurate, opaque or poorly governed, AI can cause harm at scale.

Governance essentials for safe adoption

Health and social care leaders must ensure:

  • Human-in-the-loop by design - AI can support decisions, but cannot replace professional accountability
  • Transparent, traceable outputs - Opaque "black box" recommendations are incompatible with regulated environments
  • Clear escalation pathways - Staff must know when to override or report concerns
  • Digital competency training - Frontline teams need confidence to use, verify and challenge AI-assisted outputs
  • Regulatory alignment - NICE, MHRA (SaMD), ICO, CQC, and local safeguarding boards will increasingly expect structured evidence of safe AI use.

AI can improve safety, but only if embedded within strong governance frameworks, not treated as infallible.

The hidden challenge: AI's immense energy demands

Pichai also highlighted an often-overlooked issue: AI consumes huge amounts of energy.

AI currently accounts for around 1.5% of global electricity use, and demand is accelerating rapidly as new data centres are built and larger models are trained.

Pichai noted that this energy burden is already affecting Alphabet's own climate targets, pushing back its ambition to achieve net zero.

Implications for health and social care

AI's energy demands may seem abstract, but the consequences are very real:

  • Digital infrastructure strain - NHS organisations, ICSs and councils will increasingly require stronger energy and network capacity to run advanced AI systems at scale
  • Cost pressures - Running large models is expensive, and public budgets are already stretched
  • Sustainability targets - Digital transformation must now be factored into carbon reduction strategies and Green Plans
  • Procurement decisions - Health and care organisations must begin assessing the energy efficiency and sustainability credentials of AI suppliers.

In short, digital and sustainability agendas are converging, and leaders must plan accordingly.

Workforce transformation: The profound impact of AI on jobs

Pichai described AI as:

"'The most profound technology' humanity has ever worked on, one that will 'evolve and transition certain jobs' and require people to adapt."

He emphasised that every profession, from teaching to medicine to social work, will remain essential, but those who learn to use AI tools effectively "will do better".

Workforce implications for health and social care

This shift is already visible:

New roles emerging:

  • AI safety specialists
  • Prompt engineers
  • Data quality officers
  • Algorithm auditors
  • Digital ward coordinators
  • Augmented clinicians.

Existing roles changing:

  • Administrators
  • Radiographers
  • Social workers
  • Pharmacists
  • HR teams
  • Care coordinators.

Skill gaps are widening. Without structured, national-level CPD and training, staff risk falling behind rapidly evolving technologies.

What the health and social care sector needs now

To prepare the workforce, ICSs, NHS Trusts, social care providers and training bodies must develop:

  • AI competency frameworks
  • Digital literacy standards
  • Mandatory AI safety training
  • CPD pathways for augmented roles
  • Assurance processes for AI-influenced decisions
  • Shared learning across health, social care and education.

The social care workforce, often overlooked in national digital policy, must be included in this training evolution, or risk falling dramatically behind.

At LearnPac Systems, this aligns directly with our mission: "to equip organisations with evidence-informed training, governance frameworks and digital readiness tools that support safe, confident AI adoption."

UK context: Investment, opportunity and responsibility

Pichai reaffirmed Alphabet's commitment to the UK, including:

  • £5bn investment in AI infrastructure
  • Expansion of DeepMind
  • Commitment to train AI models in the UK, a major strategic shift.

This aligns with the UK's ambition to become the world's "third AI superpower" after the US and China.

What this means for UK health and social care

This investment could accelerate:

  • AI-enabled clinical trials
  • Diagnostic innovation
  • Mental health analysis tools
  • Genomic medicine
  • New drug discovery pipelines
  • Digital transformation across the NHS
  • Integrated data platforms for ICSs.

But it also increases responsibility around:

  • Data governance
  • Privacy compliance
  • Algorithmic fairness
  • National digital infrastructure
  • Equitable access across regions
  • Environmental sustainability.

AI innovation is welcome, but must be delivered fairly, safely and sustainably, not only in world-class centres but across the entire system, including social care and community settings.

What health and social care leaders should do next

Based on Pichai's insights and the sector's trajectory, organisations should prioritise the following:

  • Strengthen governance frameworks – Embed AI oversight in risk management, quality assurance and clinical governance processes.
  • Diversify suppliers and avoid over-dependence – Market volatility means resilience depends on avoiding single-vendor strategies.
  • Build system-wide digital literacy – Make AI competence mainstream across roles and sectors.
  • Prepare for evolving regulation – MHRA, NICE, CQC, ICO, and safeguarding boards will tighten expectations as AI matures.
  • Monitor sustainability and operational costs – AI adoption must align with energy strategies and resource constraints.
  • Prioritise public trust and service user voice – Transparency, human oversight and clear accountability are essential for legitimacy.

Final thought: A moment of promise, and a moment of caution

Sundar Pichai's remarks do not signal pessimism. Rather, they remind us that AI's promise comes with volatility, uncertainty and responsibility.

Health and social care stands to gain enormously from well-governed AI: improved diagnostics, safer systems, streamlined operations and more empowered professionals.

But these benefits will only be realised if organisations take a measured, governance-led approach, grounded in equity, safety, sustainability and workforce readiness.

The AI revolution is here. For our sector, and for LearnPac Systems' work across compliance, governance and workforce development, the challenge is to harness it wisely, ensuring innovation strengthens care, supports staff and secures public trust.

References

BBC News (2025). Google boss says trillion-dollar AI investment boom has "elements of irrationality". 18 November.

Author


Dr Richard Dune

Founder & CEO, LearnPac Systems

Published

18/11/2025