Opinion: Why the UK public sector is falling behind in AI


The UK government has set out one of the most ambitious AI agendas in the world. There are national strategies from DSIT and the Cabinet Office, a Digital Centre of Government, and an AI Skills Hub promising free training at scale. On paper, the UK is doing everything right. However, on the ground, the picture is very different.

The recent Public Sector AI Adoption Index 2026 surveyed over 3,000 public servants across 10 countries. In it, the UK ranks a lowly sixth out of ten, scoring just 47 out of 100. For a country with that much strategic ambition and such a strong AI ecosystem, that's not good enough.

The numbers that should worry every Government CISO

The index measures how public servants experience AI across five dimensions: enthusiasm, education, enablement, empowerment, and embedding. For the UK, the scores reveal a workforce that is aware of AI's potential but hasn't been given the tools, training, or permission to act on it:

  • Enthusiasm: 47/100. Only 43% of UK public servants felt optimistic about AI in the public sector, and just 39% described AI as empowering.
  • Education: 51/100. Over half (54%) of civil servants reported receiving no AI training whatsoever. Of those who have been trained, three quarters found AI easy to use, proof that training works when it exists.
  • Enablement: 47/100. Tools exist, but access is uneven across departments and often not matched to everyday needs.
  • Empowerment: 49/100. Around two in five public servants are unsure what they're permitted to use AI for, and 46% say leaders don't provide clear guidance on how AI should be used.
  • Embedding: 42/100. AI use remains dependent on local initiative rather than systemic support. Only 17% report using AI for advanced or technical tasks.

These aren't the numbers of a workforce that has rejected AI. These are the numbers of a workforce that has been left to figure it out on its own.

The translation gap

The Government has genuine intent to move beyond pilots and specialist teams toward broader, everyday use. But the index reveals a stark disconnect between that national intent and the experience of frontline civil servants.

While 60% of UK public servants say AI use has increased over the past year, adoption remains largely confined to basic tasks like drafting and analysis. Fewer than one in three use AI to improve workflows, and only 17% report using it for advanced or technical tasks. Awareness of AI's potential is high, but far fewer civil servants have experienced its benefits in their daily work.

The critical difference between the UK and the advanced adopters is not technology. It's the infrastructure of confidence. In Singapore, for example, 73% of public servants are clear on what they can and cannot use AI for, and 58% know exactly who to ask when they hit a problem. Central agencies provide shared platforms, approved tools, and practical guidance. In Saudi Arabia, a top-down national strategy linked to Vision 2030 has made AI feel like modernisation rather than risk, with 65% accessing enterprise-level AI tools and 79% using AI for advanced tasks. In India, 83% are optimistic about AI and 59% want it to dramatically change their daily work.

In the UK, by contrast, adoption is currently driven bottom-up by individual curiosity and peer support rather than organisational momentum. Colleagues at work are the primary route through which UK civil servants learn about AI, ahead of formal training or official guidance. That organic enthusiasm is valuable, but it's no substitute for systemic enablement.

The missing layer

The UK's challenge is not just about training or leadership messaging. It's about the absence of a data governance infrastructure that makes secure AI use possible at scale. Most UK government departments lack visibility into what data is being shared with AI systems. Which civil servants are using AI, and for what purposes? Do AI-generated outputs contain sensitive information that shouldn't be shared externally? How can data classification policies be enforced when AI tools are involved? For most departments, the honest answer is "we don't know."

This is where AI data governance frameworks become essential. Not as a barrier to adoption, but as the foundation that makes confident adoption possible. Data Security Posture Management (DSPM) capabilities can discover and classify sensitive data across repositories, including data being ingested into AI systems. Automated policy enforcement can block privileged or confidential data from AI ingestion based on classification labels. Comprehensive audit logs can track all AI-data interactions.
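To make that concrete, here is a minimal sketch, in Python, of what classification-based enforcement at the AI boundary could look like. Everything in it is an illustrative assumption rather than any DSPM product's real API: the BLOCKED_LABELS set, the check_ingestion gate, and the document, user, and tool names are all hypothetical.

    import logging
    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Hypothetical classification labels; a real scheme (e.g. UK government
    # security classifications) would be configured per department.
    BLOCKED_LABELS = {"OFFICIAL-SENSITIVE", "SECRET"}

    # Audit logger: every AI-data interaction is recorded, allowed or blocked.
    logging.basicConfig(level=logging.INFO)
    audit = logging.getLogger("ai_ingestion_audit")

    @dataclass
    class Document:
        doc_id: str
        classification: str  # label applied by upstream discovery/classification
        content: str

    def check_ingestion(doc: Document, user: str, ai_tool: str) -> bool:
        """Gate a document before it reaches an AI tool, and log the decision."""
        allowed = doc.classification not in BLOCKED_LABELS
        audit.info(
            "%s user=%s tool=%s doc=%s label=%s decision=%s",
            datetime.now(timezone.utc).isoformat(), user, ai_tool,
            doc.doc_id, doc.classification, "ALLOW" if allowed else "BLOCK",
        )
        return allowed

    # Example: a sensitive memo is blocked; a routine document passes.
    memo = Document("HR-2291", "OFFICIAL-SENSITIVE", "...")
    note = Document("PRESS-014", "OFFICIAL", "...")
    for doc in (memo, note):
        if check_ingestion(doc, user="j.smith", ai_tool="drafting-assistant"):
            print(f"{doc.doc_id}: sent to AI tool")
        else:
            print(f"{doc.doc_id}: blocked by policy")

The specifics don't matter. What matters is that the enforcement decision and the audit trail sit in one place, upstream of any individual AI tool, which is exactly the visibility most departments currently lack.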

With that foundation in place, the future looks bright. We have a public sector workforce that is ready and willing. Civil servants don't need to be convinced that AI has potential. What they need is the clarity and infrastructure to act on that belief.
