Scaling Agentic AI in Government
Government has proven the value of AI coding assistants. But the technology is moving fast. The shift from AI assistants to AI agents - capable of handling entire workflows autonomously - creates a step change in both capability and governance.
We recently brought together senior civil servants and industry experts to examine a key question: how quickly government should move from AI-assisted tools to fully agentic systems.
The workshop, organised by Government Transformation Magazine in partnership with Cloudscaler, gave attendees a platform to focus on the practical, organisational and governance challenges that will need to be addressed if agentic AI is to deliver real impact across public services.
The event - From Assistants to Agents: Scaling AI in Government on Secure Foundations - was also supported by presentations from the Cabinet Office’s 10 Data Science team, the Government Digital Service, Anthropic and AWS.

While the discussions and government presentations were held under the Chatham House Rule, we heard from a series of AI industry specialists on how government could shift from AI-assisted to AI-driven workflows.
Secure foundations for bold innovation
“Act now.”
This was the central message of Aidan Grace, Head of Applied AI at Cloudscaler.
For Grace, despite all the challenges in implementing AI agents in the Civil Service, the choice facing the government is strikingly simple: “the train is leaving with or without you”, so adopt AI or risk being left behind.
Nonetheless, Grace acknowledged some of the real fears people hold concerning the adoption of artificial intelligence in the workplace - anxieties over the rate of change, worries about potential risks, and concerns about AI exposing data issues within the workplace.
He also mentioned one of his own fears: that AI could result in a concentration of power. However, he argued that government institutions have the ability to guard against these problems if they can overcome their inbuilt, and increasingly unsuitable, tendency to resist rapid change, and adopt “native AI” practices.
With 77% of workers stating that AI has increased their workload, and 88% of the heaviest AI users reporting burnout symptoms, Grace cautioned against pursuing productivity at the expense of employee wellbeing. He emphasised how the expertise and continued happiness of employees remains central to the effective running of government departments, just as it does businesses.
He also highlighted the need to “meet people where they are” when customising AI training, and to confront the distrust that poorly managed AI initiatives can create among workers - distrust that risks undermining innovation efforts. “The tasks have changed,” he said, “but the job hasn’t been eliminated.”
Grace stressed the importance of bottom-up transformation, where employees closest to operational processes redesign workflows using AI tools.
Closing his address, he outlined the “once-in-a-lifetime opportunity” civil servants have in front of them.

Agentic AI in government
James Lowe, from the Applied AI team at Anthropic, began his presentation by referencing his own extensive career in the Civil Service, an experience which had given him insight into government decision-making.
One conclusion he had drawn from this informed the core point of his presentation: that “the technology isn’t the blocker. The permission is”.
Lowe pointed to the adoption of agentic AI in heavily regulated industries such as pharmaceuticals, finance, and professional services, and asked the audience to consider why government implementation of such agents remains far more limited, despite the government’s data rules being no more stringent than those industries’.
The answer, he said, lay in the organisational complexity of government: departments often operate with fragmented approval structures, which make it difficult to identify who can approve the adoption of a programme using AI, and who is responsible for implementation.
To demonstrate that the technology is already here, he outlined two agent-powered Anthropic products which he argued could support the work of civil servants: Claude Code, which aids in software development, and Cowork, a more accessible tool aimed at knowledge work. However, Lowe stressed that such systems only become transformative when connected securely to departmental data and internal tools.
A major focus of the talk was connectivity and governance. There are two questions civil servants must ask when seeking to facilitate the implementation of AI, Lowe argued. These are “who is allowed to say yes?” and “who has the ability to implement?”.
He added: “The departments that move fastest on agentic AI won’t be the ones with the best AI strategy, they’ll be the ones who worked out who is allowed to say yes.”
From Assistants to Agents
Finally, the speech delivered by Andrea Bureca, Head of AI at Amazon Web Services, focused on the “art of the possible”.
He stated that while many organisations have already adopted AI copilots for tasks such as coding, document drafting, and minute-taking, these tools often improve personal productivity without significantly improving organisational productivity.
“The actual value comes where you get AI behind the queue of work” rather than beside it, he said. This means embedding agentic AI directly into operational systems, producing a model in which AI autonomously processes workflows before escalating final decisions to humans.
To illustrate this point, the presentation highlighted examples from the private sector.
The first of these was from payment platform Stripe, where engineers can trigger the automation of processes that would otherwise add to a backlog simply by sending an emoji - a mechanism that handles over 1,300 cases a week.
He also drew on the example of Allianz, which has deployed AI-driven insurance claims processing that reduced handling times from days to minutes while maintaining human oversight at the end of the process.
Seeking to dispel safety fears, he emphasised that “today, the technology allows you to get the governance built in rather than built on”. Effective deployment, he said, can be supported through secure sandboxed environments, detailed audit trails and human-in-the-loop controls.
He warned organisations against building isolated AI pilots which cannot scale. Instead, he advocated for reusable “agentic mesh” or “AI factory” models, where tools, workflows and governance structures can be shared across departments and future projects.
Closing the session, he urged organisations to start their journey into agentic AI adoption by targeting repetitive, time-consuming tasks handling large volumes of work, saying these areas are the easiest places to show the benefits of AI at scale.

