From wave of opportunity to cliff edge of risk: Government’s AI reality check

The initial government reports introducing AI across departments presented it as an exciting wave of opportunity. However, as departments have been charged with developing AI strategies and initiatives, this early optimism has given way to a greater awareness of significant potential risks.
Mercator Digital recently hosted a roundtable discussion on AI strategy at the Government Transformation Summit in Westminster. Our co-hosts were Chris Darriet-Jones, Chief Data Architect at The Post Office, and Simon Short, Data & Analytics Director at Homes England. The three discussions over the day were joined by representatives of multiple government departments and agencies, including the NHS, Home Office, HM Treasury, TfL, DEFRA, DVSA, and others.
Here is an overview of some of the main topics and themes discussed throughout the day, facilitated by Mercator’s Jon Rimmer and Alastair Williamson-Pound.
Innovation
The start-up mentality of moving fast, breaking things, learning and improving isn’t a natural fit within the public sector. Taking such risks with public money opens departments up to criticism if the benefits are not quickly realised, especially when project funds are so visibly accounted for. It would appear, for now, that large-scale, multi-year, single-supplier programmes aren’t appropriate vehicles for bedding in AI initiatives. We are already seeing smaller tenders being published: one-year contracts with the option of two further one-year extensions.
Ethics
There will need to be a better understanding of the ethical uses of AI. This means an appreciation of the bias potentially inherent within AI models and how it could affect their use; for example, decisions influencing the build of digital services that might disadvantage marginalised service users.
Likewise, if AI is used in the development of government policy, how decisions are reached will need to be justified. These risks must be scrutinised, accounted for, and mitigated, especially as public trust will be essential to the acceptance of AI’s outputs.
Risk of litigation
Government departments and large organisations are always wary of claims being made against them. It is imperative that clear decision logs are in place so that, should anything go awry, visible accountability is readily available. The ‘human in the loop’ will be an important factor here, ensuring proper oversight and accountability. Additionally, confidence will be required that an AI’s output does not infringe copyright, for example by appropriating code from copyrighted sources. As momentum in this space grows, departments may start to lean on suppliers for indemnity against litigation.
Security risks
One huge opportunity for government in realising the benefits of AI involves opening up and sharing large, distributed and siloed data sources. With this comes the risk of data breaches, so additional spend will need to be allocated to protecting this data from rogue third parties. Given the recent high-profile hacking cases in retail and government, widespread implementation of AI tools may, understandably, proceed more cautiously than originally intended.
Preparing data for use
Recent research indicates that 80% of AI projects fail, with inadequate training data a significant factor (RAND, 2024). The integrity and availability of the data held are imperative to maximising the benefits AI could offer. Compounding the challenge, 69% of data and AI roles are classified as ‘hard to fill’, according to the Government’s National AI Strategy (2021). This shortage means departments face months-long recruitment cycles and often lose candidates to private-sector offers. So there is plenty to be done in terms of ‘organisational readiness’. A cultural shift will be needed within departments to improve the digital literacy of their staff, entailing widespread upskilling and retraining, alongside successful initiatives to attract knowledge-domain expertise. This will take time and money if we are to augment, rather than simply replace, our staff using AI tools.
What to do next?
Those participants who had already made significant progress with their strategies offered the following advice:
1. Line up with the overall strategy. You need to fully understand the overarching objectives of the department and where AI might fit within that strategy, ideally looking for some ‘quick wins’.
2. Take a phased approach: start small and learn. In light of the concerns outlined above, the advice is to start small in order to demonstrate early value and grow confidence across teams, sharing successes and failures both within and across departments.
3. Audit your data. As the integrity and security of data are such an important foundation for the accuracy and utility of any AI, it is imperative that an organisation’s data maturity is scrutinised before embarking on significant AI implementation.
Supporting the public sector’s AI journey
The roundtable discussions highlighted both the promise of AI and the complexity of implementing it responsibly within the public sector. At Mercator, we continue to work alongside government departments to help shape and guide AI strategies that are realistic, ethical, and aligned to organisational priorities.
If you're exploring similar challenges, our team can support by helping you:
- Align AI initiatives with your overall organisational strategy
- Assess AI and data maturity, including impact, risk and cultural considerations
- Take a phased approach: scan for opportunities, pilot responsibly, and scale appropriately
- Identify high-value, low-risk opportunities for early adoption
- Ensure AI implementation maintains legal, ethical, and regulatory compliance

If you're navigating similar questions or challenges, we’d be happy to share our experience and help you take the next step, whether that’s assessing readiness, identifying practical use cases, or piloting early initiatives.
