The public sector’s techno-amnesia finally has a solution

Written by Nick Loba | Mar 6, 2026 11:11:08 AM

Institutional memory loss is why so many large-scale modernisation programmes fail. Now, there’s an AI-powered way out.

The UK public sector has a legacy tech problem. But it’s not one we usually talk about. When ministers, permanent secretaries or chief execs discuss legacy concerns, they gravitate to ageing infrastructure, outdated programming languages or moving to the cloud. Those are real issues, but they are subordinate to a larger problem.

The deeper challenge is that many departments and offices no longer have a clear, end-to-end understanding of how their own systems actually work. For the most part, government technology still runs. But the institutional memory that explains how has quietly disappeared. Now, AI presents a way to rediscover that understanding.

The memory loss epidemic

You can see evidence of institutional memory loss across the public sector estate. Many integral systems are older than the people maintaining them, and departments rely on platforms that have absorbed wave after wave of policy change. Even DSIT admits it doesn’t have “an effective way of assessing the scale of the [legacy] problem.”

There is no single root cause. Incomplete documentation is partially to blame. But the cumulative effect of retirements and staff turnover, policy layering, regulatory change, crisis-driven fixes, and years of (once temporary) workarounds has also taken its toll. Today, teams can describe how their systems should behave, or how they worked years ago, but not how they operate now.

This matters because it undermines almost every strategic ambition. Government modernisation programmes regularly spend their first 18 to 24 months simply trying to understand existing technology. Discovery phases balloon as teams run interviews, search SharePoint, redraw process maps and reverse-engineer integrations. Modernisation begins not from evidence but from assumption, with programmes first trying to reconstruct the present before they can design the future. Even after all that, blind spots remain, so decisions get made on partial truths and educated guesses.

It’s therefore become the norm for large programmes to run over deadline, over budget or into unexpected risk.

Memory loss also weakens resilience. Departmental and technology heads are accountable for systems whose behaviour and functions they cannot fully explain. Over time, small divergences between policy and implementation accumulate. Citizens experience those divergences as inconsistency, delays or unfairness, but for departments they represent risk. This isn’t the result of individual failure, but an inevitable by-product of decades of layered policy, technology drift and staff turnover.

Reversing the trend

Traditional remedies aren’t the fix. Interviews recover opinion, not fact. Documentation (if it exists) often reflects ideals rather than reality. Reporting tools show performance, not functionality. And no human team can read and reliably interpret millions of lines of code and configurations across dozens of interdependent services. The memory loss problem within governments and enterprises has outgrown human capacity.

Recent advances in AI now make this reconstruction possible at scale. When applied as a comprehension capability rather than a productivity shortcut, AI can decode legacy estates and translate their embedded logic into a coherent, verifiable model of how systems actually behave. At Netcompany, we’ve applied this approach through our Feniks AI capability, designed specifically to reconstruct institutional memory and provide a trusted foundation for safe, accelerated modernisation.

Feniks can read application code, database schemas, batch scripts, and configuration files across a legacy estate and infer the rules they implement. It can connect decades of design artefacts, change requests and incident logs to what is actually happening. And it can trace data as it moves between systems, exposing how key fields are derived, transformed and consumed. After all, the legacy tech still “remembers” every rule, exception and dependency. AI gives the organisation the ability to see and explain that embedded knowledge again.
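To make that idea concrete, here is a minimal sketch, in Python and purely for illustration, of the very first step such a capability performs: sweeping a legacy estate for the artefacts that encode business rules and collecting them into a machine-readable inventory that an AI comprehension step could then interpret. The file extensions, the RuleCandidate shape and the pattern heuristics are assumptions for the sake of the example; this is not the Feniks implementation.

"""
Illustrative sketch only: gather the raw material an AI comprehension step
would work from. File extensions, RuleCandidate and the regex heuristics are
assumptions for illustration, not Netcompany's Feniks implementation.
"""
import json
import re
from dataclasses import dataclass, asdict
from pathlib import Path

# Artefact types we might want to sweep up from a legacy estate (assumed set).
SOURCE_EXTENSIONS = {".cbl", ".java", ".cs", ".sql", ".sh", ".cfg", ".properties", ".xml"}

@dataclass
class RuleCandidate:
    """A fragment of embedded logic found in the estate, awaiting interpretation."""
    source_file: str
    line_number: int
    snippet: str
    kind: str  # e.g. "conditional" or "schema"

def iter_estate_files(root: Path):
    """Yield every artefact in the estate that may encode business rules."""
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in SOURCE_EXTENSIONS:
            yield path

def extract_candidates(path: Path):
    """Crude heuristic pass: flag conditionals and schema definitions for later
    interpretation. A real comprehension pipeline would hand these snippets,
    with surrounding context, to a language model or static-analysis engine."""
    patterns = {
        "conditional": re.compile(r"\b(IF|WHEN|CASE)\b", re.IGNORECASE),
        "schema": re.compile(r"\b(CREATE TABLE|ALTER TABLE)\b", re.IGNORECASE),
    }
    for number, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        for kind, pattern in patterns.items():
            if pattern.search(line):
                yield RuleCandidate(str(path), number, line.strip(), kind)

def build_inventory(root: str) -> list[dict]:
    """Assemble a machine-readable inventory of rule candidates across the estate."""
    return [asdict(c) for path in iter_estate_files(Path(root)) for c in extract_candidates(path)]

if __name__ == "__main__":
    # Point this at a checkout of a legacy codebase to get a first-pass inventory.
    print(json.dumps(build_inventory("./legacy-estate"), indent=2))

The heuristics themselves are beside the point; what matters is the output: a structured, queryable inventory of embedded logic that can be validated and refined, rather than knowledge locked in individual heads.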

The result is a machine-validated model of the estate: what systems do today, how they fit together, and where the friction points and contradictions lie. That reconstruction then becomes the foundation for deciding how to progress. It reduces risk because assumptions and guesswork are replaced by verified facts.
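As a rough illustration of what such a model might contain, it can be thought of as a structured record of systems, the integrations that connect them, and the discrepancies between what policy says and what the code actually does. The field names and example below are hypothetical, not a published schema:

# Hypothetical shape for the estate model described above. Names and values
# are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Integration:
    source: str                   # system the data leaves
    target: str                   # system the data enters
    fields_exchanged: list[str]
    derivation_notes: str         # how key fields are transformed in transit

@dataclass
class Discrepancy:
    system: str
    policy_says: str
    system_does: str              # divergence between intent and implementation

@dataclass
class EstateModel:
    systems: list[str]
    integrations: list[Integration] = field(default_factory=list)
    discrepancies: list[Discrepancy] = field(default_factory=list)

# Example: one traced data flow and one divergence a comprehension pass might surface.
model = EstateModel(
    systems=["CaseManagement", "PaymentsEngine"],
    integrations=[Integration("CaseManagement", "PaymentsEngine",
                              ["claimant_id", "entitlement_amount"],
                              "entitlement_amount recalculated nightly by batch job")],
    discrepancies=[Discrepancy("PaymentsEngine",
                               "Uplift applied from start of claim",
                               "Uplift applied from date of next payment run")],
)

Representing the estate in this explicit, machine-readable form is what allows the divergences between policy intent and system behaviour, described above as a source of risk, to be surfaced and tracked rather than discovered by accident.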

This enables a new mindset to emerge. Instead of accepting that systems will inevitably become messy and brittle until they have to be replaced in a single, high-risk “modernisation”, organisations can treat understanding and knowledge as something that’s continuously renewed. As changes are delivered, the model is refreshed. Knowledge is retained and transformation becomes a regenerative process. This doesn’t mean preserving legacy indefinitely. It means eliminating the uncertainty that makes replacement so risky and expensive in the first place.

In effect, organisations move from periodic, high-risk transformation cycles to a model where understanding evolves at pace, preventing systems from ever becoming unknowable “legacy” in the first place – a stated goal of DSIT.

For the UK public sector, the next decade will bring greater demands alongside tougher expectations from citizens, ministers and regulators. Meeting those demands with the legacy systems we rely on today will be impossible if we continue to fly blind.

Rebuilding institutional memory is essential for the public sector to break free from legacy and modernise the services people depend on. That rebuild starts by remembering how things fit together. Now, for the first time, we have the tools to do so – at the scale required.