Artificial intelligence (AI) is no longer a theoretical conversation for the public sector. The UK Government’s AI “exemplars” and a surge of live projects mark a shift from theory to implementation. But as adoption accelerates, the challenge is no longer simply whether to use AI – it is how to do so responsibly, transparently, and equitably.
That was the focus of our wide-ranging conversation with Victoria Evans, Partner at Oliver Wyman, who brings experience advising both public and private sector organisations on digital transformation.
“As AI adoption grows, the debate around trust sharpens,” Evans says. “Not just in technology, but in institutions as a whole.”
She notes that citizens already use AI every day, whether in education, healthcare, or online services, and therefore expect government to keep pace. “It’s not really a surprise. Although there has maybe been a reticence or hesitancy from government in adopting AI previously, I think the time is quite critical now to seize the opportunity. Leaders across government really need to accelerate their responsible use of AI and make it part of their leadership story.”
Crucially, trust requires candour. “When leaders are really candid about what the potential is and what the limits are, and show clearly how it is going to benefit citizens, that’s hugely important,” Evans says. AI should not just be about efficiency gains but should “mirror governance, values, leadership and ambition.”
This shift, she argues, demands that government take a proactive stance: not simply regulating AI or playing catch-up with big tech, but leading visibly and collaboratively. “Citizens are expecting government to step up and lead – not lag.”
Evans is clear-eyed about AI’s risks, particularly its potential to embed existing social inequities more deeply. “There is a real risk that AI perpetuates those biases because the data it is learning from is baked in historical data,” she warns.
Yet she sees an opportunity here too. “AI could give us a moment to break those cycles. Can we use AI to highlight how decisions are made? Human systems are concealing this bias… but AI is going to make it visible much more quickly, so we can scrutinise those outputs more quickly, more accurately.”
To seize that opportunity, leadership and design choices are critical. “We need diverse teams, diverse data sets informing AI design, and we’re embedding governance that is routinely testing for those inequities,” Evans says. “Debates over ethics and AI shouldn’t distract us from the fact that biases already shape human systems. There’s an opportunity for AI to really confront those systems and correct them.”
Evans, who is also a judge for the Black British Business Awards, believes the stakes are high – left unchecked, algorithmic bias “undermines the mission of public services to treat citizens equitably” and “marginalises people more and more.”
Transparency, Evans argues, is central to maintaining public confidence. She points to critical areas such as housing allocations, benefit fraud detection, and social care decisions as examples where even a small bias can have systemic consequences.
“AI outputs should be much more explicit, they should be more traceable, they should be more scrutinisable – but only if the governance is built in right from the start,” she says. This means designing accountability into systems rather than replicating existing processes. “What’s the strategic intent? What’s the outcome we’re trying to get to? Rather than just trying to accelerate existing processes, [we should be] redesigning systems in a way that gets us to that strategic intent.”
One of the most difficult questions facing public sector leaders is how to balance the need for regulation with the imperative to foster innovation. Evans warns against an overly cautious approach. “Regulation should be a guardrail, but not a handbrake,” she says.
Her concern is that government could become trapped in “cycles of debating what regulation looks like and trying to over-regulate something that is really difficult to regulate.” The pace of AI development means that by the time regulation catches up, the technology may already have moved on.
“The best regulation tends to happen when policymakers are co-creating with industry, they’re co-creating with citizens, they’re thinking about what’s proportionate, what are the risks, how do we build confidence,” Evans says. Strong, visionary leadership is essential: regulation must “co-evolve” with innovation rather than lag behind it.
If government gets this right – embedding trust, fairness, transparency and accountability into AI from the outset – the potential benefits for society are enormous.
“If designed with real intention, then it could be a really powerful equaliser,” Evans says. She envisions a future where AI expands access to credit for underrepresented entrepreneurs, accelerates access to education, reduces opportunity gaps in employment, and helps shape communities’ engagement in future industries.
Evans sees this as a “once in a lifetime opportunity” to hardwire fairness and inclusion into the next generation of public services. She cites Mo Gawdat, former Chief Business Officer at Google X, who describes AI as a “junior” that we are collectively teaching. “It brings a really powerful message that it’s our moment to teach the values that we hold as a society,” she says. “If we steer it now with trust, fairness, transparency and bold leadership, we’re not just shaping a tool – we’re shaping the future of governance and equity.”
For Evans, this is both a challenge and a call to action. Public sector leaders must not only adopt AI but do so in a way that strengthens public trust, dismantles systemic bias, and creates fairer systems for future generations.
“We could be redesigning where we see value set,” she concludes. “Using AI to accelerate that human connectivity and also redesign trust and systems.”