Unleashing the full potential of AI in the public sector

It’s estimated that 39% of public sector working hours will benefit from human-AI collaboration and optimisation. It’s also been predicted that AI will add nearly $16 trillion of productivity improvement to the world economy by 2030. “Government has a responsibility to take advantage of that,” says Sharon Moore, Chief Technology Officer for Global Government at IBM. “Now is the time for government leaders to seriously consider how to prepare their internal data for harnessing the full potential of AI and Generative AI; to enhance both citizen and employee experience within the public sector.”

Moore lists a number of notable government use cases for AI, including application modernisation, interaction enrichment (through virtual assistants, for example), and generating synthetic test data where datasets are sensitive or classified. However, despite the many potential advantages, government agencies are still grappling with how to implement AI, and Generative AI in particular. While government executives surveyed by IBM in 2018 overwhelmingly stated they would invest in AI, today they are “largely still in the experimental phase”, only exploring the huge potential of AI, Moore notes.

The only way is ethics 

Governments’ ability to scale their AI, Moore argues, relies on having the appropriate data ethics and management rules in place. “If we're going to get past experimentation and actually scale to get the benefits of these solutions, we have to build responsibility in at the start, not as an afterthought." 

Building a framework to understand the potential AI use cases and related risks is “the only way to move forward on AI” and “government in particular should be setting the standards for how we go about those things,” Moore continues. At IBM, she says they see five areas government leaders should focus on in their discussion and execution of responsible AI: fairness, privacy, explainability, transparency and robustness. 

“Any solution that includes AI must be transparent and treat people in an equitable way. We also have to mitigate biases in the process. As humans, we all have these biases and we haven't necessarily always been holding ourselves to account. Adopting trusted, responsible AI is an opportunity for us all to do better.”

As this technology evolves, the governance frameworks in place to support it must also keep pace, Moore warns. “The same way vegetables start to go off if you leave them in the fridge too long, AI models might no longer be applicable and you want to be able to spot that and react accordingly.” 

IBM’s new watsonx.governance platform will allow teams to do just that: direct, manage, and monitor their organisation’s AI activities so its use can be consistently explained and changed if need be. “It allows them to understand and control the full AI lifecycle across the different tools that people might be using and the different applications that have the data. This governance piece is where we're seeing a lot of traction from government right now,” Moore says.

Keeping pace

“This is an area where if organisations don't keep up, they're not only going to be left behind - they're going to be left out completely,” says Richard Patterson, Chief of Data and AI at the Defense Security Cooperation Agency in the US Federal Government. The addition of AI into his remit is a new development; one that signals a recognition of the strategic importance of this area. “It’s a move that I think a lot of organisations are making here in the States. Data really leads the way in allowing for positive outcomes with AI so having this area of responsibility reside with the Chief Data Officer makes a lot of sense in morphing that role to be more.”

As AI continues to mature, Patterson says the public sector is faced with the challenge of keeping pace. “What really keeps me up at night is how do we move forward with some of these technologies and roll them out at a pace where we're not left behind? Things are moving so fast in the world of AI. As soon as I see something that I think we can benefit from, there's already something else taking its place.” 

He adds: “A lot of people struggle with knowing when to make the investment leap because we haven't gotten to the tipping point yet where costs are coming down and adoption is going up." With large suppliers like Microsoft and Google rolling out AI technologies that are baked into existing platforms, it makes AI more “palatable, accessible and cost efficient” for the public sector, Patterson says. “This is really going to help us to not chase the shiny objects anymore - which is what it feels like I’ve been doing the past couple of years.”

As the datasets required to train AI get bigger, so too do the stakes of government failure, he explains. “It has become increasingly challenging to manage the risk and make sure that it's done correctly.” Patterson believes governments need to “find a way to get through that red tape” and test AI solutions “in order to adopt them, integrate them and to build on them faster.” 

Bringing together AI expertise 

Mark Azzam is Chief Data Officer of Germany's Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection. He says that while AI is something every department wants, few have the capacity or are willing to put in the time and effort to lay the foundations for it. “They want their bread but they don’t want to work to harvest the wheat,” he says. “As soon as you ask people to look at the way they are managing data and what the governance situation is they say ‘this is exhausting’ or ‘I don't have time’.”

Educating non-technical colleagues about what is needed to get there is especially challenging when the landscape of what is possible is changing practically every day. “It’s hard to keep up because it's always a question of what the whole organisation is willing to do,” Azzam says. “You might be in charge of data, but you don't have that data - it is spread over an organisation.”

Organisations must focus on building data skills into their capability frameworks to ensure continuous learning around this, he adds. 

To really move forward on AI, however, Azzam believes there is a need to bring AI and data knowledge together with different fields of expertise. “Let’s take environmental politics, for example; there is a vast group of hugely knowledgeable people in this field but they are not AI experts. However, as is the case for most sectors, AI is highly relevant and is only becoming more so over time.”

According to Azzam, the question becomes how to enable people to build a connection between their area of expertise and what AI can deliver to it: providing them with expert systems and specialised technology aimed at their needs, beyond generic AI. 

“Right now we are not there; we are still talking about what everybody needs. But there is much more to gain when we look beyond this at the more specialised cases,” Azzam says. “Once that is solved that’s when the real work begins. This is just the basics we are trying to solve right now; the next step is much greater.”