AI in government: the possibility/reality gap

AI is the hot topic of every conference these days. And the recent
Government Transformation Summit was no exception.

There were some exciting conversations happening on the day. There’s a
real buzz in the air that we’re on the precipice of the next big thing for
government. But I couldn’t quite shake the feeling there’s something
missing – and that government executives aren’t making the most of the
opportunities presenting themselves. We’re seeing a gap between what’s
possible and what organisations are actually able to achieve.

In this article, I’m exploring three steps to close that gap.

The promise of AI

It’s easy to be impressed by the capabilities of AI. It could be truly
transformative in sectors from pharmaceuticals to manufacturing. But for a
typical government organisation, AI simply represents a step up in
efficiency and capability. It won’t change what you do, but it could vastly
improve how you do it – making it better, cheaper and faster than ever.

In healthcare, AI could join up patient records to gain a better picture of an
individual. In transport, AI could make journeys safer by optimising traffic
flow. In defence, AI could be used to speed up decision making on the
battlefield. These are all real possibilities with our current level of
technology.

The failure to think big

However, despite the huge opportunity AI represents, there aren’t many
examples of bold use cases in the government sector. Many of the
examples we heard at the 2023 summit focused on everyday administration
– automating emails or driving call centre conversations – rather than the
more strategic ideas that will really move things forward.

Writing emails without grammar mistakes is a worthy goal. But why aren’t we seeing more ambitious projects in which AI helps governments deliver on their mission? I believe there’s a real danger of backwards thinking right now.

Organisations are at risk of being driven by the technology without understanding the bigger picture. Perhaps going for the flashy gadget when an old-fashioned tool might be safer and more reliable. Or seeking a miracle solution to relieve overstretched budgets and free up time.

Another risk lies in investing in AI-specific ethics boards rather than expanding the role of existing ethics boards. Ethics should not be technology-dependent, after all. As soon as we focus on the technology, not the intention, strategy, or citizens behind it, we risk failure.

Closing the gap

To evolve those ambitious projects and achieve lasting change for citizens,
I recommend that government departments adopt an approach that
includes these three practical steps.

Step one: filter ideas

As one attendee at the summit pointed out to me, there’s no shortage of
tech companies out there bringing ideas to the table. Being critical and selective with AI technologies is a crucial part of good governance. Knowing what projects are worth the investment of time and budget will help you avoid expensive mistakes and make sure the technology is fit for purpose.

Step one involves thinking about the essence of what you do. Any challenges
stopping you from achieving your goals are good candidates for AI. It’s
much better to do a few things well than to simply apply AI here and there
because the technology is easily available and everyone else is doing it.

Step two: evaluate data

What data do you have available to train your AI? There are many
examples of failed projects where AI was trained on bad data and had
inherent biases.

One of the many AI tools developed to help during the pandemic was
designed to detect COVID from chest x-rays. It ultimately turned out to only
be able to detect whether someone was standing up or lying down. The tool
seemed to be correctly guessing which patients had COVID, but only
because they were more likely to be ill and lying down in hospital.

You don’t need to be a data expert to ask the right questions. When you’re
considering a potential organisational goal that AI could help you achieve,
ask yourself: where does your data come from? Who’s validated it? Is it
reflective of the real world? Are there inherent biases, and what can you do to mitigate them?

If your data isn’t up to scratch, even the most powerful of AI tools will be
useless.
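As a small illustration of what those questions can look like in practice, the sketch below runs a few basic checks on a hypothetical dataset (the file, column names and categories are assumptions, not a real system), asking where records come from and whether the label is entangled with something it shouldn’t be:

```python
# A minimal sketch of data sanity checks, assuming a hypothetical CSV of
# labelled chest x-ray records with columns "source_hospital", "label"
# (covid / no_covid) and "patient_position" (standing / lying_down).
import pandas as pd

records = pd.read_csv("xray_records.csv")  # hypothetical dataset

# Where does your data come from? A single dominant source is a warning sign.
print(records["source_hospital"].value_counts(normalize=True))

# Is it reflective of the real world? Check how balanced the labels are.
print(records["label"].value_counts(normalize=True))

# Are there inherent biases? Cross-tabulate the label against an attribute
# that should be irrelevant (patient position, in the COVID x-ray example).
print(pd.crosstab(records["label"], records["patient_position"], normalize="index"))

# If almost every "covid" record is also "lying_down", the model may learn
# posture rather than the disease: exactly the failure described above.
```

None of this requires deep data science expertise; it simply turns the questions above into something you can ask your teams to show you.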

Step three: validation

You’ve uncovered some key areas where AI could improve your
organisation. You’ve evaluated your data and confirmed that it’s a
consistently high standard. But you will not be ready to launch the service
without considering the last step: validation. What criteria will your AI need
to meet in order to be useful in the real world?

Think of a self-driving car. It has a worthwhile goal (reducing traffic
accidents), and a good data set (thousands of journeys from multiple
viewpoints, covering many different situations). Would you simply press go
and hope it doesn’t cause any accidents?

A self-driving car should have to pass various tests and standards before being allowed on the roads, just like human drivers. Those setting the standards don’t need to be AI experts – and in some ways it might be better if they weren’t. The car might face a similar driving test to people: stopping at red lights, signalling correctly, reacting to other road users. All it
takes is an understanding of the challenge (is this car a danger to people?)
to define a set of standards based on existing principles.
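To make that concrete, those standards can be written down as plain acceptance tests before anything is deployed. Here is a minimal sketch, assuming a hypothetical `plan_action` interface for the driving system; the scenarios and expected behaviours are illustrative, not a real test suite:

```python
# A minimal sketch of validation as acceptance tests, assuming a hypothetical
# drive_system.plan_action(scenario) interface and illustrative pass criteria.
# The criteria mirror the existing driving test, not the internals of the AI.

ACCEPTANCE_CRITERIA = [
    # (scenario description, expected behaviour)
    ({"traffic_light": "red", "pedestrian_ahead": False}, "stop"),
    ({"traffic_light": "green", "pedestrian_ahead": True}, "stop"),
    ({"traffic_light": "green", "pedestrian_ahead": False}, "proceed"),
]

def run_validation(drive_system) -> bool:
    """Return True only if every scenario produces the expected behaviour."""
    failures = []
    for scenario, expected in ACCEPTANCE_CRITERIA:
        actual = drive_system.plan_action(scenario)  # hypothetical API
        if actual != expected:
            failures.append((scenario, expected, actual))
    for scenario, expected, actual in failures:
        print(f"FAILED: {scenario} -> expected {expected}, got {actual}")
    return not failures
```

The point is that the pass/fail criteria come from the challenge itself (is this car a danger to people?) and from existing principles, so anyone who understands the service can help define them.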

Seizing the opportunity

The final point I want to stress is that while AI seems to be the next big
thing, it won’t change the fundamental mission of your organisation. I like to
think of AI as a way of complementing what you do – helping you scale up
or become more efficient. Don’t focus purely on the technology or the processes; always consider what it means for your citizens.

Many of the conversations I was involved with at the summit reinforced that
there are exciting times ahead. It’s up to us all to steer the direction of our
organisations so we can make the most of those opportunities.

Civica makes software that helps deliver critical services for citizens all around the world. From local government to central government, to education, to health and care, over 5,000 public bodies across the globe use Civica’s software to help provide critical services to over 100 million citizens.
