Introducing new technologies: best practices and pitfalls to avoid


What does it really take to introduce new technologies in government without ending up lost in experimentation or overrun by hype? That question shaped the keynote by Martha Bennett, VP and Principal Analyst at Forrester, at Government Transformation North, where she drew on her extensive experience across emerging technologies to give public sector leaders a practical framework for adopting generative AI responsibly and effectively.

The first point was perhaps the most fundamental - successful technology adoption starts with intention. Too many organisations begin with a technology-first question such as “let’s see what it can do”. Bennett reminded the room that this is still one of the biggest causes of failure. “How on earth do you decide whether it can do what you want it to do, if you don’t know what you want it to do to start with?”

Clarity of purpose must then be matched with clarity of measurement. Without a shared definition of success and failure, projects can drift indefinitely, said Bennett, advising digital leaders to set a time limit for early exploration. “You do not want to be in a situation of continually experimenting in the hope that it will eventually work. It tends not to.”

Realism and generative AI

Bennett also focused on the realities of using generative AI, particularly large language models. These systems behave in ways that differ sharply from traditional software. Outputs vary from one interaction to the next. That is not a flaw, she stressed, but the nature of the technology.

“We've all heard the word hallucination, which I personally don't like because it makes it sound like something's gone wrong. As an AI scientist pointed out, hallucination is the wrong word - everything a large language model outputs is a hallucination. Sometimes it just happens to be factually correct, and sometimes it's not.”

Because of this, public sector teams must decide what “good enough” means for each individual use case. For some internal tasks, a high but imperfect accuracy rate may be acceptable. For citizen entitlements, it is not. Bennett pointed to an example from New York, where a municipal chatbot wrongly told residents they were not eligible for benefits. Mistakes like that erode trust almost instantly.

Her message was that accuracy thresholds must be deliberate, specific and agreed ahead of deployment - these decisions cannot be left to technologists alone.

Designing the balance between human and machine

Another theme was the importance of designing AI systems around people, not in place of them. Generative AI can support staff and speed up tasks, but its outputs still need oversight. “This is not about replacing people,” she said. “It is about delivering the most effective outcome, which typically is a combination of human plus AI.”

The challenge is working out where that human review sits, how it is triggered and what it is checking for. Bennett encouraged organisations to design these roles explicitly at the outset so they do not become an afterthought once a system is already in use.

Experience plays a significant role in successful adoption. Organisations that have used AI operationally before tend to be more realistic about what it can and cannot do. They know how to interrogate systems, evaluate risks and distinguish between real enterprise products and early-stage tools.

Bennett cautioned the audience that many tools sold today as enterprise-ready are still relatively immature. “A lot of what is being sold to you as an enterprise grade product is actually still an alpha or beta test.” That does not mean organisations should avoid them, but they must ask the right questions before depending on them for critical services.

Data: the real foundation

Despite the pace of innovation, Bennett argued that one principle remains unchanged. The quality of AI outcomes is determined by the quality of the organisation’s data. Without well-governed, consistent and trusted data, generative AI cannot go beyond generic outputs.

“Without that solid foundation of data, you will not be able to get very far with AI beyond some of the very basic, generic use cases.”

The message was that public bodies must understand their data structures, standards, ownership and curation if they want modern AI systems to deliver meaningful value. This is as much a governance issue as a technical one.

Rethinking services

Bennett emphasised that technology alone cannot transform services that were designed around past constraints. Simply applying AI to existing processes is unlikely to produce significant benefit.

Looking at generative AI only as an automation tool, she argued, “is not going to move the needle”. Real change often requires rethinking parts of the service model, redesigning process flows or reconsidering how decisions are made. These shifts rarely happen overnight, but they are essential for unlocking the potential of modern technologies.

Taking the long view

Bennett closed by reminding the audience that expectations around new technologies often swing between hype and disappointment. She drew on Amara’s Law, which describes how people tend to overestimate the impact of new technologies in the short term and underestimate their effect in the long term.

AI progress will not slow down. “This technology is not going to go away,” she said. “It is changing faster than we can change.”
