The UK government has announced a major new international initiative aimed at tackling one of the biggest technical challenges in artificial intelligence - ensuring that AI systems behave safely and predictably, and remain under human control.
Backed by more than £15 million, the Alignment Project will be led by the UK’s AI Security Institute in partnership with Canada’s AI Safety Institute and a coalition of global players including Amazon Web Services, Anthropic, and a number of academic and philanthropic organisations. The project aims to expand global research into AI alignment - a field focused on making sure advanced AI systems act in line with human intentions and values.
Science, Innovation and Technology Secretary Peter Kyle described the UK’s leadership on the project as “crucial,” with advanced AI systems already surpassing human capabilities in some areas.
“AI alignment is all geared towards making systems behave as we want them to, so they are always acting in our best interests,” he said.
“This is at the heart of the work the Institute has been leading since day one - safeguarding our national security and ensuring the British public are protected from the most serious risks AI could pose.”
The project offers three distinct avenues to accelerate progress: funding for researchers, cloud computing access via AWS, and venture capital support for AI startups. Grants of up to £1 million will be available for interdisciplinary research, while up to £5 million worth of cloud computing credits will help scientists run large-scale experiments.
The UK’s central role highlights its ambition to remain a global leader in AI safety. “Home to world-leading AI companies and research institutions, Britain is uniquely positioned to lead this global effort,” said Mr Kyle. “The responsible development of AI needs a co-ordinated global approach.”
Guided by a global advisory board including scientists such as Yoshua Bengio and Shafi Goldwasser, the Alignment Project seeks to close research gaps at a time when AI capabilities are evolving at extraordinary speed. The 2025 International AI Safety Report warned that today’s control methods may be insufficient for future systems.
Find out more at the Alignment Project’s website.