AI Safety Institute releases new AI safety evaluations platform

Global AI safety evaluations are set to be enhanced as the UK AI Safety Institute’s evaluations platform is made available to the global AI community today, paving the way for safe innovation of AI models.

After establishing the world’s first state-backed AI Safety Institute, the UK is continuing to drive global collaboration on AI safety evaluations with the release of the AI Safety Institute’s homegrown Inspect evaluations platform. This will be the first time that an AI safety testing platform spearheaded by a state-backed body has been released for wider use.

By making Inspect available to the global community, the Institute aims to accelerate the work on AI safety evaluations being carried out across the globe, leading to better safety testing, the development of more secure models, and a more consistent approach to AI safety evaluations around the world.

AI Safety Institute Chair Ian Hogarth said: 

“Successful collaboration on AI safety testing means having a shared, accessible approach to evaluations, and we hope Inspect can be a building block for AI Safety Institutes, research organisations, and academia. We hope to see the global AI community using Inspect not only to carry out their own model safety tests, but also to help adapt and build upon the open source platform so we can produce high-quality evaluations across the board. We have been inspired by some of the leading open source AI developers, which all have publicly available training data and OSI-licensed training and evaluation code, model weights, and partially trained checkpoints.”

Developed by some of the UK’s leading AI minds, Inspect is a software library which enables testers – from start-ups, academia and AI developers to international governments – to assess specific capabilities of individual models and then produce a score based on the results. It can be used to evaluate models in a range of areas, including their core knowledge, ability to reason, and autonomous capabilities.
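Inspect’s exact interfaces are best learned from its own documentation, but the workflow the paragraph above describes – define test cases, run a model over them, and produce a score – can be illustrated with a minimal, self-contained Python sketch. All names below (`Sample`, `run_eval`, `toy_model`) are illustrative, not Inspect’s actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sample:
    """A single test case: a prompt and the expected answer."""
    prompt: str
    target: str

def run_eval(samples: list[Sample], model: Callable[[str], str]) -> float:
    """Run the model over each sample and return the fraction answered correctly."""
    correct = sum(1 for s in samples if model(s.prompt).strip() == s.target)
    return correct / len(samples)

# A toy stand-in for a real model call (in practice this would query an LLM).
def toy_model(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "unsure"

samples = [
    Sample(prompt="What is 2 + 2?", target="4"),
    Sample(prompt="Name the capital of France.", target="Paris"),
]

score = run_eval(samples, toy_model)  # 0.5 – one of the two samples is correct
```

A real evaluation platform layers on top of this pattern: datasets replace the hand-written samples, solvers orchestrate the model calls, and scorers generalise the string comparison to graded or model-assisted judgements.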

With more powerful models expected to hit the market, Inspect’s release comes at a crucial time in AI development and forms part of the push for safe and responsible AI development.

Secretary of State for Science, Innovation, and Technology, Michelle Donelan said: 

“As part of the constant drumbeat of UK leadership on AI safety, I have cleared the AI Safety Institute’s testing platform – called Inspect – to be open sourced. This puts UK ingenuity at the heart of the global effort to make AI safe, and cements our position as the world leader in this space. The reason I am so passionate about this, and why I have open sourced Inspect, is because of the extraordinary rewards we can reap if we grip the risks of AI. From our NHS to our transport network, safe AI will improve lives tangibly – which is what I came into politics for in the first place.”

Alongside the launch of Inspect, the AI Safety Institute, the Incubator for AI (i.AI) and Number 10 will bring together leading AI talent from a range of areas to rapidly test and develop new open-source AI safety tools. Open source tools are easier for developers to integrate into their models, giving them a better understanding of how the tools work and how they can be made as safe as possible.
