Frontier AI Taskforce creates safety research team

The UK government’s Frontier AI Taskforce is establishing an AI safety research team to research and evaluate the risks associated with advanced AI development.

The Frontier AI Taskforce, which was set up earlier this year to focus on developing the responsible use of AI, has been working with various tech organisations, including RAND, ARC Evals, and Trail of Bits. 

It has now partnered with three more – Advai, Gryphon Scientific and Faculty AI – which will form the AI safety research team.

These new contracts will tackle questions about how AI can improve human capabilities in specialised fields, as well as the risks around current safeguards. The findings will be incorporated into roundtable discussions with civil society groups, government representatives, AI companies and research experts at the AI Safety Summit next month.

John Kirk, Deputy CEO at ITG, said: “Seeing experts collaborate to tackle cautions and fears surrounding AI is key to enhancing confidence for its widespread adoption. AI has the potential to accelerate business operations in all areas, and the UK establishing such a team helps better position it for tech superpower status.

“All sectors shall benefit from its safe development, and with confidence, the creative industries will be able to enhance campaigns on a global scale, working hand-in-hand with such innovative tech.”

The announcement follows a progress report last month, in which the Frontier AI Taskforce announced the establishment of its expert advisory panel, the appointment of two research directors, and several partnerships with organisations.

