Algorithmic transparency promotes public trust in AI
The dream of a more trustworthy digital public sector is slowly being realised thanks to the Algorithmic Transparency Recording Standard - an innovation that provides meaningful transparency into algorithm-assisted decision processes by enabling government departments to publish details about the algorithmic tools they use.
The Cabinet Office’s Central Digital and Data Office (CDDO) and the Centre for Data Ethics and Innovation (CDEI) - the brains behind the pioneering initiative - recently published updates to the standard following piloting across the public sector.
Speaking to Government Transformation Magazine, Louise Sheridan, Deputy Director at CDEI, and Sue Bateman, Interim Chief Data Officer at CDDO, explain how the standard represents a significant step toward building a trustworthy digital public sector by helping to promote public trust in AI.
“Being transparent can encourage a proactive culture in the public sector around embedding ethics into data and automation projects from the start,” Sheridan says.
To give a little context: the first version of the standard was published in November 2021 and piloted with ten public sector organisations through 2022, ranging from central government offices to local police departments.
In October 2022, the standard was endorsed by the Data Standards Authority (DSA), the body which recommends the standards, guidance and other resources government departments should follow when working on data projects. The updated standard, alongside guidance and the completed reports, is now available on the Algorithmic Transparency Recording Standard Hub.
Increasing public trust in algorithms and AI
The standard is helping to move the needle on data ethics. Transparency, explains Bateman, "is a gateway to enabling the other goals in data ethics that increase justified public trust in algorithms and AI."
The standard enables organisations using algorithms to clearly set out the facts and scope of a tool, tell its story and explain the reasoning behind its use. It allows for proactive communication with the public on the benefits and risks associated with each tool, and how these are being harnessed and mitigated respectively.
It can also reduce the risk of people opting out of services - where that is an option - because they are able to better understand how their data is being used.
According to Bateman, “the public has a democratic right to explanation and information about how the government operates and makes decisions, in order to understand actions taken, appeal decisions, and hold responsible decision-makers to account."
Under UK GDPR, citizens have the right to information about the use of their personal data, as well as to know about the use of automated decision-making, including meaningful information about how decisions are made.
The Information Commissioner’s Office took part in the piloting of the standard. In reviewing the tool, they highlighted how it “encourages different parts of an organisation to work together and consider ethical aspects from a range of perspectives.”
The Department of Health and Social Care and NHS Digital described the benefit of using the tool in promoting “shared decision making with the patient”, calling it “an extra point of information to consider in the decision making process.”
Building momentum and best practices
The UK is one of the first countries in the world to develop a national algorithmic transparency standard. Stepping out into uncharted territory meant that the CDDO and CDEI teams were faced with a number of challenges, leading to new discoveries about best practices.
According to Sheridan, the main challenge has been building momentum for using the standard. “We have engaged widely across the public sector, making the case for transparency on the use of algorithms. In addition, we have hosted roundtable discussions with groups of private sector suppliers to gather views and incorporate them into the policy development process.”
The second challenge concerned the need to involve a broad range of stakeholders in the development and iteration of the standard, Sheridan explains. “This was addressed by carefully designing the engagement process to ensure the representation of a broad range of perspectives among participants.”
In undertaking the project, the teams found that while many public sector organisations want to be more transparent and to consider ethical questions, they might lack the guidance, capabilities or resources to do so. To tackle this, coaching calls were held with interested organisations, and CDEI and CDDO have produced guidance for teams looking to use the standard, offering dedicated support on initial implementation.
Notably, the teams discovered that placing public engagement activities early in the project lifecycle enabled them to act on the findings in meaningful ways, and they used these insights to develop the standard's initial two-tiered design. For context, Tier 1 provides a short explanation for the general public of how and why the algorithmic tool is being used, while Tier 2 gives more detail on the technical specification and decision-making process.
The CDDO and CDEI teams are currently working to build momentum and get a wide range of public sector organisations using the standard.
"The teams remain available to support organisations completing algorithmic transparency reports, and we’d welcome any questions sent to email@example.com," Sheridan says.
Looking forward, the teams are exploring the feasibility of an online repository for submission and publication of completed reports, with consideration for accessibility and searchability. "We are exploring how this repository would work for different parts of the public sector, e.g. central government, local government, police forces and the NHS," Sheridan adds.
The team is also working with the Crown Commercial Service to explore options for including requirements related to algorithmic transparency in the upcoming AI and Automation Framework.