The government must improve transparency around the use of artificial intelligence systems throughout the public sector if it is to gain and retain trust in how the technology is being deployed
By
Sebastian Klovig Skelton,
Data & ethics editor
Published: 03 Feb 2025 12:15
The Department for Science, Innovation and Technology’s (DSIT’s) most senior civil servant has said government must go further in improving transparency around the roll-out of artificial intelligence (AI) systems throughout the public sector.
Asked by members of the Public Accounts Committee (PAC) on 30 January 2025 how government can improve trust in the public sector’s increasing use of AI and algorithmic decision-making tools, DSIT permanent secretary Sarah Munby said “there’s more to do on transparency”, which can help build up trust in how automated tools are being used.
Munby said the public sector needs to be clear where, for example, AI has been used in letters or emails from government to citizens (something she said is reflected in government guidance), as well as focus on how it communicates with people across the country on AI-related issues.
She added that if government fails to be “demonstrably trustworthy”, it will ultimately become a “blocker of progress” for the further roll-out of AI tools.
A major aspect of the government’s efforts here is the Algorithmic Transparency Recording Standard (ATRS), which was designed in collaboration between DSIT and the Central Digital and Data Office (CDDO), and rolled out in September 2022 to improve public sector transparency by providing more information about the algorithmic tools public bodies are using.
While DSIT announced in February 2024 that it intended to make the ATRS a mandatory requirement for all government departments during 2024 (as well as expand its use to the broader public sector over time), the standard has been criticised over the lack of engagement with it so far, despite government having hundreds of AI-related contracts.
In March 2024, the National Audit Office (NAO) highlighted how only eight of 32 organisations responding to its AI deployment survey said they were “always or usually compliant with the standard”. At that point, just seven records were contained in the ATRS.
There are currently 33 records in the ATRS, 10 of which were voluntarily published on 28 January by local authorities not covered by the mandate on central government departments.
Commenting on the ATRS, Munby admitted “we need to get more out”, noting that another “20 or so” records are due to be published in February, with “lots more” to follow throughout the year.
“It’s absolutely our view that they should all be out and published,” she said. “It takes a bit of time to get them up and get them running. It hasn’t been mandatory for that long, but … there’s been a significant acceleration in pace recently, and we expect that to continue.”
Munby also highlighted that “getting the law right is an important component” of building trust. “There’s quite an extensive set of provisions in the Data [Use and Access] Bill which are about making sure that where automated decision-making takes place, there are really good forms of redress, including the ability to challenge [decisions],” she said.
While the Labour government adopted almost every recommendation of the recently published AI action plan – which proposed increasing both trust in and adoption of the technology by building up the UK’s AI assurance ecosystem – none of the recommendations mentioned transparency requirements.
‘Socially meaningful transparency’
In written evidence to the PAC published on 30 January, a group of academics – including Jo Bates, a professor of data and society at the University of Sheffield, and Helen Kennedy, a professor of digital society at the University of Sheffield – said it was key to have “socially meaningful transparency” around the use of public sector AI and algorithms.
“Socially meaningful transparency focuses on enhancing public understanding of AI systems for informed use and democratic engagement in datafied societies,” they said. “This is important given the widely evidenced risks of AI, eg algorithmic bias and discrimination, that publics are increasingly aware about. Socially meaningful transparency prioritises the needs and interests of members of the public over those of AI system developers.”
They added that government should work to “reduce information asymmetries” around AI through the mandated registration of systems, as well as “by fostering discussion and decision-making between government and non-commercial third parties, including members of the public” about what AI-related information is released publicly.
Further written evidence from Michael Wooldridge, a professor of computer science at the University of Oxford, also highlighted the need to increase public trust in government AI, where transparency can play an essential role.
“Some people are excited about AI; but many more are worried about it,” he said. “They are worried about their jobs, about their privacy, and they may even be worried (wrongly) about existential threat.
“However well-motivated the use of AI in government is, I think it is likely that the government use of AI will therefore be met by scepticism (at best), and hostility and anger at worst,” said Wooldridge. “These fears – however misplaced – need to be taken seriously, and transparency is absolutely essential to build trust.”