Ada Lovelace Institute urges caution over AI scribe use in social care

The Ada Lovelace Institute has warned that the rapid rollout of AI transcription tools in social care risks outpacing safeguards, with local authorities prioritising efficiency savings over evidence of impact on vulnerable people.
In a new report, “Scribe and prejudice? Exploring the use of AI transcription tools in social care”, the institute examines how generative AI-powered tools are being deployed across 17 local authorities in England and Scotland. Based on 39 interviews with frontline social workers and senior managers, the research finds widespread adoption driven by financial pressures, staff shortages and administrative burdens.
AI transcription tools – sometimes described as “ambient scribes” – use automated speech recognition and large language models to record, transcribe and summarise conversations. In social care, they are being used to draft statutory assessments, case notes and care plans. One tool was already in active use by 85 local authorities in early 2025.
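To illustrate the general shape of such a pipeline – and not any specific vendor’s product – the sketch below shows the two stages the report describes: automated speech recognition followed by LLM summarisation, with a practitioner review step at the end. It assumes the open-source openai-whisper package for transcription; the summarisation call and the draft_case_note prompt are hypothetical placeholders.

```python
# Illustrative sketch of a generic "ambient scribe" pipeline, not any
# vendor's actual product. Assumes the open-source `openai-whisper`
# package for speech recognition; the LLM call is a hypothetical stub.
import whisper


def transcribe_visit(audio_path: str) -> str:
    """Convert a recorded conversation into raw text with an ASR model."""
    model = whisper.load_model("base")   # small general-purpose model
    result = model.transcribe(audio_path)
    return result["text"]


def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM provider is actually in use."""
    raise NotImplementedError("Connect an LLM API here.")


def draft_case_note(transcript: str) -> str:
    """Hypothetical summarisation step: turn the transcript into a draft record."""
    prompt = (
        "Summarise the following social care conversation into a draft "
        "case note, flagging anything that is unclear or uncertain:\n\n"
        + transcript
    )
    return call_llm(prompt)


if __name__ == "__main__":
    transcript = transcribe_visit("home_visit.wav")
    draft = draft_case_note(transcript)
    # The practitioner remains legally accountable for the record, so the
    # draft must be reviewed and corrected before it enters the case file.
    print(draft)
```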
The institute found that most local authority evaluations focus heavily on productivity metrics, such as time saved on paperwork or increased case throughput. While many social workers reported meaningful reductions in administrative workload, the report cautions that efficiency gains are being measured more consistently than impacts on care quality, professional judgement or service users’ experiences.
Researchers identified risks including hallucinated or inaccurate content entering statutory care records, inconsistent “human in the loop” oversight, and the potential for bias in transcription affecting people with underrepresented accents or speech patterns. Social workers remain legally accountable for records, creating new professional liabilities where AI-generated errors are not detected.
Governance arrangements also vary widely. Some local authorities have restricted the use of AI transcription in high-stakes statutory processes, while others have proceeded with broader deployment. The report highlights inconsistent guidance from regulators and a lack of consensus on boundaries for use.
To address these gaps, the institute has urged the UK government to mandate reporting of AI transcription use through the Algorithmic Transparency Reporting Standard and to extend coordinated pilots across multiple public sector contexts. It also recommends establishing a “What Works Centre for AI in Public Services” to build a stronger evidence base and support independent evaluation.
Further proposals include clearer professional guidance for statutory proceedings, collaboration with regulators, and ring-fenced funding for research on systemic impacts.
