What is the UK Government actually doing with AI – giant leaps and baby steps, one year on

On available evidence, there simply isn’t enough relevant expertise in central government departments to deliver on their AI aspirations

About this time last year I wrote a blog post entitled What is the UK Government actually doing with AI? Back then, I observed that public sector AI looked poised to grow fast, albeit from a very low base. In retrospect, what’s been interesting about the last 12 months isn’t that this not-very-startling prediction is being fulfilled; it’s been just how easy confirming it became in 2024.

Last year, trying to figure out what public sector bodies were doing with this relatively new and anxiety-provoking tech involved cobbling together details from various think-tank reports, deep-dive searching of departmental websites, scouring public contract databases, and literature searches for other people trying – usually unsuccessfully – to do the same thing.

This year, it involved attending a couple of webinars from the Incubator for AI and the National Audit Office, and then reading a report. AI implementations – significantly, not AI speculation, not AI use-cases, but actual, deployed systems – have become, if not exactly ubiquitous, much more mainstream in central government. And in a way that civil servants are not just ready, but eager, to talk about.

AI: the new civil service normal?

For the most part, central government applications for AI are the same in 2024 as they were the previous year. Artificial Intelligence is a popular (and increasingly well-explored) tool for image analysis and classification (for example of traffic data), fraud detection, and data and operations analysis. In addition, the growing power and popularity of generative AI has brought an uptick in virtual assistant and machine-summarisation work.

What’s changed is two-fold. First, who’s actually doing it. Where AI applications before were scattered in relative isolation across a handful of departments, as of March 2024, three-quarters of the 87 government bodies who responded to an NAO survey on the matter reported they either already had AI implementations in place, or had immediate plans for their development.

Those who don’t already have work underway are expected to lay the foundations to start: by the end of June of this year, every department is expected to have ‘costed and reviewed’ plans for AI adoption in place. These plans are to be supported by an exercise identifying strategic datasets in principle completed by every department last February, and wider data maturity assessments are scheduled for finalisation this Autumn. Work tackling legacy systems incapable of supporting AI data requirements starts in 2025.

Contextualising these technical and data explorations is a host of policy work, with the Central Digital and Data Office (CDDO) setting strategic direction, and the Department for Science, Innovation and Technology (DSIT) drafting governance rules, guidance, and standards. Initiatives intended to allow civil servants to move confidently and swiftly where uncertainties around data privacy and other ethical considerations previously forced them to tread with caution.  

Skewed distribution: where AI is thriving

This across the board uplift in ambition, however, masks extreme differences in capacity for AI work and the potential impact it might have.

On the one hand, a small handful of organisations are showing just how much can be achieved with Artificial Intelligence and Machine Learning in public administration – and why the government is investing so much effort in their wider adoption. The flagship unit here is the Incubator for AI, which despite its relatively brief existence already boasts an impressive range of achievements. Notable successes include techniques for the automated detection of ‘polypharmacy’ risks associated with prescription of five or more drugs on the NHS; algorithms speeding the allocation of housing for asylum seekers with savings of £30 million per year; and auto-summarisation and analysis of public-consultation documents, projected to save roughly £80 million per year. 

Almost as impressive – or perhaps even more so, given that AI is only one of its many areas of development – is the internal work undertaken by the ONS. Over the past year ​​they’ve trialled AI to improve information retrieval, and to help standardise data collection and conformity with controlled vocabularies. More than just confirming my pet theory that ‘AI needs clean data’ is an invertible statement, in the context of public administration these are hugely significant developments. When you consider the scale of the ONS’s operations and its role as the repository of almost all quantitative data across UK Government, this looks set to be a rising AI tide lifting all civil service boats.

These advances are by all indications, likely to continue: particularly telling was an ONS speaker’s glancing reference to AI technologies as ‘easy to use’. Despite the largely bottom-up nature of AI development, the team had clear strategies in place for dealing with hazards like prompt injection, and were clearly used to swapping different Large Language Model (LLM) backends in and out to suit their needs.

And where it’s (mostly) not

Outside organisations like i.ai and the ONS with large numbers of quants-on-the-ground, however, the picture for AI adoption is much less clear.

On available evidence, there simply isn’t enough relevant expertise in central government departments to deliver on their AI aspirations. 70% of those surveyed said lack of technical expertise was preventing them from using AI, with an equal number reporting that they lacked funding to address this. This would seem to support i.ai’s claim that civil service Digital, Data and Technology (DDaT) salaries are simply insufficient to attract and retain skilled staff. And data science/AI/ML skills are not the only missing factor: 62% of respondents felt their data was inadequate for AI scenarios, while 40% believed their underlying technical infrastructure simply wasn’t up to the job.

In addition to high levels of technical uncertainty, the provision of central guidance has yet to entirely allay related ethical and other concerns: civil servants are wary of the legal (67%), security (56%), and data privacy (56%) issues raised by AI adoption. Unsurprisingly, demand was high for greater central support in this area, with knowledge-sharing across government (74%), skill development (67%), and risk management (63%) featuring high on civil service wish-lists. 

The CDDO, DSIT and i.ai, then, clearly have a long road ahead of them. While the potential AI technologies hold for positive change is evident, fine-grained strategies will need further development if they’re to reassure directors their use of the technology is in fact legal, reliable, and safe.

On a technical level, it is very hard to see how i.ai’s 70 specialists can possibly stretch to cover the 200 or so AI use-cases identified by the 60 departments who feel they don’t have the skills to deliver them. And that’s before any consideration of the need to modernise infrastructure: anyone who’s done any hands-on work with central government data stores will know the 2025 plans to identify high-risk data systems is likely to generate a very, very long list indeed.  

Without significant further central government funding, then, AI demand – not only data-science expertise, but also to address foundational technical and ethical concerns –  looks likely to outstrip supply for the foreseeable future.