Practical AI use cases in public services: what’s safe and what works?

by Steph Troeth

Late last year, we collaborated with our friends at Basis to host a workshop with public sector leaders on “Practical AI use cases in public services: what’s safe and what works?” Our goal: to create a forum where we could share learning and critically examine where AI has helped peers and service users. We wanted to distil clear ideas on what could be appropriately solved with AI, and how. 

We began with an open discussion of case studies. Each participant identified what has worked well within their organisational context, what frictions and blockers they have experienced, and what opportunities they believe could take root.

A fascinating discussion point was how most technology companies tout AI solutions in terms of “cashable” or “efficiency” savings, whereas the real concerns centre on delivering better outcomes for the public and improving productivity for public sector employees. Relational delivery – social care, health and people-related services – carries inherently human aspects where the quality of service matters more than mere efficiency.

With seemingly unlimited demand on these services, it might be a fallacy to assume that increasing efficiency will lead to any cashable savings. For services already under pressure, time recuperated could be far better spent on improving the quality of services. 

There remains an opportunity to lean into new technology in ways that are not necessarily just cheaper and more “efficient” but more impactful in meaningful ways, including acknowledging that some of our services need to be more relational instead of purely transactional. 

We shared dxw’s framework for evaluating and designing effective AI experiments, and collaborated with participants to turn their strongest ideas into structured, testable opportunities.

A positive side-effect raised in this session, and in other conversations with some of our clients, is that the discourse around AI has brought existing day-to-day challenges into sharper focus. Teams are raising their heads from daily firefighting and questioning their own processes, looking at the adoption of AI as a potential game-changer.

While not all problems are AI-solvable, we currently have a window in which to question the way things are done and find ways to make genuine improvements. In general, people want better tools; whether a task should be automated is a decision that should be based on research and evidence. By framing experiments that put humans in the lead, not just in the loop, we can validate hypotheses in low-stakes scenarios – for example, by testing whether people actually want to adopt AI and what successful use would look like in practice.

During the workshop, we also discussed how, as humans, we derive value from work. We thrive on a sense of purpose through what we do. Rather than being swept along by the tide of AI hype, we should keep our sights fixed on how we might improve the working conditions of public sector workers and the lives of the people they serve and support.