AI for the public sector: using what we know

We already have the tools and frameworks. The challenge is applying them thoughtfully in a new context.
Imagine: you have a problem that needs solving and, maybe, a tool powered by AI looks like the right solution. But how do you turn that opportunity into a confident, evidenced decision?
How do you ensure that you’d really be solving a whole problem for users, while achieving the quality and consistency rightly required of digital public services? What can you do to confirm that this would be the right solution, built using the most appropriate technology, and that you could develop it in an open and transparent way?
You might have noticed that the paragraph above pulls directly from the language of the Service Standard, and that’s because we believe AI adoption in the public sector must start from the same principles that have underpinned successful digital transformation: user-centred design, multidisciplinary collaboration, and clear measures of success. We already have the tools and frameworks. The challenge is applying them thoughtfully in a new context.
Finding a balanced perspective
There’s no shortage of conversation around AI’s potential, with the full spectrum of perspectives ranging from enthusiastic buzz to extreme caution. Somewhere in the middle, our growing collective experience is leading to a better understanding of potential use cases which tangibly harness the transformative impact of AI while preserving the public sector’s commitment to stable service provision, fair access, sustainable change, and appropriate transparency.
But what does this mean in practice? With governments all over the world tempted to act fast to secure the benefits of AI, how can we make progress without losing sight of the real-world outcomes which should underpin all of our work in digital?
In truth, government has decades of experience navigating digital transformation at all levels, and has established robust frameworks for developing public services in a sensible and secure way. Although it’s a complex and challenging space, there have been some notable successes, skills gained, and lessons learned.
This gives government digital a strong foundation from which to approach new and disruptive technologies like AI. As with any digital opportunity, the potential application of AI should begin by drawing on established methodologies to keep user needs, organisational values, and technical feasibility at the centre of decision making.
Drawing on our expertise as a digital delivery partner to the UK public sector, this post explores some practical approaches for assessing potential AI applications based on existing tools and techniques.
Using existing tools and techniques
Focus on user-centred design principles
In a public service setting, AI, like any technology, should be used where it helps solve real problems for users. Understanding users’ experience and needs, involving them in the process, and being responsive to their feedback all remain essential to building a successful service, regardless of the technology which underpins it. And teams will still need to invest the time in creating usable and accessible solutions.
Part of this requires drilling down into specific opportunities – the problems which can be appropriately solved by machine learning, for instance, will be very different to those relevant to generative AI technologies.
This emphasis on value to users can be transformational: our focus on residents using the Adult Social Care service at Redbridge Council shifted the understanding of the problem from symptom to root cause, leading to a better solution which could both improve the current user journey and achieve the Council’s goals.
Use existing frameworks (and watch out for new ones)
Proven resources like the Service Standard and Technology Code of Practice are already built into our ways of working, and their core messages around using simple, sustainable and secure design to deliver measurable value to users remain strong foundations for digital projects, including those exploring AI applications.
These can be read alongside newly emerging guidance, such as the AI Playbook or the AI Security Institute’s research reports, which help build understanding of the common risks and opportunities relevant to AI.
As with any new technology, policy around AI lags behind adoption. But in the meantime, well-established technical standards can continue to guide us. Best practice principles for search engines, for instance, inspired our investigations into the best ways to handle crawlers for AI services.
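As an illustration only (this sketch is not drawn from the work described above), the snippet below shows how the familiar robots.txt conventions used for search engine crawlers extend naturally to AI crawlers. The GPTBot and Google-Extended user agent tokens are published by OpenAI and Google respectively; the policy, paths and URLs are hypothetical.

```python
# Illustrative sketch only: applying established robots.txt conventions to AI crawlers.
# GPTBot and Google-Extended are vendor-published user agent tokens;
# the policy, paths and URLs below are hypothetical placeholders.
from urllib import robotparser

hypothetical_policy = """\
User-agent: Googlebot
Disallow:

User-agent: GPTBot
Disallow: /drafts/

User-agent: Google-Extended
Disallow: /
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(hypothetical_policy)

# Check what each crawler may fetch under this policy
for agent, url in [
    ("Googlebot", "https://example.gov.uk/guidance"),          # search crawler: allowed
    ("GPTBot", "https://example.gov.uk/drafts/consultation"),  # AI crawler: blocked from drafts
    ("Google-Extended", "https://example.gov.uk/guidance"),    # AI training crawler: blocked entirely
]:
    verdict = "allowed" if parser.can_fetch(agent, url) else "blocked"
    print(f"{agent}: {url} -> {verdict}")
```

The habit transfers directly from existing search practice: publish a clear, machine-readable policy, and monitor whether crawlers respect it.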
Use multidisciplinary teams to build AI literacy
Use the existing model for flexible delivery teams to bring together technical, user-centred design and subject matter expertise. Embedding subject matter experts keeps the work focused on solutions with a real impact, while practitioners can shape the work around the successful digital service delivery model, starting small, testing with users, and then scaling gradually.
We believe in the strength of multidisciplinary teams: including a range of expertise and perspectives in decision making helps mitigate the risk of new projects, and we have extensive experience working in a blended way to benefit from a client’s expertise and provide opportunities for capability building. At the Department for Education we’ve spent several years delivering in collaboration with DfE staff and another agency, bringing in different specialisms as and when needed and influencing good practice for the long term.
That being said, we can also acknowledge that building capability in a new technology can feel daunting, especially at an organisational scale. But whether you have the expertise in house or are collaborating with an external AI specialist, over time this blended model exposes more and more people to opportunities to examine AI use cases, helping to build experience across your teams and moving you on from early, siloed explorations in a distinct innovation lab.
Define and measure what success looks like
Whether spinning up an experimental testbed, or working on a more established use case, it’s important to spend time at the start understanding your intended outcomes and what would signal success against these.
Identifying these metrics, not to mention collecting and analysing them, can take time, but they are essential to understanding whether your efforts are moving things in the right direction. Over time, you build up a store of knowledge which helps you iterate in an evidence-led way, and track the long term impact of any changes.
During our work with the Ministry of Justice to build a product that improves the process of applying for short-term accommodation for people leaving prison, an initial discovery period defined 3 core objectives. During beta, the new service provided better data against these objectives, which the team and senior stakeholders used to focus decision making.
Solving real problems
To achieve real success in the public sector, AI implementation will need to be built on proven digital delivery approaches. Teams benefit from working to familiar frameworks, and users can trust that services will be built to established standards.
Ultimately, and excitingly, this means we already have the tools to make informed decisions about AI. We have the right foundational approaches, talent, and experience to assess it calmly and deploy it wisely, while working to build our AI-specific knowledge and experience.
In our own work around AI, such as building a tool to help young people use generative AI thoughtfully, our design and development process remains focused on well-defined goals while being mindful of the requirements, constraints and technical realities which will shape the work.
Actionable steps for your team
- Assess your situation – what’s your current technology and data landscape? Does that need fixing first? What are you trying to achieve, and is there anything restricting you currently?
- Use the foundations of good service delivery to remain focused on user needs and shape your work around a clear problem statement.
- Involve everyone in the conversation – include practitioners and stakeholders early in AI ideation to keep a focus on the real problems you’re trying to solve, and to understand how any new tools could support their decision making. Document and share your findings as appropriate, and learn from others doing the same.
- Set clear success metrics ahead of experimentation or implementation, and measure your progress – it’s easy to lose focus and harder to assess value if your intended outcomes aren’t defined upfront.
We don’t need to wait for the ‘perfect’ AI use case, but neither should we dive headlong into wholesale AI adoption. Instead, let’s get started where we see a real need, using the skills we already have in a new context to build services that work for everyone.