Teacher-centred strategies to identify AI opportunities in schools

Co-authored by Lola Harre (dxw) and Madiha Khan (Educate Ventures Research)
The education sector is awash with promises about AI’s transformative potential, but cutting through the hype to understand what genuinely works in classrooms remains challenging. Impact depends not just on the AI use-case itself, but critically on context and implementation models. By centring teachers in our implementation strategies, we can move beyond theoretical promises to practical classroom solutions.
This is the second in our series for the education sector, which brings together two complementary frameworks – EVR’s 4D strategy and dxw’s iterative approach to experimentation – to explore practical techniques for effective AI implementation.
The EVR 4D strategy framework – encompassing Governance, Iterative Evaluation, Technology Infrastructure, and Staff Capability – provides a structured approach for considering various aspects of teacher needs, challenges, and perspectives on AI.
dxw’s approach, grounded in digital delivery experience across the public sector, balances user needs, organisational drivers, and the policy landscape in education to support the successful implementation of opportunities. In particular, it encourages schools to focus on purpose – the problem they are trying to solve – to provide a clear goal against which progress can be measured and decisions made.
AI opportunities in schools
When we look at AI opportunities through the lens of teacher needs, several categories emerge. In teaching and learning, generative AI offers personalised learning pathways that adapt to individual student needs, automated feedback systems that free teachers to focus on deeper engagement, and content generation tools that help create differentiated materials.
There are also opportunities for AI tools which streamline operational tasks like reporting and data entry, or automated scheduling and resource allocation, reducing the administrative burden and enhancing parent-teacher communication. Consider also attendance tracking systems that identify at-risk students early, predictive analytics for student support interventions, and accessibility tools that support diverse learners. These potential opportunities need to be carefully evaluated, considering the limitations and risks of AI tools such as potential bias, inaccuracies and overreliance.
Hardware and infrastructure decisions matter too – the choice between being a ‘Microsoft school’ or a ‘Google school’ has significant implications for AI procurement and safe use. These decisions affect not only technical capabilities but also training requirements, data governance, and the types of AI tools readily available to teachers.
This blog post will focus on how schools could drive confident and effective adoption of AI tools by their teachers and staff, to deliver both individual and collective productivity gains and allow staff to focus on meaningful engagement with students.
Collaborating with teachers to design the right solutions
One of the most significant challenges in school AI implementation is the divide between how leadership perceives AI and how teachers experience it. While school leaders may envision streamlined operations and enhanced learning outcomes, teachers often face practical concerns about classroom management, pedagogical adaptation, and time constraints.
This disconnect underscores the importance of participatory design techniques, such as logic modelling, when introducing new AI use-cases. These approaches not only bridge the perception gap but also test assumptions about how AI will actually function in classroom contexts. When teachers are involved from the outset in defining problems and co-designing solutions, implementations are more likely to address real classroom needs rather than theoretical possibilities. Operationalising research frameworks (such as the TPACK framework) can further help recognise the impact of variations in pedagogical approaches and technical expertise.
Involving teachers and other staff in defining an approach to AI that is appropriate for them helps ensure that AI adoption is grounded in real problems to solve, and delivers real benefits. This kind of meaningful participation also makes staff feel listened to, and gives them a greater sense of ownership of, and buy-in for, the approaches decided.
In dxw family member Neontribe’s recent work with the National Citizen Service (NCS) and Centre for the Acceleration of Social Technology (CAST), we took a comparable co-creation approach to help 15 to 17 year olds understand, examine and use generative AI in thoughtful and positive ways. Working closely with young people to understand their perspectives and define which problem to solve, we then developed a prototype LLM ‘Real Chat AI’, which uses guidance and feedback alongside the chat to teach its users how to understand and improve their interactions with AI tools. This experience also demonstrates how design techniques already in practice, such as the Design Council’s ‘Double Diamond’ model, continue to be relevant when examining AI opportunities.
Effective implementation requires helping teachers navigate the pedagogical shifts necessary for meaningful AI integration. This goes beyond understanding what AI can do to exploring how it changes teaching practice itself: creating the space for teachers to reimagine lesson structures to incorporate AI tools effectively, developing new assessment strategies that account for AI assistance, creating learning experiences that leverage AI while maintaining human connection, and understanding when AI enhances learning and when it might hinder it. Time and investment are needed to embed lasting and meaningful change.
Prioritising support and training to embed new tools
School leaders frequently express frustration about a critical sequencing problem: they cannot effectively address AI literacy for learners until their teachers are fully trained and confident. Add to this a sense of urgency, with AI development and student uptake often running ahead of teacher readiness. This creates a cascading challenge where students are already using AI tools, often without guidance, while teachers lack the confidence to address AI use constructively. In this context, schools struggle to develop comprehensive AI literacy curricula, and the pace of AI development outstrips policy and training timelines. Addressing this means acknowledging that teacher capability development must be the foundation of any student AI literacy initiative.
While government-published safe AI use resources provide some guidance, teachers need practical implementation tools they can use immediately – self-evaluation checklists, action research templates, and concrete examples of successful integration. The gap between theory and practice remains one of the sector’s most pressing challenges, and it’s here that many regional initiatives fall short, providing policy and frameworks without the practical tools teachers need for daily classroom use.
To help make this jump from theory to practice, it’s important for schools and academy trusts to learn from the experiences of the sector. Seeing the common questions and approaches being worked through by the wider community can also build organisations’ confidence as they begin to adopt AI. For example, the ‘Shape the Future Leaders coalition’ aims to surface insights informing how schools can respond to the provocation of AI. It facilitates action research by school leaders across seven key research strands, recognising the need for evidence on the impact and implementation of AI.
As mentioned earlier, teachers are already using AI in multiple ways: for example, to rapidly develop differentiated materials for learners with varying needs and preferences, to diagnose common misconceptions and tailor instruction, or to reflect on their own professional development through an ‘AI coach’. However, more rigorous research and evidence-led case studies need to be made publicly available across the sector.
In the meantime, AI experiments in government and the wider public sector have produced learnings. One example is the UK Government’s 2024 Copilot trial, which gave over 20,000 staff across 12 departments access to Microsoft’s AI service for 3 months and demonstrated the importance of up-skilling as an essential part of digital change. The aim was to understand if and how AI assistant tools could improve productivity and reduce time spent on routine tasks within this complex and data-sensitive environment. Participants reported saving an average of 26 minutes per day – equivalent to nearly two weeks per year – and had a strongly positive sentiment towards the tool, with 70% feeling that they could shift their time from mundane to strategic tasks.
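For anyone wanting to translate that headline figure into their own context, here is a minimal back-of-the-envelope sketch. Only the 26 minutes per day comes from the trial; the working days per year and hours per working week below are illustrative assumptions, not figures from the report.

```python
# Back-of-the-envelope check on the Copilot trial's reported time saving.
# Only the 26 minutes/day figure is from the trial; the rest are assumptions.

minutes_saved_per_day = 26    # reported average saving per participant
working_days_per_year = 220   # assumption: full-time days after leave and holidays
hours_per_working_week = 37   # assumption: a typical full-time contract

hours_saved_per_year = minutes_saved_per_day * working_days_per_year / 60
equivalent_weeks = hours_saved_per_year / hours_per_working_week

print(f"Roughly {hours_saved_per_year:.0f} hours saved per year")
print(f"Equivalent to about {equivalent_weeks:.1f} working weeks")
# With these assumptions the saving is around 95 hours, or roughly two and a
# half working weeks; the exact equivalence depends on the working pattern
# assumed, but it lands in the region of the 'nearly two weeks' quoted above.
```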
But, crucially, there was also a correlation between a user’s confidence and familiarity with AI and the time savings they experienced, demonstrating the importance of support and training in enabling effective implementation. Participants were onboarded using a range of support materials and events, but reported a steep learning curve in writing effective prompts, and that it could be difficult to make time for this alongside their existing workload. To address this, additional measures such as bite-sized training videos were introduced to streamline engagement. Careful change management, including training and engagement to build confidence in new tools, is therefore important to plan for.
Investing in training and support isn’t optional; it’s fundamental to success.
Taking steps towards teacher-centred AI adoption
Teacher-centred implementation is core to delivering on AI opportunities in a way which provides genuine value: even the most sophisticated tools will fail to provide benefits if they don’t reach users and meet their needs.
The frameworks and approaches outlined here aim to support this shift from technology-focused to people-focused innovation. EVR’s 4D framework helps schools consider the many concurrent threads of successful implementation:
Governance structures must include teacher voice at every level, from policy development to tool selection, creating feedback loops that capture classroom realities.
Iterative evaluation means testing AI implementations in real classroom contexts with willing teachers, using their insights to refine approaches before wider rollout. Bringing in co-design techniques – and so creating the space for continuous exploration and discussion – will help engage teachers in this process from the start.
Technology infrastructure decisions should consider teacher workload and classroom practicalities, not just IT efficiency.
And staff capability development requires investment in comprehensive professional development that addresses both technical skills and pedagogical transformation, recognising that teachers need time and support to adapt.
A simple starting point could be to review existing case studies in education, while starting the conversation with your teachers and staff around their current use of AI and their hopes and fears for the technology. Both threads will help you build a picture of potential opportunities specific to your setting.
Do also consider sharing your process and learnings with the wider community where you can – transparency across the sector will help strengthen everyone’s approach.