Reflections on AI from a researcher

At the moment you can’t avoid the topic of AI, and the push to incorporate it into our ways of working is getting louder. Given this isn’t likely to go away, it’s important as user centred design professionals that we use our voices to influence the direction it takes.
I wanted to share my perspective as a researcher and someone who’s at the more cautious end of the spectrum. I’ll cover:
- What makes me nervous about AI
- What we can draw from previous technology changes
- How we can be more user centred in our adoption of AI
What makes me nervous about AI?
As a researcher, I naturally think about the human impact, and there are a few things that make me nervous:
- The driver for AI is often framed as “do more things”, “do things faster” and “do things with fewer people”. But doing more things, faster, with fewer people doesn’t mean doing them better.
- Many applications of AI sit in the creative space, yet creativity is essential to how we express ourselves.
- The environmental impact of AI will be huge if not managed appropriately, which in turn impacts us as humans.
- We’ve seen firsthand how polarising social media has been and the toll it has taken on mental health. AI has the power to exacerbate these challenges.
If AI is here to stay, it’s important to create a culture where we can share our hesitations and make sure it’s implemented in a way we feel more comfortable with.
What can we draw from previous technology shifts?
This isn’t the first time we’ve seen technology dramatically impact how we work and live. But it’s happening at a faster speed and greater scale. So what insights can we draw from previous technology changes?
12 years ago I started my career in Human Factors in the railway sector. I worked on projects exploring the impact of moving from physical to digital touchpoints and implementing greater levels of automation. The aim of these shifts was to run more trains closer together and have more flexibility to manage incidents. They had the following impacts on railway staff:
- Job satisfaction: Signallers and train drivers saw their job as a craft and were extremely passionate about controlling the railway with traditional signalling systems. Moving to having tasks replaced by software had a huge impact on how they felt about their jobs.
- Workload, situational awareness and skill loss: Increasing levels of automation meant that, for most of the day, signallers’ and train drivers’ workload was low. Low workload often correlates with boredom and loss of situational awareness. So when an incident occurred, they had to quickly rebuild their understanding of the context just as their workload suddenly peaked. This, combined with skill loss from no longer performing certain tasks, increased the risk of mistakes.
- Trust: It was important for signallers and drivers to understand the algorithms behind software decision-making, so they could trust the actions it was taking. Ultimately they were still responsible for keeping the railways safe.
- Job security: The increase in automation meant signallers were able to control larger areas of railway. There was a shift to consolidating and centralising control rooms, and this created a fear of job losses.
- Change management: There was understandable resistance to these changes, and bringing railway staff on a change journey was really important. Human Factors and Business Change went hand in hand: this ranged from identifying change champions to get other colleagues involved in research, so they felt listened to, to running training programmes and communication plans so staff understood which changes were happening and when.
How can we be more user centred in our adoption of AI?
We’ve recently finished an experiment building and testing AI assistants to help public sector staff respond to operational guidance queries.
During this project we saw similar themes to those in my railway automation days:
- Staff had built up years of operational knowledge. As well as pride in this, there was a fear that AI wouldn’t understand all of the context they had and would be unable to answer queries accurately.
- Staff didn’t want AI assistants to make decisions for them. They wanted AI to help them find relevant references in operational guidance and draft responses that they could build on.
- Staff wanted to understand what the data sources were and how the assistants worked, to increase trust in their responses.
- There was nervousness about AI assistants being used by anyone other than operational guidance experts, in case responses generated by AI were misinterpreted and inappropriate decisions were made.
- There was a mixture of readiness levels and feelings towards AI. Some staff were open to using AI and had an idea of how they could use it day-to-day. Others were more nervous about its impacts. Then there was the group in the middle: open to using AI, but not yet sure how best to do so.
Throughout this project, we took the same design approach we always take when solving complex problems for humans: understand the problem space, brainstorm different solutions, experiment to test our assumptions, and include users at each step of the process.
If you’re running a project exploring AI, here are some basic principles for ensuring a user centred approach:
- Discovery: Start by understanding what problems your users or organisation have and prioritise which ones you want to solve.
- Ideation: Don’t automatically assume AI can solve every problem, and spend time identifying different solutions. If you jump straight to an AI solution, you risk implementing something that might not actually solve the problem.
- Experimentation: If you think AI can solve your problem, run experiments to test whether or not this is true. Using an effort versus value matrix can help prioritise which experiments to focus on. When reviewing the outcome of your experiments, don’t just focus on measuring impact on time. Other measures like comprehension, accuracy and usefulness are important too.
- Evaluation: If you discover AI can solve your problem, zoom out and ask whether it should. How will it impact processes? How much will it cost? What’s the environmental impact? What are the ethical or reputational risks? Do you have an AI strategy, and does this align with it?
Final thoughts
We all react to change differently – based on past experiences of change, perceived loss of control or autonomy, grieving how we used to do something, or a fear of the unknown for the future.
We must be sensitive to this when working on projects adopting AI. Giving people greater control of when and how to use AI can help to increase adoption. Listening to people’s concerns creates a culture where the risks of using AI are more likely to be designed out.
As user centred design professionals, we have the power to steer the conversation beyond just “doing more things, faster, with fewer people”. We can ensure AI solves problems for people and organisations, enhances experience, quality and outcomes, and is implemented ethically to reduce negative consequences for humans and the planet.
If you’re nervous about AI, or not sure how to approach adoption in a user centred way, get in touch. We’re working on lots of different AI projects and would love to hear your thoughts and experiences.