Stop Guessing – Start Iterating

We need to hold each other accountable and keep asking the same question throughout delivery: do we know this is needed, or are we assuming it is?
If you follow the dxw blog you’ll see that I’m not alone in challenging the Service Standard phases in digital delivery. In previous posts we’ve looked at how the existing phases prevent teams working in a truly agile way.
And if you’ve worked with me for any length of time, you’ll have heard me come back to the same drumbeat: iterate early, iterate often. This isn’t a stylistic preference or a nod to agile orthodoxy. It’s a response to how complex systems actually behave in the real world.
What I mean by iterative development
At its simplest, iterative development is about getting a thin, working version of a service into real use as early as possible, then improving it in small, deliberate steps.
Not a big bang delivery or months of speculative design.
Instead:
- build something small but real
- put it in front of users (safely)
- learn from how it behaves
- improve it
- repeat
It’s less like constructing a cathedral from blueprints, and more like feeling your way across stepping stones in a river, placing the next one only when you know where your foot is landing.
Finding the unknown unknowns
The biggest reason I push this approach is simple: we don’t know what we don’t know.
And those unknowns are not edge cases. They are often:
- subtle data inconsistencies that quietly break assumptions
- integration behaviours that don’t match documentation
- user journeys that look correct but collapse under real-world usage
- or, occasionally, full-on sinkholes that force a rethink of the solution
These aren’t hypothetical risks. They happen regularly, and we can’t reliably discover them through upfront design or in test environments alone. We can only uncover them when:
- real data flows through the system
- at real volume
- across real scenarios
Iterative development is how we surface those risks early, when they’re still cheap to fix.
Delivering based on our contract
There’s also a practical reality here: we’re contracted to deliver value, not just intent.
Iteration allows us to:
- demonstrate progress earlier
- validate that we’re building the right thing
- adapt to evolving understanding without derailing delivery
Rather than committing everything upfront and hoping it holds, we continuously align what we’re building with what the service actually needs to be.
Learning faster and making better decisions
Iteration isn’t just about reducing risk. It’s about learning faster.
When we combine direct user feedback, behavioural data and service metrics, we move from assumptions to evidence.
Private beta is designed exactly for this. It gives us a safe space to:
- observe real usage
- validate decisions
- adjust quickly
Instead of debating what might work, we can see what actually does. Unfortunately, at the moment we often risk taking too long to get into private beta, which turns it into more of a rollout exercise than a learning opportunity.
Simplicity builds confidence
There’s also a quieter benefit: simpler systems are easier to understand.
When we build in smaller increments, we:
- know exactly what we’ve built
- understand how users are interacting with it
- can reason about changes with more confidence
Complexity tends to creep in when we try to solve too much too early. Iteration keeps things grounded.
We often overthink the beginning
One pattern I see repeatedly (and am guilty of myself) is over-engineering at the start.
We try to:
- anticipate every edge case
- design for every future scenario
- build flexibility we may never need
In doing so, we:
- increase complexity
- slow down delivery
- and still miss the things that actually matter
We also sometimes underestimate users.
In reality:
- users understand more than we expect
- they adapt quickly
- and they will absolutely use the service in ways we didn’t predict
Iteration embraces that reality instead of fighting it.
Examples and consequences
These aren’t abstract risks. We’ve seen them play out in real projects.
Metrics
When working on a new service for a large public sector organisation, we uncovered a critical issue in a metrics pipeline very late in delivery:
- a feature we were deprecating from the case management system was still being used for key reporting
- knowledge of this dependency effectively sat with a single individual
- removing it would have broken important reporting flows
The result:
- we had to rapidly design and implement new integrations
- this work landed roughly a week before launch
This wasn’t a failure of effort or capability. It was a hidden dependency that only surfaced when the system was close to real-world use.
Data quality
We were unable to fully test a service with real users and real data until private beta. Once we did, issues appeared quickly:
- data didn’t behave as expected under real usage
- assumptions made during development didn’t hold
- the service began to struggle under realistic scenarios
The result:
- we had to pivot quickly and split the service in two
- this introduced significant delivery and commercial impact
- it also affected client confidence
This is a clear example of what happens when reality arrives late.
Using a more iterative approach
More recently, we’ve seen the opposite pattern. Through a more iterative approach, we were able to surface issues earlier, identifying integration problems through hands-on, manual testing. For example, we found we couldn’t post secondary updates after changing a core data point, which directly impacted a key user workflow.
The difference this time:
- we caught the issue early enough to respond without major disruption
- the team had space to adapt the approach
- the impact on delivery was controlled rather than reactive
- that said, this still happened later than is comfortable, and large changes in direction can still lead to awkward conversations
Across all of these, the pattern is consistent:
- when we delay exposure to real data and real usage, problems arrive late and hit hard
- when we iterate early, the same problems still exist, but we meet them on our terms
Final thought
Iterative development isn’t about being cautious or incremental for the sake of it. It’s about exposing reality as early as possible.
Because the biggest risks in our work aren’t the things we can see. They’re the ones quietly waiting underneath, only revealed when the system meets the real world.
And it’s also worth saying I’m not immune to this. I can absolutely overthink things and get carried away designing for scenarios that may never happen. This is as much a reminder to myself as it is a principle I push with others.
This only works if we do it together. We need to hold each other accountable and keep asking the same simple question throughout delivery: do we know this is needed, or are we assuming it is?
The sooner we expose our work to reality, the sooner we stop guessing and start building the right thing.