I helped create the confidence the Department for Transport needed to move forward into Beta: shaping a robust hosting platform, developing user-centred design, and ensuring voices from people with a wide range of accessibility needs were heard throughout the process.
Client
Dept for Transport
Design agency
Methods
Duration
3 months (Alpha phase)
My role
Lead User Researcher
Wider team make-up
User Researcher, Product Manager, Service Designer, Delivery Manager, Developers, Business Analyst, and Content Designer.
Research Methods
User Research, Generating Insights, User Testing, Surveys, Client presentations
The team was under scrutiny and pressure to create momentum, while lacking clarity on user needs and design workflow.
Established research operations, planned and conducted a regular cadence of user research, and supported the senior project team with focus and problem solving.
Research & testing
Planning tests, creating discussion guides, developing outreach and sourcing participants for research, facilitating 1-2-1 interviews and tests.
Synthesis and reporting
Generating insights, user needs and personas; presenting to stakeholders across DfT.
Project supporter
Providing guidance to the project delivery manager, project manager and product owner on Agile process, team rituals, Agile tooling, design, and wider project strategy.
Tension in the team
Joining the project in the second sprint, I could immediately see pressure for research activity: client confidence in the overall project had dipped following a sluggish kick-off.
A lack of team alignment
The team’s 3 “hypotheses” were actually just work statements, not testable assumptions—no wonder there was confusion!
My immediate priorities:
Establishing a sustainable cadence of ongoing research and collaboration.
User research project panel
I established this by inviting participants to complete a screener and GDPR consent form. We soon had a panel of willing participants ready to test with, and it grew each week as more people became aware of it.
Harnessing MS Office tooling and automations
Using MS Forms and MS Bookings, I set up a combined GDPR consent and screener process that then invited people to book a slot through the portal. This simple process was a massive efficiency saver and meant we could very quickly reach out to participants for future research sessions.
Getting our work out in the open
We used flexible discussion guides in Mural that enabled the team to participate and collaborate during testing, alongside weekly team UR reflection sessions where the team could dig deeper into the insights and discuss what we might do next.
Establishing a sustainable cadence of ongoing research and collaboration
Each sprint focused on a different journey (part of the breadth-over-depth strategy). The journeys covered searching, content editing and publication, and browsing for specific content.
We leaned heavily on 1-2-1 interviews and user testing of prototypes. This enabled us to steadily build a picture of our user needs while also giving insights into specific journeys. In each case we created scenarios for users to complete using the prototypes created by the design team.
After each sprint kick-off session we clarified our broad goals and started recruitment for user interviews. As the designers iterated their designs, we refined the discussion guide and tweaked it with the team.
We carried out research sessions over the course of 1-2 days, collating notes on Mural boards embedded with the discussion guide and relevant screenshots.
We then highlighted top-level issues and presented the narrative back to stakeholders and the wider team.
We recognised the importance of inclusion and making sure our research represented a diverse range of people, including those with different accessibility needs. To achieve this, we added specific questions to our screener to identify accessibility requirements, including neurodiversity, and actively reached out to a variety of groups and communities. As a result, we achieved good coverage of users with a broad range of needs.
Across the user tests we felt it was important to gather quantifiable evidence of how the two platforms compared, which the steering committee could confidently use to justify platform selection and budget allocation.
Below is the range of metrics we collected:
SUS scores: Measured usability perception for both platforms (standardised comparison; see the worked scoring example after this list)
Task completion rates: Measured user success with key workflows (navigation effectiveness)
Time on task: Measured efficiency of content creation and finding (productivity impact)
Accessibility compliance: Measured barrier-free access (inclusive design requirements)
User satisfaction ratings: Measured emotional response to publishing workflows (adoption likelihood)
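For context on how the SUS numbers behind that comparison are produced, here is a minimal sketch of the standard SUS calculation: ten 1-5 Likert responses converted into a single 0-100 score. The sample responses are illustrative only, not data from this project.

```python
def sus_score(responses: list[int]) -> float:
    """Convert ten 1-5 Likert responses into a single SUS score (0-100).

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions are scaled by 2.5 to give a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 item responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical example: one participant's responses for each platform.
platform_a = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]
platform_b = [3, 3, 3, 4, 2, 3, 3, 4, 2, 3]
print(sus_score(platform_a))  # 85.0
print(sus_score(platform_b))  # 40.0
```

Scores above 68 are generally regarded as above average, which is part of why a single slide of SUS comparisons can carry so much weight with stakeholders.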
There were two key moments where the research we produced directly influenced project decisions.
Determining which platform
The comparative analysis of the out-of-the-box experience for end users became so compelling, with one platform clearly performing stronger, that we were able to present the SUS scores on a single slide. That slide became the moment of truth: it was cited in stakeholder discussions as a key factor in the decision to pick a platform.
Access for external users
During our interviews we surfaced how certain groups of users struggled to get access due to complex authentication hurdles. By raising this, it became a central point of discussion and a communication talking point, reassuring stakeholders that the issue would be resolved in the final release.
To broaden our understanding and widen our pool of participants, we developed two surveys: one for general users of the intranet and one for content authors. Each included two key questions: which user need is most important to you, and which is most challenging? This gave us valuable data to inform Beta thinking and the prioritisation of effort.
The surveys also captured qualitative information and gave more people the option to join the UR panel.
During our research, we spotted a lot of overlap between the initial discovery personas, so we merged and simplified them into three clearer cohorts with distinct user needs. We also ranked the most common tasks using survey data, and turned all the main pain points and needs into clear user need statements.
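As a small illustration of how that ranking works, here is a sketch that tallies survey answers into an ordered list of tasks; the task names and counts are invented for the example, not our real survey data.

```python
from collections import Counter

# Hypothetical answers to "which user need is most important to you?"
answers = [
    "find policy documents", "find a colleague", "find policy documents",
    "publish news", "find policy documents", "find a colleague",
]

# Tally the answers and rank tasks by how often participants chose them.
for task, votes in Counter(answers).most_common():
    print(f"{task}: {votes}")
# find policy documents: 3
# find a colleague: 2
# publish news: 1
```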
Looking back, I wish I'd invested more time early on building relationships and trust; those collaborative partnerships make everything smoother, especially when you're trying to align on what actually needs testing versus what's just interesting to explore. It's something I've reflected on before, so I felt silly for not recognising its significance straight away.
I also found myself wearing a lot of hats (researcher, strategist, facilitator), which sometimes made it harder for the rest of the team to feel comfortable as I switched roles. Sometimes the team looked to research to solve problems that weren't really research problems, and I could have been more direct about that earlier.