Danny Hearn – Deeply Human Design Ltd

Rebuilding a UK Gov dept's Intranet

I helped create the confidence the Department for Transport needed to move forward into Beta by shaping a robust hosting platform, developing user-centred design, and ensuring voices from people with a wide range of accessibility needs were heard throughout the process.

Client
Dept for Transport

Design agency
Methods

Duration
3 months (Alpha phase)

My role
Lead User Researcher

Wider team make up
User Researcher, Product Manager, Service Designer, Delivery Manager, Developers, Business Analyst, and Content Designer.

Research Methods
User Research, Generating Insights, User Testing, Surveys, Client Presentations

My impact

Before

The team was under scrutiny and pressure to create momentum, while lacking clarity on user needs and the design workflow.

What I did

Established research ops, then planned and conducted a cadence of user research while supporting the senior project team with focus and problem solving.

Results

Danny is a passionate, intelligent, and hardworking team member with a wide range of UCD expertise. The excellent work he did to set up a large, diverse user panel at DfT will have a long term impact, helping us and the client carry out quality research for a long time to come. Thank you Danny!

Paul Tait

Senior Delivery Manager

Methods (for Dept for Transport)

The challenge

  • The Department for Transport’s intranet was outdated and failing, with the current hosting platform licence due to expire the following year.
  • The discovery phase had not selected which platform to rebuild on, meaning we would have to conduct a comparative analysis between two platforms.
  • The client was growing impatient with project progress, and the newly formed team wasn’t aligned or experienced in working together.
Current DfT Intranet

Alpha goals

  • Determine the best intranet hosting platform in terms of end-user experience, using minimal dev effort or customisation for key journeys.
  • Provide insights into the experience of the journeys to inform what customisations may be needed.
  • Develop a validated set of user needs and cohorts to inform the design and beta stage.

Roles and hats

Research & testing

Planning tests, creating discussion guides, developing outreach and sourcing participants for research, facilitating 1-2-1 interviews and tests.

Synthesis and reporting

Generating insights, user needs, and personas; presenting to stakeholders across DfT:

  • Weekly sprint playbacks with core team featuring insight synthesis.
  • Bi-weekly steering committee presentations with data-driven evidence.
  • Weekly interactive reflection sessions with design team.
  • Ad-hoc stakeholder updates as critical insights emerged that impacted the core project goals.


Project supporter

Providing guidance to the project delivery manager, project manager and product owner around Agile process, team rituals, Agile tooling, design and wider project strategy.

The first week

Tension in the team

Joining the project in the second sprint, I could immediately see pressure for research activity: client confidence in the overall project was dipping following a sluggish kick-off.

A lack of team alignment

The team’s 3 “hypotheses” were actually just work statements, not testable assumptions—no wonder there was confusion!

My immediate priorities:

  1. Rebuild stakeholder confidence with clear, robust communication to manage expectations and make explicit what we needed.
  2. Convene the team to agree the research goals, objectives and questions in order to create a discussion guide.
  3. Establish research ops to minimise effort and build the UR presence in the project.
  4. Support reframing the hypotheses using mad-lib statements.

Research ops

Establishing a sustainable cadence of ongoing research and collaboration.

User research project panel

This was established by inviting participants to complete a screener and GDPR consent form. Once they signed up, we soon had a panel of willing participants ready to test with. The panel grew each week as more people became aware of it.


Harnessing MS Office tooling and automations

Using MS Forms and MS Bookings, I set up a combined GDPR-consent and screener process that then invited people to book a slot through the portal. This simple process was a massive efficiency saver and meant we could very quickly reach out to participants for future research sessions.


Getting our work out in the open

We used flexible discussion guides in Mural that enabled the team to participate and collaborate during testing, alongside weekly team UR reflection sessions where the team could dig deeper into the insights and discuss what we might do next.

MS form to invite participants to panel
Research session booking form

Generating insights


Each sprint focused on a different journey (part of our breadth-over-depth strategy). The journeys covered searching, content editing and publication, and browsing for specific content.

We leaned a lot on 1-2-1 interviews and user testing of prototypes. This enabled us to gradually build a picture of our user needs while also giving insights into specific journeys. In each instance we created scenarios for users to complete using the prototypes created by the design team.

Qualitative research process

Plan & outreach

A post-sprint-kickoff session to understand our broad goals and start recruitment for user interviews. As the design team iterated their designs, we refined the discussion guide and tweaked it with the team.

Excerpt of mural discussion guide

Conduct research

We carried out research sessions over the course of 1-2 days, collating notes on Mural boards embedded with the discussion guide and relevant screenshots.

Review & synthesize

  • Share anonymised interview transcripts with AI to draw out patterns, themes, and quotes.
  • Review my own notes from interviews and cluster them by theme.
Synthesised insight cards

Report and share

Highlight top-level issues and present a narrative back to stakeholders and the wider team.

Excerpt from weekly playback

A focus on accessibility

We recognised the importance of inclusion and making sure our research represented a diverse range of people, including those with different accessibility needs. To achieve this, we added specific questions to our screener to identify accessibility requirements, including neurodiversity, and actively reached out to a variety of groups and communities. As a result, we achieved good coverage of users with a broad range of needs.

Summary of accessibility needs among the people reached by the research

A comparative analysis

During each user test we collected quantifiable evidence of how the two platforms compared, which the steering committee could confidently use to justify platform selection and budget allocation.

Below is the range of metrics we collected:

  • SUS scores: measured usability perception for both platforms (standardised comparison)
  • Task completion rates: measured user success with key workflows (navigation effectiveness)
  • Time on task: measured efficiency of content creation and finding (productivity impact)
  • Accessibility compliance: measured barrier-free access (inclusive design requirements)
  • User satisfaction ratings: measured emotional response to publishing workflows (adoption likelihood)
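As a minimal sketch of the arithmetic behind the first metric: the System Usability Scale is a standard 10-item, 5-point questionnaire, scored by offsetting odd (positive) and even (negative) statements and scaling the sum to 0-100. The example below assumes that standard scoring; it is an illustration, not the project's actual analysis code.

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    1-5 Likert responses, using the standard SUS formula."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    # Odd-numbered items (positive statements) contribute (response - 1);
    # even-numbered items (negative statements) contribute (5 - response).
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

def mean_sus(all_responses):
    """Average SUS across participants, e.g. one mean per platform."""
    return sum(sus_score(r) for r in all_responses) / len(all_responses)
```

Comparing the two platforms then comes down to comparing two `mean_sus` values, which is what made a single summary slide possible.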

Capturing the SUS scores at end of interview
Final SUS summary scores from user research session

Insights that influence

There were two key moments in the project where the research we produced directly influenced key decisions.

Determining which platform

The comparative analysis of the out-of-the-box end-user experience became so compelling, with one platform clearly performing stronger, that we were able to present the SUS scores on a single slide. That slide became the moment of truth: it was cited in stakeholder discussions as a key factor in the decision to pick a platform.

Access for external users

It was during the interviews that we surfaced how certain groups of users struggled with access due to complex authentication hurdles. By raising this, it became a central point of discussion and a communication talking point used to reassure stakeholders that the issue would be resolved in the final release.

Surveys

To broaden our understanding and reach more participants, we developed two surveys: one for general users and one for content authors of the intranet. We added two key questions: which user need is most important to you, and which is most challenging? This was valuable data to inform beta thinking and prioritisation of effort.

The survey also captured qualitative information, as well as opting more people into the UR panel.
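Ranking JTBD statements from the two key questions amounts to counting votes per statement. The sketch below shows that shape with hypothetical statements and responses (none of these are the project's actual survey data):

```python
from collections import Counter

# Hypothetical responses: each respondent picked the JTBD statement
# most important to them and the one they find most challenging.
most_important = ["Find a policy document", "Find a colleague",
                  "Find a policy document", "Read company news"]
most_challenging = ["Find a colleague", "Find a policy document",
                    "Find a colleague", "Find a colleague"]

def ranked(votes):
    """Rank JTBD statements by vote count, highest first."""
    return Counter(votes).most_common()

# ranked(most_important) puts "Find a policy document" top;
# ranked(most_challenging) puts "Find a colleague" top.
```

Two ranked lists like these are what fed the prioritisation slide for beta.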

Core intranet usage JTBD statements ranked results from survey

Developing high level needs

During our research, we spotted a lot of overlap between the initial discovery personas, so we merged and simplified them into three clearer cohorts with distinct user needs. We also ranked the most common tasks using survey data, and turned all the main pain points and needs into clear user need statements.

Slide communicating the merging of lots of personas into 3 groups
Content publishers persona
External agencies persona
General users persona

Final reflections

Looking back, I wish I’d invested more time early on building relationships and trust – those collaborative partnerships make everything smoother, especially when you’re trying to align on what actually needs testing versus what’s just interesting to explore. It’s something I’ve reflected on before, so I felt silly for not recognising its significance straight away.

I also found myself wearing a lot of hats (researcher, strategist, facilitator), which sometimes made it harder for the team to feel comfortable as I switched roles. Sometimes the team was looking to research to solve problems that weren’t really research problems, and I could have been more direct about that earlier.