Helping clinical admins to produce reports on national cervical screening

A summary of how the cervical screening system enquiry tool was designed.

Tosin Balogun
9 min read · Mar 17, 2022

To protect the population from developing cervical cancer, clinicians must ensure they screen people of eligible age every 3 or 5 years.

This usually involves:

  • Identifying those who are due
  • Inviting them
  • Screening them
  • Sending them their test results
  • Reinviting them for a follow-up appointment depending on their test result

All these activities need a lot of data coordination across different clinical environments.

Clinical administrators (the users) need to help monitor the process and investigate any incidents. They create investigative reports for stakeholders and quality assurance teams. These reports help mitigate clinical risk for patients across the country.

Some of these reports are extracted from a reporting tool known as the Cervical Screening System Enquiry (CSSE) tool.

The problem

The existing cervical screening system had been in use since the 1990s. It was beginning to show its age, so a replacement platform was commissioned.

The CSSE tool was also notorious for being inconsistent and difficult to use.

Early user research told us that users often struggled to produce reports, sometimes being unable to produce anything at all. The tool never told them whether they were entering the right or wrong thing.

This meant the user interaction involved a lot of trial and error. Users had to stumble through many mistakes to figure out what each field did. Once they found the fields that produced what they needed, they reused them to produce other related enquiries, because they were confident these would produce something similar at a minimum.

Our challenge, then, was how to make it easy for users to produce reports confidently and consistently.

The solution

To solve the underlying problems, the plan was to:

1. Understand the as-is situation better

2. Develop solutions to test with users

3. Iterate the solution based on user feedback until we got it right

1. Understanding the as-is

Getting a good picture of the situation involved:

  • Reading the discovery user research to understand the pain points
  • Reading the guidance manual that came with the CSSE tool to understand how it was meant to work
  • Interviewing users to hear and see how they currently interacted with the tool

We learned how the process worked at a high level. The user would get an ad hoc request from stakeholders (usually NHS England or quality assurance teams), translate the request into the report needed from the tool, then log in and fiddle around until they got it right. One user described the experience as ‘demanding on the brain’.

The user could interact with approximately 48 data fields, none of which was guaranteed to produce anything. In cases where the user could not produce the report, they would make an informal request to a data engineering team within NHS Digital as a fall-back option.

A high-level view of the process

Because of the tool’s inconsistency, users often reused the details of enquiries they had already made that matched the intent of the report they needed, trusting these to produce something similar at a minimum.

To find the right enquiry, the user would scroll through a long list of their enquiry history and inspect the ones they thought matched the report intent. Once they found one, they wrote down its index number to remember where it was on the list.

They would then open the details of the existing enquiry side by side with a new enquiry form and copy the details across, making some tweaks.

A storyboard showing user behaviour and sentiment

We also learned there was an opportunity to make the tool easier for novice users and stakeholders to use. This could ease the request burden on the current primary users and reduce the chance of single points of failure within the system.

The summary journey map based on user research insight

2. Developing solutions

Once we understood the context of use and the problems involved, we began exploring ideas of how to solve them. This involved:

  • Doing a data field audit alongside our business analyst and technical architect to ensure each field was set for remapping, and that we understood what data it would return. This gave us a good idea of how to regroup the fields and identify which ones were redundant.
  • Creating a user flow diagram based on insights from the remapped data fields
  • Creating sketch designs of the user interface, based on the user flow, to test with users

The user interface went through 3 design sprint iterations, which we tested with 10 users (4 primary users, 2 novice users and 4 stakeholders). The testing covered journeys to create a new enquiry; to find, edit or duplicate an existing one; and to abandon an enquiry midway.

3. Iterating the solution

Design version 1

In the first iteration, the focus was on remapping the data fields in a way that reduced the burden on the user. To achieve that, the flow of the screens was re-organised. It included branching logic so the user could bypass some fields based on answers they provided earlier in the journey.

Skip logic to help users get to fields they need quicker

We added a search feature to help users find existing enquiries faster, rather than scrolling through a long list to pick out the one they needed.

Search function to help users find old enquiries faster

Our design needed to help the user identify relationships between fields where they existed. We achieved this through proximity grouping and nesting, reducing the need for users to guess what each field was for or related to.

The old tool was not clear about when there was a relationship between fields
The new design used proximity and nesting to show relationships between fields where they exist

We introduced well-tested components from the NHS design system and updated the content to be clear about what it asks the user to do.

When we tested the new design with users, we learned:

  • Both experienced and novice users were able to use the tool effectively. They understood the content and field groupings
  • Users struggled with the new flow of screens to create new enquiries because it was radically different from what they were used to. This was especially true for novice users and some experienced users
  • Some users did not know how to identify screening health codes, so they asked for reference sheets to help them
  • Users appreciated using search to find existing enquiries quickly, but they did not know how to interact with it because it was different from their current mental model of looking through a long list
  • Some users didn’t want to navigate through all the screens in the journey again when they only wanted to make small changes to existing enquiries

Design version 2

We took the feedback from the usability testing and began to refine the design. We decided to change the flow of screens to be more like the as-is tool while also keeping the new content groupings. This was to help users onboard into the new tool using their current mental model.

We added an accessible auto-suggest feature to help users find screening health codes quickly. It allowed users to find a code by typing part or all of its characters, removing the need to rely on memory or reference sheets.

Users can type part or all of a code to find screening health codes
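
To make the matching behaviour concrete, here is a minimal sketch in TypeScript. It is an illustration only, not the production implementation: the example entries and the suggestCodes function are made up. It simply returns the entries whose code or name contains whatever the user has typed, so matching works on names as well as codes.

```typescript
interface ScreeningCode {
  code: string;
  name: string;
}

// Illustrative placeholder entries, not real screening health codes.
const codes: ScreeningCode[] = [
  { code: "X85", name: "Routine recall (example)" },
  { code: "X86", name: "Early repeat (example)" },
];

// Suggest entries whose code or name contains the typed characters,
// so users can type part or all of either and still get a match.
function suggestCodes(query: string, all: ScreeningCode[]): ScreeningCode[] {
  const q = query.trim().toLowerCase();
  if (q === "") return [];
  return all.filter(
    (c) => c.code.toLowerCase().includes(q) || c.name.toLowerCase().includes(q)
  );
}

// suggestCodes("rout", codes) would return the "Routine recall" entry.
```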

We added an enquiry summary page where users could see the details of what they had selected before submitting. It also served as a ‘zig-zag’ navigation hub for when users wanted to reuse the details for a duplicate enquiry.

Summary page where users can review what they have entered before choosing to proceed

We changed the search journey to be a blend of the old and the new. Unlike the old long list, we showed only 4 results per page, sorted with the most recently created enquiries at the top. We added pagination so the user could browse more existing enquiries, and we turned the search option into a filter for when the user wanted to narrow things down. This combined setup was meant to reduce the cognitive load on the user.

Users can find old enquiries through a call to action and can further refine the list by search
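
As a rough sketch of that combined setup (the field names here are hypothetical, not the production code), the logic boils down to filtering by the search term, sorting with the most recently created enquiries first, and slicing out a page of four results:

```typescript
interface Enquiry {
  id: string;
  title: string;
  createdAt: Date;
}

const PAGE_SIZE = 4; // only 4 results per page, as in the design

function listEnquiries(
  enquiries: Enquiry[],
  searchTerm: string,
  page: number // 1-based page number for pagination
): Enquiry[] {
  const term = searchTerm.trim().toLowerCase();

  return enquiries
    // The search box acts as a filter over existing enquiries.
    .filter((e) => term === "" || e.title.toLowerCase().includes(term))
    // Most recently created enquiries appear at the top.
    .sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime())
    // Pagination lets the user browse beyond the first page.
    .slice((page - 1) * PAGE_SIZE, page * PAGE_SIZE);
}
```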

When we tested the revised design with users, we learned:

  • Users felt at home with the revised flow of screens and thought the content groupings made sense. One user told us they recognised their bias towards the as-is tool, having used it for years, but still felt the new groupings were an improvement
  • Users responded well to the auto-suggest feature. They were able to use it to quickly identify the screening health codes they needed, and found it helpful to be able to search by name rather than code. One user responded on seeing the feature: “Ooh, that’s good, I like that”
  • Users knew they could refine the list of existing enquiries by toggling the search function

Users rated the overall experience as positive. The only feedback was about some content wording, and one section where users expected a cascading data reveal.

Design version 3

The second round of usability testing gave us high confidence that the design solved a lot of the original pain points. The team opted not to take it through another round of testing, choosing instead to use the feedback to refine the final design.

We addressed the cascading data expectation using branch logic. This is where we route users down their desired path based on the specific answers they select.

Branch logic to help route users to fields they need
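
As an illustration of that branch logic (the question, answers and screen names below are hypothetical, not taken from the real enquiry form), the next screen is chosen from the answer the user has just given, so fields that do not apply are never shown:

```typescript
// Screens the user might be routed to; names are illustrative only.
type Screen = "chooseDateRange" | "chooseTestResult";

// A hypothetical early question in the enquiry journey.
type ReportScope = "allRecords" | "recordsWithResults";

// Branch logic: the answer decides which screen (and which fields) comes next.
function nextScreen(scope: ReportScope): Screen {
  switch (scope) {
    case "recordsWithResults":
      // Show the test result fields only when the earlier answer makes them relevant.
      return "chooseTestResult";
    case "allRecords":
    default:
      // Otherwise route the user straight past them to the date range fields.
      return "chooseDateRange";
  }
}
```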

We also pre-selected fields wherever possible, to reduce the feeling that users must interact with everything.

Pre-selecting most fields so users don’t have to interact with them unless they need to

The outcome

The final design has been packaged alongside insights from the research documentation. Together, these form the minimum viable product (MVP) for developers to build.

The MVP design has 32 fields the user could potentially interact with, compared to 48 in the as-is tool.

The new tool will also guide the user to ensure they produce what they intended, unlike the old one, which didn’t.

We have high confidence that the design solution will meet users’ expectations and reduce a lot of their original pain points.

Once the MVP is built, we will learn from how users interact with it in their environment. The feedback we get will help us iterate the design further.

What did I learn?

This is probably the hardest project I have designed for so far, and it came with a lot of learning.

I learned what it means to lead a design sprint from start to finish while things change around you.

I learned that clinical professionals have traits that differ from citizen users, so our design should reflect that.

Finally, I learned that my design team are an awesome crew :D.

This project was challenging, and I couldn’t have delivered it without the wonderful support from my team who went into the dragons’ den with me.

So, I want to give a hearty shout out to our business analyst Will Hayward, our content designer Kirsty Brown, our user researcher Emma Holmes and my design mentor Becca Gorton for being by my side while we battled the dragon.

A celebratory image of my design team

