Patient Monitor Comparative Usability Evaluation

UX Researcher | Usability Testing | 2018-2019
Healthcare Human Factors was asked to conduct a comparative usability evaluation of two patient monitoring systems. I helped plan and conduct 28 usability sessions and analyze the resulting data. Results were used to inform a multi-hospital purchasing decision.

Timeline: 6 months

Skills: Usability evaluation, user interviews, ethnography

Role: UX Researcher

Team: 2 UX Researchers, 1 intern, and 1 PM; Procurement Project Team

Background

Patient monitors are used to continuously track patient vital signs such as ECG, heart rate, blood oxygen, and blood pressure. Using this information, clinical staff can better evaluate a patient's condition and make appropriate treatment decisions.

The medical technology procurement process plays an important role in patient safety: systems that are unintuitive or poorly suited to clinician workflows can lead to use errors and, ultimately, patient harm.

Problem

In 2019, my team at Healthcare Human Factors (HHF) was asked by a hospital network to conduct a comparative usability evaluation of two patient monitoring systems. Results would help multiple hospitals decide which patient monitor system to purchase.

Six Evaluation Components

Our team began by breaking down the evaluation into six main components:

1—Define Scope

We kicked off the project by facilitating a meeting with the entire procurement team, which included clinicians, engineers, technicians, and PMs. Together we defined the project scope.

2—Develop Scenarios

To identify tasks to include in the evaluation, my team shadowed clinicians throughout the hospital and observed how they interacted with patient monitors. We also conducted user interviews, asking clinicians which tasks were essential to their work with patient monitors. From this, we identified five main scenarios.

3—Recruit Participants

A total of 28 participants were recruited for the evaluation: a mix of ICU nurses, telemetry nurses, and anesthesiologists from units across the hospital, all of whom used patient monitors at least once per week.

4—Prepare for Testing

Three main test environments were identified: the operating room, the ICU, and the telemetry unit. The testing labs were set up to resemble these environments, and ambient noise was played in the background to add realism.

5—Evaluate the Device

Each test session had one participant and included orientation, training, usability testing, a debrief interview, and questionnaires. My colleague and I alternated between the facilitator and note taker roles. The facilitator acted as a nurse and stayed in the same room as the participant, while the note taker observed remotely from another room. To reduce recency bias, the order in which the systems were tested was counterbalanced across participants, as sketched below.
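As an illustration of the counterbalancing step, here is a minimal sketch (not our actual study tooling; the system names are placeholders, since the real systems are under NDA) of how an alternating order assignment can be generated:

```python
from itertools import cycle

# Hypothetical labels; the actual systems are under NDA.
SYSTEMS = ("System A", "System B")

def counterbalanced_orders(n_participants):
    """Alternate which system each participant tests first, so neither
    system systematically benefits from being evaluated last."""
    orders = cycle([(SYSTEMS[0], SYSTEMS[1]), (SYSTEMS[1], SYSTEMS[0])])
    return [next(orders) for _ in range(n_participants)]

# With 28 participants, 14 test System A first and 14 test System B first.
for pid, (first, second) in enumerate(counterbalanced_orders(28), start=1):
    print(f"P{pid:02d}: {first} first, then {second}")
```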

Both quantitative and qualitative data relating to usability were collected:

Quantitative Data

  • Number of critical, serious, and moderate usability issues
  • Post-Study System Usability Questionnaire (PSSUQ)
  • Task Load Index (NASA-TLX)
  • Direct comparison survey (e.g. Which system do you prefer overall?)

Qualitative Data

  • Observed critical, serious, and moderate usability issues
  • Subjective feedback from participants (e.g. Did you have any difficulty using the system? Was anything confusing?)
  • Subjective feedback on how issues affect patient safety

6—Analyze and Report

Following testing, we synthesized data from all 28 sessions. Quantitative data were used to calculate usability scores for each system, and qualitative data were used to understand the severity of the underlying issues as they relate to patient safety.
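To illustrate what that score calculation can look like, here is a hedged sketch with invented data (the record structure and values are hypothetical, the item lists are trimmed for brevity, and none of this reflects the actual study results): PSSUQ overall scores are the mean of 7-point item ratings (lower is better), and raw NASA-TLX is the unweighted mean of six 0-100 subscales.

```python
from statistics import mean

# Hypothetical session records; real data is under NDA, and the real
# PSSUQ has 16 items (shortened here for brevity).
sessions = [
    {"system": "System A", "pssuq": [2, 3, 2, 1, 2], "tlx": [30, 40, 25, 35, 20, 45]},
    {"system": "System A", "pssuq": [3, 2, 2, 2, 3], "tlx": [35, 30, 20, 40, 25, 50]},
    {"system": "System B", "pssuq": [5, 4, 6, 5, 4], "tlx": [70, 65, 60, 75, 55, 80]},
]

def pssuq_score(items):
    # PSSUQ overall score: mean of 7-point item ratings (lower = better usability).
    return mean(items)

def raw_tlx(subscales):
    # "Raw" NASA-TLX: unweighted mean of the six 0-100 workload subscales.
    return mean(subscales)

by_system = {}
for s in sessions:
    by_system.setdefault(s["system"], []).append(
        (pssuq_score(s["pssuq"]), raw_tlx(s["tlx"]))
    )

for system, scores in sorted(by_system.items()):
    print(
        f"{system}: mean PSSUQ = {mean(p for p, _ in scores):.2f}, "
        f"mean raw TLX = {mean(t for _, t in scores):.1f}"
    )
```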

Findings and recommendations were shared with the procurement team through a 1-hour presentation and a 130-page report.

2020 HFES Symposium Poster

Specific findings from the evaluation are under an NDA and thus cannot be disclosed. However, my team prepared a poster for the 2020 HFES Symposium comparing usability findings from this study with those from a similar study conducted by the HHF team in 2009. The poster highlights usability issues that were common to both studies.

Click here to see the PDF version. I'd love to get in touch to discuss our testing methodology further!

Reflections & Learnings

This was my first time conducting a comparative usability study of this scale, and I made many mistakes. However, those mistakes made me a better test facilitator and allowed me to successfully execute two subsequent procurement studies.

Some of my learnings are as follows:
