Task ease metric to focus usability improvements


Gathering task ease ratings in usability testing showed which tasks were difficult. Participants also commented on why they gave those ratings. The Single Ease Question (SEQ) proved valuable for collecting task-level insights and focusing usability improvements.

Why measure task ease #

Task ease is a key metric for evaluating user experience [1]. Ratings can be compared relative to one another or against a benchmark. Learning which tasks are more difficult helps focus efforts on improving key journeys and the overall user experience.

The Single Ease Question #

While I have not tried other post-task questionnaires, I found the SEQ to be short, easy for participants to respond to, easy to administer, and easy to score [2]. Each of these factors is critical for measurement.

A task ease question should be short. Participants’ time is precious in think-aloud usability testing, and there are several tasks to attempt.

The SEQ works well because it is “captured directly after a user has attempted a task.” Providing a scale also makes it easy for participants to gauge whether a task felt extremely difficult, extremely easy, or somewhere near the midpoint.

As “you can administer the SEQ in any questionnaire software, on paper or aurally” [3], inserting the prompt makes it easy for a facilitator to close out one task before starting the next.

During usability testing sessions, I prompted participants with the SEQ after each task attempt.

The prompt I show participants while verbally asking: “Overall, how difficult or easy did you find this task? On a scale from 1 to 7, where 1 is very difficult and 7 is very easy.”
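
If you prefer to log responses digitally rather than on paper, a small script is enough. The sketch below is a minimal, hypothetical example: the participant ID, task ID, and CSV file name are placeholders, not part of the study.

```python
# Minimal sketch: capture a Single Ease Question (SEQ) rating after a task attempt.
# Participant IDs, task IDs, and the CSV file name are hypothetical placeholders.
import csv
from datetime import date

SEQ_PROMPT = ("Overall, how difficult or easy did you find this task? "
              "(1 = very difficult, 7 = very easy): ")

def ask_seq(participant_id: str, task_id: str) -> int:
    """Prompt for an SEQ rating and validate that it falls on the 1-7 scale."""
    while True:
        raw = input(f"[{participant_id} / {task_id}] {SEQ_PROMPT}")
        if raw.isdigit() and 1 <= int(raw) <= 7:
            return int(raw)
        print("Please enter a whole number from 1 to 7.")

def record_seq(participant_id: str, task_id: str, rating: int,
               path: str = "seq_ratings.csv") -> None:
    """Append one rating to a CSV so ratings can be averaged per task later."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), participant_id, task_id, rating])

if __name__ == "__main__":
    rating = ask_seq("P01", "Task B")
    record_seq("P01", "Task B", rating)
```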

Task ease ratings #

After three rounds of usability testing, I averaged the ease ratings for the five tasks.

A bar graph of the average ease ratings for the five tasks. The data is summarized in the following table:

| Task | A | B | C | D | E |
| --- | --- | --- | --- | --- | --- |
| Ease rating | 5.4 | 4.5 | 5.7 | 4.9 | 5.2 |

Task B had the lowest average rating, at 4.5. Task D had the second-lowest score at 4.9.

The average ease ratings range from 4.5 to 5.7. Tasks A and C received easier ratings of 5.4 and 5.7, respectively. Task B, with an average of 4.5, was perceived as the most difficult task.
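
Averaging the ratings is simple arithmetic per task. The sketch below shows one way to do it; the raw per-session scores are invented placeholders, and only the idea of averaging per task comes from the study.

```python
# Minimal sketch: average per-task SEQ ratings collected across usability sessions.
# The raw scores below are placeholder values, not the study's actual data.
from statistics import mean

seq_ratings = {
    "Task A": [5, 6, 5, 6],
    "Task B": [4, 5, 4, 5],
    "Task C": [6, 6, 5, 6],
    "Task D": [5, 5, 4, 6],
    "Task E": [5, 6, 5, 5],
}

average_ease = {task: round(mean(scores), 1) for task, scores in seq_ratings.items()}

# Lowest (most difficult) tasks print first.
for task, avg in sorted(average_ease.items(), key=lambda item: item[1]):
    print(f"{task}: {avg}")
```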

Comparing task ease ratings with a benchmark #

> “The average SEQ score is around 5.5 … above the nominal midpoint of 4 but is typical for 7-point scales.”
>
> Jeff Sauro, PhD. “10 Things To Know About The Single Ease Question (SEQ).” MeasuringU, measuringu.com/seq10/

Although this SEQ average is not industry-specific [4], it helps to understand how our product’s task ratings stack up. How does our data compare with this benchmark?

| Task | A | B | C | D | E |
| --- | --- | --- | --- | --- | --- |
| Ease rating | 5.4 | 4.5 | 5.7 | 4.9 | 5.2 |
| Deviation from the 5.5 benchmark | -0.1 | -1.0 | +0.2 | -0.6 | -0.3 |

Task B had the lowest average rating at 4.5, and the greatest deviation from the benchmark SEQ score.
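
Computing the deviations is a one-line subtraction per task. A minimal sketch, using the averages reported above and the 5.5 benchmark from MeasuringU:

```python
# Minimal sketch: compare average SEQ ratings with a benchmark of 5.5.
# Averages are the ones reported above; the 5.5 benchmark comes from MeasuringU.
BENCHMARK = 5.5

average_ease = {"A": 5.4, "B": 4.5, "C": 5.7, "D": 4.9, "E": 5.2}

deviations = {task: round(avg - BENCHMARK, 1) for task, avg in average_ease.items()}

# Largest shortfalls print first.
for task, dev in sorted(deviations.items(), key=lambda item: item[1]):
    flag = "below benchmark" if dev < 0 else "at or above benchmark"
    print(f"Task {task}: {dev:+.1f} ({flag})")
```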

I reported these task ease ratings and their deviation from the benchmark. Reporting these metrics helped stakeholders focus on my recommendations for improving Task B’s usability.

Report task ease together with red route priorities #

Ease ratings capture how participants perceived each tested task. Once your product’s top tasks [5] or red routes [6] have been identified, combine them with task ease ratings to clarify priority improvements, as in the sketch below.
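
As an illustration, the following sketch ranks tasks by a hypothetical red-route priority and then by ease rating, so critical journeys that also scored as difficult surface first. The priority values are invented; only the ease averages come from the testing rounds above.

```python
# Minimal sketch: rank candidate improvements by combining red-route priority with SEQ averages.
# Red-route priorities (1 = most critical journey) are invented for illustration.
red_route_priority = {"A": 2, "B": 1, "C": 4, "D": 3, "E": 5}
average_ease = {"A": 5.4, "B": 4.5, "C": 5.7, "D": 4.9, "E": 5.2}

# Sort by priority first, then by lowest ease rating, so critical-and-difficult tasks come first.
improvement_order = sorted(average_ease, key=lambda t: (red_route_priority[t], average_ease[t]))

for task in improvement_order:
    print(f"Task {task}: red-route priority {red_route_priority[task]}, ease {average_ease[task]}")
```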

Summing up #

The Single Ease Question (SEQ) is both a useful perception metric and a rich diagnostic instrument. In addition to the self-reported quantitative ratings, the SEQ uncovers qualitative feedback on why some tasks are more difficult. This makes it easier to discover problems and recommend solutions to improve usability. Reporting insights from the SEQ analysis together with top tasks or red routes empowers product owners to decide on improvement priorities.

Footnotes #

  1. Turner, N. (2018). 6 key UX metrics to focus on. https://neilturneruxm.medium.com/5-key-ux-metrics-to-focus-on-62656873b4c3 ↩︎
  2. Sauro, J. (2010). If You Could Only Ask One Question, Use This One. https://measuringu.com/single-question/ ↩︎
  3. Sauro, J. (2012). 10 Things To Know About The Single Ease Question (SEQ). https://measuringu.com/seq10/ ↩︎
  4. Brown, M. (2025). SEQ vs. SUS (Video). https://www.nngroup.com/videos/seq-vs-sus/ ↩︎
  5. McGovern, G. (2022). Top Tasks: To Focus On What Matters You Must De-Focus On What Doesn’t. https://www.smashingmagazine.com/2022/05/top-tasks-focus-what-matters-must-defocus-what-doesnt/ ↩︎
  6. Travis, D. (2014). How red routes can help you take charge of your product backlog. https://www.userfocus.co.uk/articles/prioritising-functions.html ↩︎
