In January 2018, I began my new role as Digital User Experience Manager in the Digital Experience team at State Library Victoria. My first task was to map the current experience of joining the Library, both offsite and onsite. The result was a service blueprint exposing the inefficient and painful internal processes behind how the business delivers membership.
Short of emotional data and user pain points, I relied on my heuristic review of the artifacts and backstage processes, as well as information gathered from internal sources including:
conversations with key colleagues
YouTube channel content
Google Analytics reports
Analysing and synthesising the data at hand, I created the following map, a service blueprint, not a customer journey map.
Where journey mapping exposes the end-to-end front stage experience of your customer, blueprinting exposes the surface-to-core of the business: the backstage and behind-the-scenes of how you deliver and operate, tied back to the customer’s experience.
Whether onsite or offsite, people started the join-the-Library task either self-directed or staff-mediated. Some arrived via a Register link in the website masthead; others were directed to a dedicated computer at a staffed registration desk.
Users completed a web form that lacked optimisation for touch devices: it took 40 seconds longer to complete on a mobile than on a desktop device! Thankfully, the HTML and server-side technologies are bespoke, giving us the freedom to design, develop, test and iterate mobile form improvements and more over the following months.
One question on the form asked people how they’d like to receive their Library card: at the Library; or by post. The vast majority, 77%, chose to receive their card by post. Choosing by post means:
traveling to the Library is unnecessary
finding your way through a complex building to a registration desk is avoidable
interacting with staff is not required
By post is slower to deliver and is effectively a self-service way of receiving your Library card. This option also required “great lengths to patch it together”: intensive manual processing behind the scenes.
At the Library
Only 23% chose to present identification to staff at the registration desk in order to receive their Library card. Staff would retrieve a plastic card from stock, make record adjustments in a computer system, then issue the card.
After removing spam registrations, and resolving incomplete or incorrect addresses, staff would:
Retrieve a plastic card;
Make record adjustments in a computer system;
Match the card with the corresponding named and addressed envelope;
Pack in a welcome letter.
Then batches were taken to dispatch, where, on business days, Australia Post would commence delivery.
Members who chose by post received their card and a generic welcome letter. No longer was the letter personalised. It wasn’t even customised: members residing outside Victoria got the same messages about accessing e-resources – a privilege non-Victorians are ineligible for!
Once people found their way to the registration desk, joining onsite took only a few minutes. For Victorians receiving their card in the post, it took between 2 and 6 days before they could begin accessing member benefits.
After sharing the blueprint
This service blueprint indicates “inefficient and painful internal processes” surrounding postal delivery of member cards. Five months after sharing the blueprint, in June 2018, I led a project which would bring about:
Satisfaction is measured through the question “Overall, how satisfied or dissatisfied are you with [product]?”
(Müller & Sedley, 2014)
For the [product] name we used Library Search. As Müller and Sedley recommend, I maintained the neutral wording of the question text and used a 7-point scale “to optimize validity and reliability, while minimizing the respondents’ efforts”. However, the survey tool I chose doesn’t allow labelling every point on an opinion scale, only the mid-point and the polar extremes. It also numbers each point on the scale. I accepted these departures from how the authors constructed their response scale.
Other best practices adhered to in my implementation include:
minimizing order bias by displaying scale items horizontally and with equal spacing
minimizing the effect of satisficing by labeling the midpoint “Neither satisfied nor dissatisfied” (instead of “Neutral”)
allowing for a more natural mapping to how respondents interpret bipolar constructs by listing the negative extreme first
As in the authors’ case study, the satisfaction question is the only mandatory question. This ensures responses meet the primary objective: to track changes in users’ attitudes and to associate those shifts with changes in the product.
Likely to recommend question – not included
Where there are competitors and alternatives, it may make sense for Google to ask “How likely are you to recommend [product] to a friend or colleague?” However it is redundant to ask a Net Promoter Score question in the context of users of a library discovery system.
Open-ended questions gather qualitative data
To gather qualitative data about users’ experiences with a given product, HaTS also includes two open-ended questions.
(Müller & Sedley, 2014)
Müller and Sedley found “that asking about experienced frustrations and needed new capabilities in the same question increased the response quantity … and quality … and minimized the analysis effort as compared to using two separate questions”. Our implementation substituted the word features for capabilities in an effort towards plain language. The survey tool I chose only shows one line at a time, and has a less intuitive instruction of SHIFT + ENTER to make a line break. These are not ideal, as they may have resulted in shorter responses than a large multi-line text box would invite.
Adding “(Optional)” at the beginning of the question maintains the number of responses to these questions while increasing response quality.
To better identify opportunities, the question of frustrations is presented first before asking about areas of appreciation.
Satisfaction with specific tasks
“HaTS also assesses different components of the user experience, in particular, … satisfaction with product-specific tasks.”
(Müller & Sedley, 2014)
To ensure reliable satisfaction scoring, HaTS asks respondents to first select tasks they have attempted over the last month. To avoid response order effects, the order of tasks should be randomised across respondents.
Using logic flows, only selected tasks appear in the subsequent satisfaction scoring questions. The survey tool I chose doesn’t have a conditional grid matrix to score several tasks at once. Instead, each selected task is presented one at a time. This avoids overwhelming respondents.
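The flow above — randomise the task list per respondent, then carry only the selected tasks forward to scoring — can be sketched in plain JavaScript. This is a hypothetical illustration, not the survey tool’s actual logic; the task names are taken from the baselines reported later in this post.

```javascript
// Tasks offered in the "which have you attempted?" question.
const TASKS = [
  "access full text",
  "research a general topic",
  "download a journal article",
  "look for a specific book or journal article",
  "download an ebook",
  "request a book",
];

// Fisher-Yates shuffle: randomises task order across respondents to
// avoid response order effects. Returns a new array; input is untouched.
function shuffle(items) {
  const result = items.slice();
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

// Only the tasks a respondent selected flow on to satisfaction scoring,
// presented in that respondent's randomised order.
function tasksToScore(selected) {
  return shuffle(TASKS).filter((task) => selected.includes(task));
}
```

In the real survey this branching is handled by the tool’s logic flows rather than code, but the principle is the same: no respondent scores a task they didn’t attempt.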
Respondent characteristics – not included
HaTS can be configured to ask users “to self-report some of their characteristics in the context of using the product being measured.” For example, “In the last month, on about how many days have you used [product]?” Rather than directly asking respondents about their product usage, “it is preferred to pipe data directly into the survey database.” However, such data is lacking from products that don’t require authentication. At the time of implementation, authentication was not required to use Deakin Library Search. Besides, this was our first time collecting masses of data on arguably the most important tool in a digital library. Layering on user data felt overwhelming and overkill for an academic library.
Intercept survey tool
The tool I chose for our HaTS was Typeform. Unlike its competitors, Typeform has a more conversational feel, offers easy-to-create logic jumps, and works well on every device.
However, while Typeform works great on mobile devices, configuring the invitation to display on the mobile view of Library Search proved impossible.
Invitation on search results page
A browser cookie records whether someone takes the survey or hides the invitation. This means they won’t see another invitation for 12 weeks in the same browser and device. This is the best we could do to avoid over-sampling issues and effects of survey fatigue. When using public devices such as on campus computers, repeat invitations may be noticed. (Later, Single Sign On (SSO) for Library Search was scheduled for implementation. Requiring authentication could also enable random sampling from the entire user base.)
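The 12-week suppression rule can be expressed as a small pure function. This is a sketch under assumptions: in practice the last-interaction timestamp would be read from the browser cookie mentioned above, and the function name is mine, not the implementation’s.

```javascript
// 12 weeks of suppression after someone takes the survey or hides the invite.
const SUPPRESSION_DAYS = 12 * 7;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// lastInteractionMs: epoch milliseconds of the last survey interaction
// recorded in the cookie, or null if no cookie exists in this browser.
function shouldShowInvitation(lastInteractionMs, nowMs) {
  if (lastInteractionMs === null) return true; // never seen the invitation
  const elapsedDays = (nowMs - lastInteractionMs) / MS_PER_DAY;
  return elapsedDays >= SUPPRESSION_DAYS;
}
```

Because the state lives in a per-browser cookie, the rule breaks down on shared campus computers, which is exactly the over-sampling caveat noted above.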
Take our survey!
I designed the Typeform questionnaire to launch in a pop-up modal. This maintains context of the product being evaluated.
After the initial peak in August, the frequency of responses per month drops off quickly. The main reason for fewer responses in subsequent months is that people who had either taken the survey or dismissed the invitation would not see another invitation for 12 weeks. The academic calendar also causes seasonal variations of usage. Deakin University trimester 2 exams end by late October. Based on these factors, I expect the next surge of responses to coincide with early trimester 1 in 2018. I recommend automating survey data storage. For instance, you can now send data from a Typeform PRO account directly to Airtable.
Satisfaction scale data
Over the four months, I gathered data to establish baselines for satisfaction overall and with product-specific tasks. The satisfaction scale runs from extremely dissatisfied to extremely satisfied, numerically coded from 1 to 7.
Overall satisfaction scored an average of just over four, the mid-point of the scale. The general sentiment towards Library Search is neither satisfied nor dissatisfied. Analysing qualitative responses might hint at where there is room for improvement.
Monitoring satisfaction trends with product-specific tasks is critical to inform product decisions. The following tasks and their baseline satisfaction are:
access full text: 4.5
research a general topic: 4.6
download a journal article: 5.1
look for a specific book or journal article: 4.5
download an ebook: 4.4
request a book: 5.7
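A minimal sketch of how such baselines fall out of the coded responses — the mean of the 1–7 scores for each task, rounded to one decimal place. The function name and sample data are illustrative, not from the actual analysis.

```javascript
// Baseline satisfaction: mean of responses coded 1-7
// (1 = extremely dissatisfied, 7 = extremely satisfied).
function baseline(responses) {
  const sum = responses.reduce((total, score) => total + score, 0);
  return Math.round((sum / responses.length) * 10) / 10; // one decimal place
}
```

Tracked over time, a shift in these per-task means is what gets associated with changes in the product.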
Analysing qualitative feedback
Digging in to responses of what people find frustrating has been most fruitful. Collaborating with colleagues in the Digital Library & Repositories team, I clustered similar pain points together to build themes. Analysis quickly led to actionable insights such as:
session expires before activities are complete
book reviews (rather than books themselves) were dominating results
confusing and overwhelming interface
Putting research into action
Armed with user research evidence and working with the vendor, these issues could be addressed. A session keeper app was developed specifically for Deakin to resolve the annoying timeout issue. Another new enhancement, excluding book reviews from search results by default, resolved the issue of reviews cluttering results pages.
Want to start tracking happiness in your digital product?
Have you used intercept surveys for happiness tracking in a digital product? Tell us about it.
Would you like to start tracking happiness and learn how to optimise user experiences of your digital product? Would a database template help? Leave a comment.