A top-tasks usability test of FindHelp with 5 search intermediaries who connect people in need with resources in rural and urban settings.
Role: UX Researcher
Context: Class project, independent work. 14-week duration (Spring 2021)
In this pilot study, I tested FindHelp with five people for whom acting as a search intermediary was a major part of their job. The study was designed to address the following research questions about FindHelp’s search engine results page (SERP) and search intermediaries’ experience with the search engine as a whole:
RQ1. Are FindHelp’s search engine results trustworthy to the search intermediaries?
RQ2. How do search intermediaries evaluate FindHelp for quality of information?
RQ3. Do search intermediaries believe that FindHelp would make routine search tasks easier for them?
RQ4. How much do search intermediaries trust that the directory results would meet the client’s needs?
Test Design. This study tested a single search system with a within-subjects design. The independent search session was scheduled in the morning (between 8:00 am and 11:00 am CST), and the afternoon interview was scheduled at least 3 and up to 6 hours later, to imitate the everyday work context in which these questions arise. Participants were required to have at least some domain expertise as search intermediaries in a social work context but were not required to have any prior experience with FindHelp. All questions were simulated work tasks.
The afternoon interview had three parts. First, I verified that the work tasks were routine for the participant, asked what their default tool was for answering these types of questions, and addressed any questions about the test that had come up. Second, we re-watched the participant’s screen recording together in a stimulated-recall session, in which I probed the participant to explain certain behaviors and to clarify what they were looking for and whether they found it. In this portion, the participant also rated FindHelp’s search engine results page on a scale of 1–5 for how easy it was to begin the search and how easy it was to choose the most appropriate result, and indicated whether they would prefer FindHelp or their default method if they had to do the task again. Third, I replicated their exact search for Task 2 on a live version of FindHelp and asked them questions about the user interface, missing information, and the trustworthiness of FindHelp.
Each participant completed three tasks: an easy task, a medium task, and a difficult task.
3. Data Analysis
I analyzed the data both qualitatively and quantitatively. Metrics collected were: satisfaction with the system (linear rating scale), time on task, and a click-by-click transcript of each participant’s search path.
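For a pilot of this size, the quantitative side amounts to simple descriptive statistics over the per-session metrics. The sketch below shows one way to summarize them; all participant IDs and values are illustrative placeholders, not the study's actual data.

```python
from statistics import mean, median

# Hypothetical session records for five participants.
# These numbers are illustrative only, not the study's results.
sessions = [
    {"participant": "P1", "satisfaction": 4, "time_on_task_s": 95},
    {"participant": "P2", "satisfaction": 5, "time_on_task_s": 120},
    {"participant": "P3", "satisfaction": 3, "time_on_task_s": 210},
    {"participant": "P4", "satisfaction": 4, "time_on_task_s": 150},
    {"participant": "P5", "satisfaction": 2, "time_on_task_s": 300},
]

def summarize(sessions):
    """Return descriptive statistics for a small pilot sample."""
    sats = [s["satisfaction"] for s in sessions]
    times = [s["time_on_task_s"] for s in sessions]
    # With n = 5, report medians alongside means: one slow or
    # dissatisfied participant can pull the mean noticeably.
    return {
        "mean_satisfaction": mean(sats),
        "median_satisfaction": median(sats),
        "mean_time_s": mean(times),
        "median_time_s": median(times),
    }

print(summarize(sessions))
```

Reporting the median next to the mean is a deliberate choice here: with only five participants, a single outlier session can dominate the mean, so the pair together gives a more honest picture.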
Qualitatively, I was interested in how easy it was to start the search, how easy it was to select the results participants would share with clients, which pieces of information on the page they used to assess a result’s credibility, whether there were any points of frustration or skepticism, and whether they preferred FindHelp or their usual search engine for this task.
4. Findings
The pilot test generated new questions about search intermediaries’ information behavior beyond this specific search engine. Some participants shared how they felt when these information needs came up, how they gathered feedback about the types of services they were recommending, and how they reasoned about their own reputation with clients, and their organization’s, for giving reliable information in these areas.
I learned that participants can enjoy FindHelp and find it useful in a user test, but that a search engine must accomplish something more before it becomes the default method. I am curious what that is. I hypothesize that it is partly a matter of usability and user interface, but that people are also weighing dynamics of trust, reputation (of both the organization and the individual search intermediary), and change, or the expectation of change.
5. Additional Work
Please reach out to me if you would like more information about the other portions of this study, which included recruiting and qualifying participants, conducting a heuristic evaluation of FindHelp, writing a moderator script, getting buy-in from stakeholders, creating relevant tasks, moderating sessions, and analyzing and presenting my conclusions.