by Shruthy Sreepathy
I work as a UX researcher and designer for the ARTECA project at ArtSciLab. My primary responsibilities are to lead usability testing for the ARTECA website, analyze and report the test results, and implement the resulting changes.
What is usability testing and why do we do it? Usability testing is an observational research method. It yields qualitative input from users and lets designers converse with and observe them. Watching participants' emotions and reactions gives a lot of insight and helps us understand user expectations. This points us to areas of improvement in the website design and sparks ideas for new features.
We follow a simple, low-cost usability test process described by Steve Krug in his book Rocket Surgery Made Easy. Typically, three people are involved in a usability test:
Participant – the test user, who falls within our target audience
Proctor – the person who drives the test session and interacts with the participant
Observer – a silent spectator who takes notes on the test, such as the participant’s reactions and whether tasks are completed or fail.
Before the test session begins, we ask participants which operating system and web browser they are comfortable with. This is important because when the test environment itself is not distracting the participant, we can more safely assume that any problem they encounter during the test is caused by the website.
Joel Ewing or I proctor the test sessions. During the session, I obtain the participant’s consent to record the entire session for later analysis and give them all the necessary instructions. Once they feel comfortable with the setup, I give them a few tasks to complete; along the way, I listen to their comments, ask follow-up questions, and watch how they complete each task. The observer takes notes on what he sees – things like the participant’s emotions and behavior, or where they struggle or encounter problems while completing tasks.
Sometimes proctoring is challenging. At times participants talk less or go quiet for a stretch, and I politely ask them what they are thinking or planning to do next. Sometimes they drift away from the task and start suggesting or talking about other things; in those moments I gently steer them back to the task without discouraging them from thinking out loud. Sometimes they fail to complete a task and tend to blame themselves. It is important to remind participants that we are testing the website, not them, and that it is not their fault.
At the end of each test we analyze the results, write a report, and suggest improvements. For example, one problem participants faced was with the search feature. A common pattern we observed was that when participants did not understand the search results, they usually lost confidence in the search feature and quit searching. A few participants tried advanced search and found it overwhelming. When users did not see ARTECA branding on the search page, they wondered whether they were still searching ARTECA content. Observing their emotions and listening to their comments reveals how they truly feel about a feature while they are using it.
So far we have tested five individuals: a librarian, two undergraduate students, a graduate student, and a PhD student. Every time we run a usability test we discover something new. Once we had gathered a good number of test results, I suggested an affinity diagramming activity to find patterns across them. Affinity diagramming showed that every one of our test participants had a problem with the search feature. That evidence helped us point out, and convince the team, that the search feature should be improved. Sounds pretty cool, doesn’t it?
Shruthy Sreepathy is a Master’s student in Human-Computer Interaction in the School of Behavioral and Brain Sciences. She graduates in May 2018.