Mark Thompson-Kolar’s summer was a little different from most. He spent it at Cengage Learning as an intern doing usability testing, and he shared his experience and what he learned from it with the Refresh Detroit audience.
Mark isn’t your typical student. He’s been a newspaper editor, designer and manager for 18 years at daily news media companies in Michigan and Indiana. He’s currently a master’s student at the University of Michigan in the School of Information studying information architecture, interface usability and content strategy.
Starting in May, Mark was part of a team consisting of one other intern and a User Experience Manager. The team’s focus was to always provide actionable and practical recommendations on how to improve products. Gale, a division of Cengage Learning, creates and maintains over 600 databases that are published online. The number one point on Gale’s mission statement is “Focus on the End-User”.
It was also clear that part of the team’s purpose was to further build the usability culture within the company. So the three of them tried to be everywhere that development was happening. A large part of what the team did was remote usability studies and heuristic evaluations.
The heuristic evaluations consisted of critiquing each product and applying what they knew about usability and web best practices to make recommendations. The approach is typically done early in the development cycle as an inexpensive way to knock down problems before launch.
In this case, the team members would spend 3-4 hours examining a product that was already available to the public. They’d identify a problem, show which of Nielsen’s 10 General Principles it violated and try to cite a solution for it. As Mark became more familiar with Gale’s products, he found he could point out features in different products that could help with the one he was evaluating. He emphasized that it’s important the recommendations don’t come across as a “rip” document. Their tone shouldn’t set the development team against the usability team.
While the heuristic evaluations were illuminating, the real emphasis and “real fun” for the team was in the weekly usability studies. Over 14 weeks they performed nearly two dozen remote user tests on a variety of the company’s products. The team basically followed Steve Krug’s recommendations from his Rocket Surgery Made Easy book.
Each intern did a test per week. They’d set the goals for the test with the stakeholders, then write a script based on those goals that consisted of 5-7 tasks for the subjects to complete. Members of the web design, development, marketing, and content teams, along with executives, sales, and any other stakeholders, were invited to observe the tests. Sometimes they had up to 12 people observing a testing session.
“I found [usability testing] a lot of fun,” said Mark. “If I could do this full time all the time I’d do it…but trying to find users ate up a ton of time.” They’d recruit subjects who were generally reflective of the customers for the products. Mark found the friends-of-friends feature of Facebook a great tool; former co-workers of the team members were also a good source. To help with recruitment, subjects were offered a $25 Amazon gift card, though Mark thinks many of the subjects would have participated for free. The card was meant to be an incentive, but not so large that it would somehow bias the results. Sending reminders and arranging for backup subjects was also important.
The team used Cisco’s WebEx for the testing. It allowed the subject to be at home, at work, or almost anywhere during the testing. WebEx also worked well because it:
- Allowed the subject to take control of the moderator’s computer to access the products.
- Let the screen be projected so all those observing could see what was happening.
- Enabled observers in other cities and countries to watch and listen to the test.
- Allowed the voice conversation between the moderator and subject to be recorded.
They’d do three sessions in the morning. The moderator would guide each subject through the tasks while the observers took notes. During sessions it was important not to bias the subject by answering their questions. This wasn’t always easy, because both the observers and the moderators knew the products, and it’s natural to want to help.
After the sessions the usability team and the stakeholders present would immediately go into a larger room for a debrief that lasted about an hour. The rule was that anyone who had observed a testing session that morning could contribute up to three observed issues. The goal was to identify the top 10 issues and make actionable recommendations. The easiest possible problems to fix were given preference. “As it went forward the word spread that what we were doing was actually working, that we were coming up with very practical, workable, actionable bullet points,” said Mark.
There are potential objections to doing remote usability testing this way. For example, the studies aren’t statistically significant. Mark points out that while that’s true, it’s better than no testing at all. Another objection is that the subjects aren’t real customers. Mark’s reply to that is “the emphasis was on the usability, not so much on the content.” It wasn’t a marketing study where you’re trying to persuade the user to buy a product; the test was trying to determine whether the subject could use the product. An additional objection is that it’s a waste of time because whoever controls the project budget is going to control development as they see fit. Even so, at least the product manager or development lead has a list of what the testing uncovered, which gives them the ability to make decisions based on real information.
Usability is wherever the employees who care about it are. Mark noticed that there were a few people in each development area who were extremely interested in what the usability team was doing and participated. From these fans of usability the culture spread.
Mark said his experiences this summer really drove home the truth in the statement that “the users are not us.” Users run into problems in places where those who develop and work with the products wouldn’t expect them.
There were numerous questions and comments both during and after Mark’s excellent presentation. He clearly engaged the Refresh Detroit audience. His experience showed that usability testing, especially remote user testing, can be fun and provide extremely useful recommendations on how a company can improve its software and web products.
Slides from Mark’s presentation are available on SlideShare.