Dan Naden

Usability Testing: Four Hours Well-spent

When you are working on an online product, it’s easy to fall into the trap of thinking you have all the answers. You have Web analytics and click data; you are awash in data from the myriad tools you’ve artfully integrated. Your team of sharp developers, rigorous testers, and intuitive designers is second to none, all with brilliant insight into what should be built next. Undoubtedly, your founder or executive sponsor is a veteran of the industry. He or she has been in the throes of the market forever, experiencing changes and seeing dramatic shifts in what customers desire from your product or service. If anyone can predict the moves of a market, it’s usually the highest-paid person in the room.

Sound familiar? 

I experienced this firsthand at Hoover’s Online earlier in my career. As a fairly successful ‘dot com’, the service had a strong customer base, and the team crafted a well-researched roadmap. We seemed to check all the boxes for new features and functionality, with an aim toward growing market share and keeping competitors at bay.

With a new product launch on the horizon, things were status quo with the team: requirements documents were written, prototypes were being built, and details of the new features and functions were being discovered by a hungry sales team eager to sell something new. Everyone was itching to get this new release into the hands of our customers.

Then, out of the blue, a team member said: 

“I think we need to do some usability testing.”

The voice belonged to our usability researcher, an underrated resource who always spoke confidently, with the customer’s interest at the center of all decision-making.

The ‘usability’ proclamation was met with rampant, vocal opposition.

“There’s no time for that,” said the sales executive. “Let’s just get the new features out there so we can start selling.”

“We’ll do that later. We have a good sense of what the customers want. We can table the usability session for next launch,” said another company executive who always commanded the respect of many within the organization. 

The usability researcher spoke up again, stopping everyone in their tracks.

“Why don’t we put some of these new features and functions in front of our actual customers BEFORE we launch? Based on what they share, we can refine the offering, making it even more compelling for a wider audience. This worked incredibly well at my last company. Why don’t we give it a shot?”

After much discussion, the launch date was put on hold while we recruited some customers in Dallas, Texas for a usability session. The usability researcher took the lead in defining the goals for the session, the target audience, and the expected outcomes. For most of the company, this was brand new territory, and there was still some skepticism, but all were intrigued at what we could learn. 

Over the course of two days, we showed customers and potential prospects a new version of the Hoover’s Online service. Any preconceived notions or hunches we had about how and why people used the product were shattered. In our short-sighted heads, we thought we had ‘answers’ in the building, yet we were blind to most of the story. 

Here’s what we learned: 

Within Hoover’s HQ, we thought everyone knew and understood the different subscription levels (Lite, Pro, and Pro Plus) and the features and tools included in each level. Strangely, we believed customers actively researched our subscription levels and knew what upgrading to the next level meant in terms of new functionality. Wrong. Of the 10 ‘usability interviews’ I watched, not one person knew the level of subscription they had with Hoover’s Online. 

The comments about subscription levels from customers hit us right in the mouth: 

“Oh, I don’t know about subscriptions. I just log in and it works. I think someone in purchasing picked our level for us.”

“I never think about that. I just want to get to the information I need.”

“I didn’t even know there were levels. I just thought we got access to everything.”

This feedback helped us rethink how customers purchase and consume the product. It also informed how we built the ‘trap doors’, ‘mousetraps’, and ‘roadblocks’ for features not included in a customer’s subscription level. Customers understood why we needed to upsell features from higher subscription levels, yet they hated running into those roadblocks, classifying them as ‘a waste of time.’ One savvy user recommended that we simply log the number of times a user runs into an inaccessible feature and surface it in a usage report for the admin at their company – usually the person championing any upgrades for a product.
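That savvy user’s idea is simple enough to sketch. The snippet below is a minimal illustration (not Hoover’s actual code; all names are hypothetical): instead of only showing a roadblock when an account hits a locked feature, count the attempt so an admin report can show where upgrade demand actually is.

```python
from collections import Counter


class FeatureGate:
    """Hypothetical sketch of the 'log blocked features for the admin' idea."""

    def __init__(self, entitlements):
        # entitlements: account -> set of features in its subscription level
        self.entitlements = entitlements
        self.blocked_hits = Counter()  # (account, feature) -> blocked attempts

    def can_use(self, account, feature):
        """Return True if the feature is included; otherwise log the attempt."""
        if feature in self.entitlements.get(account, set()):
            return True
        # The 'roadblock' moment: record it rather than just annoying the user.
        self.blocked_hits[(account, feature)] += 1
        return False

    def admin_report(self, account):
        """Locked features this account tried to use, most-requested first."""
        return [(feat, n) for (acct, feat), n in self.blocked_hits.most_common()
                if acct == account]


gate = FeatureGate({"acme": {"company-profiles"}})
gate.can_use("acme", "company-profiles")  # included in the subscription
gate.can_use("acme", "advanced-search")   # blocked, logged
gate.can_use("acme", "advanced-search")   # blocked again
print(gate.admin_report("acme"))          # [('advanced-search', 2)]
```

The design choice mirrors the user’s point: the roadblock itself is a dead end for the person who hits it, but the count of hits is exactly the evidence the account’s upgrade champion needs.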

No matter how smart the folks in the building are, the real answers lie outside the building, within the hearts and minds of your customers. Usability testing, an oft-ignored tactic for gauging market sentiment and gathering real-world feedback, can be an incredible game changer if your team listens and incorporates what it learns into the product. The best sessions I’ve participated in were face-to-face, and they are incredibly high-value if you can nail the ‘3Ps’ of logistics: place, participants, and purpose. If budget is a concern, there are many Web tools that allow for effective remote usability testing.

Trust me – it never works to wing it. Consider some form of usability testing BEFORE you bring those prized features to market.
