
Confusion As A Usability Defect

As a software engineer, you should always be looking for the next defect, or at least for clues that something is wrong. The problem is that most developers have been trained to wait for feedback from the QA team. In many cases, the QA team is following a script or has been trained on how to use the system, so some of those clues get lost in familiarity.
Michael Bolton has an interesting post regarding confusion as an oracle:

James peered over his glasses. “When you’re confused,” he said, “that’s a sign that there’s something confusing going on. I gave you a confusing product to test. Confusion might not be fun, but it’s a natural consequence when you’re dealing with a confusing product.” James was tacitly suggesting that Jon’s confusion could be used as an oracle—a heuristic principle or mechanism by which we recognize a problem.

When the QA team is confused, that points to potential issues and should be used as a heuristic. The confusion could come from a few sources. First, the test script could be out of sync with the user interface in such a way that it is not obvious how the tester should proceed. This can happen with any application that is evolving throughout the development lifecycle. Second, the tester could be someone who has not been trained on the application. This can quickly be remedied by providing the appropriate information on how to use the application. It could also point to an incomplete test script, meaning someone new cannot execute it without additional instruction.
Another reason the QA team could be confused is that the application itself is confusing. This is not a good sign and should be seen as a warning to the development team. Most likely there is a usability defect in the application. On some teams, usability defects do not officially exist, and usability issues simply get added to the list of things to do. This is a big mistake when developing web applications. Almost a year ago I wrote about Krug’s usability rules:

The first two Krug rules of usability are very related:

  1. Don’t make me think – as far as is humanly possible, when I look at a web page it should be self-evident, obvious, self-explanatory.
  2. It doesn’t matter how many times I have to click, as long as each click is a mindless unambiguous choice.
I love the “don’t make me think” rule. If you design your application with this rule in mind, you will get past most usability problems. However, if you are a startup developing for a large user base, you may have a few more concerns. Mark Evans recently wrote about an “I get it” rule of thumb:

But if you push aside the entrepreneurial enthusiasm, a startup’s success prospects depend on a compelling idea and, as important, the ability to quickly get potential users to say “Yes, I get it”. This means being crystal clear what the service or product does, and the value propositions/benefits being delivered.

The product/service needs to fill a need or convince users it meets a need they didn’t know they had. Getting users on board has to be user-friendly and efficient. And the product/service has to delight.

If you are following these basic principles, your product should not be confusing. I am a big believer in making things simple for users, because it means they will enjoy (or at least not hate) using the system. This is obviously a good thing, but more importantly, users will look for other ways to use your application.
So, why should a “confusing” application have usability defects tracked? If an application is confusing, that means it is going to be difficult to use. When an application is difficult to use, people stop using it because they cannot see the benefits of using the system.
Usability is also hard to define, but if you look at Krug’s rules you will see that the user should not have to hunt for the right thing to do; it should be fairly obvious. But what about prior to production? You do not have typical users on the system, so how do you determine usability? One thing I have seen is that the length of the test script can be a good indicator that something is too difficult.
Let’s assume manual testing is being done. Ignoring the test case setup instructions, how many steps are required to test one feature? This is similar to the number-of-clicks heuristic, but it counts the number of actions a user must take to complete a task. You could pick a number like 7, but that is not entirely useful. I have used “why” as my metric of choice. If you ask “why do I need to complete this step?” and there is no obvious or definable reason, then you are likely introducing unnecessary steps into the task. If you can eliminate those steps, your application’s usability should improve, as the sketch below illustrates.
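To make the idea concrete, here is a minimal sketch, assuming you are willing to write the manual test script down as data. It is not from the original article; the TestStep and TestScript types, the feature, and the step wording are all hypothetical. Each step records the action and its “why”, and any step whose reason cannot be stated is flagged as a candidate for removal.

```java
import java.util.List;

// Hypothetical sketch: model each manual test step with the action the tester
// performs and the reason ("why") the step exists, then flag steps that have
// no stated reason as candidates for removal.
public class TestScriptReview {

    record TestStep(String action, String why) {
        boolean lacksReason() {
            return why == null || why.isBlank();
        }
    }

    record TestScript(String feature, List<TestStep> steps) {}

    public static void main(String[] args) {
        // Example script for an imaginary "Create a report" feature.
        TestScript script = new TestScript("Create a report", List.of(
                new TestStep("Log in as a standard user", "Reports require an authenticated session"),
                new TestStep("Open the Reports tab", "Entry point for the feature under test"),
                new TestStep("Re-enter the account number", ""), // no reason given
                new TestStep("Click 'New Report' and save", "The action being verified")));

        // The raw step count is the "number of actions a user must take" heuristic.
        System.out.printf("Feature '%s' takes %d steps%n", script.feature(), script.steps().size());

        // Any step without an obvious, definable reason is a likely source of
        // confusion and an unnecessary action for the real user.
        script.steps().stream()
                .filter(TestStep::lacksReason)
                .forEach(step -> System.out.println("No clear reason for: " + step.action()));
    }
}
```

A developer and a tester could walk through the flagged steps together during the review described next, either documenting a real reason for each one or cutting it from both the script and the workflow.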
To really make your application simpler, make sure your QA team starts questioning things at the beginning of the process. Someone writes the test script, and it could be the QA tester themselves. The developer and the tester should go through the test script and ask “why” for each step. This will proactively remove any stumbling blocks from the plan and result in a much more usable system by the time it passes QA testing.
Are there any tips or tricks you have for improving usability?
Reference: Confusion As A Usability Defect from our JCG partner Rob Diana at the Regular Geek blog.