Human-computer interaction is the study of how people interact with computers and of the extent to which computers are, or are not, developed for successful interaction with human beings. This interaction between people and computers is constantly studied and evaluated to improve the way technologies are designed and make them more user-friendly. There are many methods for measuring the quality of a particular interface; the most common are focus groups, think-aloud protocols, surveys/questionnaires, and usability tests. Another useful evaluation method, and the focus of this post, is the feature checklist.
Unlike many other evaluation methods, which test whether features work well and easily, feature checklists are used to determine whether certain features are used at all. A feature may pass all its tests and work perfectly, but that does not necessarily mean typical users will ever use it. Feature checklists identify which features are not being used, and then the reasons why.
Feature checklists, as the name suggests, are simply checklists that contain every feature of a system. Alongside the features are response categories that ask the user about the usage, knowledge, need, and information sources for each feature. These response categories prompt the user for specific information concerning the system's features. The following lists the questions to ask for each category:
- Usage: The main point is to ask people for specific quantitative estimates, e.g. “How many times did you perform a Save each hour during the last day you were word processing?”, not “Do you save infrequently, sometimes, often…?”. Remember that users’ memories are fallible, so try not to leave too long between performance and filling in a checklist.
- Check for knowledge: For each command, ask whether the user suspects/expects that such a command exists, knows it exists, and has ever used it. An issue here is whether to describe the function, name the command, or both.
- Check for need: You can ask whether they ever need this function, how frequently they currently need it, and how they think this corresponds to actual use, e.g. “On what proportion of the times when it would be useful did you invoke it?”
- Check for sources of information: You can also ask users to name the people they are most likely to chat with about, or hear comments from on, the commands and features of this interface. See if you can identify social clumps of usage/knowledge by correlating knowledge as measured by the checklist with these links between names.
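To make the four response categories concrete, here is a minimal sketch of how one checklist entry might be represented as data. All names here (`ChecklistEntry`, the field names, the example features) are hypothetical illustrations, not part of any standard checklist format:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one feature-checklist entry, covering the four
# response categories described above. Field names are illustrative.
@dataclass
class ChecklistEntry:
    feature: str                    # the command/feature as named to the user
    uses_per_day: int = 0           # usage: a specific quantitative estimate
    knowledge: str = "unknown"      # "suspected", "known", or "used"
    times_needed_per_day: int = 0   # need: how often the function would help
    info_sources: list = field(default_factory=list)  # people they discuss it with

# One respondent's completed checklist is just a list of entries:
respondent = [
    ChecklistEntry("Save", uses_per_day=20, knowledge="used",
                   times_needed_per_day=20, info_sources=["Alice"]),
    ChecklistEntry("Mail Merge", knowledge="suspected",
                   times_needed_per_day=2),
]
```

Keeping each category as its own field makes the later comparisons (e.g. need versus actual use) a simple per-entry calculation.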
Feature checklists may need to be drafted several times before a final version is settled on, but this process is very cheap compared to other types of evaluation. Once the final document has been created, the checklists are typically distributed to users with fairly advanced experience of the system being tested, who will be familiar with the system’s feature names and uses. The checklists are easy to understand and very quick to fill out, since the user is just checking boxes.
The data collected from the checklists will show how many features there are, how many are known by users, how many are used, and how many are needed. From this data it can be concluded whether a system has too many commands that are useless and merely distracting, or whether there are features that are integral to the system but are not being discovered by users because of some flaw in its design.
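The tallying step above can be sketched in a few lines. This is a hedged illustration, assuming each response has been reduced to simple known/used/needed flags per feature; the feature names and data are made up:

```python
from collections import defaultdict

# Illustrative responses: one dict per (user, feature) pair.
responses = [
    {"feature": "Save",       "known": True,  "used": True,  "needed": True},
    {"feature": "Save",       "known": True,  "used": True,  "needed": True},
    {"feature": "Mail Merge", "known": False, "used": False, "needed": True},
    {"feature": "Mail Merge", "known": True,  "used": False, "needed": True},
]

# Count, per feature, how many respondents know it, use it, and need it.
tally = defaultdict(lambda: {"known": 0, "used": 0, "needed": 0, "total": 0})
for r in responses:
    row = tally[r["feature"]]
    row["total"] += 1
    for key in ("known", "used", "needed"):
        row[key] += r[key]

# Features that users need but never use may be hard to discover;
# features nobody needs may just be clutter.
undiscovered = [f for f, row in tally.items()
                if row["needed"] > 0 and row["used"] == 0]
print(undiscovered)  # → ['Mail Merge']
```

The interesting output is exactly the gap the post describes: features with high need but zero reported use point to a discoverability flaw rather than a useless command.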
Overall, these checklists are simple and cheap to create and distribute, yet they offer a wealth of quantitative data drawn directly from real users.