Usability

Usability at a Glance: A Look Into Desirability Studies

When a user encounters an interface for the first time, they make split-second judgements about its usability. Even as they navigate the interface, that initial impression colors their experience, either positively or negatively. Users expect a site to be user-friendly, and design choices often hint at whether that expectation will be met.

Desirability, the quality by which “image, identity, brand, and other design elements are used to evoke emotion and appreciation,” is one of the seven facets of effective user-centered design. It plays a crucial role in ensuring that users enjoy their first experience with an interface and return to use it again and again.

Peter Morville’s User Experience Honeycomb shows the seven qualities an interface must have to be user-friendly. Notice “desirable” on the upper left-hand side.

Although this first impression does not require any action on the user’s part, it is still a valuable part of their overall experience. Certain design choices hint to the user at how user-friendly the interface will be. Although it can be challenging to quantify the emotions that users experience, Joey Benedek and Trish Miner of Microsoft developed a tool to do just that. A desirability study (also called Microsoft’s Product Reaction Cards method or the Microsoft Desirability Toolkit, depending on who you ask) is a usability research method that gives actual users a set of 118 adjectives (60% positive, 40% negative) and asks them to choose the top five words that describe the interface. This offers a snapshot of the user’s emotions in a way that is more quantifiable than an interview or focus group.
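
To make the mechanics concrete, here is a minimal sketch in Python of how the raw output of such a study might be tallied: each participant contributes a set of five words, and the counts show which descriptors come up most often. The word picks below are invented for illustration and are not drawn from the actual toolkit or any real study.

```python
from collections import Counter

# Hypothetical top-five picks from four participants (invented, not real study data).
participant_picks = [
    {"clean", "fresh", "innovative", "calm", "busy"},
    {"fresh", "trustworthy", "calm", "slow", "professional"},
    {"fresh", "innovative", "confusing", "calm", "friendly"},
    {"professional", "fresh", "sterile", "calm", "clean"},
]

# Count how many participants selected each word across all top-five sets.
word_counts = Counter(word for picks in participant_picks for word in picks)

# Report the most frequently chosen descriptors.
for word, count in word_counts.most_common(5):
    print(f"{word}: {count}/{len(participant_picks)} participants")
```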

Full list of the product-reaction adjectives from the Microsoft Desirability Toolkit (image credit: Drew Fletcher and Austina De Bonte)

Admittedly, 118 adjectives is a lot of words, which could overwhelm users. However, desirability testing can be customized to fit the type of study being conducted. For example, Mad*Pow Media Solutions ran a desirability study on interface comps for a new website. They limited the word set to only those adjectives that pertained to the website’s brand, but they maintained the recommended 60:40 positive/negative ratio. Because the purpose of the study was only to test the users’ emotional responses to the website, participants were given only screenshots of the comps. If they had been allowed to interact with the designs, they might have been distracted from the original purpose of the study. By limiting the users’ interaction with the comps and the types of responses they could give, the researchers got more focused insight into the perceived usability of the website.
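
A team trimming the full card set in this way could script the selection so that the reduced deck keeps the recommended ratio. The sketch below is a rough illustration only: the positive_words and negative_words pools are hypothetical placeholders, not the Microsoft Desirability Toolkit’s actual words or Mad*Pow’s brand terms.

```python
import random

# Hypothetical pools of brand-relevant adjectives (placeholders, not the
# actual Microsoft Desirability Toolkit words).
positive_words = ["friendly", "trustworthy", "professional", "empathetic",
                  "clean", "fresh", "calm", "inviting", "approachable"]
negative_words = ["sterile", "impersonal", "busy", "confusing",
                  "dated", "intimidating"]

def build_card_deck(n_cards: int, seed: int = 0) -> list[str]:
    """Sample a reduced card deck that keeps the recommended 60:40
    positive/negative ratio of the full toolkit."""
    rng = random.Random(seed)
    n_positive = round(n_cards * 0.6)
    n_negative = n_cards - n_positive
    deck = (rng.sample(positive_words, n_positive)
            + rng.sample(negative_words, n_negative))
    rng.shuffle(deck)
    return deck

print(build_card_deck(10))
```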

Mad*Pow Media Solutions created two design comps for their desirability study. They wanted a website that was viewed as “professional,” “trustworthy,” “friendly,” and “empathetic.” However, Option 1 was described as “sterile” and “impersonal,” while Option 2 drew responses that aligned with those goals. Based on these results, the design team carried design elements from Option 2 into their Final Version.

Option 1, Option 2, and the Final Version of Mad*Pow Media Solutions’ website design

Unlike many more formal usability studies, desirability studies can be conducted in person in a lab setting or distributed as an online survey. An online survey can’t answer why users chose the adjectives they did, but it can reach hundreds of users across multiple countries and demographics.

The question is, what do you do with all of these responses? This is where the real data analysis starts. The following are recommendations for reporting your results from the Nielsen Norman Group, the consulting firm founded by usability pioneers Jakob Nielsen and Don Norman:

  • Report the top most-selected words (for example, ‘calm,’ ‘expensive,’ ‘innovative,’ ‘fresh,’ and ‘intimidating’).
  • Use percentages rather than raw frequencies to report the number of times each word was selected. (For example, you may report that 71% of participants selected the word ‘fresh’ to describe the design.)
  • If you have multiple user groups and can identify those in your participant responses, include them in the presentation of your results. Meaningful differences between the sets of words preferred by the two groups may give you insight into their different attitudes. (For example, you may report that 54% of experienced users described the design as ‘exciting’ while only 13% of novice users selected the same word.)
  • If you’re evaluating multiple designs or multiple versions of the same design (for example, old and new), look at the differences between the sets of words chosen to describe the different designs. (For example, you may report that 83% of the users described the redesigned app as ‘professional,’ compared with only 20% using the same word for the older version of the app.)
  • If the site is intended to communicate specific brand attributes, decide in advance what words correspond to your brand positioning. Then count how many users include at least one of those words in their top-5 list.
  • Use a Venn diagram to present how your results map to design direction words, how different designs are described differently, or how different user groups describe a design differently (see the example below).
Example Venn diagram from the Nielsen Norman Group displaying the differences and similarities between top responses from young adults (18-25 years old) and older adults (35+)

This data can be used to adjust the interface’s design to better match the desired results. The Venn diagram above represents the results of a desirability study that the Nielsen Norman Group conducted on the desirability of flat design among young adults and older adults. An in-depth description of this study can be found in Meyer (2016), listed in the references below.
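
As a rough illustration of the percentage-based reporting and group comparisons recommended above, the sketch below uses invented responses from two hypothetical participant groups. It reports each word as a percentage of the group, counts how many participants picked at least one predefined brand word, and computes the set overlap and differences that a Venn diagram would visualize. None of the numbers correspond to the Nielsen Norman Group’s actual study data.

```python
from collections import Counter

# Invented top-five selections for two hypothetical user groups.
responses = {
    "young adults": [
        {"fresh", "innovative", "exciting", "clean", "calm"},
        {"fresh", "exciting", "friendly", "modern", "busy"},
        {"innovative", "fresh", "calm", "professional", "exciting"},
    ],
    "older adults": [
        {"professional", "trustworthy", "calm", "sterile", "clean"},
        {"trustworthy", "confusing", "calm", "dated", "professional"},
    ],
}

# Words the (hypothetical) brand positioning is aiming for.
brand_words = {"professional", "trustworthy", "friendly", "empathetic"}

def word_percentages(picks):
    """Percentage of participants in a group who selected each word."""
    counts = Counter(word for p in picks for word in p)
    return {w: 100 * c / len(picks) for w, c in counts.items()}

for group, picks in responses.items():
    pct = word_percentages(picks)
    top = sorted(pct.items(), key=lambda kv: kv[1], reverse=True)[:3]
    brand_hits = sum(1 for p in picks if p & brand_words)
    print(group, "top words:", [f"{w} ({v:.0f}%)" for w, v in top])
    print(group, f"picked a brand word: {brand_hits}/{len(picks)}")

# Set overlap between the two groups' selected words (a Venn-style view).
young = set().union(*responses["young adults"])
older = set().union(*responses["older adults"])
print("shared:", young & older)
print("young only:", young - older)
print("older only:", older - young)
```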

Although several usability techniques should be used to test the usability of an interface, desirability studies are a relatively fast way to learn more about the elusive quality of desirability, which can determine the long-term success or failure of a product or website.

 

References:

Hawley, M. (2010, February 22). Rapid desirability testing: A case study [Web log comment]. Retrieved from http://www.uxmatters.com/mt/archives/2010/02/rapid-desirability-testing-a-case-study.php

Meyer, K. (2016, February 28). Using the Microsoft Desirability Toolkit to test visual appeal [Web log comment]. Retrieved from https://www.nngroup.com/articles/microsoft-desirability-toolkit/

Meyer, K. (2016, February 28). Young adults appreciate flat design more than their parents do [Web log comment]. Retrieved from https://www.nngroup.com/articles/young-adults-flat-design/

Meyer, K. (2016, March 14). Microsoft Desirability Toolkit product reaction words [Web log comment]. Retrieved from https://www.nngroup.com/articles/desirability-reaction-words/

Morville, P. (2004, June 21). User experience design [Web log comment]. Retrieved from http://semanticstudios.com/user_experience_design/

User experience basics. (n.d.). Retrieved from http://www.usability.gov/what-and-why/user-experience.html