Usability Evaluation Concept Essay



[. . .] These are discussed immediately below.

Issues in Usability Evaluation

The first idea for a Heuristic Evaluation tool was a combination of a logging tool to keep track of usability problems and a system that guides evaluators through the entire process, from entering usability problems to generating problem reports. However, this was not enough; the challenge was to propose other ways of supporting Heuristic Evaluation during inspection.

Cox (1998) studied the usability problem aggregation process in Heuristic Evaluation in depth and developed groupware based on his findings. Similarly, the Heuristic Evaluation inspection process was studied in depth and a tool was developed based on the findings: once the inspection process was better understood, it was characterized, software tool requirements were identified, and an inspection tool was built to meet those requirements.
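To make these tool requirements concrete, the following is a minimal sketch, in Python, of the kind of problem-logging structure such a tool might be built on. The class and field names (UsabilityProblem, HeuristicEvaluationLog, and so on) are illustrative assumptions, not the data model of Cox's groupware or of the tool described in this work.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class UsabilityProblem:
        # Illustrative fields; an actual tool's data model may differ.
        description: str
        location: str            # e.g. the screen or dialog where the problem was observed
        violated_heuristic: str  # heuristic the evaluator believes is violated
        severity: int = 0        # 0 = not yet rated

    @dataclass
    class HeuristicEvaluationLog:
        evaluator: str
        problems: List[UsabilityProblem] = field(default_factory=list)

        def log_problem(self, problem: UsabilityProblem) -> None:
            # The "logging tool" aspect: keep track of each problem as it is entered.
            self.problems.append(problem)

        def problem_report(self) -> str:
            # The end of the "guidance" aspect: generate a simple per-evaluator report.
            lines = [f"Usability problems reported by {self.evaluator}:"]
            for i, p in enumerate(self.problems, 1):
                lines.append(f"{i}. [{p.violated_heuristic}] {p.description} (at {p.location})")
            return "\n".join(lines)

    log = HeuristicEvaluationLog(evaluator="Evaluator A")
    log.log_problem(UsabilityProblem(
        description="No feedback while search results load",
        location="Search results screen",
        violated_heuristic="Visibility of system status"))
    print(log.problem_report())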

Heuristic Evaluation Dimensions

Heuristics are general usability principles that "seem to describe common properties of usable interfaces" (Nielsen 2005a). Nielsen and Molich (1990) initially proposed nine heuristics, defined from their experience of common problem areas in interfaces and from consideration of existing guidelines. A factor analysis of 249 usability problems (Nielsen 1994b) led to the ten heuristics shown in Table 2, which are commonly used to evaluate interfaces in general. Instone (1997), for example, explained Nielsen's ten heuristics for the Web, placing greater emphasis on navigational aspects.

Table 2 - Nielsen's Ten Usability Heuristics (Nielsen 1994b, 2005b)

1. Visibility of system status

2. Match between system and the real world

3. User control and freedom

4. Consistency and standards

5. Error prevention

6. Recognition rather than recall

7. Flexibility and efficiency of use

8. Aesthetic and minimalist design

9. Help users recognize, diagnose, and recover from errors

10. Help and documentation

Some alternative heuristic sets have been proposed for specific domains to provide evaluators with domain knowledge they can use in evaluations. For instance, Dykstra (1993) developed calendar-specific heuristics based on the results of user testing of different commercial calendar systems. Evaluators using the calendar-specific heuristics performed better: they found more usability problems, and more severe ones, than evaluators performing a standard Heuristic Evaluation. Notice, however, that Dykstra's proposed heuristics had sub-headings: his nine heuristics had an average of 6.6 sub-headings per high-level heuristic, with one heuristic having 19 sub-headings. This may look more like a Guideline Review with 60 guidelines than a Heuristic Evaluation with nine high-level heuristics.

Nielsen recommends keeping the list short (about ten) so it is easy to remember (Nielsen and Molich 1990, p. 249), although heuristics may be added if they are domain specific (Nielsen 2005a). Muller et al. (1998) reformatted the list and added four more heuristics for their participatory approach to Heuristic Evaluation. Their approach calls for the participation of "work-domain experts" (users) in evaluating the targeted interface and adds heuristics about human goals and experience.

The role of heuristics is not fully established. Heuristics are meant to help evaluators identify usability problems (Nielsen 2005a). However, it is not clear that heuristics actually support the discovery and analysis of usability problems (Cockton and Woolrych 2001; Cockton et al. 2003). In usability problem analysis, heuristics as an analysis resource have not proven effective at eliminating false alarms and confirming actual usability problems (Cockton and Woolrych 2001).

Evaluators should not only report likes and dislikes; they should explain problems with reference to violated heuristics or other usability principles or guidelines (Nielsen 2005a). Cockton and Woolrych's extended usability problem format (introduced in Woolrych 2001), for example, requires evaluators to "hypothesize likely difficulties in context, rather than to just focus on problem features." The extended format encouraged evaluators to be more "reflective and less likely to propose problems with little justification" (Cockton and Woolrych 2001, p. 175). In fact, an updated version of the form (Cockton et al. 2003) added an entry for providing evidence of heuristic non-conformance, encouraging evaluators to reflect on their choice of violated heuristics.
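As an illustration of what such an extended problem record might look like as a data structure, here is a minimal Python sketch. The field names are assumptions drawn from the description above (likely difficulty in context, violated heuristic, evidence of non-conformance); they are not the exact entries of the Cockton and Woolrych or Cockton et al. (2003) forms.

    from dataclasses import dataclass

    @dataclass
    class ExtendedProblemReport:
        # Illustrative fields, modeled loosely on the extended format described above.
        problem_feature: str              # the interface feature involved
        likely_difficulty: str            # hypothesized difficulty users would face
        context_of_use: str               # the task or context in which the difficulty arises
        violated_heuristic: str           # heuristic the evaluator claims is violated
        evidence_of_nonconformance: str   # justification entry added in the updated form

    report = ExtendedProblemReport(
        problem_feature="'Delete' and 'Save' buttons placed side by side with identical styling",
        likely_difficulty="Users may delete a record when they intend to save it",
        context_of_use="Editing a customer record under time pressure",
        violated_heuristic="Error prevention",
        evidence_of_nonconformance="No confirmation or undo is offered after deletion")
    print(report)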

Solutions to fix problems can be suggested based on violated heuristics (Nielsen 2005a) or on some other taxonomy, such as the User Action Framework (Andre 2000), which classifies usability problems based on Norman's seven-stage theory of action (Norman 2002, pp. 45-53).

The Evaluator

Typically five (Nielsen 1992; Bevan et al. 2003) to eight (Nielsen and Landauer 1993) evaluators are used in Heuristic Evaluation, although the appropriate number is still debated (Bevan et al. 2003).

Novice evaluators seem to perform poorly in Heuristic Evaluation (Nielsen 1992; Jeffries et al. 1991; Desurvire et al. 1992). Evaluator performance is attributed in part to inexperience with usability and with the application domain. Nielsen (1992) classifies evaluators as "novices," "regular specialists" (those with usability expertise), and "double specialists" (those with both usability and application domain expertise). In his study, regular specialists found 75% of the problems when their individual problem lists were aggregated; achieving the same success rate required fourteen novice evaluators.
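One way to relate the number of evaluators to problem coverage is the aggregation curve of Nielsen and Landauer (1993), in which the expected proportion of problems found by n independent evaluators is 1 - (1 - λ)^n, where λ is the probability that a single evaluator finds a given problem. The Python sketch below applies that curve to the figures above (75% coverage with five regular specialists versus fourteen novices); the implied per-evaluator rates are derived here for illustration only and are not values reported in the cited studies.

    def proportion_found(detection_rate: float, evaluators: int) -> float:
        # Nielsen and Landauer (1993): expected share of problems found by
        # aggregating the lists of `evaluators` independent evaluators, where
        # `detection_rate` is the probability that one evaluator finds a given problem.
        return 1.0 - (1.0 - detection_rate) ** evaluators

    def detection_rate_for(target: float, evaluators: int) -> float:
        # Invert the curve: which per-evaluator rate yields `target` coverage?
        return 1.0 - (1.0 - target) ** (1.0 / evaluators)

    # Back-of-the-envelope figures derived for illustration only:
    specialist_rate = detection_rate_for(0.75, 5)    # roughly 0.24 per specialist
    novice_rate = detection_rate_for(0.75, 14)       # roughly 0.09 per novice
    print(f"Implied specialist detection rate: {specialist_rate:.2f}")
    print(f"Implied novice detection rate:     {novice_rate:.2f}")
    print(f"Coverage with 8 specialists:       {proportion_found(specialist_rate, 8):.2f}")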

Users can become part of the evaluation force. Muller et al. (1998) incorporated users into evaluations in order to take their work-domain expertise into account.

User Interfaces

The user interface format (paper vs. computer based) and interactivity (simulated or supported) may influence the way user interfaces are evaluated. Nielsen (1990) found that evaluating paper and computer mockups may influence the types of usability problems that are found. The author of this report argues that the "physical" characteristics of user interfaces affect how they can be used and evaluated. When evaluating interactive interfaces, for example, evaluators interact with the interface: entering information, moving from one screen to another, trying functionality, and so on. This, in turn, enables evaluators to experience problems directly and hence provides a way of identifying them.

Another aspect of user interfaces that may affect how they are evaluated is their complexity. Slavkovic and Cross (1999) performed initial studies on interfaces more elaborate and complex than those in the initial work on Heuristic Evaluation (Nielsen and Molich 1990). Their results indicated that novice evaluators tend to focus on certain parts of the (Palm Pilot) user interface.

Usability Problem Formats

Evaluators' performance may also be affected by the usability problem formats used to capture problem details in evaluation sessions. Cockton and collaborators (Cockton et al. 2003) designed an extended form and found an unexpected improvement in evaluator performance compared with a previous study (Woolrych 2001; Cockton and Woolrych 2001). Results showed a 19% reduction in the number of false alarms and a 26% increase in the appropriateness of heuristic application when using the extended form.

Heuristic Evaluation is known to produce not only a large number of problems (Jeffries 1991; Bailey 1992; Tan 2009) but also a large number of false alarms (Bailey 1992). False alarms are identified problems that are not actual problems in the interface. A major risk of having many false alarms is making changes to an interface design based on them; hence, false alarms should be kept to a minimum.
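As a concrete illustration, a false-alarm rate can be computed as the share of reported problems that are never confirmed as real problems (for example, through user testing). The following minimal Python sketch uses hypothetical problem identifiers; the function name and data are assumptions, not from the cited studies.

    from typing import Set

    def false_alarm_rate(reported: Set[str], confirmed: Set[str]) -> float:
        # Share of reported problems that are not actual problems in the interface.
        if not reported:
            return 0.0
        return len(reported - confirmed) / len(reported)

    # Hypothetical problem identifiers: those reported by evaluators versus
    # those later confirmed as real problems (e.g. observed in user testing).
    reported = {"P1", "P2", "P3", "P4", "P5"}
    confirmed = {"P1", "P3", "P4"}
    print(f"False-alarm rate: {false_alarm_rate(reported, confirmed):.0%}")  # prints 40%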

Heuristic Evaluation Process

The Heuristic Evaluation process can be separated into three major phases: an inspection phase, in which evaluators independently evaluate the user interface; a preparation phase, in which evaluators independently prepare their lists of identified problems for aggregation; and an aggregation phase, in which evaluators collaborate to generate a single report of usability problems. Figure 2 shows the Heuristic Evaluation phases and activities.

Figure 2-Heuristic Evaluation Phases
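The following is a minimal Python sketch of the three phases as a pipeline over individual problem lists. It is only illustrative: the evaluator names and problem strings are hypothetical, and merging duplicate problems by identical wording is a drastic simplification of the judgment-intensive aggregation activity described in this work.

    from typing import Dict, List

    def inspection(evaluator: str) -> List[str]:
        # Inspection phase: each evaluator independently inspects the interface.
        # Hypothetical stand-in for an evaluator's actual inspection session.
        notes = {
            "Evaluator A": ["No feedback on save", "Jargon in error messages"],
            "Evaluator B": ["No feedback on save", "Exit button hard to find"],
        }
        return notes.get(evaluator, [])

    def preparation(problems: List[str]) -> List[str]:
        # Preparation phase: each evaluator independently tidies their own list.
        return sorted(set(problems))

    def aggregation(per_evaluator: Dict[str, List[str]]) -> List[str]:
        # Aggregation phase: evaluators merge their lists into a single report.
        # Merging by identical wording is a simplification for this sketch.
        merged = set()
        for problems in per_evaluator.values():
            merged.update(problems)
        return sorted(merged)

    lists = {e: preparation(inspection(e)) for e in ("Evaluator A", "Evaluator B")}
    print(aggregation(lists))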

Inspection Phase

Several activities can be identified in this phase: evaluators explore the interface, identify usability problems, and elaborate on those problems. Nielsen (2005a) recommends exploring the interface at least twice: a first pass to get a general idea of the interface, and a second pass to analyze individual interface elements in context.

Exploration depends on the interface format. The format defines affordances (i.e., characteristics objects have that determine how they can be used (Norman 2002)) that allow particular ways of exploration. For example, several paper screenshots can be compared at once by positioning them side by side. Computer mockups (Nielsen 1990), on the other hand, allow exploring the interface via interaction and experiencing situations directly (e.g., feeling trapped and not being able to exit to the "main system" (Nielsen 1990)).

Problem search strategies influence how interfaces are explored. Cockton et al. (2003) introduced four discovery methods: (a) System Scanning, examining the interface without following any particular approach; (b) System Searching, which involves some kind of strategy, such as focusing on certain interface elements; (c) Goal Playing, setting up a goal and trying to achieve it; and (d) Method Following, which is similar to Goal Playing, except that a step-by-step procedure is established and executed. These methods can be used in deciding how to approach problem search, while also illustrating different ways of exploration. Further work is needed to look deeper into exploration patterns in terms of discovery methods.
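A minimal Python sketch of these discovery methods, modeled as an enumeration used to tag how each problem was found; the type and field names, and the example problems, are illustrative assumptions rather than part of Cockton et al.'s (2003) formulation.

    from dataclasses import dataclass
    from enum import Enum, auto

    class DiscoveryMethod(Enum):
        # The four discovery methods introduced by Cockton et al. (2003).
        SYSTEM_SCANNING = auto()   # examine the interface without a particular approach
        SYSTEM_SEARCHING = auto()  # follow a strategy, e.g. focus on certain elements
        GOAL_PLAYING = auto()      # set up a goal and try to achieve it
        METHOD_FOLLOWING = auto()  # execute an established step-by-step procedure

    @dataclass
    class DiscoveredProblem:
        description: str
        method: DiscoveryMethod  # which discovery method surfaced the problem

    problems = [
        DiscoveredProblem("Inconsistent button labels across dialogs",
                          DiscoveryMethod.SYSTEM_SEARCHING),
        DiscoveredProblem("Cannot complete checkout without creating an account",
                          DiscoveryMethod.GOAL_PLAYING),
    ]
    for p in problems:
        print(f"{p.method.name}: {p.description}")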

Identifying Usability Problems

Factors other than interface format and search strategies may induce evaluators to notice potential problems. Inspection guidelines (Mack…
