GEOG 583
Geospatial System Analysis and Design

Evaluating Geospatial Designs

A project design incorporates evaluation at many points, and each round of evaluation feeds iterative updates and changes back into the design. So far in this course, you have carried out a needs assessment and a cognitive walkthrough. This week, you will incorporate the final forms of evaluation: usability testing and heuristic evaluations. Evaluation, in other words, is iterative and continues throughout the entire design process.

Flowchart showing evaluation stages: Needs Assessment, Concept Development, Prototype, Cognitive Walkthrough, Implementation, Usability Testing and Heuristic Evaluations.
Figure 1: Sequence of steps for designing and evaluating a geospatial design.
Credit: Brandi Gaertner © Penn State is licensed under CC BY-NC-SA 4.0

What is Evaluation?

Evaluation is typically divided into two broad categories: formative and summative. Formative evaluations focus on developing and refining designs. Summative evaluations compare an implemented system to an alternative system, with the goal of measuring differences in performance or user satisfaction between the two. Formative evaluations usually happen in the early and middle stages of a design exercise, while summative evaluations take place toward the end, once a system has been implemented.
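
To make the summative case concrete, here is a minimal sketch of comparing task-completion times gathered from users of two system designs. This is an illustrative assumption, not a method prescribed by this lesson: the sample times are invented, and SciPy's ttest_ind is just one reasonable way to check whether the observed difference is larger than chance variation would suggest.

    # Minimal sketch of a summative comparison: task-completion times (in
    # seconds) from users of two GISystem designs. All values are hypothetical.
    from statistics import mean
    from scipy.stats import ttest_ind

    system_a_times = [41.2, 38.5, 44.0, 36.8, 42.3, 39.9]  # existing system
    system_b_times = [33.1, 35.6, 30.8, 34.2, 31.7, 36.0]  # redesigned system

    # A two-sample t-test asks whether the difference in mean performance is
    # larger than ordinary sampling variation would produce.
    t_stat, p_value = ttest_ind(system_a_times, system_b_times)

    print(f"Mean A: {mean(system_a_times):.1f} s, Mean B: {mean(system_b_times):.1f} s")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")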

Common methods used in both types of evaluation include:

  • Heuristic evaluations - assess a design against a set of established usability principles, with evaluators rating how well the system satisfies each one (see the first sketch following this list)
  • Surveys - use open- or closed-ended questions to identify system needs or areas for improvement
  • Focus groups - involve group discussion of design options, user experiences, or other topics to inform or critique a design
  • Interviews - use one-on-one questioning with users or customers to explore design options or gather feedback on tools
  • Card sorting - asks users to organize system tools/functions using paper cards to suggest interface organization
  • Expert evaluation - has system design and usability experts critique designs, prototypes, or final systems
  • Field & tabletop exercises - put the system through a "test run" using realistic data, scenarios, and tasks
  • Cost-benefit analysis - uses metrics to weigh the costs of developing and using a system against the benefits of its products (see the second sketch following this list)
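
Since usability testing and heuristic evaluations are this week's focus, here is a minimal sketch of how heuristic-evaluation results are often tallied. The heuristics shown are a small subset of Nielsen's widely used set; the evaluator scores and the 0-4 severity scale are illustrative assumptions, not data from a real study.

    # Minimal sketch of tallying heuristic-evaluation results. Each evaluator
    # rates the design against each usability heuristic on a severity scale
    # from 0 (no problem) to 4 (usability catastrophe). All scores are
    # hypothetical; the heuristics are a subset of Nielsen's set.
    ratings = {
        "Visibility of system status": [1, 2, 1],
        "Match between system and the real world": [0, 1, 0],
        "User control and freedom": [3, 2, 3],
        "Consistency and standards": [2, 2, 1],
    }

    # Report average severity per heuristic, worst first, so the design team
    # can prioritize fixes for the next design iteration.
    for heuristic, scores in sorted(ratings.items(),
                                    key=lambda kv: -sum(kv[1]) / len(kv[1])):
        print(f"{sum(scores) / len(scores):.1f}  {heuristic}")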
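
The cost-benefit item likewise reduces to a simple calculation once costs and benefits have been estimated. In the sketch below, the dollar figures and the 5% discount rate are invented for illustration; a benefit-cost ratio above 1.0 suggests the system's benefits outweigh its costs.

    # Minimal sketch of a cost-benefit calculation for a proposed system.
    # All dollar figures and the discount rate are hypothetical.
    annual_costs = [120_000, 40_000, 40_000, 40_000]   # year-0 build, then upkeep
    annual_benefits = [0, 90_000, 110_000, 120_000]    # e.g., staff time saved

    def present_value(cash_flows, rate):
        """Discount a series of yearly cash flows back to year-0 dollars."""
        return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

    bcr = present_value(annual_benefits, 0.05) / present_value(annual_costs, 0.05)
    print(f"Benefit-cost ratio: {bcr:.2f}")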

Formal vs. Informal Evaluation

Evaluation efforts are often characterized as formal or informal, depending on the degree to which they use rigorous methods: unbiased participant selection, sound methodology, and careful analysis of results. An informal evaluation might ask a few of your coworkers to look over a prototype design, while a formal evaluation could have a dozen real end users complete a realistic exercise using the new GISystem, followed by a post-activity interview and survey to gather structured and unstructured feedback.