
The Data Daily

How to Efficiently Evaluate Information Visualization?
Seven scenarios of evaluation based on goals and questions.
Evaluation in information visualization is complex. It is important to find out not only whether a visualization is easy for users to read, but also whether a tool supports the process of creating visualizations well. To answer the question of how to evaluate information visualization efficiently, Heidi Lam and colleagues proposed seven scenarios in 2012 to help people focus on the right evaluation goals and ask the right questions.
Seven Scenarios
A total of seven scenarios were proposed by the researchers after they analyzed 850 papers from the information visualization literature. The scenarios fall into two main categories based on their focus: process and visualization. In the process group, the main goal of evaluation is to understand the underlying analysis process and the roles visualization plays in it. In the visualization group, the goal is to test design decisions, explore a design space, benchmark against existing systems, or discover usability issues.
Scenarios based on Process
1. Understanding Environments and Work Practices (UWP)
The goal is to understand the work, analysis, or information-processing practices of a given group of people, with or without software in use.
What is the context of use of visualizations?
In which daily activities should the visualization tool be integrated?
What types of analyses should the visualization tool support?
What are the characteristics of the identified user group and work environments?
What data is currently used and what tasks are performed on it?
What kinds of visualizations are currently in use? How do they help to solve current tasks?
What challenges and usage barriers can we see for a visualization tool?
2. Evaluating Visual Data Analysis and Reasoning (VDAR)
Here the goal is to assess a visualization tool’s ability to support visual analysis and reasoning about data (a small sketch of tallying such processes from interaction logs follows the questions below).
Data exploration: How does it support processes aimed at seeking information, searching, filtering, and reading and extracting information?
Knowledge discovery: How does it support the schematization of information or the (re-)analysis of theories?
Hypothesis generation: How does it support hypothesis generation and interactive examination?
Decision making: How does it support the communication and application of analysis results?
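To study questions like these, evaluators often log how analysts interact with the tool and code those interactions against the sub-processes above. Here is a minimal Python sketch of that idea; the event names, log format, and category mapping are illustrative assumptions, not something prescribed by the paper.

```python
from collections import Counter

# Hypothetical mapping from logged UI events to VDAR sub-processes.
# Both the event names and the categories are illustrative assumptions.
EVENT_TO_PROCESS = {
    "search": "exploration",
    "filter": "exploration",
    "read_detail": "exploration",
    "annotate": "knowledge_discovery",
    "group_items": "knowledge_discovery",
    "flag_hypothesis": "hypothesis_generation",
    "export_report": "decision_making",
}

def summarize_log(events):
    """Count how often each VDAR sub-process appears in an interaction log."""
    return dict(Counter(EVENT_TO_PROCESS.get(e, "other") for e in events))

# Example: a short, made-up analysis session.
log = ["search", "filter", "filter", "read_detail",
       "annotate", "flag_hypothesis", "export_report"]
print(summarize_log(log))
# {'exploration': 4, 'knowledge_discovery': 1,
#  'hypothesis_generation': 1, 'decision_making': 1}
```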
3. Evaluating Communication through Visualization (CTV)
The goal is to understand how effectively a message is delivered to, and acquired by, one or more people through visualization, in contrast to evaluations targeting focused data exploration.
Do people learn better and/or faster using the visualization tool?
Is the tool helpful in explaining and communicating concepts to third parties?
How do people interact with visualizations installed in public areas? Are they used and/or useful?
Can useful information be extracted from a casual information visualization?
4. Evaluating Collaborative Data Analysis (CDA)
The goal is to understand to what extent an information visualization tool supports collaborative analysis and/or collaborative decision making processes.
Does the tool support effective and efficient collaborative data analysis?
Does the tool satisfactorily support or stimulate group analysis and sensemaking?
How do groups use the tool to coordinate their analysis and share findings?
Does the tool support the group’s decision-making process?
[Figure: average number of evaluations coded per scenario, per venue, for each year.]
Scenarios based on Visualization
5. Evaluating User Performance (UP)
The goal is to objectively measure how specific features affect people’s performance with a system (a minimal analysis sketch follows the questions below).
What are the limits of human visual perception and cognition for specific kinds of visual encoding or interaction techniques?
How does one visualization or interaction technique compare to another as measured by human performance?
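As a concrete illustration, here is a minimal sketch of how such a comparison might be analyzed, assuming two made-up samples of task completion times collected in a controlled study; the numbers and the choice of Welch’s t-test are assumptions for illustration.

```python
from scipy import stats

# Hypothetical task completion times (in seconds) from a controlled study
# comparing two visualization techniques; all numbers are made up.
times_technique_a = [12.4, 10.8, 14.1, 11.9, 13.3, 12.7, 10.5, 13.0]
times_technique_b = [15.2, 16.8, 14.9, 17.3, 15.7, 16.1, 14.4, 16.5]

# Welch's t-test (no equal-variance assumption) on mean completion time.
t_stat, p_value = stats.ttest_ind(times_technique_a, times_technique_b,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

In a real UP study, accuracy and error rates would typically be analyzed alongside completion time.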
6. Evaluating User Experience (UE)
The goal is to elicit subjective feedback and opinions on a visualization tool (a questionnaire-scoring sketch follows the questions below).
What features are seen as useful? What features are missing?
How can features be reworked to improve the supported work processes?
Are there limitations of the current system which would hinder its adoption?
Is the tool understandable and can it be learned?
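One common way to collect such subjective feedback is a standardized questionnaire such as the System Usability Scale (SUS). The paper does not prescribe SUS specifically, so treat this as one hedged example; a minimal scoring sketch with made-up responses:

```python
def sus_score(responses):
    """Score one completed System Usability Scale (SUS) questionnaire.

    `responses` holds the ten answers on a 1-5 Likert scale, in the
    standard SUS item order (odd items positively worded, even negatively).
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    raw = sum((r - 1) if i % 2 == 1 else (5 - r)
              for i, r in enumerate(responses, start=1))
    return raw * 2.5  # maps the 0-40 raw sum onto a 0-100 scale

# Example: one made-up respondent.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # 85.0
```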
7. Evaluating Visualization Algorithms (VA)
The goal is to capture and measure characteristics of a visualization algorithm, such as the quality of its output and its computational performance (a small benchmarking sketch follows the questions below).
How does the quality of the algorithm’s output compare to that of other algorithms, for example according to readability or other quality metrics?
How efficiently does the algorithm compute a visualization, and how does its runtime scale with the size of the data?
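As one concrete way to measure algorithm characteristics, the sketch below times a force-directed graph layout on random graphs of increasing size. The choice of NetworkX’s spring_layout, the graph model, and the sizes are all assumptions for illustration; the paper does not prescribe any particular algorithm or library.

```python
import time
import networkx as nx

# Rough runtime benchmark of a force-directed layout on increasingly
# large random graphs; graph model, density, and sizes are arbitrary.
for n in (100, 500, 1000):
    g = nx.gnp_random_graph(n, p=0.01, seed=42)
    start = time.perf_counter()
    nx.spring_layout(g, seed=42)
    elapsed = time.perf_counter() - start
    print(f"n = {n:5d} nodes: {elapsed:.3f} s")
```

In a full VA evaluation, output quality (e.g., edge crossings or other readability metrics) would be measured separately from runtime.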
Conclusion
Instead of categorizing evaluations by existing methods, the researchers classified them into seven scenarios based on focus and questions. Practitioners in this field can reflect on their evaluation goals and questions before choosing methods. The research thus offers a new lens for evaluating information visualization effectively and efficiently.
Reference
Heidi Lam, Enrico Bertini, Petra Isenberg, Catherine Plaisant, and Sheelagh Carpendale. Empirical Studies in Information Visualization: Seven Scenarios. IEEE Transactions on Visualization and Computer Graphics, 18(9):1520–1536, 2012.
