Four User-Centred Strategies for Designing Useful Data Visualizations

User-centred design process by the Interaction Design Foundation. Photo by Kelly Sikkema on Unsplash. Doodles by author.
A very special thank you to Velian Pandeliev, Assistant Professor at the Faculty of Information, University of Toronto, who supervised this work.
I find that user involvement in the data visualization design process tends to vary. Users might be brought in at the tail end for feedback, after the tool has already been designed and developed. Or they may be left out entirely, with the product shaped solely by stakeholders and subject-matter experts. Where users are involved, they may be asked questions like “What do you want to see?” This can be difficult for users to answer, since it prompts them to think of visuals rather than the problems they are trying to solve or the questions they need answered.
Without appropriate user involvement throughout the entire design cycle, the data viz tool might answer irrelevant questions, be too complex or too simple, or fail to align with workflows and decision-making processes. These issues all put the usability, utility, and experience of using the tool at risk. Your work may go unused or misused, and an opportunity for system awareness and improvement is lost.
User-centred design (UCD) is a process that relies on understanding users to create or improve products and services. It prioritizes user involvement at every step, to ensure that design solutions align with user needs. Many digital products are created using the UCD process, especially web and mobile apps. Since data visualizations are often digital, interactive tools, UCD methods can help ensure they are user-friendly and add value.
In a recent term of my Master's program, I did a literature review of around 20 academic articles to understand how UCD has been applied to data viz tools. I also wanted to identify the approaches that seemed to be the most useful.
I explored the following questions:
What phases of UCD tend to be followed in the design of data visualizations?
What UCD methods and strategies were used at each phase of the process?
Which methods and strategies led to more positive results?
Note: While my readings are limited to academic literature, I want to acknowledge that many organizations, startups, consulting agencies, freelancers, volunteers, and others follow UCD. Their work is not featured here, so my findings and conclusions likely do not represent the entire data visualization space.
The phases of user-centred design
The user-centred design process. Credit: Interaction Design Foundation.
There are many circulating frameworks for UCD, including this one from the Interaction Design Foundation. Their framework outlines four steps:
Understand the context of use. Run research activities that gain insight into who your users are and what they need. Identify their key tasks, and why, when, where, and how they are performed.
Specify user requirements. Synthesize user research into a set of needs that serve as a blueprint for design work.
Design solutions. Create low to high fidelity wireframes and prototypes based on user requirements. Share them with users, and iterate based on their feedback.
Evaluate against requirements. Test prototypes with users to assess how well designs align with their needs.
How is UCD currently applied to the design of data visualization tools?
IDEO, a leading design and consulting firm, illustrates the recommended level of effort across the design process in the graph below. The initial Inspiration phase seems to be the most involved. During this phase, designers are doing intensive research to understand user needs and defining the problem to be solved.
Human-centred design process by IDEO. Credit: IDEO.
In the literature I reviewed, the pattern seemed to be reversed. There was lots of effort on the tail end of the process, but less at the beginning — similar to my own experiences.
It looked like this:
The observed effort in user-centred design in the literature.
Only a few articles mentioned initial user research or the specification of user requirements. Where these phases were mentioned, methods typically included interviews, observations, and workshops. Workshops involved many activities for users to take part in, like storyboarding, sketching, and card sorting.
Sketching during the data visualization process. Credit: Giorgia Lupi.
The majority of articles discussed evaluating a high-fidelity prototype: a digital representation of the product, closely resembling and behaving like the final version. Evaluations involved multiple methods, like design chauffeurs/walkthroughs, interviews, usability tests, task analyses, and surveys.
Rationale for usability testing a product. Credit: Nielsen Norman Group.
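The articles I reviewed don't specify which survey instruments they used, but as one concrete example of how survey data from an evaluation can be quantified, here is a minimal sketch that scores the widely used System Usability Scale (SUS) questionnaire; the responses below are hypothetical.

```python
def sus_score(responses: list[int]) -> float:
    """Score a ten-item System Usability Scale questionnaire.

    Each response is on a 1-5 scale. Odd-numbered items are
    positively worded and contribute (response - 1); even-numbered
    items are negatively worded and contribute (5 - response).
    The summed contributions are scaled by 2.5 to a 0-100 range.
    """
    assert len(responses) == 10
    assert all(1 <= r <= 5 for r in responses)
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical responses from one usability-test participant
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

A common benchmark treats scores above roughly 68 as above-average usability, which makes it easier to compare prototype iterations.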
It was great to see so much work dedicated to high-fidelity prototyping and evaluation. However, as the IDEO curve suggests, there should be effort at the beginning of the process too: researching users, outlining their needs, and collaboratively working through a series of low-fidelity, iterative designs.
There are many benefits to these steps. They can help you understand:
Problems your users need to solve;
Flows of work and information among users and their peers;
How your users problem-solve and make decisions;
Limitations your users face;
The validity of any assumptions you have made about your users and the tool they need.
I was curious to see if a lower emphasis on earlier design phases had an impact on the results of evaluations. Is user testing a high-fidelity prototype enough in the realm of data visualization? What other UCD strategies might be helpful?
What UCD strategies seem to lead to better data visualization tools?
I pulled out four strategies that seemed to be key to designing usable and useful data visualization tools:
Begin a data viz project with a comprehensive exploration of users.
When researching users, couple interviews with real-world observations.
When designing, create low-fidelity, paper-based sketches, wireframes, and prototypes to share with users.
When evaluating designs, probe suggestions from users with follow-up questions to uncover unmet needs.
These are described in more detail below.
Strategy #1: Begin a data visualization project with a comprehensive exploration of users.
UCD stresses that design projects should begin with user research. In the realm of data visualization, user research can help to:
Identify which devices the tool should be compatible with;
Select appropriate and useful graph types;
Choose relevant types of interactivity (e.g. filters, animations, sorts);
Inform the structure, flow, and navigation of the tool;
Define how to apply various visual design elements like layout, hierarchy, typography, and colour.
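To make this concrete, here is a minimal sketch (my own illustrative example with made-up data, not one from the literature) of how a single research finding, say “users need to spot the top-performing regions at a glance,” might drive the choice of graph type, sort order, and labelling:

```python
import matplotlib.pyplot as plt

# Hypothetical research finding: "Users need to spot the
# top-performing regions at a glance." (Made-up data below.)
regions = {"North": 42, "South": 91, "East": 67, "West": 78}

# Design choices driven by the user's task, not aesthetics:
# sort descending so the "top" answer is read first, and label
# bars directly so no legend lookup is needed.
ordered = sorted(regions.items(), key=lambda kv: kv[1], reverse=True)
names, values = zip(*ordered)

fig, ax = plt.subplots()
ax.barh(names, values)
ax.invert_yaxis()  # put the largest value at the top
for i, v in enumerate(values):
    ax.text(v, i, f" {v}", va="center")  # direct value labels
ax.set_xlabel("Sales (units)")
ax.set_title("Which regions are performing best?")
plt.show()
```

The same finding could also argue against alternatives, like a pie chart, where ranking several close values is harder.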
The literature mentioned many user attributes that were explored through research. I have grouped them into four categories: functional, informational, demographic, and personal.
Possible user attributes to explore through research.
To gather this information, authors typically led one-on-one interviews and observations with samples of users.
In one article where initial user research was conducted, all participants successfully completed the test tasks during the evaluation of a high-fidelity prototype, and the majority of their feedback was very positive.
Overall, projects with early and thorough user research tended to show:
Fewer design issues and errors found in evaluation phases;
Less rework required in later design phases;
Higher user satisfaction with high-fidelity prototypes.
ROI of UX. Credit: Interaction Design Foundation.
Where user research did not occur or was minimal, designers seemed to make many assumptions about their users. Only in evaluations of high-fidelity prototypes did they discover key qualities of their users, their needs, and their contexts. These late discoveries forced designers to rework the structure and functionality of the tool after it had been formally developed.
Personas and Jobs-To-Be-Done
Example of a persona. Credit: Nielsen Norman Group.
One article emphasized that users are not homogeneous; there can be a high degree of diversity among them. Several articles found that a few distinct types of users emerged through research.
Authors often translated these user types into profiles or personas. These differed by roles, goals, tasks, and the key questions asked of the data. In one article, the authors also laid out how information and decisions flowed between the user types.
However, personas have sparked some controversy in UCD. Jobs-To-Be-Done (JTBD) is another framework that profiles users in a different way. JTBD places the focus on specifying the outcomes a user wants to achieve:
The Jobs-To-Be-Done (JTBD) Framework. Credit: jobstobedone.org
Nielsen Norman Group provides a helpful analysis of the purpose of personas vs. Jobs-To-Be-Done, and how they can complement rather than replace each other. They say that personas can still help to promote empathy with users among the design team, and prioritize multiple user types.
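As an illustration of how these profiles might be kept actionable during design, here is a minimal sketch (my own, with hypothetical fields and values, not a structure from the articles) that records a user type's role, goals, key data questions, and jobs to be done:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """A lightweight persona-style record distilled from research."""
    role: str
    goals: list[str]
    key_questions: list[str]  # questions this user type asks of the data
    jobs_to_be_done: list[str] = field(default_factory=list)

# Hypothetical example of one user type
analyst = UserProfile(
    role="Energy analyst",
    goals=["Brief policy teams weekly", "Spot anomalous consumption early"],
    key_questions=["Which sites deviate most from their seasonal baseline?"],
    jobs_to_be_done=[
        "When new meter data arrives, compare it against the baseline "
        "so I can decide whether to investigate."
    ],
)
print(analyst.key_questions[0])
```

Keeping records like these at hand makes it easier to check each design decision against a specific user type rather than a vague, averaged user.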
Key takeaways:
User research at the outset of a design process can help to identify needs, minimize assumptions, and reduce the amount of rework later.
Gather information about the user’s key tasks, questions, workflows, decision-making processes, and more to help inform the design of the data viz tool.
As users are not all the same, Jobs-To-Be-Done profiles and/or personas might help to distinguish and prioritize different types of users.
Strategy #2: When researching end users, complement interviews with real-world observations.
When to interview vs. observe users. Credit: InVision.
When interviewing someone about the jobs they do and how they do them, what they say can often be different from what they do in reality. For example, users may only be able to describe their workflow under ideal conditions. Also, more subtle processes, like information flows and decision-making, can be missed altogether.
When conducting user research, the literature recommended coupling user interviews with observations.
In one article, researchers attended meetings, where they observed how work, information, and questions flowed between users. They also noted the users’ domain-specific language, so they could understand their terminology and use it when engaging users in design activities. This multi-pronged approach was cited as key to the high levels of user satisfaction with their prototypes.
Observing users can help uncover flows of information, decisions, and work.
One article mentioned that it is also important to conduct user tests of prototypes in the user’s real context. Evaluation methods like usability testing tend to take place in lab-like, closed environments, which may not represent the user’s real setting. For example, a data visualization tool may actually be used by a team in regular weekly meetings. The use of, and response to, data visualization tools can vary depending on the context of use.
Key takeaway:
People have trouble articulating how they perform tasks. Observe users in their real context to better understand how they do something, and what they use.
Strategy #3: When designing, create low-fidelity, paper-based sketches, wireframes, and prototypes to share with users.
As mentioned earlier, the majority of the literature focused on evaluating high-fidelity prototypes. At this point, making changes to a robust, coded tool can be time-consuming and expensive. If the design is in lower-fidelity form, like on paper or in prototyping software like Sketch, Figma, or InVision, it’s faster and cheaper to make changes. It helps researchers explore and test hypotheses before investing time, effort, and money in development.
Charts plugin for Sketch. Credit: Charts.
Some articles evaluated both paper-based and digital prototypes, which allowed them to compare the nature of user feedback between them.
One article found that paper-based prototypes garnered deeper and more diverse discussions with users. Another mentioned that users seemed better able to reflect on their work practices, comment on the tool’s utility, and provide more suggestions. A third article cited that paper-based prototypes helped increase user engagement and excitement in the design process.
Paper prototyping seemed to increase the depth of discussions and reflections.
There are a few possible reasons for these findings.
For example, when revealing a data visualization in a polished, programmed form, users may accept this as a near-final version. This may be especially true if they have not seen any prior versions. They may be uncertain and hesitant to provide feedback and critique.
Also, in my experience, when users are shown high-fidelity designs, their feedback tends to focus on strong visual cues. This includes design elements like colour and shape, rather than the structure, flow, and utility of the visualizations.
In lower-fidelity prototypes (e.g. on paper, wireframes), it is clear that the design is a work in progress, and that feedback is imperative for it to evolve to the next level. Visual cues, like colour, are minimal or conceptual, so the focus stays on the structure, hierarchy, and functionality of the tool. It is more approachable, and easier to mark up, change, and iterate on.
Iterative Sketching with Users: The Five Design Sheet (FdS) Method
The Five Design Sheet (FdS) Methodology. Credit: FdS
One article discussed a process for sketching and discussing low-fidelity designs with users, called the Five Design Sheet (FdS) methodology. The method involves five steps, each on a separate sheet of paper (hence the name). Each sheet is divided into sections. For example, the first sheet focuses on ideation, and includes the following:
Ideate: Sketch as many ideas as possible, driven by the user’s task and goal.
Filter: Remove duplicate, irrelevant, and impossible ideas.
Categorize: Combine ideas that are similar.
Combine and refine: Group ideas that complement each other. Select the top three ideas to move forward with.
Question: Ask yourself how the ideas align with the user’s task and goals. Determine if they are misleading. List their pros and cons.
Subsequent sheets help to explore alternative designs. On the fifth sheet, designers and users can sketch and describe the final design.
Key takeaways:
Low-fidelity wireframes and prototypes can help designers explore and test hypotheses, generate deeper reflections and discussions with users, and increase user engagement in the design process.
The Five Design Sheet (FdS) method might help to provide structure around ideating and sketching possible designs with users.
Strategy #4: When evaluating designs, probe suggestions from users with follow-up questions to uncover unmet needs.
Probing suggestions from users may help uncover unmet needs.
In feedback and evaluation sessions, users may provide many suggestions. The more users participating in these sessions, the more suggestions you receive. It can be difficult to categorize and prioritize them all.
Many articles mentioned that they incorporated all suggestions from users. However, two papers, from 2015 and 2019, noted that the usability of their tool dropped after redesigns. The authors attributed this to adding too many new features based on user requests.
A quote from one of the articles illustrated this problem:
“Simply adding all functionalities requested by end users without due consideration could bring severe usability issues and impair user experience”.
User suggestions often come in the form of adding or changing functionality, graph types, or visual design elements, like colour. As users are likely not data visualization experts, these suggestions may be a hint that one of their needs is not being met in the current design. Rather than taking suggestions at face value, designers could probe for underlying reasons, using methods like the Five Whys technique, in which designers repeatedly ask users “Why?” to uncover the root cause of a problem. For example, a request for an export-to-Excel button might, after a few whys, reveal that users need to cross-reference the data with another system, hinting at a missing comparison view.
Responding to suggestions from users is complex. In addition to probing, the literature prompted me to think about other aspects to consider when assessing the appropriateness and priority of suggestions. They include the following (a small sketch of how such a triage might look follows the list):
Alignment to scope;
Relevance to other types of users.
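To illustrate (again, my own hypothetical sketch rather than a process from the literature), suggestions and the needs uncovered by probing could be logged and triaged against criteria like these:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """A user request plus the need uncovered by probing it."""
    text: str
    underlying_need: str      # e.g. surfaced via Five Whys probing
    in_scope: bool            # alignment to the tool's scope
    user_types_served: int    # relevance to other types of users

def priority(s: Suggestion) -> int:
    # Illustrative rule: park out-of-scope requests; otherwise rank
    # by how many user types the underlying need serves.
    return s.user_types_served if s.in_scope else 0

backlog = [
    Suggestion("Add an export-to-Excel button",
               "Cross-reference the data with another system",
               in_scope=True, user_types_served=2),
    Suggestion("Add a 3D view of the data",
               "Unclear; probing suggested a novelty request",
               in_scope=False, user_types_served=1),
]
for s in sorted(backlog, key=priority, reverse=True):
    print(f"priority {priority(s)}: {s.text} (need: {s.underlying_need})")
```

A real triage would weigh more criteria, such as impact and accessibility, but even a simple score forces the conversation back to underlying needs rather than raw requests.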
Key takeaways:
Probing suggestions from users with methods like the Five Whys may help to uncover unmet needs.
Analyzing suggestions for things like scope alignment, accessibility, impact, etc. may help with assessing their appropriateness and priority.
What’s next on my reading list?
While this research provided me with many insights, it also left me with a heap of questions. Here are a few that still have me scratching my chin:
A helpful discovery for me was the breadth and depth of user attributes to explore through research, specific to the context of data visualization. But how does each of these attributes relate to the design itself? How do I ask the right questions, or observe relevant behaviours? How do I piece everything together to form a clear, comprehensive picture of users?
It was great to see so many diverse UCD methods applied to designing data visualization tools. But there were still some not mentioned, like experience mapping, A/B testing, ethnographic studies, etc. I wonder whether these could also have value in the realm of data viz design.
UCD can be time-, effort-, and resource-intensive, especially in the early stages of a project. This may be in contrast to how projects are currently done, with lots of time, effort, and $$$ devoted to the tail end. How can the switch be supported? How do we build capacity within data visualization and analytics teams for this? Should there be a role (or roles) dedicated to UCD?
My final thought is about data visualization tools geared towards the “general public”, where no specific user is defined at all. This can make user research and outlining specifications next to impossible, and forces designers to make many assumptions about their audience. Can a target audience really be “everyone” — or are there certain users who are more likely to need this information, seek this information, and integrate it into their lives?
It seems like I have a lot more to read…
The List of Literature
Backonja, U., Haynes, S. C., & Kim, K. K. (2018). Data Visualizations to Support Health Practitioners’ Provision of Personalized Care for Patients With Cancer and Multiple Chronic Conditions: User-Centered Design Study. JMIR Human Factors, 5(4). doi: 10.2196/11826
Folter, J. D., Gokalp, H., Fursse, J., Sharma, U., & Clarke, M. (2014). Designing effective visualizations of habits data to aid clinical decision making. BMC Medical Informatics and Decision Making, 14(1). doi: 10.1186/s12911-014-0102-x
Goodwin, S., Dykes, J., Jones, S., Dillingham, I., Dove, G., Duffy, A., . . . Wood, J. (2013). Creative user-centered visualization design for energy analysts and modelers. IEEE Transactions on Visualization and Computer Graphics, 19(12), 2516–2525. doi: 10.1109/TVCG.2013.145
Grainger, S., Mao, F., & Buytaert, W. (2016). Environmental data visualisation for non-scientific contexts: Literature review and design framework. Environmental Modelling & Software, 85, 299–318. doi: 10.1016/j.envsoft.2016.09.004
Hakone, A., Harrison, L., Ottley, A., Winters, N., Gutheil, C., Han, P. K. J., & Chang, R. (2017). PROACT: Iterative Design of a Patient-Centered Visualization for Effective Prostate Cancer Health Risk Communication. IEEE Transactions on Visualization and Computer Graphics, 23(1), 601–610. doi: 10.1109/tvcg.2016.2598588
He, X., Zhang, R., Rizvi, R., Vasilakes, J., Yang, X., Guo, Y., … Bian, J. (2019). ALOHA: developing an interactive graph-based visualization for dietary supplement knowledge graph through user-centered design. BMC Medical Informatics and Decision Making, 19(S4). doi: 10.1186/s12911-019-0857-1
Interaction Design Foundation. (n.d.). What is User Centered Design? Retrieved from https://www.interaction-design.org/literature/topics/user-centered-design
Lanir, J., Kuflik, T., Sheidin, J., Yavin, N., Leiderman, K., & Segal, M. (2016). Visualizing museum visitors’ behavior: Where do they go and what do they do there? Personal and Ubiquitous Computing, 21(2), 313–326. doi: 10.1007/s00779-016-0994-9
Lloyd, D., & Dykes, J. (2011). Human-centered approaches in geovisualization design: Investigating multiple methods through a long-term case study. IEEE Transactions on Visualization and Computer Graphics, 17(12), 2498–2507. doi: 10.1109/TVCG.2011.209
Mckenna, S., Staheli, D., & Meyer, M. (2015). Unlocking user-centered design methods for building cyber security visualizations. 2015 IEEE Symposium on Visualization for Cyber Security (VizSec). doi: 10.1109/vizsec.2015.7312771
Roberts, J. C., Headleand, C., & Ritsos, P. D. (2016). Sketching designs using the five design-sheet methodology. IEEE Transactions on Visualization and Computer Graphics, 22(1), 419–428. doi: 10.1109/TVCG.2015.2467271
Roth, R. E., Hart, D., Mead, R., & Quinn, C. (2017). Wireframing for interactive & web-based geographic visualization: Designing the NOAA lake level viewer. Cartography and Geographic Information Science, 44(4), 338–357. doi: 10.1080/15230406.2016.1171166
Roth, R., Ross, K., & Maceachren, A. (2015). User-Centered Design for Interactive Maps: A Case Study in Crime Analysis. ISPRS International Journal of Geo-Information, 4(1), 262–301. doi: 10.3390/ijgi4010262
Stephens, S. H., DeLorme, D. E., & Hagen, S. C. (2015). Evaluating the Utility and Communicative Effectiveness of an Interactive Sea-Level Rise Viewer Through Stakeholder Engagement. Journal of Business and Technical Communication, 29(3), 314–343. https://doi.org/10.1177/1050651915573963
Sutcliffe, A., de Bruijn, O., Thew, S., Buchan, I., Jarvis, P., McNaught, J., & Procter, R. (2014). Developing visualization-based decision support tools for epidemiology. Information Visualization, 13(1), 3–17. https://doi.org/10.1177/1473871612445832
Wassink, I., Kulyk, O., Dijk, B. V., Veer, G. V. D., & Vet, P. V. D. (2008). Applying a User-centered Approach to Interactive Visualisation Design. Trends in Interactive Visualization Advanced Information and Knowledge Processing, 175–199. doi: 10.1007/978-1-84800-269-2_8
