Recently, the UX world has seen a lot of buzz about new design tools and processes that use artificial intelligence and machine learning. Articles like these frame AI and ML as game-changers that can augment — or even replace — major aspects of the design process.
If, like me, you grew up inhaling sci-fi utopias (and dystopias) in which human work is made obsolete by computers, you might also be both excited and concerned about the rising impact of these technologies on the profession.
In this age of accelerating digital transformation, it’s important that we look at what such changes could mean for the practice of human-centered design. Beyond the fundamentals of divergence, convergence, and iteration, I believe that there are aspects of human-centered design that will remain relevant whatever the future of our technologies.
But first, let me share a story.
On a recent project for a large enterprise client, I worked with stakeholders from Salesforce, the client, and an integration firm. The day before a big meeting, the program architect showed me his presentation, which showcased the technological framework he’d used successfully in other engagements. The slide explaining his (meta)data-driven approach was titled “Say goodbye to your UX/UI design team.”
As you might imagine, I was a bit thrown off. Was this a joke? A test? What if he really showed it to the client? After all, I had no plans to say goodbye to this project, let alone my profession. I decided to treat the situation as an opportunity to educate. I explained the qualitative nature of research insights and the importance of the human factor in any project. I made an impassioned case for the business value of design.
I don’t know for sure whether he actually showed the slide to the client, but the project did go well, and the customer was pleased with the outcome. Even so, the experience made clear that the value of human-centered design still needs to be explained and defended.
With this in mind, let’s dive into three fundamentals of human-centered design in today’s tech-driven landscape — understanding, orchestration, and abstraction.
Explanations of human-centered design and design thinking often use Venn diagrams to illustrate the importance of human insights (desirability), business goals (viability), and technology (feasibility) in innovation. All this is well and good, but to do valuable and relevant work, we must also factor in the environmental, social, and cultural contexts of our actions — especially given the current pandemic, rising social inequality, and the unprecedented challenges of climate change. In short, we must value the contexts we operate in, as designers and as humans.
Whether we’re assessing overarching trends or dealing with customers, users, and other stakeholders, to understand we need to ask the right questions — and to empathize with others’ behavior, emotions, relationships, motivations, and needs. Without that understanding, we can’t work well with others, let alone discuss, argue, brainstorm, joke, laugh, co-create, prioritize, or make informed decisions.
It’s no surprise that AI and ML struggle to model this level of understanding. Defining the boundaries of a project, or setting constraints for a design solution, requires that we understand people, alone and in groups — their relationships, motivations, needs, and interactions across a wide and shifting range of contexts. Even with a complex custom algorithm, it’s hard enough to collect and measure the necessary qualitative data, let alone understand the nuances of the humans involved.
When we might have tools that can truly meet these challenges is an interesting question. Even more interesting: whether a machine could ever truly understand the full context of human relationships.
Understanding context and people is crucial to human-centered design, and designers have many tools and methods to support and orchestrate that understanding.
These design tools fall into three categories: divergence, convergence, and iteration. They take the form of research, synthesis, co-creation workshops, testing sessions, and meetings. As designers, we orchestrate not only tools and methods, but also groups of users and stakeholders. Our tools and methods evolve with time or are replaced with better options. And the age of AI and ML offers many exciting opportunities to update our toolbox.
Human beings change far less quickly. Our lives have been repeatedly transformed by technological and cultural change, but when it comes to emotions and fundamental needs, we rely on the same old brains.
And that’s why orchestration is so hard to “solve” with AI and ML. Until true artificial general intelligence exists, the saying “the business is done by the people” holds true. That includes “designerly” activities that involve (and will continue to involve) humans: qualitative research, synthesis (more on this below), co-creation, workshop facilitation, and meetings.
In human-centered design methodology, as in most strategic innovation processes, we regularly (re)frame questions or problems, then select the most relevant solutions. This helps reduce complexity and develop novel ways of dealing with specific challenges.
Abstraction is a core aspect of innovation. With it, we “translate” the specific into the general and back again — addressing specific challenges with general solutions, which in turn are broken down into specific solutions. Take, for instance, the specific problem of a messy, inconsistent UI on a given web page. We can abstract the issue into overall inconsistency due to a lack of standards and guidelines. A general solution might be to build a comprehensive design system; the specific solution is to apply that system to the layout in question, then to all others.
Similarly, in a human-centered design process, we first abstract our understanding of the design brief, then focus on contexts, relevant actors, and their relationships. We abstract insights, synthesizing qualitative and quantitative research findings to create a point of view. And toward the end of a project, we filter down the most relevant solutions to build and test. It is this abstraction that enables decision-making.
So why is abstraction so difficult for AI and ML? Isn’t it exactly what makes these technologies so powerful and useful? Sifting through massive amounts of data and rapidly analyzing and visualizing it according to defined criteria should make effective decision-making quick and easy.
But it’s not so simple. Algorithms can scan data and present it through information design, supporting decisions. But to the human decision-makers, the workings of the algorithm are, more often than not, entirely opaque. How was the data collected? How was it analyzed, using which criteria? What effects might the information design and UI have on the decision?
Algorithms are designed by humans, and as such, they often inherit all-too-human bias — which can be a matter of life and death, as in medical or military contexts. That makes it difficult, or even dangerous, to “outsource” abstraction to AI and ML, whose algorithms are unable to care about how well a solution or idea addresses (un)met user needs, a capability we don’t know will ever be possible.
Understanding, orchestration, and abstraction remain critical components of effective, powerful design. In thinking about what makes these factors such challenges for AI and ML, it can help to remember that these technologies are still, at base, tools that, like hammers and screwdrivers, extend human capability. They open the door to new possibilities, new experiences, new forms of empowerment, but they also have clear limitations. They don’t understand the data they process or the context of the actors and their relationships. They cannot orchestrate people or abstract insights. And they certainly can’t understand the ethical or moral implications of a proposed decision.
The territory where these tools meet design has many regions still to explore — and plenty of room for human-centered design to prevail.
Thanks to Herve Mischler and Tim Sheiner for feedback and collaboration on this article.
Sven Schelwach grew up recording radio shows on music cassettes (yes, that used to be a thing). He started his career at IBM, spending nine years in North America and East Asia, and teaching Design for Sustainability at Hongik University. He returned to Germany in 2017, and now works as an experience architect at Salesforce.