Are You Baking or Cooking? Deciding How and When to Apply AI in UX Research

By Phil Heuring, SVP, Qualitative Insights at Material


Can AI improve the speed or cost of your UX research? The question comes up at the start of nearly every project today. It’s also not precise enough to be useful, since the answer is almost always yes. AI moderators can manage many users and time zones simultaneously. Well-crafted AI prompts can quickly provide useful, timely context on design best practices, market research and design expertise. So, yes, AI can improve speed and optimize cost.
A more valuable question to ask is: What should we offload to AI so that it is applied strategically in our research? It’s important to assess where the cost and time savings pay off and where the risks outweigh the benefits. The work entrusted to AI — and what is reserved for humans — should be filtered through your research needs and objectives.


Mapping Your Research Goals to AI’s Capabilities

At its core, AI spots patterns and makes the guess it considers most likely within the bounds of its instructions. This often means regressing to the mean or following training data to a fault (consider AI’s well-documented tendency to favor sycophancy toward users over accuracy or realism).
Sometimes, especially in qualitative UX research, it is the outlier that informs the outcome or sparks the breakthrough. One user, one data point, one behavior or one probe can change the trajectory of fielding and results. A single moment can tactically change how you decide to code, or strategically reshape how you think about product-market fit.
Mapping your research objectives and needs onto a framework can help you assess the risk or benefit of offloading to AI tools. This simple sorting considers how much real-time judgment and adaptation your research needs to be successful. The more of each your research requires, the riskier it becomes to expand AI’s role.


Is Your UX Research Recipe Closer to Baking or Cooking?

Many situations gain enormous benefits from AI with negligible, if any, cost to quality. In fact, quality often improves thanks to better consistency and less potential for error. Perhaps the simplest analogy is whether your research is closer to baking or cooking.
The better you know your recipe and the necessary ingredients, the more set your plan and the fewer on-the-fly adjustments needed for success, the more AI should lead. This is the ‘baking style’ of UX research: more science than art. The set pieces are finished prototypes; you can plan what needs to be recorded and where, fix the placement and depth of probes in advance, and expect little need for mid-fielding adjustments. Follow the recipe, set the timer and take it out of the oven.
Then there is the ‘cooking style’ of UX research, where adaptation is expected: you change the recipe as you go and find new ingredients or combinations. The more exploratory the work and the more you need to adapt along the way, the more a human is needed to ‘taste,’ make judgments and adjust at many points.
Knowing which of these two basic styles your UX research program falls into will help you apply the following framework for deploying AI and maximizing its benefits without detracting from the core of your existing program.

An AI Decision Framework — Not a Formula

This way of thinking about research context can help determine how much to rely on AI tools across your UX research project.
Is your task flow realistic, with a single, clear golden path? Do you have the optimal wording and placement for any potential probes? Are there just a few KPIs with clear measurement parameters? These all sit nicely in the AI-owned, lower-left quadrant.
In contrast, are your designers exploring potential new spaces, perhaps incorporating AI itself into products that require new user language, mental models and orientation? Are norms and definitions around dimensions likely to need tweaking, discussion or even additions? Will probing be needed in unexpected places based on user behavior, or on human observation of non-behaviors or hesitation? This is squarely in the human-led, upper-right quadrant.
For the lower-right and upper-left quadrants, it may make sense to delegate responsibilities or assign a clear owner, depending on the nuances of real-time observation and judgment described in this article. In the lower-right quadrant, for example, a consumer task may be easily timed with planned, branched probes, which is ideal for AI. A complex B2B workflow, meanwhile, likely requires a moderator’s judgment on instruction comprehension, nuanced coding dimensions, time on task and when to intervene with a user.
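For readers who like to make the sorting explicit, here is a minimal sketch in Python of the quadrant logic described above. The axis names (need for real-time judgment, need for mid-study adaptation), the 1–5 ratings, the threshold and the function name are illustrative assumptions for this example, not part of Material’s framework.

```python
# Illustrative sketch only: axis names, ratings, threshold and labels are
# assumptions made for this example, not Material's actual framework.

def classify_quadrant(realtime_judgment: int, adaptation: int, threshold: int = 3) -> str:
    """Map 1-5 ratings on the two axes to a quadrant recommendation.

    realtime_judgment: how much live human judgment the study needs (1 = little, 5 = constant)
    adaptation: how much the plan is expected to change mid-fielding (1 = fixed recipe, 5 = improvised)
    """
    high_judgment = realtime_judgment > threshold
    high_adaptation = adaptation > threshold

    if not high_judgment and not high_adaptation:
        return "AI-owned (lower-left): fixed recipe, let AI lead"
    if high_judgment and high_adaptation:
        return "Human-led (upper-right): exploratory, keep a human tasting and adjusting"
    return "Shared (lower-right / upper-left): assign a clear owner per task"


# Example: a finished-prototype usability test with planned, branched probes
print(classify_quadrant(realtime_judgment=2, adaptation=1))

# Example: a complex B2B workflow needing live probing decisions
print(classify_quadrant(realtime_judgment=5, adaptation=4))
```

The point of the sketch is simply that both axes have to score low before AI leads outright; a high score on either one pushes the work toward shared or human-led ownership.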
In the end, this analogy and the overall framework are about zooming out to proactively assess project needs and letting objectives and outcomes drive AI decision making.
Want to learn more about how Material can help you assess and strategically integrate AI into your UXR program? Start the conversation today.