7 Signs Your Insights Team Has Outgrown Surveys — and Needs Multimodal Research
Surveys have been the backbone of insights work for decades—but today’s decisions demand more than percentages and pie charts. When your stakeholders want context, when creative is on the line, and when speed matters as much as accuracy, a survey-only approach can leave you guessing at the “why” behind the “what.” This post unpacks seven clear signs your team has hit that point—and shows how multimodal research (modern qualitative blended with targeted quant) can give you richer context, faster alignment, and deliverables that stick.

Florian Hendrickx
Chief Growth Officer

What We Mean by “Multimodal Research” (a.k.a. Modern Qual)
Multimodal research blends short, focused quant with rich qualitative inputs—video, audio, images, screen recordings, diary tasks, and contextual artifacts—to capture both what people do and why they do it. Instead of long static questionnaires, you give participants lightweight, human tasks (react to a concept, narrate a choice, show us your routine, record a quick story) and analyze the resulting words, tone, gestures, and behaviors alongside structured responses.
Think of it less as “a giant survey with a free‑text box” and more as “a set of small windows into real moments,” stitched together with thematic analysis and crisp storytelling. The payoff: sharper decisions, faster alignment, and deliverables your stakeholders actually remember.
With that in mind, here are the seven signs it’s time to expand beyond surveys—and what to do next.
1) You’re getting the what, but not the why
You know which concept won. You can segment the win by age, income, and DMA. But you can’t answer the inevitable follow‑ups: What triggered that preference? What confused people? What did they assume we’d do next?
Why surveys stall here: Traditional questions flatten nuance. Even well‑designed max‑diff or conjoint work encodes choice but not the causal story behind it. Open‑ends collect fragments, not context.
What to do instead:
Pair forced choice with narrated choice. Ask participants to pick, then record a 30–60 second explanation: “Talk me through how you decided.” You’ll capture motivations, trade‑offs, and language customers actually use.
Capture first‑read reactions. Use quick video or audio prompts: “Hit record the moment you see this headline.” Immediate reactions surface gut‑level resonance and confusion before rationalization sets in.
Probe expectations and assumptions. A simple “What did you expect would happen next?” reveals mental models you can design to—or break.
How to use it: In your readout, marry the stat with a story. “Concept B wins overall (+12 pts), primarily because it promises control. Listen to how people describe that feeling…” (insert a 10‑second clip or quote). This combo answers the what and the why in one breath.
2) Response rates are dropping
Panel costs are up, response rates are down, and completion quality is… not great. People are tired. Long, repetitive grids on a phone aren’t fun, and respondents aren’t shy about bailing.
Why surveys stall here: Cognitive load + low perceived value. If your study feels like work and doesn’t treat people like humans, they’ll give it the minimum viable effort—or none at all.
What to do instead:
Design for humans, not instruments. Short prompts, conversational tone, and tasks that feel purposeful (“Show us how you actually…”) beat 10‑page batteries.
Use multimodal tasks to increase engagement. A 30‑second voice note or quick screen recording is often easier (and more enjoyable) than typing 100 words on a phone.
Right‑size your sample. You don’t need 1,000 people for every question. Ten to twenty thoughtfully chosen participants can clarify direction faster than an under‑powered, over‑engineered survey.
How to use it: Track task completion quality (not just response rate). Are people giving you substantive stories? Are artifacts (photos, clips) on-topic? Those are leading indicators of insight quality—and they tend to improve with multimodal design.
3) Your open‑ends aren’t that open
You left a text box. You asked for details. You got… twelve words and a shrug emoji. On mobile, typing is effortful; without context, people don’t know what “good” looks like.
Why surveys stall here: Open‑ends are context‑free and feedback‑free. Participants aren’t writers. And even when you get long text, you lose tone, pacing, and nonverbal cues that carry meaning.
What to do instead:
Switch the default from typing to talking. Voice notes yield more words per minute and richer emotion. You’ll hear certainty, hesitations, and the words they emphasize.
Ask “show me” questions. “Show us the shelf where you keep it.” “Walk me through checkout and narrate what you’re thinking.” Screenshots, photos, and screen‑shares reveal friction you won’t hear in text.
Use stacked prompts. Instead of “Any other thoughts?”, chain two specifics: “What did you like most?” → “Where did you hesitate?” This structure doubles the depth without doubling time.
How to use it: In analysis, code for drivers and barriers—then illustrate each with one short clip or quote. Stakeholders remember patterns anchored by a human moment.
4) You’re making big bets with little context
Creative territories, brand platforms, new‑product concepts—these are high‑leverage calls. “Which concept won?” is useful, but insufficient. You need to understand resonance mechanisms so your team knows how to optimize—and what to protect.
Why surveys stall here: Binary or scalar signals compress complex reactions into a rank order. You get the vote, but not the playbook.
What to do instead:
Deconstruct resonance. Ask participants to tag the moment that landed (a phrase, an image, a promise) and narrate why it mattered. You’ll surface the transferable elements worth carrying forward.
Map confusion. Have people circle or timestamp where they got lost and explain what they expected. This doesn’t just tell you what to fix; it tells you how to fix it.
Collect language you can ship. Participants’ words beat marketing copy nine times out of ten. Fish for sticky phrases and metaphors—then test them in creative.
How to use it: Build a “Protect & Improve” slide for each finalist idea:
Protect: the 2–3 elements people loved (with verbatim clips).
Improve: the moments that tripped them (with repair ideas drawn from participant suggestions).
Your creative partners will thank you.
5) It takes weeks to go from data to decision
Fieldwork ends Friday. The deck is due… not Friday. You’re wrangling exports, cleaning text, hand‑tagging themes, and stitching quotes into slides while the window to influence the decision is closing.
Why surveys stall here: Tool sprawl and rigid workflows. The analysis unit is the “study,” the deliverable is the “deck,” and everything is batch‑processed.
What to do instead:
Work in “insight sprints.” Treat your project like a short agile loop: Day 1 scoping → Days 2–4 field → Day 5 working session → Day 7 decision. That tempo forces focus and keeps teams aligned on action, not artifacts.
Analyze while you field. Start tagging themes and clipping moments on day one. Early patterns guide late‑stage probes, improving both depth and efficiency.
Deliver narratives, not just numbers. A 2‑page memo with five clips can often replace a 30‑slide pack—and travel further through your organization.
How to use it: Set and track a metric called Decision Velocity: days from field start to documented decision. Redesign your process to cut that number in half.
6) Stakeholders want stories, not stats
You know numbers matter. Your CFO knows numbers matter. But when creative, brand, or go‑to‑market leaders gather, a single quote can tilt the room more than a 3‑point lift. Humans make sense of the world through narrative.
Why surveys stall here: Aggregates don’t stick in memory. Without a human moment to anchor the meaning, your beautiful chart is another chart.
What to do instead:
Lead with a scene. Start your readout with one vivid moment—a customer’s reaction, a live demo frustration, an in‑the‑wild usage photo—then generalize. Story → pattern → implication is more persuasive than the reverse.
Assemble a “highlight reel.” Five 10–20 second clips, each labeled with the theme it illustrates, can do more to align cross‑functional teams than pages of bullets.
Show the jobs, not just the segments. Organize insights around the progress people are trying to make (jobs‑to‑be‑done), illustrated by real quotes. It’s a more natural bridge from research to roadmap.
How to use it: Design every deliverable to be “forwardable” in 5 minutes or less—a shareable narrative email, a one‑pager, or a 90‑second clip reel that a VP can watch between meetings.
7) Your team is stuck cutting quotes from a PDF
If your evenings are spent copy/pasting, formatting subtitles, and hunting timestamps, your process isn’t just slow—it’s brittle. Manual steps introduce errors, drain energy, and push insight work toward the least creative part of the job.
Why surveys stall here: Traditional tools weren’t built for multimedia insight. They treat quotes as decoration, not data.
What to do instead:
Standardize your “insight objects.” Treat every clip, quote, and artifact as a first‑class citizen: time‑stamped, tagged, attributable, and stored where it’s reusable.
Build a living evidence library. Curate themes with a couple of anchored moments each. The next time a similar question surfaces, you’re minutes—not days—from a compelling answer.
Automate the rote, keep the judgment. Use transcription and clustering to get to first draft, then apply human taste to sharpen the story. Machines can find patterns; only you can decide which patterns matter.
How to use it: Make “reuse rate” a KPI. How many assets from this study show up in future decks? If the answer is “almost none,” you’re rebuilding the same wheel every time.
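To make the idea of “insight objects” and a reuse‑rate KPI concrete, here is a minimal sketch in Python. Everything in it (the `InsightAsset` class, the field names, the tag strings) is a hypothetical illustration of the pattern described above—time‑stamped, tagged, attributable, reusable—not a prescribed schema or a real tool’s API.

```python
from dataclasses import dataclass, field

@dataclass
class InsightAsset:
    """One clip, quote, or artifact treated as first-class data (hypothetical model)."""
    asset_id: str
    participant: str                              # attributable
    start_sec: float                              # time-stamped
    end_sec: float
    tags: set = field(default_factory=set)        # e.g. {"barrier: complexity"}
    used_in: set = field(default_factory=set)     # decks/memos that later embedded it

def reuse_rate(assets):
    """Share of assets that show up in at least one later deliverable."""
    if not assets:
        return 0.0
    reused = sum(1 for a in assets if a.used_in)
    return reused / len(assets)

# Example: four tagged clips from a study; only one was reused downstream.
library = [
    InsightAsset("c1", "p01", 12.0, 24.0, {"driver: control"}, {"q3_brand_deck"}),
    InsightAsset("c2", "p02", 5.0, 15.0, {"barrier: complexity"}),
    InsightAsset("c3", "p03", 0.0, 9.0, {"moment: checkout"}),
    InsightAsset("c4", "p04", 30.0, 42.0),
]
print(reuse_rate(library))  # 0.25 — a reuse rate near zero means you're rebuilding the wheel
```

The point of the sketch is the shape, not the code: once every asset carries timestamps, tags, and an attribution, “how often does our evidence get reused?” becomes a one‑line query instead of a guess.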
When Surveys Still Shine (and how they pair with modern qual)
This isn’t a takedown of surveys. They’re excellent at what they’re designed to do: measure incidence, size an opportunity, validate direction, quantify lift, and monitor change over time. The shift isn’t away from surveys; it’s toward a better division of labor.
Use survey‑heavy designs when you need:
Incidence and market sizing
Clean measurement of lift or preference
Tracking over time (brand, NPS, satisfaction)
Representative slices across many subgroups
Use multimodal designs when you need:
To find the language that sells
To de‑risk creative and product before you scale
To understand context, sequence, and “moments that matter”
To carry a room with evidence that feels real
A simple pairing pattern:
Explore with multimodal. Identify drivers, barriers, and language; collect clips/quotes that humanize each theme.
Quantify with a lean survey. Measure prevalence and prioritize action. Use terms people actually said.
Sell with a hybrid readout. Lead with a story; show the number; close with a recommendation and a clip that makes it stick.
A 30‑Day Pilot: How to Try Multimodal Without Breaking Your Process
You don’t need a massive reorg to get started. Run a tightly scoped pilot that proves speed, depth, and persuasion in one shot.
Week 1 — Frame the decision
Pick one high‑leverage question where nuance matters: e.g., choosing a headline/visual territory, defining a feature’s value proposition, or diagnosing funnel drop‑off.
Define success criteria with stakeholders: “We’ll make a call on X and capture 3 ‘protect’ elements + 3 ‘fix’ items we all agree on.”
Draft a lean guide (no more than 6 tasks): first impression, why/why‑not, show me, expectation vs. reality, language harvest, final advice.
Week 2 — Field (n=12–24)
Recruit intentionally. Balance extremes (super‑fans + skeptics) and include a couple of “near misses” (people who considered you but chose a competitor).
Design human‑sized tasks. 3–7 minutes total per participant; default to voice/video.
Analyze as you go. Create early buckets (drivers, barriers, language, moments) and start clipping exemplars on day one.
Week 3 — Synthesize
Build a theme × evidence grid. Each theme gets a one‑line headline, a 2–3 sentence summary, and 1–2 clips/quotes.
Draft recommendations while fresh. For each theme: “Protect” or “Improve,” with specific next steps tied to creative/product.
Pressure test with the team. Share the grid and 5‑clip highlight reel in a working session. Iterate once.
Week 4 — Decide and document
Run a 45‑minute decision meeting. Open with the reel; walk the grid; lock recommendations in the room.
Send a forwardable package. Two‑page memo + highlight reel + appendix with raw assets for deep dives.
Capture metrics. Time from field start to decision; stakeholder confidence; reuse of assets in related work.
What Good Looks Like: Outcomes to Track
Modern qual isn’t just “richer stories.” It should change the trajectory of decisions and the tempo of your team. Track these:
Decision Velocity: Days from project kickoff to decision. Target: cut baseline by 30–50%.
Confidence Lift: Stakeholders rate confidence pre/post readout. Target: +20 points.
Quote/Clip Adoption: Number of non‑research decks that embed your evidence within 30 days. Target: 5+.
Rework Avoided: Creative or product iterations reduced due to early clarity. Track qualitatively at first.
Participant Effort vs. Depth: Average time on task vs. richness of output. High depth at low effort is your signal the design is working.
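Two of the metrics above reduce to simple arithmetic you can automate. The sketch below shows one possible way to compute Decision Velocity and Confidence Lift; the function names and inputs are illustrative assumptions, not a standard definition.

```python
from datetime import date

def decision_velocity(field_start: date, decision_date: date) -> int:
    """Days from field start to documented decision (lower is better)."""
    return (decision_date - field_start).days

def confidence_lift(pre_scores, post_scores):
    """Average stakeholder confidence after the readout minus before it."""
    return sum(post_scores) / len(post_scores) - sum(pre_scores) / len(pre_scores)

# Example: field opened May 1, decision documented May 8 → 7 days.
print(decision_velocity(date(2024, 5, 1), date(2024, 5, 8)))   # 7
# Stakeholders rated confidence 50 and 60 pre-readout, 75 and 85 post → +25 points.
print(confidence_lift([50, 60], [75, 85]))                     # 25.0
```

Even a spreadsheet version of these two numbers, tracked per project, is enough to show whether the multimodal mix is actually changing tempo and conviction.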
Common Objections (and how to answer them)
“Our execs trust numbers.”
Great—bring numbers! Use a lean quant pulse to size the patterns you see. Lead with a clip to humanize the pattern; follow with the stat to validate it.
“We don’t have capacity.”
A 12‑person multimodal sprint with a tightly scoped guide can run in a week and replace weeks of conjecture and do‑overs. Automate the rote parts and keep the judgment.
“People won’t record video.”
Some won’t, many will—especially if tasks are short, mobile‑friendly, and purposeful. Offer voice as an alternative. You’ll still get tone and pacing.
“Legal/privacy will block this.”
Design with consent and data minimization from the start. Be explicit about what’s recorded, how it’s stored, who can access it, and for how long. Blur faces or redact identifiers for broad sharing when needed.
“This sounds expensive.”
It’s not about more money—it’s about better mix. Trade one bloated, under‑informative survey for one targeted multimodal sprint plus a lean quant follow‑up. The cycle time you save usually covers the delta.
Practical Design Tips You Can Use Tomorrow
Default to first reactions. Ask participants to respond before they think too hard. Then probe the why.
Harvest usable language. End every task with: “If you had to explain this to a friend in one sentence, what would you say?”
Anchor every theme with evidence. If you can’t back a theme with a clip/quote, it isn’t a theme yet.
Design for reuse. Store assets with tags (e.g., value: control, barrier: complexity, moment: checkout). Future you will be grateful.
Keep sessions short. Under 7 minutes total task time increases completion and quality.
Write the readout first. Draft your “Protect & Improve” slide titles before you field. Let that force ruthless focus in your guide.
A Note on Ethics and Accessibility
Richer inputs don’t excuse sloppy ethics. Get explicit consent for recording, clarify how assets may be used internally, minimize personal data, and offer alternatives for participants who are uncomfortable on camera. Make tasks accessible: captions for video prompts, clear instructions, and mobile‑first design. Treat participants like collaborators, not instruments.
Bringing It Home
If you recognize yourself in these seven signs—if you’re drowning in partial answers, chasing engagement, or cutting quotes at midnight—it’s time to broaden your toolkit. Surveys aren’t dead. They’re just one instrument in a bigger band. Multimodal research helps you hear the melody and the lyrics: the numbers that guide direction, and the human stories that drive conviction.
Start small. Pick one decision that matters this month. Run a focused multimodal sprint. Pair it with a lean quant pulse if you need scale. Deliver a narrative that opens with a moment, connects to a pattern, and closes with a concrete recommendation. Measure your decision velocity. Watch what happens to confidence in the room.
And then do it again.