How to present qualitative research findings to executives who only trust numbers

How to make qualitative research findings land with sceptical executives: structuring findings, using frequency language, and building the credibility trail that numbers-first audiences need.


In short: To present qualitative findings to sceptical executives: lead with a quantified summary ("8 of 12 customers mentioned..."), show direct quotes as evidence, connect themes to business metrics, and organise findings from key conclusion to supporting detail. Skimle's export tools generate structured summaries with full quote-level traceability, giving decision-makers the audit trail they need to trust qualitative conclusions — without having to read the transcripts themselves.

Most qualitative researchers in business settings have experienced this: you spend three weeks conducting and analysing interviews, distil the findings into a clear, well-supported narrative, present it to leadership — and the first question is "but how many people said that?" or "can we quantify this?" It is frustrating. But the executives asking those questions are not being unreasonable. They are applying the evaluation criteria they use for every other decision input, and finding that qualitative research does not meet them.

The solution is not to defend qualitative methods in theory. It is to present qualitative findings in a way that meets the legitimate credibility requirements of a data-literate audience, without misrepresenting what the data can and cannot show.

Understanding the scepticism

Executive scepticism about qualitative data usually comes from one of three places:

Sample size anxiety. "You spoke to 15 people — how do you know that's representative?" This reflects a valid methodological concern applied incorrectly. Sample size logic from quantitative research (statistical power, margin of error) does not transfer to qualitative research. But the underlying concern — are these findings generalisable? — is legitimate and deserves a real answer.

Selection bias concern. "Were these customers you selected? Could they all be fans?" This is a reasonable question. If your sampling process was not rigorous, the concern is valid. If it was, you need to explain it.

Anecdote pattern. Executives have been burned by qualitative findings that turned out to be unrepresentative — one memorable customer quote that got elevated to a strategic insight and proved misleading. Their scepticism is learned behaviour.

None of these concerns requires you to concede that qualitative research is unreliable. They require you to start from a defensible sample, and then to address the legitimate version of each concern directly, using the evidence you have.

Structure findings to lead with conclusions, not process

The most common presentational mistake in qualitative research is leading with methodology rather than findings. A 20-minute presentation that spends the first 8 minutes explaining how you recruited participants, what the interview guide covered, and how you conducted thematic analysis loses the audience before the findings begin.

The structure that works:

  1. The finding — a clear, direct statement of what the research shows (30 seconds)
  2. The evidence — frequency data, representative quotes, and the pattern across participants (2–3 minutes)
  3. What it means — implications for the decision at hand (1 minute)
  4. The methodology — in an appendix, available for those who want it, but not in the flow

This is the reverse of how most research reports are written (methodology first, findings last). In a presentation context, the audience needs the conclusion before they can evaluate whether the evidence matters to them.

Use frequency language deliberately

"Many customers mentioned X" is less credible than "9 of 15 customers we spoke to mentioned X — including customers across segments, company sizes, and tenure levels."

Frequency language gives sceptical audiences something concrete to evaluate. It also forces analytical rigour on you: if you cannot say approximately how many participants reflected a given theme, you probably do not know whether it is a pattern or an outlier.

Some conventions:

  • Use absolute numbers when the sample is small ("5 of 12" rather than "42%")
  • Use percentages when the sample is larger (25+ participants) and the precision is meaningful
  • Add context to frequency claims: "including 4 of 5 enterprise customers" or "across all three geographic markets"
  • Be honest about exceptions: "11 of 15 mentioned this — the four who did not were all in the first year of their contract"

Skimle's thematic analysis surfaces frequency information as part of the analysis output — how many documents each theme appears in, and the weight of different themes relative to each other. This makes it straightforward to add frequency claims to findings that are grounded in the actual data.

Use direct quotes strategically

A well-chosen direct quote does more work than any frequency claim. It makes an abstract theme concrete, gives the executive the feeling of hearing directly from a customer rather than through a researcher's summary, and is far more memorable than a data point.

Effective use of quotes:

  • Lead with the finding, use the quote as evidence. "Customers do not trust the accuracy of the automated reports. One customer put it directly: 'I stopped looking at the dashboard because I found an error once and now I assume there are more.'"
  • Choose quotes that are specific, not generic. "The product is hard to use" is a weak quote. "I spend the first 20 minutes of every session trying to remember where things are" is a strong one.
  • Use 2–3 quotes per key finding, not one (a single quote can always be dismissed as an outlier).
  • Anonymise appropriately — preserve enough context for the quote to be credible (role, company size, tenure) while protecting individual identity.

Connect qualitative findings to quantitative data

The most persuasive qualitative findings are ones that help explain something in the quantitative data. "We can see from our NPS data that scores drop in month 3 — the interview data tells us why: customers hit a complexity wall when they try to set up their first automated workflow."

This framing positions qualitative research not as an alternative to quantitative data but as the explanation layer beneath it. Every quantitative finding has a "why" that only qualitative data can answer. Framing your research as providing that explanation increases its perceived value enormously.

The NPS verbatim analysis guide covers the specific case of connecting open-text data to NPS scores. The same principle applies to engagement survey data, churn metrics, product analytics, and sales win rates.

Address the generalisability question directly

When the question "how many people said that?" actually means "is this representative?", give a real answer:

"Qualitative research is designed to understand why and how, not to estimate the proportion of the population who feel this way. What we can say is that this pattern appeared consistently across participants with different profiles, roles, and contexts — which is good evidence that it reflects a real phenomenon rather than one person's idiosyncratic experience. To understand the prevalence across all customers, we would need to run a quantitative survey — and the findings here give us exactly the hypotheses that survey should test."

This response validates the concern, explains what the research can and cannot show, and positions a follow-on quantitative study as the natural next step — rather than defending qualitative methods in the abstract.

Build the audit trail

Executives who distrust qualitative findings often want to know: could I verify this if I wanted to? The answer should be yes.

Skimle's analysis output maintains full traceability — every finding links to the quotes that support it, and every quote links to the source document. When you export findings, the supporting evidence travels with them. This means that if an executive wants to verify a specific claim, they can see exactly which participants said what.

This kind of traceability is what distinguishes structured qualitative analysis from impressionistic summarisation. It is also what allows findings to survive the challenge of "that sounds like researcher bias to me" — because the evidence is auditable.
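To make the idea of an evidence trail concrete, here is a minimal sketch of the finding-to-quote-to-source linkage. The structure and field names are hypothetical, chosen for this example; they are not Skimle's actual export format.

```python
# Sketch of an evidence trail: every finding links to quotes,
# and every quote links back to its source document.
from dataclasses import dataclass, field


@dataclass
class Quote:
    text: str
    participant: str       # anonymised context (role, tenure), never a name
    source_document: str   # transcript the quote came from


@dataclass
class Finding:
    statement: str
    quotes: list[Quote] = field(default_factory=list)

    def audit_trail(self) -> list[str]:
        """One verifiable line per supporting quote."""
        return [f'{q.participant} ({q.source_document}): "{q.text}"'
                for q in self.quotes]


finding = Finding(
    statement="Customers do not trust the accuracy of the automated reports.",
    quotes=[
        Quote("I stopped looking at the dashboard because I found an error once.",
              "Enterprise admin, year 2", "interview_07.docx"),
        Quote("I re-check the numbers in a spreadsheet before sharing them.",
              "SMB owner, year 1", "interview_03.docx"),
    ],
)

for line in finding.audit_trail():
    print(line)
```

Whatever tool produces it, the property that matters is the one this sketch encodes: an executive can pick any claim and walk it back to a named source document.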

See the export workflows guide for how to produce structured export formats from Skimle that carry the evidence trail into PowerPoint, Word, or Excel.

What to do when executives still push back

Sometimes the pushback is not really about method — it is about the finding. An executive who is uncomfortable with a conclusion about their team's effectiveness may raise methodological objections as a way to avoid engaging with the content.

In those situations, no amount of methodological rigour will resolve the conversation. The question to ask is: "What evidence would change your view?" If the answer is "nothing, because I know this is not an issue," you are in a political conversation, not a research conversation. Some consulting firms, such as Noren in Finland, go further to ensure executives engage with qualitative data however uncomfortable it is: one method is having client leaders listen to customer interviews first-hand, so there is no researcher filtering left to challenge.

For the cases where the pushback is genuine, the most effective move is to offer to run a follow-on quantitative study to test the qualitative hypothesis. This meets the executive where they are and produces data that will either confirm or challenge the qualitative finding.

Want to produce qualitative findings with the evidence trail executives need? Try Skimle for free and generate structured, traceable analysis from your interview data.

About the authors

Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published over a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. Google Scholar profile

Olli Salo is a former Partner at McKinsey & Company where he spent 18 years helping clients understand their markets, develop winning strategies and improve their operating models. He has done over 1000 client interviews and published over 10 articles on McKinsey.com and beyond. LinkedIn profile
