How to synthesise user research: turning 20 interviews into a clear story

A practical guide to user research synthesis: how to move from raw interview transcripts to structured findings and a stakeholder-ready narrative.


To synthesise user research from interviews, follow five steps: (1) organise your raw data into a consistent format, (2) code transcripts for recurring themes, (3) cluster codes into findings, (4) write insight statements that make each finding actionable, and (5) build a narrative that connects findings to the question you set out to answer. Tools like Skimle handle steps 2 and 3 automatically, so researchers can spend their time on interpretation and storytelling rather than manual tagging.

Synthesis is the step that most research guides skip over. You will find plenty of advice on writing good interview questions or recruiting the right participants — but the hard part, the step where 400 pages of transcripts become a clear and defensible account of what users think, is often left as an exercise for the reader. This guide fills that gap.

Why synthesis is hard

If you have ever stared at a folder of interview transcripts wondering where to start, you are not alone. The problem is not a lack of method — most researchers know what thematic analysis is. The problem is the cognitive load of holding 20 conversations in your head at once while trying to see what they add up to.

Manual synthesis involves reading every transcript, marking interesting passages, transferring notes to sticky notes or a spreadsheet, grouping similar notes, naming the groups, and then writing up what those groups mean. For 10 interviews this takes a day. For 30 interviews, it can take a week — and by the time you finish coding the last transcript, you have half-forgotten the first.

The result is that synthesis is often under-resourced. Teams cut corners, trust their intuition about what they heard, or delegate the write-up to someone who did not do the interviews. None of these produces findings that stakeholders can rely on. I hate to admit it, but when I was still working in consulting we often needed to produce reports for the next day's meeting, which meant the underlying synthesis relied more on expert opinion peppered with quotes than on an actual synthesis of what the evidence was telling us.

Step 1: Organise your raw data

Before you start analysis, get your data into a consistent shape. This means:

  • Transcripts in a readable format. If you recorded interviews, use a transcription tool to convert audio to text. Review the transcript for obvious errors — automated transcription is usually 90–95% accurate, but names, technical terms, and accents produce errors that matter. Read our guide for a practical end-to-end setup for recording and transcribing interviews.
  • One document per interview. Keep each interview as a separate file. This lets you track which participant said what throughout the analysis — critical for avoiding the mistake of treating one outlier's view as a general finding.
  • Basic metadata noted. Record who each participant was: their role, the date, and any other relevant attributes (team size, industry, tenure, whatever is relevant to your research question). This becomes important when you want to slice findings by segment.

If your data is already in a tool like Dovetail or Notion, export it to individual documents before importing for analysis. Mixed-format data is harder to process consistently.
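If you are comfortable with a little scripting, the file-and-metadata housekeeping can be automated. The sketch below is purely illustrative: it assumes plain-text transcripts sitting in a single folder, and the file naming scheme and attribute columns are placeholders you would adapt to your own research question.

```python
# Minimal sketch: one row of metadata per interview file.
# Folder name, file naming scheme, and attribute columns are illustrative.
import csv
from pathlib import Path

TRANSCRIPT_DIR = Path("interviews")       # one .txt file per interview
METADATA_FILE = Path("participants.csv")  # role, date, segment, etc.

rows = []
for path in sorted(TRANSCRIPT_DIR.glob("*.txt")):
    rows.append({
        "participant_id": path.stem,      # e.g. "P07_product-manager"
        "file": str(path),
        "role": "",                       # fill in from your recruitment notes
        "interview_date": "",
        "segment": "",
    })

if rows:
    with METADATA_FILE.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```

Whatever tool you use, the end state is the same: one transcript per participant and one row of metadata per transcript.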

Step 2: Code for themes

Coding means reading through your data and labelling segments of text with the theme or topic they represent. A code might be as simple as "onboarding friction" or as specific as "confusion about billing on mobile."

There are two broad approaches:

Inductive coding — you let the themes emerge from what participants actually said, without imposing a framework in advance. This is the right approach for exploratory research where you genuinely do not know what you will find.

Deductive coding — you bring a pre-existing framework (perhaps a set of hypotheses, a customer journey map, or a product area taxonomy) and code every passage against it. This is faster and more consistent, but risks missing things that fall outside your framework.

Most real research projects combine both. You start with a rough framework of what you expect to find, then let the data surprise you with things that do not fit.

For a full treatment of the coding decision, see how to code qualitative data.

In practice: For 10 interviews, manual coding in a spreadsheet is feasible. For 20 or more, the volume makes it worth using a tool. Skimle's automatic thematic analysis processes all your transcripts in one step and produces a category hierarchy with supporting quotes — giving you a starting structure you can then refine, rather than starting from a blank page. The inductive analysis mode lets you build the structure yourself with AI assistance at each step.
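Whether you code in a spreadsheet or a dedicated tool, the underlying record is the same: a quote, the participant it came from, and the code you attached to it. A minimal sketch of that record, with made-up codes and quotes for illustration:

```python
# Minimal sketch of a coded-segment record; codes and quotes are made up.
from dataclasses import dataclass, asdict
import csv

@dataclass
class CodedSegment:
    participant_id: str
    quote: str
    code: str            # e.g. "onboarding friction"
    notes: str = ""

segments = [
    CodedSegment("P03", "I couldn't find the billing page on my phone.",
                 "confusion about billing on mobile"),
    CodedSegment("P11", "Setup took me a whole afternoon.",
                 "onboarding friction"),
    CodedSegment("P14", "I gave up and emailed support on day one.",
                 "difficulty finding help content"),
]

with open("coded_segments.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["participant_id", "quote", "code", "notes"])
    writer.writeheader()
    writer.writerows(asdict(s) for s in segments)
```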

Step 3: Cluster codes into findings

Once you have coded your data, you will have a list of codes — possibly a long one. The next step is to identify which codes belong together and what higher-level finding they represent.

This is the step that most resembles affinity mapping. You are looking for patterns across codes: codes that keep appearing together, codes that seem to describe the same underlying phenomenon from different angles, codes that are actually subcategories of a broader theme.

Ask yourself: if you had to explain this cluster of codes to someone in one sentence, what would you say? That sentence is your finding.

A finding is different from a code. A code describes what you observed ("onboarding confusion", "difficulty finding help content"). A finding interprets what it means ("new users cannot self-serve through onboarding without support, which creates a bottleneck in week one").

Good findings are:

  • Specific — they describe a particular pattern, not a vague impression
  • Supported — they are backed by quotes from multiple participants, not just one
  • Actionable — they point towards a decision, even if they do not make it
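If you keep coded segments in a structured format, you can also sanity-check the "supported" criterion mechanically. The sketch below assumes the `segments` list from the coding sketch above and a hand-written mapping from codes to candidate findings; both the mapping and the finding labels are illustrative.

```python
# Minimal sketch: group coded segments into candidate findings and flag
# any finding supported by only one participant.
from collections import defaultdict

code_to_finding = {
    "onboarding friction": "New users cannot self-serve through onboarding",
    "difficulty finding help content": "New users cannot self-serve through onboarding",
    "confusion about billing on mobile": "Billing is hard to manage on mobile",
}

participants_by_finding = defaultdict(set)
for seg in segments:                      # `segments` from the coding sketch
    finding = code_to_finding.get(seg.code)
    if finding:
        participants_by_finding[finding].add(seg.participant_id)

for finding, participants in sorted(participants_by_finding.items()):
    note = "" if len(participants) >= 2 else "  <- single participant, treat with caution"
    print(f"{finding}: {len(participants)} participant(s){note}")
```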

Step 4: Write insight statements

An insight statement is a concise, quotable expression of a finding that can stand alone in a presentation or document. It typically follows this structure:

[Who] [does/thinks/feels] [what], because [reason], which means [implication].

For example: "Junior PMs struggle to distinguish discovery from delivery research because the two look similar on the surface, which means they often conduct the wrong type of study for the decision they are trying to make."

Writing good insight statements forces you to be precise. If you cannot complete the "which means" clause, the finding may not yet be actionable — keep refining.

You do not need more than 5–8 top-level insights from most research projects. If you have 20 findings, you have themes, not insights. Group them further.
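If you track insights in a spreadsheet or script rather than prose, the same template can be enforced with a trivial helper; the function below is nothing more than the sentence structure above made explicit, with the example insight as its arguments.

```python
# Minimal sketch: the insight-statement template, made explicit.
def insight_statement(who: str, what: str, reason: str, implication: str) -> str:
    return f"{who} {what}, because {reason}, which means {implication}."

print(insight_statement(
    "Junior PMs",
    "struggle to distinguish discovery from delivery research",
    "the two look similar on the surface",
    "they often conduct the wrong type of study for the decision they are trying to make",
))
```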

Step 5: Build the narrative

A synthesis document is not a list of findings. It is a story about what you learned, structured so that a reader who did not do the research can understand what matters and why.

The structure that works best for most stakeholder audiences:

  1. The question you set out to answer — one sentence
  2. Who you spoke to — participant count, roles, methodology
  3. The three to five top-level findings — each with supporting evidence
  4. What it means — implications for the decision at hand
  5. What you did not answer — be honest about limitations and open questions

Keep the main document short — four to six slides or one page. Appendices can carry the supporting quotes, full participant list, and methodology detail for anyone who wants to dig deeper.

For advice on presenting this to audiences who are sceptical of qualitative methods, see how to present qualitative research findings to executives.

Common synthesis mistakes

Treating frequency as importance. The fact that 14 of 20 participants mentioned a feature does not automatically make it the most important finding. One participant might have identified a critical failure mode that the others missed or normalised. Frequency matters, but it is not the whole story.

Losing the individual. When you aggregate findings, you risk losing the people behind them. Good synthesis keeps individual quotes visible — not just as decoration but as evidence. If a finding cannot be supported by two or three specific quotes, it is probably a generalisation, not a finding.

Synthesis by committee. Involving too many people in the synthesis step produces mush. One or two people who did most of the interviews should own the synthesis. Others can review and challenge it, but the primary interpretation should come from the people closest to the data.

Reporting what was said, not what it means. A synthesis that is just a list of paraphrased responses ("participants said they wanted better notifications") is not synthesis — it is a summary. Synthesis means interpretation: why do they want better notifications, what does that reveal about their mental model, and what should we do about it?

Making synthesis faster without making it worse

The research community's main concern about AI-assisted synthesis is that it produces generic outputs that miss the nuance. That concern is valid for tools that treat qualitative analysis as a search problem (find relevant passages and summarise them). It is less valid for tools that apply a structured thematic analysis methodology.

Skimle processes your transcripts through a structured analysis pipeline that builds a category hierarchy, links every insight to its source quote, and maintains the traceability that makes findings defensible. The researcher's job becomes reviewing the structure, refining the categories, and writing the narrative — rather than the mechanical work of reading and tagging every line.

For a practical walkthrough of the analysis workflow, see how to analyse interview transcripts; for segmenting findings by participant attributes, see discovering themes using metadata variables.

Ready to try it on your next research project? Start with Skimle for free and run a full synthesis on your interview transcripts in minutes.

About the authors

Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published over a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. Google Scholar profile

Olli Salo is a former Partner at McKinsey & Company, where he spent 18 years helping clients understand their markets, develop winning strategies and improve their operating models. He has conducted over 1,000 client interviews and published more than 10 articles on McKinsey.com and elsewhere. LinkedIn profile

