To analyse customer interviews effectively: transcribe and clean your raw recordings, then read through all transcripts before touching a single code. Develop a categorisation scheme — around 10 to 15 broad themes — then code systematically across every interview. Look for patterns across codes, validate your findings against contradictory evidence, and finally synthesise into a concise set of insight statements you can present to stakeholders. Tools like Skimle handle the mechanical coding step automatically, processing your full set of transcripts and organising every passage by theme, so you can spend your time on the interpretation that actually requires human judgement.
Customer interviews are one of the richest sources of market intelligence available to researchers and analysts. They capture nuance, reveal motivation, and surface the unexpected. But that richness is also what makes them hard to analyse. Unlike survey data, where you can run a frequency table and call it a day, interview data demands real interpretive work. This guide walks through that work step by step, covering the common pitfalls and practical shortcuts that matter most for market researchers and strategy professionals.
Why customer interviews are different from other research
Survey data is structured by design. You asked a specific question and got a specific answer. Customer interviews are different. You followed a guide, but the conversation went where it went. One respondent spent fifteen minutes on a topic you considered minor. Another deflected every question about pricing and volunteered a story about a competitor you had never heard of. A third told you one thing and then contradicted it twenty minutes later.
This unstructured quality is a feature, not a bug — it is precisely how you discover what you did not know to ask about. But it creates real challenges for analysis. You cannot simply count responses the way you would with a Likert scale. You have to read, interpret, and make judgements.
Customer interviews are also often partial. Respondents rarely share everything they know. They self-censor on sensitive topics (pricing, competitor preference, internal politics), they forget context that would be useful, and they sometimes give you the polite version of events rather than the real one. Good analysis accounts for what was not said, not only what was.
Finally, customer interviews are discovery-oriented. They are most valuable when you approach them with genuine curiosity rather than a fixed hypothesis. That orientation, however, makes the analysis harder: you have to remain open to themes you did not expect, rather than simply tallying evidence for or against a position you already hold.
For a discussion of when interviews are the right method versus group discussions, see focus groups vs individual interviews.
Step-by-step: how to analyse customer interviews
Step 1: Transcribe and prepare your data
Analysis starts before you read a word of content. Get your recordings transcribed — automated tools are accurate enough for most purposes, though names, technical terms, and strong accents will produce errors you need to catch. Review each transcript against the audio for anything that looks wrong, and add speaker labels if the transcription tool has not done so. Skimle includes built-in transcription, so you can go straight from audio file to coded analysis without switching tools — see how Skimle's transcription works for details on accuracy and supported formats.
Then standardise the format. Each interview should live in a separate document. Add a brief header recording the respondent's role, company size, segment, and interview date. This metadata is invisible during transcription but invaluable during synthesis, when you want to know whether a particular concern is concentrated among small businesses or enterprise accounts. See discovering themes using metadata variables for how to make the most of this in practice.
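If you manage transcripts as plain text files, one lightweight convention is to put the metadata in a short header block at the top of each file and parse it when you want to slice findings by segment. The sketch below is illustrative only; the field names and the "key: value" header layout are assumptions for this example, not a required format.

```python
from pathlib import Path

def read_transcript(path: Path) -> dict:
    """Parse a transcript file with a simple 'key: value' header block.

    Assumed layout (illustrative, not a required format):
        role: Head of Operations
        company_size: 50-200
        segment: mid-market
        interview_date: 2024-03-12
        ---
        <full transcript text>
    """
    header, _, body = path.read_text(encoding="utf-8").partition("\n---\n")
    metadata = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")
        if value:
            metadata[key.strip()] = value.strip()
    return {"metadata": metadata, "text": body.strip(), "file": path.name}

# Load every interview in a folder so later steps can filter by segment.
interviews = [read_transcript(p) for p in sorted(Path("transcripts").glob("*.txt"))]
```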
If you have notes rather than full transcripts, convert them into prose. Bullet points are hard to code consistently — you lose the context that makes a passage interpretable.
Step 2: Read before you code
The most common mistake in customer interview analysis is jumping straight to coding. Before you touch a highlighter or create a single tag, read through every interview once without trying to categorise anything. Take high-level notes on initial impressions. Notice what keeps coming up, what surprises you, and what contradicts your expectations.
This first pass takes time, but it pays back with interest. When you code systematically in the next step, you will already have a mental map of the terrain. You are much less likely to over-index on memorable quotes from early interviews and miss patterns that only become visible once you have seen the full set.
If you are working with twenty or more interviews, this immersion step feels slow. Do it anyway.
Step 3: Develop your coding scheme
Coding means assigning labels to meaningful passages of text. A code might be "unmet need around onboarding" or "scepticism about AI accuracy" — specific enough to be meaningful, broad enough to capture multiple variations of the same idea.
Manual approach: Start with ten to fifteen codes derived from your research questions and initial impressions. Avoid the temptation to create fifty narrow codes upfront — you can always split a code later, but starting too granular means you lose the ability to see patterns. Keep a brief codebook that documents what each code covers and what it excludes. You will forget the distinctions if you do not write them down. Let codes emerge from the data as well as from your framework. If three respondents independently raise a topic you had not anticipated, that warrants a new code even if it was not in your original discussion guide.
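A codebook does not need special software; even a simple structured file keeps the include/exclude boundaries explicit and easy to share with a second coder. The sketch below shows one possible shape, with hypothetical code names and descriptions; adapt the fields to your own project.

```python
# A minimal codebook: each code records what it covers and what it excludes.
# The code names and descriptions here are hypothetical examples.
codebook = {
    "onboarding_unmet_need": {
        "covers": "Gaps or friction during initial setup and the first weeks of use",
        "excludes": "Complaints about ongoing support after onboarding is complete",
    },
    "ai_accuracy_scepticism": {
        "covers": "Doubts about whether AI-generated output can be trusted",
        "excludes": "General unfamiliarity with AI tools without a stated concern",
    },
}
```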
Alternative with Skimle: Rather than building the coding scheme yourself before touching the data, you can let Skimle discover the structure for you. Upload your transcripts and Skimle processes every document systematically, surfacing an initial category structure with supporting passages already grouped. You then take control: merge categories that overlap, split any that are too broad, rename codes to match your language, and add codes for anything the AI has missed. The result is a fully coded dataset you own and can interrogate — without spending days constructing the scheme from scratch. This is particularly useful when you are not sure upfront what the key themes will be, which is often the case in exploratory customer research.
For a deeper treatment of coding approaches, see how to code qualitative data and the broader guide to thematic analysis.
Step 4: Code systematically across all interviews
With your scheme in place, work through each interview in turn. Tag relevant passages with the appropriate codes — a single passage can carry multiple codes, and that is expected. What you want to avoid is coding inconsistently: applying a code to a passage in interview seven that you would not have applied in interview two.
The systematic part matters. It is the difference between analysis and impression management. When every relevant passage is tagged, you can pull all quotes for a given code and see the full picture — not just the two or three quotes that happened to stick in your memory.
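Whatever tool you use, the operation you need at this point is simple: given a code, return every tagged passage with its source interview, so you read the full set rather than the memorable fragments. A minimal sketch, assuming coded passages are stored as simple records (the record shape and sample data are illustrative assumptions):

```python
from collections import defaultdict

# Assumed record shape: one entry per tagged passage (illustrative data).
coded_passages = [
    {"interview": "interview_07", "codes": ["onboarding_unmet_need"],
     "text": "Setup took us three weeks before anyone could use it properly."},
    {"interview": "interview_02", "codes": ["onboarding_unmet_need", "ai_accuracy_scepticism"],
     "text": "We were never sure the initial output could be trusted."},
]

def passages_for(code: str, passages: list[dict]) -> dict[str, list[str]]:
    """Group every passage carrying a given code by source interview."""
    grouped = defaultdict(list)
    for p in passages:
        if code in p["codes"]:
            grouped[p["interview"]].append(p["text"])
    return dict(grouped)

print(passages_for("onboarding_unmet_need", coded_passages))
```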
Tools like Skimle handle this step by processing all your transcripts in one pass and suggesting an initial code structure with supporting passages already grouped. That gives you a starting structure to refine rather than a blank page to fill. Skimle works across languages, which is particularly useful for market research projects spanning multiple countries.
Step 5: Identify patterns and validate findings
With all your interviews coded, look for patterns across codes. Ask: do all respondents in one segment say one thing while those in another segment say something different? Does concern about a particular issue peak among respondents who are late in the buying cycle? Are there outliers — respondents whose view is directly opposite to the consensus — and can you explain why? Metadata variables (respondent role, company size, segment, interview date) are what make this kind of cross-cutting analysis possible — see discovering themes using metadata variables and how to analyse open text responses at scale for practical guidance on both.
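The mechanical part of this cross-cutting view is a simple count: for each code, how many distinct respondents in each segment mentioned it at least once. A rough sketch, assuming a mapping from interview id to its metadata and the coded-passage records sketched above:

```python
from collections import defaultdict

def theme_by_segment(passages: list[dict], metadata: dict[str, dict]) -> dict:
    """Count distinct respondents per (code, segment) pair.

    `metadata` maps interview id -> {"segment": ..., ...}; each passage
    carries its interview id and codes (assumed shapes, as sketched earlier).
    """
    respondents = defaultdict(set)
    for p in passages:
        segment = metadata[p["interview"]]["segment"]
        for code in p["codes"]:
            respondents[(code, segment)].add(p["interview"])
    return {key: len(ids) for key, ids in respondents.items()}
```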
This is also the step where you actively look for disconfirming evidence. If your emerging theme is "customers are frustrated by slow response times," go back through your coded passages and find every instance where a respondent expressed satisfaction with response times or said it was not a priority. Can you explain the discrepancy? If you cannot, your theme may be weaker than it looks.
The rigour of this step is what separates defensible research from selective quotation. Anyone can find three quotes that support a pre-existing conclusion. Good analysis tests conclusions against the full dataset.
For guidance on how many interviews you need before patterns become reliable, see qualitative research sample size.
Step 6: Synthesise into insights
Patterns are not yet insights. An insight is an interpretation: what does this pattern mean, why does it matter, and what should someone do about it?
For each major theme, write an insight statement that follows this structure: who does or experiences what, because of what underlying reason, which means what for the decision at hand. For example: "Mid-market buyers hesitate to commit because they cannot verify data portability upfront, which means the pilot offer needs an explicit migration guarantee." Force yourself to complete the "which means" clause. If you cannot, the finding is not yet actionable.
You do not need more than five to eight top-level insights from a typical project. If you have twenty, you have themes — keep grouping. A good synthesis document has a brief methodology section, the top-level insights with supporting quotes, and an honest account of what you did not resolve.
For practical advice on turning a set of findings into a coherent narrative, see how to synthesise user research findings.
Common pitfalls in customer interview analysis
Confirmation bias
You had a hypothesis before the research started. Now you are reading transcripts and finding evidence for it everywhere. Treat that as a warning sign: you may be seeing what you expected rather than what is actually there.
The antidote is deliberate negative case analysis: before finalising any theme, go back through your data specifically looking for passages that contradict it. If 18 respondents said one thing and two said the opposite, your theme holds — but only if you can explain why those two diverged. If you cannot explain the outliers, your theme needs refinement.
Over-indexing on memorable quotes
Vivid quotes stick in memory. The respondent who said "honestly, your competitors are eating your lunch on this" will stay with you long after the quieter respondents have blurred together. But vivid does not mean representative.
When building your findings, track frequency explicitly — how many respondents mentioned this theme, prompted or unprompted. An emotional quote from one person is illustrative evidence, not a finding. A pattern repeated across ten respondents from different contexts is a finding.
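The detail that matters when counting is to count respondents rather than quotes, since one voluble interviewee can generate a dozen passages on the same theme. A small sketch of the distinction, using the same assumed passage records as above:

```python
def respondent_count(code: str, passages: list[dict]) -> int:
    """How many distinct respondents raised this theme (not how many quotes)."""
    return len({p["interview"] for p in passages if code in p["codes"]})

def quote_count(code: str, passages: list[dict]) -> int:
    """How many tagged passages carry this theme, across all respondents."""
    return sum(1 for p in passages if code in p["codes"])
```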
Ignoring silent data
Silence in an interview is data. If you asked every respondent about pricing and half of them changed the subject quickly, that evasion tells you something. If nobody mentioned a product feature you assumed was important, that absence is worth noting.
Good analysis asks not only "what did respondents say?" but "what did they conspicuously not say?" and "where did the conversation go quiet?" These gaps often point to the most sensitive or revealing territory.
Treating synthesis as a second read
Some researchers finish coding and then write up their findings based on memory rather than going back to the coded data. The result is a synthesis that reflects their impressions of the interviews rather than what the data actually shows.
Synthesis should be done with your coded data in front of you. Pull every passage tagged with a given code and read them together before writing a single word of your findings. This keeps the analysis grounded rather than reconstructed from recollection.
How many customer interviews do you need?
The honest answer is: it depends on what you are trying to learn, and you will often not know until you are already collecting data. Saturation — the point at which new interviews stop producing new themes — typically occurs somewhere between 12 and 25 interviews for a focused research question, but that range varies significantly with the homogeneity of your population and the breadth of your topic.
For market research specifically, a useful rule of thumb is to plan for 15 to 20 interviews per major segment and watch for saturation. When two or three consecutive interviews produce no new themes, you have probably collected enough data on that segment.
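Tracking saturation can be as simple as logging, for each interview in the order conducted, which codes appeared for the first time; when several consecutive interviews add nothing new, you are probably at saturation for that segment. A minimal sketch under those assumptions:

```python
def new_codes_per_interview(interview_codes: list[set[str]]) -> list[int]:
    """For interviews in the order conducted, count codes not seen before.

    `interview_codes` is a list of code sets, one per interview (assumed input).
    A tail of zeros suggests saturation: recent interviews added no new themes.
    """
    seen: set[str] = set()
    new_counts = []
    for codes in interview_codes:
        fresh = codes - seen
        new_counts.append(len(fresh))
        seen |= codes
    return new_counts

# Example: the last three interviews produced nothing new.
print(new_codes_per_interview([{"a", "b"}, {"b", "c"}, {"c"}, {"a"}, {"b"}]))  # [2, 1, 0, 0, 0]
```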
For a full discussion of sample size in qualitative research, including segment-based sampling strategies, see qualitative research sample size.
For consultants and investors doing primary research under time pressure, it is also worth considering whether a smaller, more targeted set of interviews — six to ten per question — can answer the specific decision at hand, even if it would not support broader claims about the market.
How to present findings to stakeholders
Most stakeholders who commission customer interviews are not qualitative researchers. They will not care about your coding scheme, and they may be sceptical of any finding that cannot be expressed as a percentage. That scepticism is manageable if you present with rigour and transparency.
A few principles that work:
Lead with the insight, not the method. Start with what you found and why it matters, not with a methodology section. Methodology belongs in an appendix for anyone who wants to scrutinise your approach.
Use representative quotes, not decorative ones. Every major finding should be illustrated by two or three quotes that show the range of how respondents expressed it — not just the most dramatic version. This demonstrates that the finding reflects a pattern, not one outlier.
Be explicit about frequency. Say "twelve of eighteen respondents raised this unprompted" rather than "several participants mentioned." Frequency language makes your findings feel concrete without overstating their generalisability.
Acknowledge what you did not resolve. Qualitative research rarely produces clean, unanimous findings. Showing that you noticed the contradictions and can explain them builds credibility with sceptical audiences. It signals that you are reporting what you found rather than what you hoped to find.
Skimle supports this kind of transparent presentation through two-way traceability: every insight can be traced back to the specific quotes that support it, and every quote can be read in full context. This is the kind of AI transparency that makes qualitative findings defensible to an audience that would otherwise treat them as anecdote.
For a full guide to presenting qualitative findings in a boardroom or investor context, see how to present qualitative research findings to executives.
If you are working on interview-based research for the first time or refining your fieldwork approach, how to conduct effective business interviews and how to write a perfect interview guide are worth reading before you begin data collection. The quality of your analysis is limited by the quality of your raw material, and good transcripts start with good fieldwork.
What good analysis looks like in practice
Consider a market research team at a B2B software company evaluating whether to enter a new vertical. They conduct 22 customer interviews across their target segment and use Skimle to process the transcripts. The initial code structure surfaces seven themes. When the team reviews the coded passages, one theme — "concerns about data portability" — appears in 16 of 22 interviews, consistently unprompted and with notable emotional intensity.
That pattern would be easy to miss in a manual review. It is not what the team expected to find, and none of the pre-research hypotheses flagged it. Because the analysis is systematic rather than impressionistic, the pattern is visible. The team can revise their go-to-market approach before launch rather than after.
The same rigour applies whether you are running 10 interviews or 50. Skimle handles projects of that scale, processes transcripts in any language, and maintains the traceability that makes findings credible. The analytical work — the judgements about what patterns mean — remains yours.
Ready to analyse your customer interviews faster and more systematically? Try Skimle for free and process your first batch of transcripts in minutes.
Want to go deeper? Read our guides on how to synthesise user research findings and how to present qualitative research findings to executives.
About the authors
Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published over a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. His research focuses on organisational strategy, innovation, and qualitative methodology. Google Scholar profile
Olli Salo is a former Partner at McKinsey & Company, where he spent 18 years helping clients understand their markets and themselves, develop winning strategies, and improve their operating models. He has conducted over 1,000 client interviews and published more than 10 articles on McKinsey.com and elsewhere. LinkedIn profile
