Qualitative research for consultants means running interview programmes under tight deadlines, billing those hours to a client, and then standing up in front of a sceptical audience to defend what the data says. Skimle was built with exactly this context in mind: fast enough to fit a two-week engagement, rigorous enough to survive a partner review. This guide covers the tools, workflows, and practical decisions that make qualitative research work in a consulting context rather than an academic one.
The consultant's unique constraints
Academic researchers have the luxury of time. A doctoral student might spend a year collecting and analysing interviews. A consulting team typically has two to four weeks from kick-off to client presentation, and every hour spent on analysis is an hour that appears on the invoice.
That constraint changes everything about how you approach qualitative work. It does not mean you can cut corners on rigour, but it does mean you need a workflow that moves at a different pace. It means your synthesis needs to be defensible on the day you present it, not after months of reflection. And it means the tools you choose have to be designed for speed without sacrificing the interpretive depth that makes qualitative findings worth presenting.
There is also a second constraint that academics rarely face: you need to be able to explain your methodology to a client who is paying for it. "We interviewed fifteen people and identified themes" is not good enough when the client's CFO pushes back on your market sizing or their strategy director questions whether your customer findings generalise. You need to be able to say, with confidence, how many interviews you conducted, what you were looking for, and why your interpretation is sound.
Both of these pressures, speed and defensibility, shape every decision in the consulting research workflow, from how you design the interview guide to how you present findings.
Types of qualitative work consultants do
Not all consulting qualitative research is the same, and the differences matter for how you design the work.
Commercial due diligence (CDD) is probably the most intensive. A private equity firm wants to acquire a company and needs to understand the market, the competitive dynamics, and the target's position within the next two to three weeks. You will typically be interviewing customers, former customers, competitors, and industry experts, often twenty to forty people or more across multiple segments. The stakes are high: your findings directly inform an investment decision worth tens or hundreds of millions. Read more on our consultants and investors use case page.
Voice of customer (VoC) research sits inside strategy or product engagements. A client wants to understand what their customers actually value, what frustrates them, and what they are looking for that they are not getting. The interview count is usually smaller, ten to twenty, but the interpretive demands are high: you need to identify patterns, surface latent needs, and turn those into strategic implications.
Organisational diagnostics are qualitative work turned inward. You are interviewing a company's own employees, managers, or leadership to understand how decisions get made, where the bottlenecks are, or why a transformation is not sticking. The sensitive nature of the material adds an extra dimension to analysis and reporting.
Market entry and competitive intelligence combine desk research with interviews of distributors, regulators, local experts, and potential customers. The challenge here is triangulating across sources with very different frames of reference.
Each of these contexts demands the same core skills: designing questions that get beyond surface answers, listening for what is not being said, and organising raw interview data into a coherent narrative. What differs is the pace, the stakes, and the audience.
The traditional workflow and its time cost
Before AI-assisted analysis existed, the consulting qualitative workflow looked something like this.
Interviews were conducted over the phone or in person, with a note-taker typing furiously alongside the interviewer. Notes were sent back to the project team, who read through them manually. Senior team members might skim the notes from every interview looking for patterns. Someone would build a spreadsheet, trying to track which themes came up in which interviews. Eventually a synthesis document would emerge, its structure debated back and forth between team members with different interpretations of what they had heard.
For a twenty-interview CDD, this process typically consumed three to five full working days of analyst time, more if the note quality was uneven or if the themes were genuinely complex. On a four-week engagement, that is a substantial share of your total workload arriving at the worst possible moment, when you also need to be building the financial model and preparing the client presentation.
The result was often a compromise. Teams would interview fewer people than they ideally wanted to, knowing the analysis would consume time they did not have. They would rely more heavily on the most recent interviews, which were freshest in memory. They would sometimes conflate "the view of the people we spoke to" with "the view of the market", because there was not enough time to interrogate the data carefully for exceptions and outliers.
It was not that consultants were being sloppy. It was that the economics did not support the ideal workflow.
How AI changes the economics
The shift that AI brings to qualitative analysis is primarily about time compression, and that compression ripples through the scope of the research, how teams are staffed, and the quality of the final output.
With modern AI-assisted tools, the analytical work that used to take three to five days can be done in hours. Transcripts or interview notes are processed automatically. Initial theme identification happens in minutes rather than days. The analyst's job shifts from "reading everything and trying to remember patterns" to "reviewing and challenging an initial synthesis, adding interpretive judgement, and refining the output."
This changes the economics of consulting qualitative work in two ways. First, it makes more ambitious interview programmes viable. If analysis time is no longer the binding constraint, you can do thirty interviews instead of fifteen without blowing your timeline. More interviews means more robust findings and a more defensible methodology.
Second, AI analysis frees up senior time for the work that actually requires seniority: the interpretive judgements, the framing of findings for the client, and the translation of interview data into strategic recommendations. Junior analysts can manage the data processing. Partners and managers can focus on insight.
AI also opens a third option that sits between traditional qualitative interviews and quantitative surveys: AI-conducted interviews at scale. Skimle Ask can run structured conversational interviews with large respondent populations — customers, employees, stakeholders — without requiring a human interviewer for each participant. The resulting data is richer than survey open-text boxes and more consistent than unmoderated qualitative interviews. For engagements where you need breadth as well as depth (a voice-of-customer study across hundreds of accounts, a market entry assessment across multiple geographies), this is a genuinely new instrument. It compresses what would previously have required a mixed-methods design into a single workflow, and the output feeds directly into structured analysis. Skimle's agentic chat and MCP capabilities extend this further, enabling qualitative insights to flow automatically into your broader AI tool environment.
This is not purely a story about efficiency. It is also a story about quality. Faster processing means you have more time for the analytical work that requires genuine expertise. As we have argued elsewhere, quality is the real differentiator in the era of AI, and the way to deliver it is to spend less time on mechanical tasks and more time on judgement.
The key caveat is that not all AI tools are built for this kind of work. A general-purpose language model will summarise your interview notes, but it will not help you build a structured, defensible analytical framework. Generic retrieval approaches fail at the kind of structured synthesis that consulting analysis requires. What you need is a tool that combines AI processing with a methodology designed for qualitative research.
Tool choices for consultants
The practical choice for most consulting work comes down to two options.
Generic AI tools — standard LLM interfaces, ChatGPT, Copilot — will process interview notes and produce a summary. The limitation is that there is no methodology behind it. No structured coding, no traceability from finding to source, no consistent framework applied across all interviews. "An AI summarised our interviews" is a very different statement from "we conducted structured thematic analysis and here is what the data shows." Clients and partners will probe that difference, usually at the moment it is most inconvenient. For a full explanation of why this architecture falls short for structured qualitative work, see why generic AI approaches fail at serious analysis.
Skimle was designed for the kind of structured, fast-moving qualitative research that consultants and investors run. Upload your notes or transcripts, Skimle builds a structured theme hierarchy across all of them, every finding links back to source quotes, and you refine the output rather than build it from scratch. The methodology is embedded in the tool rather than something you have to impose on top of it — which matters when you have two weeks, not two months. The features page covers the full workflow, and the tools comparison situates it relative to academic and UX research tools.
The choice of tool is not just about speed. It is about whether the output is something you can stand behind in front of a client.
Interview guide design for consulting projects
The interview guide is where consulting qualitative research either succeeds or fails. A well-designed guide produces comparable data across interviews. A poorly designed one produces twenty conversations that are hard to synthesise because they covered different ground.
The core principle is to separate the question structure from the conversational flow. You need enough structure that your data is comparable, but enough flexibility that you can follow interesting threads when they emerge. A useful approach is to organise the guide around four or five key information goals rather than a list of questions. Under each goal, you have two or three prompts that ensure you cover the ground, but you are not locked into asking them in order.
In a commercial due diligence context, your information goals might be: understand the customer's overall view of the market, probe their experience with the target company and competitors, test specific hypotheses from the deal thesis, understand switching dynamics and loyalty, and explore forward-looking expectations. Each of those generates a handful of questions, but the structure ensures that every interview covers the ground you need.
The most common mistake in consulting interview guides is loading them with too many questions. Twenty questions in a forty-five-minute interview means you are rushing through an agenda rather than having a conversation. Aim for ten to twelve at most, accept that you will not get through all of them in every interview, and let the earlier answers guide what you emphasise later in the conversation.
Note-taking versus transcription
Consultants have historically relied on note-taking rather than transcription, for reasons that are partly practical and partly cultural. Expert interviews, the mainstay of CDD and strategy work, often involve confidential market information that interviewees share on the understanding that it will be used discreetly. Recording and transcribing those conversations can feel like a higher-stakes commitment than note-taking, even when the research purpose is the same.
That said, transcription, whether AI-generated or human, produces much richer data and makes analysis substantially easier. The question is whether the research context supports it.
A practical middle ground, and the one many consulting teams have converged on, is to take detailed notes during the interview and then write them up fully within a few hours of the conversation, while the memory is still fresh. This produces a document that is richer than real-time notes but avoids the formality and logistics of recording. When the conversation is loaded into an analysis tool, the quality of the raw material makes a significant difference to the quality of the output.
If you do record and transcribe, AI transcription tools have become fast enough and accurate enough to make the process unobtrusive. The analysis step that follows is where the specialised tool matters.
How many interviews do you need?
This is the question every consulting project faces, and the answer depends on the purpose of the research, the heterogeneity of the population, and the time available.
For a commercial due diligence, fifteen to twenty-five interviews is a reasonable minimum for a well-defined customer segment. If you are covering multiple segments or multiple stakeholder types, you may need forty or more in total. The guiding principle is theoretical saturation: you keep interviewing until new conversations are no longer adding new themes. In practice, for a well-scoped CDD, you reach a reasonable approximation of saturation at around fifteen interviews per homogeneous segment.
For voice of customer work in a strategy engagement, ten to fifteen interviews is often sufficient if the customer base is relatively homogeneous and the questions are well-defined. Fewer than ten is risky because outliers can distort the overall picture.
Organisational diagnostics need to reflect the structure of the organisation. If you are studying a department of fifty people, ten to fifteen interviews gives you reasonable coverage. If you are studying an organisation with distinct functional silos, you need representation from each silo.
The most important thing is to be explicit about your sampling rationale. "We interviewed fifteen customers across three segments, selected to represent the range of company sizes and tenure" is a defensible methodology. "We interviewed whoever was available" is not, even if the actual sample was similar.
Analytical method: thematic analysis for consulting
Thematic analysis is the most widely applicable qualitative method for consulting work, and the one that translates most readily into client-facing outputs. The basic process involves reading across your interview data, identifying recurring patterns, organising those patterns into higher-level themes, and then interpreting what those themes mean for the client's question.
What makes consulting thematic analysis different from academic thematic analysis is the direction of travel. Academic researchers often approach data inductively, letting themes emerge from the material without a prior framework. Consultants typically have a set of hypotheses to test, a deal thesis to validate, or a strategic question to answer. The analytical work involves both testing those prior hypotheses against the data and remaining genuinely open to findings that do not fit.
The risk of hypothesis-driven analysis is confirmation bias: hearing what you expected to hear rather than what the data actually says. The safeguard is rigour in how you handle disconfirming evidence. Every theme in your synthesis should be tested against the interviews that do not fit it. If two of your fifteen interviewees said the opposite of the pattern you are calling a "strong signal", you need to understand why before you present the theme as robust.
Using AI in the analytical workflow
AI-assisted analysis does not replace the interpretive judgement of a skilled analyst. What it does is handle the mechanical work: processing large volumes of text, identifying candidate themes, grouping related passages, and producing initial summaries that a human analyst can then review and refine.
The right way to use AI in consulting qualitative analysis is as an accelerator for structured thinking, not as a substitute for it. The analyst sets the analytical framework, reviews the AI's initial categorisation, challenges themes that seem superficial or misleading, and applies the judgement that comes from understanding the client context.
The wrong way is to treat AI output as a finished analysis. A language model that has read your interview notes does not understand the deal thesis, the competitive dynamics you are investigating, or the specific strategic question the client needs to answer. That understanding has to come from the analyst.
Presenting qualitative findings to clients
The hardest part of consulting qualitative work is not the analysis. It is the translation from "what we heard" to "what this means for you."
Clients do not want a summary of interview themes. They want to know what to do. That means your analytical output needs to connect the data to a recommendation or a strategic implication. "Customers consistently cited X as their primary frustration" is a finding. "This frustration creates a specific vulnerability for the target, because a competitor has recently addressed it" is an insight. The difference is in the interpretive step.
Structuring qualitative findings for a client presentation requires making choices about emphasis that are not purely mechanical. Some themes will be more strategically significant than others. Some findings will complicate a clean narrative and need to be handled carefully, acknowledged without burying them. The most credible qualitative presentations are ones where the consultant can say, confidently, "here is the range of views we heard, here is what the dominant pattern was, and here is our interpretation of what it means."
Quotes are an essential tool for this. Verbatim quotes give qualitative findings an immediacy and authority that paraphrased summaries lack. A well-chosen quote from a customer describing their frustration with a target company is more persuasive than three bullets summarising the theme. The discipline is in selecting quotes that are genuinely representative rather than cherry-picked, and in being transparent about that selection.
Anticipate the pushback. Senior clients and partners will probe qualitative findings in ways they might not probe a financial model. "Is that really representative?" "Could that just be one difficult customer?" "Did anyone say the opposite?" Having clear answers to those questions, and ideally having addressed them in the presentation itself, is what makes qualitative analysis credible in a consulting context. Our guide on presenting qualitative research findings to executives covers the specific challenges of translating qualitative evidence for non-specialist audiences.
Why rigour still matters even in consulting
There is a temptation, under time pressure, to be less careful about methodology than you would ideally be. The rationalisation is that it is "just" qualitative work, that the interviews are illustrative rather than statistical, and that the client is not expecting academic standards.
This rationalisation is wrong, for two reasons.
First, clients and their advisers are increasingly sophisticated about research quality. A legal or regulatory process, a contested deal, or a strategy that does not deliver can put your methodology under scrutiny months after you presented it. "We conducted interviews with fifteen customers" needs to hold up to questions about who those customers were, how they were selected, and whether the interpretation was sound.
Second, the value of qualitative research in consulting is precisely that it provides depth of understanding that quantitative data cannot. If that depth is undermined by sloppy methodology, you are left with a weaker version of both: not statistically robust and not interpretively careful. The rigour is what makes the qualitative findings worth paying for.
The good news is that rigour does not require slowness. With the right tools and workflow, you can conduct a methodologically sound thematic analysis on twenty interviews in a day rather than a week. The time saved is not time taken from rigour; it is time returned to the interpretive work that requires human expertise.
Putting it together: a consulting qualitative workflow
A practical workflow for a consulting qualitative project looks something like this.
Before interviews begin, define the analytical framework. What are the key questions you need the interviews to answer? What hypotheses are you testing? Document this explicitly, because it will shape both your interview guide and your analysis.
During the interview phase, take detailed notes or record and transcribe. Write up notes fully within a few hours. Load each interview into your analysis tool as it is completed, so the analytical work can begin in parallel with the interview programme rather than waiting until the last interview is done.
As the interview programme progresses, review emerging themes regularly. Are the patterns you expected materialising? Are there surprises? Adjust your interview guide for later conversations if new themes require deeper exploration.
Once the interviews are complete, finalise the thematic analysis. Challenge every theme against the full dataset. Identify the outliers and understand them. Select quotes that illustrate each theme clearly and representatively.
Build the synthesis into client-facing output. Connect each theme to a strategic implication. Anticipate the pushback. Make the methodology transparent enough that a sceptical audience can understand how you got from interviews to conclusions.
The whole process, for a twenty-interview CDD, should take four to six days of analytical work with the right tools. Without them, it would take ten to fifteen. For a concrete example of this workflow in practice, see the Noren case study, where a strategy consultancy used Skimle to run a structured, rapid qualitative workflow to get to novel insights. For CDD-specific guidance, see qualitative analysis in commercial due diligence.
Ready to run qualitative research at consulting speed? Try Skimle for free and see how AI-assisted analysis lets you tackle ambitious interview programmes without sacrificing the rigour clients expect.
Want to learn more? Read our guides on how to conduct effective business interviews, how many interviews you need for qualitative research, and presenting qualitative research findings to executives.
About the authors
Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published over a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. His research focuses on organisational strategy, innovation, and qualitative methodology. Google Scholar profile
Olli Salo is a former Partner at McKinsey & Company, where he spent 18 years helping clients understand their markets and themselves, develop winning strategies, and improve their operating models. He has conducted over 1,000 client interviews and published over 10 articles on McKinsey.com and beyond. LinkedIn profile
