One of the most common questions researchers in academia and business alike face when planning qualitative studies is: "How many interviews should I conduct?" Whether you're an academic researcher designing a study, a consultant gathering expert insights for a strategy project or due diligence, or a market researcher exploring customer needs, determining the right sample size is crucial for both quality and efficiency.
The answer isn't a simple number - it depends on your research goals, available resources, the diversity of perspectives you're exploring, and how thematic saturation applies in your context. This guide walks through the key considerations and provides practical examples to help you determine the ideal number of interviews for your specific context.
Why qualitative research is different from quantitative: it's not about statistical significance
The first principle to understand is that qualitative research operates on fundamentally different logic than quantitative research. You're not trying to achieve statistical significance or make probabilistic claims about a population. Instead, you're trying to understand the full range of experiences, identify key themes, and develop rich insights into a phenomenon.
This means that depth matters more than breadth. A dozen in-depth, thoughtful interviews that explore topics thoroughly will almost always yield better insights than fifty superficial conversations that barely scratch the surface. Conducting effective interviews is a skill we've written about earlier on the Signal & Noise blog.
The hierarchy of qualitative research quality
From worst to best:
- Worst: Large sample size (n=50+) but poor quality interviews—superficial questions, rushed conversations, limited follow-up, inexperienced interviewer. In this case, a simple quantitative survey could be much better, since you're likely not discovering any new insights anyway.
- Better: Moderate sample size (n=15-25) with decent quality—structured questions, adequate time, reasonable depth. This is the bread-and-butter approach common in many business and academic settings.
- Best: Right-sized sample (n varies, and can even be below 10 in some cases, as we explain below) with excellent quality—skilled interviewer, deep exploration, enough time, good setting and rapport, yielding rich and deep interview notes which are then properly analysed.
The key insight: quality of interviews matters far more than quantity. Before you decide to conduct 40 interviews, ask yourself: do I have the time, money, and expertise to conduct 40 truly excellent interviews? If not, doing 20 exceptional interviews will often serve you better.
The resource reality check
If you're planning 40 interviews at 60 minutes each, plus high-quality analysis afterwards, you might be tempted to think "it can be squeezed into one full week...". Worst case, you bake this assumption into your project plan or into a commercial quote to the client...
In reality, each interview requires:
- Preparation time: Research, recruiting, scheduling, booking, pre-briefing, re-scheduling etc. (up to 1-2 hours per interview)
- Interview time: The conversation itself (30-120 minutes typically, 60 minutes the norm)
- Post-interview work: Notes, transcription, initial reflections, thank-you notes, possible syndication of notes etc. (1-2 hours per interview)
- Analysis time: Coding of insights, high-quality thematic analysis, synthesis (traditionally 1-3 hours per interview)
Add it all up, and you're committing to roughly 200 hours of work, which will take over a month of your time. Be realistic about your available resources and optimise for depth within those constraints rather than chasing an arbitrary number.
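To sanity-check a plan before committing to it, the per-interview estimates above can be turned into a quick back-of-the-envelope calculation. The sketch below is illustrative only; the hour figures are assumed midpoints of the ranges listed, not fixed rules:

```python
# Back-of-the-envelope time budget for an interview study.
# All per-interview hour figures are illustrative assumptions drawn
# from the ranges above, not fixed rules.
HOURS_PER_INTERVIEW = {
    "preparation": 1.0,     # recruiting, scheduling, pre-briefing
    "interview": 1.0,       # the conversation itself (~60 min norm)
    "post_interview": 1.5,  # notes, transcription, initial reflections
    "analysis": 1.5,        # coding, thematic analysis, synthesis
}

def project_hours(n_interviews: int, hours_per_interview=None) -> float:
    """Total project hours for n interviews under the estimates above."""
    hours = hours_per_interview or HOURS_PER_INTERVIEW
    return n_interviews * sum(hours.values())

if __name__ == "__main__":
    for n in (15, 25, 40):
        print(f"{n} interviews -> roughly {project_hours(n):.0f} hours")
```

Running this for n=40 lands at roughly 200 hours, which is where the month-of-work estimate comes from; swap in your own per-interview figures to stress-test a quote or project plan.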
You also need to consider how much time you have in-between the interviews. In academia, a set of interviews conducted in a condensed time period with the same routine questions can be too "shallow", lacking the depth and nuance that can be achieved with greater reflection time between the interviews.
Academic research: look at your target journal
If you're conducting academic research, one of the best ways to determine appropriate sample size is to examine recent publications in your target journal.
Qualitative research standards have evolved significantly. Where 10-15 interviews might have been typical in the 1990s and 2000s, contemporary studies often feature larger samples:
- Modest studies: 12-20 interviews (often acceptable for highly focused topics or hard-to-access populations)
- Standard studies: 20-40 interviews (typically the minimum sample in high-ranked journals)
- Large-scale studies: 50+ interviews (growing in frequency, particularly in leading journals)
For example, top-tier management journals like Academy of Management Journal, Organization Science, and Administrative Science Quarterly have published qualitative studies with 50-100 interviews in recent years. Editors will quickly tell you it is not about the number of interviews, but about the quality and depth of the analysis.
Why have academic samples grown larger?
Several factors explain this trend:
- Reviewer expectations: Reviewers increasingly expect substantial datasets, even when there is no direct methodological justification for it.
- Longitudinal and comparative analyses: Studies often analyse findings over long periods of time, across multiple contexts, or through structured comparisons (e.g. success and failure cases).
- Theoretical contribution demands: Building robust theory requires evidence that patterns hold across diverse contexts.
- Rare phenomena: Some phenomena are rare and difficult to access, meaning that only a subset of your informants contribute meaningful insights.
- Competitive publishing: As qualitative methods have matured, the bar for rigour has risen. Having a large sample size is "proof of effort", lending credibility to the thoroughness of the article.
However, it's worth noting that quality journals still publish excellent studies with smaller samples when the research design justifies it. For example, a 2021 study by Mannuzzi et al. in Administrative Science Quarterly, arguably the top journal in the field, relied on only 28 interviews and 105 hours of observation to explore the development of improvisation skills in Live Action Role Playing (LARP) communities.
Business and applied research: multiple purposes beyond insights
In business contexts such as market research, organisational & strategy studies or due diligence, interviews serve multiple purposes beyond pure insight generation:
Purpose 1: Generating insights
The obvious goal: understanding customer needs, identifying problems, exploring opportunities, assessing market dynamics.
Purpose 2: Building commitment and buy-in
When you interview stakeholders about a change initiative, strategy development, or organisational issue, the interview process itself creates:
- Sense of being heard: People feel their perspective matters
- Awareness of the initiative: The interview educates them about what's happening
- Commitment to outcomes: People are more likely to support conclusions when they contributed input
For this purpose, you might interview more people than pure insight generation requires. If you need 30 people on board with a new strategy, interviewing all 30 creates buy-in even if actual insights saturate after 10...
Purpose 3: Customer development and sales (external interviews)
When consultants or salespeople interview potential customers, they're simultaneously:
- Discovering customer needs (research)
- Demonstrating expertise (relationship building)
- Identifying sales opportunities (business development)
In this context, more interviews can be justified even after thematic saturation, because each conversation can serve as a customer development opportunity.
Speed and timing considerations in business interviews
Sometimes doing fewer interviews faster is more valuable than more interviews slowly. Market conditions change, decision windows close, budgets get allocated.
A consultant who can deliver solid insights from 15 interviews in a week may provide more value than one who delivers marginally better insights from 40 interviews in eight weeks, especially once costs are factored in.
Thematic saturation: the key concept for determining sample size
The most important concept for determining how many interviews you need based on research considerations is thematic saturation—the point at which additional interviews stop yielding new themes or insights.
Once you've reached saturation, continuing to interview provides diminishing returns for insight generation (though it may still serve other purposes like building commitment, as discussed above).
What does saturation look like?
You've reached saturation when:
- New interviews confirm existing themes without adding new ones
- You can predict what interviewees will say based on patterns you've identified
- Your coding categories remain stable across multiple interviews
- No surprises or contradictions emerge from new data
Importantly, saturation is not about interviewing until everyone says the same thing—it's about interviewing until the range of perspectives and themes is adequately captured.
Some empirical researchers have transformed thematic saturation from a theoretical concept into evidence-based practice. Guest, Bunce, and Johnson's influential 2006 study found that 92% of all themes emerged within the first 12 interviews when analysing 60 in-depth interviews with a homogeneous sample. This finding—replicated across multiple subsequent studies—provides concrete guidance where previously only vague "it depends" answers existed. However, recent research reveals an important nuance: code saturation (identifying all themes) happens much faster than meaning saturation (deeply understanding those themes). Hennink, Kaiser, and Marconi's 2017 research found code saturation occurred at just 9 interviews, but achieving rich, nuanced understanding required 16-24 interviews. As they memorably put it: code saturation means researchers have "heard it all," but meaning saturation is needed to "understand it all."
In academic research, a key issue is the evolution of the research question and focus. One colleague of Henri's confessed that he often finds the ultimate focus of his research projects only after 20 or so interviews. One recent study Henri conducted started with a focus on organizational identity but ended up focusing on social purpose. Having more than 50 interviews and a wealth of archival material allowed the team to explore the topic in depth despite the shift from the initial focus.
Factors that affect saturation point
The number of interviews needed to reach saturation varies based on several factors:
1. Homogeneity vs. heterogeneity of perspectives
Low saturation point (3-8 interviews):
- Well-known, specific topic where stakeholders share similar perspectives
- Example: Interviewing software developers at one company about their code review process
- Example: Asking hospital administrators about their budget approval procedures
High saturation point (30-50+ interviews):
- Widely different perspectives where each stakeholder brings unique angles
- Example: Due diligence on a company (suppliers, competitors, customers, former employees, regulators, analysts all bring fresh perspectives)
- Example: Understanding "work-life balance" as a concept across different industries, cultures, life stages, and career levels
Cross-cultural research particularly demands larger samples. Research by Hagaman and Wutich found that while 16 or fewer interviews sufficed to identify themes within homogeneous single sites, identifying meta-themes across cultures required 20-40 interviews (Hagaman, A.K., & Wutich, A. (2017). How Many Interviews Are Enough to Identify Metathemes in Multisited and Cross-cultural Research? Another Perspective on Guest, Bunce, and Johnson's (2006) Landmark Study. Field Methods, 29(1), 23-41).
2. Sub-segment analysis requirements
If you need to analyse and compare findings by multiple sub-segments (e.g., by region, by seniority level, by customer type), you need sufficient interviews within each segment to reach saturation.
Example: If you want to compare customer perspectives across three market segments, and each segment requires 8 interviews to reach saturation, you need roughly 24 interviews total—not 8. This piece of common sense is unfortunately often forgotten in e.g., consulting reports, where even single interviews might be used to draw big conclusions on a topic, e.g., "European customers seem to hold the company in higher regard than US based ones" (when n=1 European + 1 US customer...).
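The segment arithmetic is trivial but often forgotten, so it is worth making explicit: the required total is the sum of the per-segment saturation targets, not the single-segment figure. Segment names and numbers below are hypothetical:

```python
# Segment-wise saturation: each segment must reach saturation on its
# own, so the totals add up across segments. Names and per-segment
# figures are hypothetical illustrations.
interviews_needed_per_segment = {
    "segment_a": 8,
    "segment_b": 8,
    "segment_c": 8,
}
total_interviews = sum(interviews_needed_per_segment.values())
print(total_interviews)  # 24 interviews in total, not 8
```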
3. How well understood is the topic?
Well-understood topics (lower n required):
- Clear theoretical frameworks exist
- Prior research provides structure
- Stakeholders can articulate their views clearly
- Example: "What are the main challenges in your procurement process?"
Poorly understood or emergent topics (higher n required):
- Novel phenomena with limited prior research
- Implicit knowledge that's hard to articulate
- Rapidly evolving situations where perspectives are still forming
- Example: "How is generative AI changing knowledge work?"
4. Additive vs. convergent answers
Some research questions have convergent answers where insights cluster around common themes:
- "What are the main reasons customers churn?"
- "What challenges do new employees face?"
- Saturation happens when you've identified the main themes
Other questions have additive answers where each interview can contribute unique items:
- "What are all the possible revenue growth opportunities for this business?"
- "What are creative solutions to reduce carbon emissions in manufacturing?"
- More interviews = more ideas, so saturation is less clear
For additive questions, you might continue interviewing longer, but be aware you're operating under a different logic than typical thematic saturation. In some cases additive questions benefit from a mixed-methods design: conduct high-quality interviews for as long as they yield genuinely important answers (perhaps ranking prospective interviewees by how likely they are to hold great insights), then switch to opt-in sharing of additional perspectives from the wider population.
The 10+3 stopping rule: a practical protocol
For researchers seeking a defensible stopping criterion, Francis and colleagues proposed the practical "10+3 rule": conduct an initial analysis of 10 interviews, then continue until 3 consecutive interviews yield no new themes (Francis, J.J., Johnston, M., Robertson, C., Glidewell, L., Entwistle, V., Eccles, M.P., & Grimshaw, J.M. (2010). What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychology & Health). This provides an evidence-based protocol that balances rigour with pragmatic project management.
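As an illustration of how the rule operates in practice (a sketch of one reasonable reading, not code from the paper), you can track which themes each successive interview adds:

```python
# Sketch of the "10+3" stopping rule: analyse an initial batch of 10
# interviews, then stop once 3 consecutive interviews add no new themes.
# This is an illustrative interpretation, not code from Francis et al.
def ten_plus_three_stop(themes_per_interview, initial=10, run_length=3):
    """Return the interview number at which the rule says to stop,
    or None if saturation is not reached within the data provided.

    themes_per_interview: one set of theme codes per interview, in order.
    """
    seen = set()
    consecutive_without_new = 0
    for i, themes in enumerate(themes_per_interview, start=1):
        new_themes = themes - seen
        seen |= themes
        if i <= initial:
            continue  # the initial sample is always analysed in full
        consecutive_without_new = 0 if new_themes else consecutive_without_new + 1
        if consecutive_without_new >= run_length:
            return i
    return None
```

For example, if interviews 11-13 merely repeat themes already seen in the first ten, the function returns 13; if every interview keeps surfacing something new, it returns None and you keep going.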
Evidence-based sample size recommendations by research methodology
We did a quick scan of publications recommending specific sample sizes by method and collected the results below, in case you are looking for citations or an entry point to the methodology-specific literature.
| Research Methodology | Typical Range | Example source to look at |
|---|---|---|
| Interpretative Phenomenological Analysis (IPA) | 3-10 participants | Smith, J.A., Flowers, P., & Larkin, M. (2022). Interpretative Phenomenological Analysis: Theory, Method and Research (2nd ed.). Sage Publications. |
| Phenomenological research | 5-25 participants | Creswell, J.W., & Poth, C.N. (2018). Qualitative Inquiry and Research Design: Choosing Among Five Approaches (4th ed.). Sage Publications. |
| Grounded theory | 20-30+ participants | Creswell, J.W. (2013). Qualitative Inquiry and Research Design: Choosing Among Five Approaches (3rd ed.). Sage Publications; Charmaz (2014) |
| Thematic analysis (small project) | 6-10 participants | Braun, V., & Clarke, V. (2013). Successful Qualitative Research: A Practical Guide for Beginners. Sage Publications. |
| Thematic analysis (saturation-based) | 12-15 interviews | Guest, G., Bunce, A., & Johnson, L. (2006). How Many Interviews Are Enough? An Experiment with Data Saturation and Variability. Field Methods, 18(1), 59-82. |
| Ethnographic research | 20-50 participants | Morse, J.M. (1994). Designing funded qualitative research. In N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of Qualitative Research (pp. 220-235). Sage Publications; Wutich, A., Beresford, M., & Bernard, H.R. (2024). How many interviews or focus groups are enough? Emerging guidelines for sample sizes in qualitative research. International Journal of Qualitative Methods, 23. |
| Code saturation (any method) | 9-12 interviews | Hennink, M.M., Kaiser, B.N., & Marconi, V.C. (2017). Code Saturation Versus Meaning Saturation: How Many Interviews Are Enough? Qualitative Health Research, 27(4), 591-608; Hennink, M., & Kaiser, B.N. (2022). Sample sizes for saturation in qualitative research: A systematic review of empirical tests. Social Science & Medicine, 292, 114523. |
| Meaning saturation | 16-24 interviews | Hennink, M.M., Kaiser, B.N., & Marconi, V.C. (2017). Code Saturation Versus Meaning Saturation: How Many Interviews Are Enough? Qualitative Health Research, 27(4), 591-608. |
| Cross-cultural meta-themes | 20-40 interviews | Hagaman, A.K., & Wutich, A. (2017). How Many Interviews Are Enough to Identify Metathemes in Multisited and Cross-cultural Research? Another Perspective on Guest, Bunce, and Johnson's (2006) Landmark Study. Field Methods, 29(1), 23-41. |
These ranges provide benchmarks, but study designs create important variations and the choice of publication outlet influences sample sizes. Make sure to read the method papers in detail to ensure they are applicable to your specific research setting, and check whether the field has evolved since these guides were published.
Also please note that some academic research designs require repeat interviews over time! If you want to explore a longitudinal phenomenon like coping with change or joining a new organization, there is often the expectation that at least a dozen informants are interviewed more than once to create credible evidence on change dynamics.
Practical examples: How many interviews in different scenarios
Let's look at concrete examples to illustrate these principles:
Example 1: Academic grounded theory study on remote work practices
- Context: PhD researcher studying how knowledge workers adapted to remote work post-pandemic
- Appropriate n: 30 interviews
- Why:
- Topic spans different industries and roles (heterogeneous perspectives)
- Segment analysis by industry and career stage could yield valuable insights
- Target journals expect robust samples
- Emergent topic with evolving practices
- PhD student has (or is expected to have...) time for thorough analysis
Example 2: Consulting project on sales process improvement
- Context: Consultant hired to identify bottlenecks in B2B sales process for mid-sized software company
- Appropriate n: 5-10 interviews
- Why:
- Homogeneous, small population (extended sales team at one company)
- Well-understood topic (sales processes have clear frameworks, steps and associated language)
- Time pressure (client wants recommendations in 2 weeks)
- Interviews serve dual purpose: insights + buy-in from sales team
- Smaller n allows deeper, more thoughtful interviews, which also helps build commitment to the solution later
Example 3: Market research on customer needs for new product category
- Context: Market research firm exploring customer needs for proposed smart home device
- Appropriate n: 25-35 interviews across 3-4 customer segments
- Why:
- Segment analysis required (early adopters vs. mainstream, renters vs. owners, etc.)
- Some heterogeneity in needs and use cases
- Additive element (discovering diverse use cases)
- Need sufficient n for credibility with client stakeholders - there might be established beliefs that need to be challenged. See for example this blog post by Noren's Jaakko Luomaranta on sensemaking and sensebreaking.
Example 4: Due diligence interviews on acquisition target
- Context: Private equity firm assessing a manufacturer in a commercial due diligence (CDD)
- Appropriate n: 15-25 interviews across diverse stakeholders
- Why:
- High heterogeneity (suppliers, competitors, customers, former employees, regulators bring completely different perspectives)
- Each stakeholder type needs 2-4 interviews to validate patterns
- Time pressure (investment decision timeline)
- Quality matters enormously (expensive decision)
Example 5: Policy consultation feedback analysis
- Context: Government ministry analysing public consultation responses on new legislation
- Appropriate n: Not interviews, but analysing 500+ written submissions
- Why:
- You don't choose n or sample; all submissions received must be analysed
- Dual purpose: insights + ensuring all voices heard (democratic legitimacy)
- High heterogeneity across stakeholder groups
- See example: EU Digital Omnibus consultation analysis
Example 6: Understanding employee experience in specific department
- Context: HR team investigating retention issues in engineering department (50 people)
- Appropriate n: 10-15 interviews
- Why:
- Moderately homogeneous population (same department, same company)
- But segment analysis might be needed (tenure, team, role level)
- Dual purpose: insights + making people feel heard (also consider e.g., open invitation to participate to everyone)
- Representative sample important for credibility
- Topic fairly well-understood (employee satisfaction frameworks exist)
As we can see from the examples, qualitative analysis tends to require a considerable time investment to be valuable.
How AI-assisted thematic analysis tools like Skimle change the equation: making larger samples feasible and more insightful
Traditionally, one of the biggest constraints on qualitative research sample size has been analysis time. A common rule of thumb is that coding and analysis takes 1-2x more time than the interviews themselves.
If interviews average 60 minutes and analysis averages 90 minutes per interview, conducting 40 interviews means 100 hours of work (40 hours interviewing + 60 hours analysis). For many researchers and consultants, this is simply prohibitive, leading either to skipping qualitative analysis completely, or to an "80/20 approach" which in reality might be just a few rushed interviews, hip-shooting some recommendations that worked elsewhere... and hoping the findings are not challenged by anyone.
Skimle fundamentally changes this equation in two important ways:
1. Dramatically reduced analysis time enables larger samples
With Skimle's AI-assisted analysis workflow:
- Upload interview transcripts in any format
- Skimle systematically analyses each document, extracting insights and building category structures
- Initial analysis that previously took hours per interview now takes minutes
- You spend time reviewing and refining AI-generated themes rather than manually coding from scratch
Practical impact: If analysis time drops from 90 minutes to 15-20 minutes per interview (reviewing and refining rather than building from scratch), suddenly conducting 40 interviews becomes feasible where previously only 25 was realistic.
This means researchers can:
- Achieve higher confidence in thematic saturation
- Conduct more robust segment analysis
- Include more diverse perspectives
- Meet rising academic standards for sample size
- Still deliver projects within budget and timeline constraints
2. Real-time insights guide subsequent interviews
Traditional analysis often happens after all interviews are complete. You might identify important themes or gaps only after conducting 20 interviews, which is then too late to explore them in remaining conversations.
Skimle enables iterative insight development:
- Analyse first batch of 5 interviews immediately
- Identify emerging themes and gaps in understanding
- Adjust interview guide for next batch to explore gaps deeper
- Continuously refine your understanding as you go
- Get actual indications of thematic saturation as you go (category structures remain unchanged; summaries do not meaningfully change), e.g. to show how you applied the 10+3 rule.
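One simple, hypothetical way to operationalise "category structures remain unchanged" between batches is to compare the theme codebooks produced after consecutive analysis rounds, for instance with Jaccard similarity (this is a generic sketch, not Skimle's internal method):

```python
# Hypothetical saturation indicator: Jaccard similarity between the
# theme codebooks produced after consecutive analysis batches.
# A value of 1.0 means the new batch added or removed no categories.
def codebook_stability(previous_codes: set, current_codes: set) -> float:
    """Jaccard similarity of two sets of theme codes (1.0 = identical)."""
    if not previous_codes and not current_codes:
        return 1.0
    overlap = previous_codes & current_codes
    union = previous_codes | current_codes
    return len(overlap) / len(union)

# Example: one new category appeared after the latest batch of interviews.
batch_1 = {"pricing", "onboarding", "support"}
batch_2 = {"pricing", "onboarding", "support", "integrations"}
print(codebook_stability(batch_1, batch_2))  # 0.75
```

A run of batches with stability at or near 1.0 is a concrete, reportable signal that the category structure has stopped moving, which pairs naturally with a stopping rule like 10+3.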
Practical impact: This makes qualitative research more efficient and higher quality simultaneously. You're not flying blind until analysis begins; instead, you're building insight throughout the research process.
For example, in a due diligence project:
- Interviews 1 to 5 with customers all touch quality topics in passing even though that's not the focus area of your analysis
- Quick analysis with Skimle identifies this pattern and shows the emerging category
- Interviews 6-10 can be targeted to also cover stakeholders with quality perspectives, and the interview guide updated (to dig deeper into the nuances, or to include a quantitative estimate question like "please rate each supplier from 1 to 10 in terms of quality")
- Final interviews with company management can address concerns discovered earlier
This iterative approach is what experienced qualitative researchers often do naturally through careful note-taking and reflection. Skimle makes it systematic, transparent, and scalable.
3. The Jevons paradox: efficiency improvements increase usage
Interestingly, making qualitative analysis more efficient doesn't reduce demand for qualitative researchers - it increases it.
The Jevons paradox, originally observed in energy economics, notes that technological improvements that increase efficiency of resource use tend to increase (rather than decrease) total consumption of that resource.
Applied to qualitative research:
- Before Skimle: Qualitative analysis is so time-intensive that many projects don't get done, or settle for smaller samples or superficial analysis
- After Skimle: Qualitative analysis becomes feasible for more projects, with larger samples and deeper exploration
- Result: More demand for qualitative researchers' expertise, not less
The bottleneck shifts from mechanical coding work to the higher-value activities that require human expertise:
- Designing research questions
- Conducting excellent interviews that go beyond superficial answers
- Interpreting themes and deeper analysis of interviews
- Synthesising insights into recommendations
- Communicating findings to stakeholders
This is excellent news for qualitative researchers: tools like Skimle eliminate the tedious work while creating more opportunities to apply expertise where it truly matters. Our aspiration with Skimle is to create a tool that is not about producing "90% quality AI slop" faster, but instead enabling experts to produce "200% quality" outputs faster. We believe quality is a differentiator in the era of AI.
Practical recommendations: determining your ideal n
Based on all these considerations, here's a practical framework for determining your sample size:
Step 1: Start with your constraints
- How much time do you have?
- What's your budget?
- How many hours can you realistically dedicate to interviews and analysis?
Step 2: Determine your research context
- Academic? Look at target journal standards (likely 25-50)
- Business? Consider time pressure and multiple purposes (likely 15-30)
- Policy/consultation? May be determined by who responds (analyse all)
Step 3: Assess your saturation complexity
- Homogeneous + well-understood topic: Low end of range (as few as 3-8 for very focused studies)
- Moderate heterogeneity: Middle of range (15-25)
- High heterogeneity + segment analysis + less understood: High end of range (30-50+)
Step 4: Plan for quality over quantity
- Can you realistically conduct excellent interviews at your target n?
- If not, reduce n and improve quality
- Better to conduct 20 excellent interviews than 40 mediocre ones
Step 5: Use tools to expand what's feasible
- Skimle reduces interview analysis time by 70-80%
- This expands feasible n significantly
- Enables real-time insight development to guide later interviews
- Makes larger samples achievable within the same time and budget constraints
Step 6: Plan for iterative saturation assessment
- Don't commit to final n upfront
- Plan in batches: "We'll start with 10, assess saturation, then decide if we need 3 more"
- Use analysis of early interviews to guide whether to continue using e.g., Skimle's automatic insight category creation
Conclusion: it's about insight quality, not arbitrary numbers
The right number of interviews for qualitative research isn't a magic number you can look up in a guide. It's the carefully and thoughtfully selected number that allows you to:
- Reach thematic saturation for your specific research questions
- Maintain high interview quality throughout
- Complete analysis within your resource constraints
- Meet methodological standards for your context (academic, business, policy) and defend your choice
- Serve multiple purposes where applicable (insights + commitment + relationship building)
The golden rule: optimise for depth and insight quality, not sheer n.
Modern tools like Skimle are changing the economics of qualitative research by dramatically reducing analysis time and enabling real-time insight development. This makes it feasible to conduct larger, more robust studies within existing time and budget constraints—expanding what's possible while maintaining the depth and rigour that makes qualitative research valuable.
Whether you're conducting 8 interviews or 80, the key is to enter your research with clear questions, conduct thoughtful interviews, analyse systematically, and remain focused on generating genuine insights rather than hitting arbitrary sample size targets.
Ready to dig deeper into your qualitative data? Try Skimle for free and see how AI-assisted analysis can help you handle larger samples while extracting deeper insights in a fraction of the time.
Want to learn more about qualitative research methods? Check out our guides on how to conduct effective business interviews, thematic analysis methodology, and why RAG-based approaches don't work for serious qualitative analysis.
About the Authors
Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published more than a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. His research focuses on organizational strategy, innovation, and qualitative methodology. Google Scholar Profile
Olli Salo is a former Partner at McKinsey & Company where he spent 18 years helping clients understand the markets and themselves, develop winning strategies and improve their operating models. He has done over 1000 client interviews and published over 10 articles on McKinsey.com and beyond.
