Conducting interviews is a common task for almost all knowledge workers. Consultants and market researchers interview people to build a picture of a situation, academic researchers in qualitative fields often collect their data by interviewing subjects, and analysts, CX and UX researchers, and many others use interviews every week to dig deeper.
We have previously shared tips on conducting great interviews and on how to analyse interview transcripts. This time we go deeper into one particular tool: the interview guide. The difference between insightful interviews that uncover valuable perspectives and superficial conversations that waste everyone's time often comes down to how thoughtfully you prepared in advance.
But what makes a good interview guide? How do you structure questions to encourage depth whilst covering necessary ground? How do you balance flexibility with focus? And how do you improve your guide as you learn from early interviews?
This guide provides practical, field-tested advice for preparing interview questions that generate rich insights across academic research, business contexts, and policy work. It draws on 18 years of consulting experience (Olli) and 18 years of academic experience (Henri). You can also pick up further tips and tricks from Skimlecast 3: Best practices in conducting interviews, where Henri and Olli discuss this topic.
Why interview guides matter
Let's start with a counterintuitive truth: a bad interview guide is worse than no guide at all.
A poor guide tempts you to become a robot, mechanically reading questions regardless of what the interviewee says. You miss opportunities to explore interesting tangents. You ask irrelevant questions because "they're on the list." You fail to adapt when the conversation reveals unexpected insights.
A good interview guide, by contrast, is a flexible framework that:
- Ensures coverage: You don't forget to explore important topics under time pressure
- Enables depth: You have follow-up questions ready when you hit interesting territory
- Builds confidence: Both you and the interviewee know there's structure behind the conversation - "method to the madness"
- Facilitates iteration: You can refine questions as you learn from each interview
- Creates consistency: Different interviewers can cover similar ground (important for team research)
- Supports analysis: Questions aligned to your research framework make coding easier
The guide is a tool for you, not a script you perform. As we discussed in our advice on conducting effective interviews, the best interviewers "release their agenda" and follow where the interviewee leads, whilst using the guide to ensure nothing critical gets missed.
Tip 1: Start with the end in mind - design backwards from your deliverable
The single most important step in creating an interview guide happens before you write any questions: defining what you're trying to produce.
Whether you are producing an academic paper, a consulting report or a market research summary, you likely have an idea of what type of outputs are needed. You know the headers and subheaders, and have some guesses about which types of findings will be most relevant. You may also already know of specific quantitative analyses you want to perform and will need source data for (e.g., market growth estimations).
Map your deliverable structure first, then design questions to populate it.
Example: Due diligence on acquisition target
Your report storyline might be:
- Executive summary
- Market size, overview and growth
- Target position and competitive dynamics
- Operational capabilities and quality of target
- Financial performance and projections
- Key risks and opportunities resulting in low, base and high case scenarios
- Valuation implications
When interviewing suppliers, you'd focus on sections 2 and 6. When interviewing competitors, you'd emphasise sections 2 and 3. When interviewing customers, you'd gather evidence for sections 2, 3, and 4. Each stakeholder type gets a guide tailored to what they can inform, and across the full set of guides you should check that every aspect of the analysis is covered.
Example: Customer research for product development
Your deliverable might be:
- Current workflow and pain points
- Workarounds and alternative solutions tried
- Feature priorities and use cases
- Willingness to pay and procurement process
- Implementation and adoption considerations
Your guide would follow this workflow chronologically, letting customers narrate their experience whilst you probe for evidence in each area. With a systematic guide, you know you are covering the full gamut of interesting topics, not just zooming in on one area.
The principle: Don't write questions in a vacuum. Write questions that will give you the material you need to write your deliverable.
Tip 2: Structure your guide with a logical flow
Once you know what ground you need to cover, organise it into a structure that feels natural to the interviewee. The goal is to create a conversation that flows logically, not a random collection of questions.
Common structural approaches:
Chronological flow: Walk through time
- "Tell me about how you first encountered this problem"
- "What happened next?"
- "How has your approach evolved over time?"
- "Looking forward, what do you expect will change?"
Best for: Customer journeys, organisational change processes, career development, adoption stories
Process or lifecycle flow: Follow steps in a process
- "How do you first become aware of potential suppliers?"
- "What's your evaluation process?"
- "How do you make the final decision?"
- "What happens during implementation?"
- "How do you handle ongoing management?"
Best for: Business process research, user experience studies, procurement decisions, any sequential activity
Existing frameworks by topic:
- Strategy, Structure, Processes, People, Technology (an operating model framework I (Olli) authored while at McKinsey)
- Awareness, Consideration, Decision, Retention (marketing funnel)
- Political, Economic, Social, Technological (PEST framework)
Best for: Comprehensive organisational analysis, strategic assessments, systematic evaluation
The benefit of considering frameworks like this is that you can ensure you have an answer that is "MECE" (Mutually Exclusive, Collectively Exhaustive) meaning that it covers the area systematically. The risk is that you pick something that is almost but not quite applicable, in which case you risk omitting important areas or asking confusing questions. Do use your judgement and have a bias for a fit-for-purpose framework rather than a cookie-cutter one!
Funnel approach: Broad to narrow
- Industry trends and context (broad)
- Company-specific situation (narrower)
- Specific initiative or product (narrow)
- Personal experience with detail (narrowest)
Best for: Expert interviews where you want both market perspective and specific insights
The opening move: Whatever structure you choose, explain it at the start. "I'd like to walk through your customer journey chronologically, starting with how you first discovered you needed this type of solution. Does that work for you?" This signals structure whilst giving them a chance to suggest alternative approaches.
Tip 3: Mix question types strategically
Not all questions serve the same purpose. A well-crafted guide deliberately mixes different question types depending on what you're trying to accomplish.
Open exploratory questions
Purpose: Let interviewees identify what matters to them, discover unexpected insights
Structure:
- "What challenges do you face with [topic]?"
- "How do you think about [concept]?"
- "What's been your experience with [situation]?"
When to use: Early in sections, when exploring new territory, when you don't want to bias responses
You can make these types of questions more effective by making them concrete. Rather than "What is your role?", try "Tell me about your role - what does a typical week look like for you?" The second version invites storytelling rather than list-making.
Example: "What are the main challenges you face in your procurement process?" lets them define what "main" means from their perspective.
Mildly leading questions
Purpose: Test whether something you've heard elsewhere resonates, validate emerging patterns
Structure:
- "Some people have mentioned that [X] is problematic. What's your view on that?"
- "We've heard that [Y] is becoming more common. Do you see that as well?"
- "There's been discussion about [Z]. How does that fit with your experience?"
When to use: Mid-interview once rapport is established, when you have hypotheses to test, when synthesising across interviews
Example: "Several suppliers have commented that quality requirements have tightened significantly in the past two years. What's your perspective on that?"
Strongly leading questions
Purpose: Get direct reaction to a specific hypothesis, test consensus, challenge their thinking
Structure:
- "Would you agree that [strong statement]?"
- "Some would argue that [provocative claim]. Is that fair?"
- "Isn't it true that [direct assertion]?"
When to use: When you need to challenge, when testing controversial hypotheses, when interviewee is being vague and you need concrete reactions
Example: "Would you agree that the current onboarding process is fundamentally broken and creates more problems than it solves?"
Important: Strong leading questions can shut down exploration if used too early. Save them for when you've established rapport and want to test specific claims, or when dealing with experienced executives who respond well to direct challenges. In academic research, leading questions are used sparingly as they can threaten the validity of the research. You need to be particularly careful if you quote answers to leading questions as your informant may simply be trying to be polite and agreeable. Ideally, leading questions can be used to verify what other informants have already said or to provoke counter-arguments and alternative perspectives from the interviewees.
Quantitative questions
Purpose: Get a sense of how big/important something is, or how things compare to each other
Example questions:
- "From 1 to 10, how would you rate the perceived quality of product X, product Y and product Z"
- "If you can to allocate 100 points between these goals in terms of importance, how would they split?"
- "If the best possible customer experience you've ever had was a 10 and worst a 1, where would you place ours?"
- "You mention this is common. Looking at the last month, what is the share of instances when it happens? 10%, 40%, 80%, ...?"
- "What is your estimate of the market growth in terms of percents per year?"
When to use: When you want to get a sense of how big something really is and produce input for quantitative analysis. While at McKinsey, I (Olli) used a lot of simple Likert scale questions ("From 1 to 5, how would you rate the statement 'I can trust the other members of the top team'") in my interviews. They serve a dual purpose: producing quantitative data that lends credibility to the findings (many executives, especially those with engineering backgrounds, find it easier to talk about trust being rated at 2.3 / 5.0 than about "trust being a bit low"), and enabling very targeted follow-up questions ("hey, you rated trust at 2... tell me more about why that is").
Important: Qualitative analysis is not the same as quantitative analysis. Be careful not to overgeneralise from a few individual figures mentioned in an interview. Focus on a few metrics that matter and use multiple interviews to triangulate them. Don't be afraid to dig deeper and ask for the rationale behind the number to get more colour. Quantitative estimates are not strictly comparable across interviewees, but they can be thought of as a special case of relative comparisons, which are often useful.
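If you collect such ratings systematically, even a few lines of analysis will turn them into the kind of summary figures (e.g., "trust: 2.2 / 5.0 across six interviews") that anchor a finding. Here is a minimal sketch; the statements and scores are purely illustrative, and any real write-up should, as noted above, stay close to the qualitative rationale behind each number.

```python
# Minimal sketch: aggregating 1-5 Likert ratings collected across interviews.
# The statements and scores below are illustrative, not from any real project.
from statistics import mean, stdev

ratings = {
    "I can trust the other members of the top team": [2, 3, 2, 1, 3, 2],
    "Our decision-making is fast enough":            [4, 3, 4, 5, 3, 4],
}

for statement, scores in ratings.items():
    print(f"{statement}: {mean(scores):.1f} / 5.0 "
          f"(n={len(scores)}, spread +/- {stdev(scores):.1f})")
```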
Synthesis and reflection questions
Purpose: Show you're listening, clarify understanding, give them chance to correct misinterpretations
Examples:
- "So if I'm understanding correctly, you're saying [summary]. Is that right?"
- "It sounds like the core issue is [interpretation]. Does that capture it?"
- "Let me make sure I've got this - [paraphrase key points]"
- "What you are saying would also imply [synthesis and implications], right?"
When to use: Throughout the interview as active listening, at transition points between sections, before moving to next topic
Mark these in your guide: Include prompts like "[SYNTHESIS: Summarise what they've said about X before moving to Y]" so you remember to use this technique rather than just reading the next question robotically.
Deep dive and probe questions
Purpose: Go deeper when you've struck gold, get concrete examples, understand nuances
Structure:
- "Can you give me a specific example of that?"
- "Can you think of a time when [specific situation] happened?"
- "What happened then? Walk me through it."
- "Why do you think that occurred?"
- "How did you feel about that?"
- "What would have happened if [counterfactual]?"
- "Tell me more!" - a great open ended deep dive question allowing the interviewed person to pick the direction where to go deeper
When to use: Whenever something interesting emerges, when answers are vague or general, when you need evidence not assertions
Have these ready: Don't rely on improvising probes. Build a standard set into your guide as a reminder, ready to deploy whenever the main question hits gold and you want to dig deeper into the vein. As we discussed in our interview tips from McKinsey experience, knowing when to dig deeper on interesting responses is one of the most valuable interview skills. Having probe questions prepared means you won't miss the opportunity.
In academic research: Some informants find it difficult to talk about the social phenomena you are studying. In these cases, it is often useful to ask them to tell a story about specific events that have taken place.
Tip 4: Research your interviewee and tailor the guide
A generic interview guide treats all interviewees the same. An excellent guide is tailored to who you're speaking with and what they specifically can contribute.
Before each interview, invest 15-30 minutes researching:
What do they know?
Their role and responsibilities: A CFO can speak authoritatively about financial performance and capital allocation, but probably shouldn't be your source on detailed technical specifications. A frontline customer service representative knows pain points customers actually experience, but may not have visibility into company strategy.
Their tenure and experience: Someone who joined two months ago offers fresh perspective on onboarding and initial impressions. Someone with 15 years in the role provides historical context and pattern recognition across situations.
Their expertise domain: In due diligence interviews, a quality inspector brings completely different insights than a logistics coordinator or a competitor CEO. Design your questions to access their unique knowledge.
What have they said already?
Public statements: If you're interviewing an executive who has given conference talks or written articles, reference them. "I read your piece on supply chain resilience in the Financial Times. I'm curious how you've applied those principles in your own organisation."
Internal communications: If interviewing employees about organisational change, read the CEO's announcement and town hall transcripts first. You can then ask: "The CEO emphasised three priorities in the launch. Which of those resonates most with your experience?"
Previous research: If this is interview 15 in your series, you know what patterns have emerged. "We've heard from several other departments that the approval process creates bottlenecks. Does that match your experience in finance?"
Why this matters: Showing you've done your homework demonstrates respect, builds rapport, and lets you skip basic questions to focus on deeper territory. It also helps you avoid asking things they can't answer. In each interview you typically have only 60 minutes to find gold - don't waste them on random questions or on topics you could answer from other sources (e.g., "what is your company's revenue"), but do some prospecting beforehand so you have ideas on where to dig.
Tip 5: Open broad, then go deep and sensitive
The order of questions matters enormously. A poorly sequenced guide asks sensitive questions before trust is established, or stays superficial when deeper exploration is possible.
The funnel principle: Start broad and safe, gradually move toward specific and sensitive.
Opening questions (first 15-20% of interview)
Purpose: Establish rapport, get them talking, build their confidence that this will be a good conversation
Characteristics:
- Open-ended and easy to answer
- Related to their expertise and experience
- Non-threatening and non-controversial
- Often factual or descriptive
Examples:
- "Tell me about your role and what you focus on day-to-day"
- "How did you get into this line of work?"
- "Can you give me an overview of how your department operates?"
Middle questions (main body, 60-70% of interview)
Purpose: Cover your core research questions, gather substantive material for analysis and look broadly for where you might find the best gold nuggets.
Characteristics:
- Structured around your deliverable needs
- Mix of question types as discussed above
- Increasingly specific as you explore each topic
- Include synthesis points to show you're tracking
Examples:
- "Walk me through your procurement process from end to end"
- "What are the biggest challenges you face in that process?"
- "Can you give me a specific example of when things went wrong?"
- "How did you handle that situation?"
Deeper and sensitive questions on most valuable topics (later sections, 20-30% remaining)
Purpose: Explore potentially controversial topics, get honest assessment, discuss problems
Characteristics:
- Require trust to answer honestly
- May involve criticism of colleagues, leadership, or systems
- Ask for candid assessment or controversial views
- Might touch on personal experiences or failures
Examples:
- "How would you assess the quality of leadership communication during this change?"
- "What's not working that people aren't talking about openly?"
- "If you could change one thing about how this organisation operates, what would it be?"
- "What's your level of confidence that this initiative will actually succeed?"
- "Hey, thanks for the main interview. Just one last question, a bit personal: if you could in all honesty tell me what I should really know about how this place actually works?"
Why this sequence works: By the time you reach sensitive questions, you've demonstrated competence, shown you're listening, and built enough rapport that they're willing to be candid. Asking these questions earlier often produces guarded, superficial answers. My (Olli's) favourite move was copied from the old TV series Columbo, where the detective would famously end discussions with "Just one more thing": in the final moments, signal that you are transitioning away from the main interview and ask for an honest opinion on how things really are. It is similar to how journalists sometimes make a point of putting away their recorder and asking something casually - that is when you get the real story.
Tip 6: Improve the guide after each interview
Your first interview guide won't be perfect. In fact, it shouldn't be. A good qualitative research process involves continuously refining your questions as you learn what works and what doesn't.
After each interview, ask yourself:
What worked?
- Which questions generated rich, detailed responses?
- Where did interviewees open up and share valuable insights?
- Which areas felt natural and conversational?
What didn't work?
- Which questions confused people or required extensive explanation?
- Where did you get superficial, generic answers?
- What did you forget to ask that you realised later was important?
What did you learn?
- What new themes or patterns emerged that you should explicitly ask about in future interviews?
- What language or terminology do people actually use (vs academic or formal terms)?
- What analogies or examples helped people understand your questions?
Common refinements:
Reword confusing questions: If three interviewees have asked you to clarify what you mean by "organisational alignment," you need simpler language. Try "How well do different teams coordinate with each other?"
Add probes for shallow areas: If everyone's answers about "challenges" are vague, add: "Can you give me a specific example of a time this challenge created problems?"
Remove questions that don't yield insights: If a question consistently produces "I don't know" or generic answers, cut it and use that time elsewhere.
Add new questions for emerging themes: After five interviews, you notice everyone mentions a particular issue unprompted. Add it to your guide: "Several people have mentioned [X]. What's your experience with that?"
Adjust ordering: If you find people naturally want to talk about topic B before topic A, reorder your guide to match their preferred flow.
The practical approach: Keep a "running notes" section at the end of your guide. After each interview, spend 5-10 minutes noting what to adjust. Every 3-5 interviews, create a new version incorporating those refinements. This iterative approach is one reason why qualitative analysis tools like Skimle that enable real-time insight development are so valuable. You can analyse your first 5-10 interviews immediately, identify emerging themes and gaps, then refine your guide for the remaining interviews based on what you're learning.
Tip 7: Mark up your guide with execution notes
Your interview guide isn't just questions. It's your roadmap for conducting the interview. A well-marked guide includes prompts and reminders to yourself about how to use it.
Useful annotations to include:
Time allocations: "[15 minutes: Context and background]" helps you pace the interview and ensures you leave enough time for important sections.
Transition points: "[TRANSITION: Thank them for background, explain we're moving to challenges they face]" reminds you to signal shifts between topics.
Synthesis prompts: "[SYNTHESIS: Summarise what you've heard about their current process before asking about improvements]" ensures you demonstrate active listening.
Probe reminders: "[If vague, probe: Ask for specific example]" tells you when to dig deeper.
Optional sections: "[If time: explore X. If not: skip to Y]" gives you flexibility when running short on time.
Sensitive question flags: "[Note: This may be sensitive. Preface with: 'I know this might be a challenging topic...']" helps you frame difficult questions carefully.
Rapport building moments: "[Personal connection: If appropriate, share your own experience with this challenge]" reminds you to stay human, not robotic.
Transitions to reset energy: Long interviews require energy management. Build in transition points: "That's really helpful context on the current situation. Now I want to shift gears and look forward - where do you see things heading in the next 2-3 years?" These resets re-engage attention.
Observation prompts: Add reminders like "[Note their body language when discussing leadership]" or "[Watch for hesitation when answering this question]." These cues often reveal as much as words.
I (Olli) sometimes add the text "remember to smile and nod" at the top of my interview guide! This is because my natural deep-listening facial expressions reportedly range from "somebody stole my lunch" to "who should be my next victim" (you can verify this yourself on the Skimlecast videos)... Others might have different notes to self worth including, like "slow down & don't rush" or "don't be a robot". It's also good to review the recordings or transcripts afterwards with an eye on how you could have done better, not just on what you learned.
Example marked-up section:
[20 minutes: Customer relationship and satisfaction]
Opening: "I'd like to understand your relationship with customers. Let's start broad and then go into details."
1. How would you describe your typical customer?
   [If vague: probe for specific segments, industries, size]
2. What do customers value most about your offering?
   [Listen for: price, quality, service, reliability, relationships]
   [SYNTHESIS: "So it sounds like X and Y are the main value drivers. Is that right?"]
3. What complaints or concerns do customers raise most frequently?
   [Probe for specifics: "Can you give me a recent example?"]
   [If they say "no complaints": Challenge gently: "Really none at all? What about minor issues?"]
4. [More sensitive] Some suppliers have mentioned that customer expectations have become unrealistic. Do you see that?
   [If yes: Explore why. If no: Ask how they manage expectations]
[TRANSITION: "Thanks, that's really helpful context. Now I want to understand the commercial relationship..."]
These annotations make your guide usable in the moment when you're conducting the interview and managing multiple things simultaneously.
Tip 8: Test your guide with AI before the first interview
Here's a powerful technique that most researchers don't use: simulate your interview before conducting it.
Feed an AI the persona of who you'll be interviewing and ask it to role-play as that person answering your questions. This reveals problems with your guide before you waste an actual interviewee's time.
How to do this:
1. Create a persona description
"You are a CFO at a mid-size manufacturing company. You've been in the role for 5 years. Your background is in accounting and you're quite conservative financially. You're 48 years old, detail-oriented, and sceptical of consultants. You care deeply about predictability and risk management."
2. Provide your interview guide
Include all your questions exactly as written.
3. Ask the AI to generate a 60-minute interview transcript
"Please generate a realistic 60-minute interview transcript where I ask these questions and you respond as the CFO persona described above. Make your answers realistic - sometimes vague, sometimes defensive, sometimes very detailed when the question interests you."
4. Analyse the simulated transcript
Look for:
- Questions that produced confusion or overly vague answers
- Missing follow-up questions where you should have probed deeper
- Awkward transitions between topics
- Areas where you didn't gather enough detail for your analysis needs
- Questions that felt redundant or unnecessary
5. Refine your guide and test again
Make adjustments and run another simulation with a different persona (perhaps a more forthcoming interviewee or someone more junior) to see how the guide works across contexts. If you prefer to script this loop rather than work in a chat window, a minimal sketch follows below.
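The sketch assumes the OpenAI Python SDK with an API key in your environment; any other LLM provider (or simply a chat interface) works just as well, and the persona and questions are purely illustrative.

```python
# Minimal sketch: simulating an interview with an LLM persona before the real one.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; adapt to whichever provider you use.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are a CFO at a mid-size manufacturing company. You've been in the role "
    "for 5 years, have an accounting background, and are conservative, "
    "detail-oriented and sceptical of consultants. Answer realistically: "
    "sometimes vague, sometimes defensive, sometimes detailed when a question "
    "genuinely interests you."
)

# A few questions from the draft guide (illustrative only).
guide_questions = [
    "Tell me about your role - what does a typical week look like for you?",
    "Walk me through your budgeting process from end to end.",
    "Would you agree that your current forecasting process is fundamentally broken?",
]

# Ask the questions one by one so the simulated answers build on each other,
# which produces something closer to a real back-and-forth than a single prompt.
messages = [{"role": "system", "content": PERSONA}]
for question in guide_questions:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```

Reading the printed transcript against the checklist in step 4 usually reveals the weak spots within minutes, and rerunning the script after each revision of the guide costs next to nothing.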
Why this can be helpful
Safe environment: You can fail without wasting anyone's time or burning relationship capital
Rapid iteration: Run 3-4 simulations in an hour, refining each time
Perspective: Reading a mock transcript helps you see where your questions don't flow naturally
Confidence: Going into your first real interview knowing the guide has been tested
Edge case preparation: Simulate difficult interviewees (defensive, rambling, monosyllabic) to prepare for challenging conversations
Of course, AI simulations aren't perfect substitutes for real humans. But they're vastly better than going in cold with an untested guide, especially for high-stakes interviews or for the first few ones where you're still learning about the topic.
In addition to AI, or instead, you can of course practice with a colleague. Have them play the interviewee role while you follow your guide. Get feedback on clarity, flow, and gaps. This is especially valuable for team research where multiple people will use the same guide.
Tip 9: Be ready to "release your agenda"
The biggest risk with having a great interview guide is that you will actually follow it. Great interviewers treat interview guides like jazz musicians treat the written notes: as a great starting point for improvisation. The time spent preparing the perfect interview guide is time well spent, as it gives you the confidence to let the discussion flow in unexpected directions.
When to abandon the guide, at least temporarily
- Hitting real gold. If your interviewee is offering incredible insights on an unexpected topic, follow that thread even if it means skipping sections. You can always schedule a follow-up for missed areas.
- Spotting blocking concerns. Sometimes the context setting or introductions have not been enough to make the person feel comfortable, and you sense they are avoiding answers. In those cases, spend the time it takes to set the context and build trust instead of going through your list of questions, as the answers won't be useful anyway.
- Unexpected points of view. You might have considered the person would mainly be able to answer specific questions (e.g., they are a customer of the company you are doing a due diligence on) but during the discussion you discover they also bring other valuable perspectives (e.g., they used to work for a competitor). Make sure to adjust to capture these insights. Or in other cases, skip sections of the guide if they simply lack the knowledge or context to answer them.
Also remember you are in full control of time. You can ask to go over, end interviews ahead of time, ask for further sessions etc. - do not get mentally stuck in the 60 minute slot you see in the calendar.
Tip 10: Horses for courses
Different contexts come with different priorities and considerations. While the tips above apply broadly, different research contexts emphasise different aspects of guide preparation:
Academic research
Priority: Theoretical rigour, consistency across interviews, defensible methodology
Key practices:
- Map questions explicitly to research questions and theoretical framework
- Maintain core questions consistent across all interviews for comparative analysis
- Document all iterations to your guide and rationale for changes
- Consider having multiple researchers vet the guide
- Avoid overly leading questions, as the validity of the answers might not pass scrutiny
- Ensure you get informed consent during the interview if that is part of your plan
Business and consulting
Priority: Actionable insights, efficiency, stakeholder management
Key practices:
- Design questions to populate recommendation structure from the start
- Iterate quickly - sometimes after every 2-3 interviews in fast-moving projects
- Tailor heavily to each stakeholder type to maximise their unique contribution
- Include questions that build buy-in, not just gather information (as discussed in why interview sample sizes vary)
User research and product development
Priority: Understanding workflows, discovering pain points, identifying use cases
Key practices:
- Follow chronological, real user journeys rather than abstract frameworks
- Emphasise "show me" over "tell me" - ask for demonstrations where possible
- Prepare questions about workarounds and alternatives (reveals what's broken)
- Test feature concepts later in interview once you understand context
Policy and public consultation
Priority: Representative perspectives, democratic legitimacy, comprehensive coverage
Key practices:
- Ensure guide covers all aspects of policy implications (economic, social, implementation)
- Include questions about who else might be affected that you haven't reached
- Balance open exploration with specific questions about policy proposals
- Consider how different stakeholder groups will interpret the same questions
How Skimle enhances the iterative guide development process
One of the biggest challenges with traditional qualitative research is the lag between conducting interviews and learning from them. You complete 10 interviews, then spend two weeks coding them, only to discover you should have been asking different follow-up questions.
Skimle's AI-assisted analysis changes this dynamic by enabling real-time insight development:
- Analyse as you go: Upload transcripts after each interview and immediately see what themes are emerging
- Identify gaps quickly: Notice areas where multiple interviewees give vague answers or where you're not getting sufficient depth, then adjust your guide accordingly
- Spot unexpected patterns: Discover themes you hadn't anticipated and add questions to explore them in remaining interviews
- Track question effectiveness: See which questions consistently produce rich data versus which generate superficial responses
- Share insights across research teams: When multiple researchers are conducting interviews, everyone can see emerging patterns and refine their guides based on collective learning
This iterative approach mirrors how experienced qualitative researchers naturally work - constantly learning and refining - but makes it systematic and scalable rather than dependent on individual researcher memory and note-taking.
As we discussed in why qualitative sample sizes vary, this real-time analysis capability means you can make larger samples feasible (by reducing analysis time) whilst also improving quality (by informing later interviews with insights from earlier ones).
Conclusion: the guide as a tool for thinking, not a script to follow
The perfect interview guide doesn't exist. What exists are guides that are fit for purpose - appropriate for your research context, tailored to your interviewees, and flexible enough to accommodate the unexpected.
The best interview guides embody a paradox: they're meticulously prepared and thoughtfully structured, yet held lightly enough that you can deviate when more valuable paths emerge. They provide scaffolding for coverage whilst leaving room for exploration. They build in rigour without rigidity.
Invest time in crafting your guide. Map it to your deliverable. Structure it logically. Mix question types strategically. Research your interviewees. Build in iteration. Mark it up for execution. Test it before deploying it.
But then, when you're in the interview, remember what matters most: connecting with another human, listening genuinely to what they share, and following the threads that lead to insight. The guide is your tool for making that possible, not a constraint on where the conversation can go.
Ready to analyse your interview data more efficiently? Try Skimle for free and experience how real-time analysis helps you refine your interview guide as you learn from each conversation.
Want to learn more about qualitative research methods? Read our guides on conducting effective interviews, determining the right sample size, and thematic analysis methodology. And watch Skimlecast episode 3 for best practices in interviewing.
About the authors
Olli Salo is a former Partner at McKinsey & Company where he spent 18 years helping clients understand the markets and themselves, develop winning strategies and improve their operating models. He has done over 1000 client interviews and published over 10 articles on McKinsey.com and beyond. LinkedIn profile
Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published more than a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. His research focuses on organizational strategy, innovation, and qualitative methodology. Google Scholar profile
