You just lost a deal you were confident about. Your account executive spoke to the buyer afterwards, came away with some vague notes about "pricing" and "the competitor had better integrations," and shared them in the next sales standup. Everyone nodded. The conversation moved on. Three months later, you lose another deal for the same reasons, and nobody connects the dots.
This is how many B2B companies do win-loss analysis. A handful of informal conversations, a few anecdotes that circulate briefly, and then nothing changes. The problem is not a lack of conversations. It is the absence of a system that treats those conversations as data.
When win-loss analysis is done well, it is one of the highest-signal inputs available to a product, sales, or strategy team. It tells you why buyers chose you or a competitor, in their own words, with context you cannot get from CRM fields or win rates. But getting there requires treating the programme like a research project: consistent interview design, structured metadata, systematic analysis across many interviews, and a clear feedback loop into the business. This article covers how to build that.
Why ad-hoc win-loss conversations fail
The typical win-loss process goes like this. A sales leader decides the team should be talking to buyers after deals close. Reps reach out to a few contacts who are willing to chat. The conversations are informal, sometimes useful, and rarely documented beyond a few bullet points in a shared doc. The insights that do emerge are shaped heavily by whoever conducted the interview and what they were hoping to hear; in practice, either price or features gets the blame.
This approach produces what researchers call confirmation bias at scale. If the rep who lost a deal believes the product is priced wrong, they will hear confirmation of that belief in the debrief. If the rep who won believes they succeeded because of a strong relationship, the interview notes will reflect that. The signal is there, but it is buried under individual interpretation and inconsistent questioning.
There is also a coverage problem. Ad-hoc programmes tend to skew towards deals where the rep had a good relationship with the buyer (making the lost-deal sample unrepresentative), deals that were large enough to feel worth the effort (missing patterns in the mid-market), and recent deals (losing institutional memory about anything more than a quarter old).
The result is a set of interviews that cannot be compared to each other, cannot be aggregated, and cannot be analysed for patterns. They are individual opinions, not a dataset.
The foundations of a systematic programme
Turning win-loss interviews into a dataset starts with three things: consistent structure, complete metadata, and a sample large enough to find patterns.
Consistent interview structure
Every win-loss interview should cover the same core topics, in roughly the same order. This does not mean the conversation has to be rigid. A good interviewer will follow threads and dig into what the buyer says. But the interview guide should ensure you always cover: the decision-making process (who was involved, how long it took), the key evaluation criteria (what mattered most), the competitor set (who else was in the running), the turning point (what tipped the decision), and any friction in the process (what nearly derailed things either way). Writing a rigorous interview guide is the foundation of any analysis you will do later.
Without this structure, you cannot compare interviews. You end up with 20 conversations that each covered different topics in different depths. The interviews themselves should ideally be conducted by someone outside the direct sales relationship, whether an internal programme manager, a customer success leader, or a third-party researcher. Buyers are more candid when they are not talking to the person who just tried to sell to them.
Complete and consistent metadata
Before you can analyse patterns across interviews, you need to be able to slice the data. For win-loss analysis, the minimum useful metadata set typically includes:
- Outcome: won or lost
- Deal size: a banded range (for example, under €20k, €20k-€100k, over €100k)
- Competitor: which alternative the buyer chose or evaluated seriously
- Product line or use case: which part of your offering was in scope
- Segment or region: the buyer's industry, company size, or geography
- Stage lost at: if applicable (early evaluation, late-stage, procurement)
- Buyer persona: the title and function of the primary decision-maker
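To make this concrete, here is a minimal sketch of how a deal-level metadata record might be stored alongside each transcript. The field names and bands are illustrative, not a prescribed schema; adapt them to your own deal attributes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WinLossRecord:
    """Metadata attached to one win-loss interview transcript."""
    interview_id: str
    outcome: str                   # "won" or "lost"
    deal_band: str                 # e.g. "<20k", "20k-100k", ">100k"
    competitor: str                # main alternative the buyer evaluated
    product_line: str              # which part of the offering was in scope
    segment: str                   # industry, company size, or region
    stage_lost_at: Optional[str]   # None for wins
    buyer_persona: str             # title/function of the decision-maker
    transcript_path: str           # where the full transcript lives

# Example record for a lost mid-market deal (illustrative values):
example = WinLossRecord(
    interview_id="WL-2024-031",
    outcome="lost",
    deal_band="20k-100k",
    competitor="Competitor X",
    product_line="Analytics",
    segment="Mid-market SaaS",
    stage_lost_at="late-stage",
    buyer_persona="VP Engineering",
    transcript_path="transcripts/WL-2024-031.txt",
)
```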
These fields let you ask questions that would be invisible from individual interviews. Are losses in the enterprise segment concentrated against one competitor? Does pricing come up more in the mid-market than among larger buyers? Does a particular product line have a win rate problem that is not visible in the aggregate numbers?
This approach, using metadata to segment and cross-tabulate qualitative findings, is covered in depth in our guide on discovering themes in the data using metadata variables. In a win-loss context, the variables are your deal attributes.
Sample size
There is no magic number for how many interviews you need before patterns become visible, but as a rough guide, you need enough interviews to fill the cells of your most important metadata cuts. If you care about four competitor comparisons and three segments, you ideally want five or more interviews in each combination before you start drawing strong conclusions.
In practice, this means running win-loss interviews continuously, not as a quarterly exercise. A company doing 50 deals a month might aim for coverage across 20-30% of closed deals (a mix of wins and losses, weighted towards losses because they are more informative). Our guide on qualitative research sample size discusses how to think about sufficiency for different kinds of conclusions.
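The coverage arithmetic is worth doing explicitly. A minimal sketch using the illustrative numbers above; interviews never distribute evenly across cells in practice, so treat the result as a lower bound.

```python
# Rough coverage arithmetic (illustrative numbers).
competitors = 4          # competitor comparisons you care about
segments = 3             # segments you want to compare
min_per_cell = 5         # interviews per combination before strong conclusions

interviews_needed = competitors * segments * min_per_cell
print(interviews_needed)  # 60 interviews to fill every cell

# At 50 closed deals a month and 25% interview coverage:
deals_per_month = 50
coverage = 0.25
months_to_fill = interviews_needed / (deals_per_month * coverage)
print(round(months_to_fill, 1))  # ~4.8 months of continuous interviewing
```

The point of the exercise is that filling the grid takes months of continuous interviewing, not a quarterly sprint.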
Treating interviews as qualitative data
Once you have a body of interviews, the analytical work begins. This is where most programmes break down, even well-intentioned ones. The instinct is to read through the notes, pull a few representative quotes, and write a summary. That approach is faster than systematic analysis, but it produces findings that are as much a reflection of the analyst's prior beliefs as of the data itself.
Systematic analysis means going through the interviews methodically, identifying themes across all of them, and counting how often each theme appears, in which segments, and in which conditions.
Coding for win-loss analysis: the manual method
Thematic analysis is the methodological backbone of systematic qualitative research, and it applies directly to win-loss programmes. The classic manual process is:
- Read through all interviews to get a feel for the landscape before coding anything
- Develop an initial set of codes: labels for the ideas, concerns, and reasons that come up across interviews
- Apply those codes systematically across every interview
- Look at all the quotes for each code and identify what the actual pattern is
- Synthesise the patterns into findings, with evidence
For win-loss work, you would usually develop two kinds of codes. Descriptive codes capture what the buyer said (for example: "cited integration capability," "mentioned pricing," "referred to competitor's support quality"). Evaluative codes capture what it meant (for example: "this was a deciding factor," "this was a concern but not a blocker"). Both matter, because a theme that comes up often is not the same as a theme that drove decisions.

A practical tip: keep your codes granular during the coding phase and consolidate later. "Pricing" is not a useful code on its own. "Pricing: absolute level," "Pricing: comparison to competitor," "Pricing: value clarity," and "Pricing: internal budget politics" are much more useful, because they lead to different implications. You can always merge them if they turn out to be the same thing in practice.
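One lightweight way to keep coded passages analysable later, whether in a spreadsheet or a script, is to store each one as a record carrying the verbatim quote plus both kinds of code. A minimal sketch; the codes and quotes here are hypothetical.

```python
from collections import Counter

# Each coded passage keeps the verbatim quote, a granular descriptive
# code, and an evaluative code. All values are illustrative.
coded_quotes = [
    {
        "interview_id": "WL-2024-031",
        "quote": "Their HubSpot connector worked out of the box; yours needed custom work.",
        "descriptive_code": "integration: breadth of library",
        "evaluative_code": "deciding factor",
    },
    {
        "interview_id": "WL-2024-017",
        "quote": "The list price was fine, but we couldn't tell what per-seat cost would be at scale.",
        "descriptive_code": "pricing: value clarity",
        "evaluative_code": "concern, not a blocker",
    },
]

# Counting mentions is then trivial, and granular codes can be merged
# later simply by renaming the descriptive_code values.
print(Counter(q["descriptive_code"] for q in coded_quotes))
```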
Analysis by metadata slice
Once all interviews are coded, the metadata becomes your analytical lens. Pull together all the quotes coded as "cited integration capability" and look at them by segment. Is this concern concentrated among technical buyers? Among a particular competitor set? Among deals above a certain size? A theme that applies uniformly to all buyers calls for a different response than one that is specific to a segment.
This cross-tabulation is where win-loss analysis shifts from interesting to actionable. A product team cannot do much with "buyers care about integrations." They can do something with "buyers evaluating us against Competitor X who are mid-market SaaS companies consistently say our integration library is too narrow, particularly for HubSpot and Salesforce, and this came up in 14 of 18 deals in that segment."
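If the coded quotes and deal metadata are joined into one table, the cross-tabulation itself is mechanical. A minimal sketch with pandas, using hypothetical data.

```python
import pandas as pd

# Hypothetical joined table: one row per coded passage, with deal metadata.
df = pd.DataFrame({
    "code":    ["integration: breadth", "integration: breadth", "pricing: value clarity",
                "integration: breadth", "pricing: value clarity", "integration: breadth"],
    "segment": ["Mid-market", "Mid-market", "Enterprise",
                "Mid-market", "Mid-market", "Enterprise"],
    "outcome": ["lost", "lost", "lost", "lost", "won", "lost"],
})

# Theme frequency by segment, lost deals only.
lost = df[df["outcome"] == "lost"]
print(pd.crosstab(lost["code"], lost["segment"]))
```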
The challenge of scale
Doing this analysis by hand across 40 or 60 interviews is genuinely hard. Keeping track of codes, reviewing all the passages for each code, and spotting patterns across metadata dimensions is exactly the kind of work that gets skipped when it should not be. In practice, companies often skip the analysis and rely on gut feel, outsource it to junior staff or external contractors, or fall back on simple word counts and pivot tables that yield only surface-level insights.
This is where modern qualitative analysis tools that use AI change the game.
Tools like Skimle are designed for this kind of structured analysis across a body of interviews. You upload the transcripts, metadata is applied automatically, and the tool helps you develop and apply codes across the full dataset, then view all the evidence for each theme organised by your metadata variables. What would otherwise take a week of manual work becomes something you can do iteratively as new interviews come in. You can read more about how Skimle approaches qualitative analysis or see how it applies to market research use cases.
Conducting interviews that produce useful data
The quality of the analysis depends entirely on the quality of the interviews. A few principles are worth emphasising for the win-loss context specifically.
Interview the actual decision-maker. Not the champion who liked you. Not the procurement contact who processed the contract. The person who made the call, or the small group who made it together. This is often harder to arrange, especially on losses, but the conversations are dramatically more informative.
Wait a few weeks before reaching out. Buyers who have just finished a procurement process are often exhausted and sometimes guarded. Waiting 3-4 weeks gives the decision time to settle and makes the conversation more reflective.
Ask about the process before asking about the decision. Starting with "can you walk me through how the decision came together?" gives you far richer context than starting with "why did you choose the other vendor?" By the time you get to the decision itself, you understand the politics, the timeline, and the competing priorities.
Probe for specifics. Buyers will often give you a polished answer first, especially on losses. "Your pricing was a bit high" is usually a partial explanation. Follow up with "can you say more about that? Was it the absolute level, how it compared to alternatives, or something about how we presented it?" Our guide on conducting effective business interviews goes into detail on how to ask questions that get past the first-level response.
Record and transcribe if at all possible. Notes taken during an interview are filtered through the interviewer's attention and interpretation. A transcript gives you the actual words, which matters when you are coding for nuance. Setting up audio recording and transcription is easier and cheaper than most teams assume.
Building the feedback loop
Analysis that sits in a document is not analysis; it is archaeology. The point of win-loss research is to change something: how the product is positioned, what capabilities get prioritised, how competitive deals are handled, or where pricing needs to shift. Building that feedback loop requires a few deliberate choices.
Route findings to the right audiences. Product teams need to hear about capability gaps, with evidence. Sales teams need to hear about competitive objections and how buyers respond to different positioning. Marketing needs to hear how buyers describe the problem and what language resonates. The same interviews contain all of this, but the summaries need to be packaged differently for each audience.
Create a regular cadence. A quarterly win-loss review, where new themes from the past quarter are presented alongside trend comparisons to previous quarters, keeps the programme visible and actionable. One-off reports tend to get read once and forgotten.
Track whether things change. If integrations are consistently cited as a reason for losses in Q1 and the product team ships new integrations in Q2, the Q3 win-loss data should show whether that gap is still coming up. This kind of longitudinal comparison is only possible if the analysis is systematic and the metadata is consistent. It is also one of the more satisfying things a programme can produce: concrete evidence that the business responded to what buyers said, and that it made a difference.
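Tracked this way, the longitudinal check reduces to comparing a theme's share of interviews across quarters. A minimal sketch, assuming each coded quote also carries the quarter the deal closed in; all numbers are illustrative.

```python
# Hypothetical theme counts per quarter; in practice these come from
# the coded-quote table with a "quarter" column added.
theme = "integration: breadth"
mentions_by_quarter = {"2024-Q1": 14, "2024-Q2": 11, "2024-Q3": 4}
interviews_by_quarter = {"2024-Q1": 18, "2024-Q2": 16, "2024-Q3": 17}

for q in mentions_by_quarter:
    share = mentions_by_quarter[q] / interviews_by_quarter[q]
    print(f"{q}: {share:.0%} of interviews mention '{theme}'")
# A falling share after the Q2 integration releases is concrete evidence
# that the gap buyers described is actually closing.
```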
Connect to quantitative signals. Win-loss interviews tell you the why. Your CRM tells you the what and the scale. Combining them is where you get the most actionable picture. If integrations come up in 60% of your interviews with a particular competitor but that competitor only accounts for 15% of your losses by volume, it is a real issue but not the biggest one. Conversely, if a theme that sounds minor in interviews corresponds to a high-loss-rate segment in your CRM data, it deserves more attention than it would get from qualitative analysis alone.
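The weighting logic in that example is simple arithmetic, but making it explicit keeps prioritisation honest. A sketch using the illustrative figures from the paragraph above.

```python
# Illustrative numbers from the paragraph above.
total_losses = 100                 # losses by volume in the CRM
competitor_x_losses = 15           # 15% of losses are to Competitor X
integration_mention_rate = 0.60    # share of Competitor X interviews citing integrations

# Rough upper bound on losses this theme could explain:
losses_explained = competitor_x_losses * integration_mention_rate
print(losses_explained)  # ~9 of 100 losses: real, but not the biggest lever
```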
A note on using AI for win-loss analysis
There is growing interest in using AI tools to analyse win-loss interviews, and the question is worth addressing directly. Uploading transcripts to a general-purpose AI chat tool and asking "what are the main themes?" is not a substitute for the structured approach described here. The output will be plausible-sounding but not rigorous: the AI has no visibility into what it missed, no way to show you the evidence behind each claim, and no ability to compare themes across metadata dimensions in a transparent way. Our article on whether ChatGPT can analyse qualitative data covers these challenges in some detail.
The most useful AI-assisted approaches combine structured data (consistent metadata, explicit codebooks) with AI support for the tagging and retrieval work, while keeping the synthesis and interpretation in human hands. This is the design philosophy behind Skimle's approach to interview analysis, which maintains full transparency from every insight back to the source quotes rather than generating summaries that obscure the evidence.
If your team is running win-loss conversations but not getting systematic insight from them, the methodology is available. The barrier is usually the tooling and the discipline to treat the interviews as structured data rather than anecdotes.
Ready to bring structure to your win-loss analysis? Try Skimle for free and see how systematic qualitative analysis works across a real body of interviews.
Want to go deeper on the methodology? Read our guides on thematic analysis, how to analyse interview transcripts, and using metadata to discover patterns in qualitative data.
About the authors
Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published over a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. His research focuses on organisational strategy, innovation, and qualitative methodology. Google Scholar profile
Olli Salo is a former Partner at McKinsey & Company, where he spent 18 years helping clients understand their markets and themselves, develop winning strategies, and improve their operating models. He has conducted over 1,000 client interviews and published over 10 articles on McKinsey.com and elsewhere. LinkedIn profile
Frequently asked questions
How do I get buyers to agree to a win-loss interview, especially after a loss?
Timing and framing make the biggest difference. Wait 3-4 weeks after the decision closes before reaching out. Frame the request as an input to your product roadmap, not a sales conversation: "We are trying to understand what drives decisions like yours so we can build a better product. There is nothing to sell. We would genuinely value 20 minutes of your perspective." Many buyers will say yes to this, including on losses, because it costs them little and they often have opinions they are happy to share now the pressure is off. Having someone other than the sales rep make the request also helps considerably.
How many win-loss interviews do I need before I can start drawing conclusions?
You can start spotting tentative patterns at around 10-15 interviews if they cover a reasonably homogeneous segment. For broader conclusions across segments, competitors, or deal sizes, you typically need 30-50 interviews to have enough coverage in each meaningful slice. The key principle is that conclusions should be proportional to the evidence: it is fine to say "this theme came up in 4 of 6 interviews with mid-market buyers who were also evaluating Competitor X" as a hypothesis, while being clear that it needs more interviews to confirm. Our guide on qualitative research sample size covers how to think about sufficiency more systematically.
How do I structure the analysis so findings are credible to sceptical stakeholders?
The two things that most undermine credibility are cherry-picked quotes and vague frequency claims. To counter these, be explicit about your sample (how many interviews, across what mix of outcomes and segments), show frequency counts for each theme (not just "several people mentioned" but "12 of 24 interviews"), and include the disconfirming evidence (what did the interviews that did not fit the pattern say?). Linking every claim back to specific quotes, as you can do in tools like Skimle, also helps: it invites scrutiny rather than deflecting it, and stakeholders who can read the source material themselves are more likely to trust the analysis.
How do I route win-loss findings to product, sales, and marketing without everyone getting a 20-page report?
Build three short-form outputs from each quarterly analysis cycle. For product, a one-page summary of capability gaps with frequency data and representative quotes, ranked by impact on loss rate. For sales, a competitive playbook update that covers the objections that came up and the language buyers used to describe what they valued. For marketing, a set of verbatim phrases that describe the problem the product solves, drawn from how buyers described their situation before the evaluation. The underlying analysis is the same, but each audience gets the slice that is relevant to their decisions.
How do I prevent win-loss analysis from becoming a blame exercise for the sales team?
The framing matters from the start. Win-loss analysis should be positioned as a market intelligence function, not a performance review of individual reps. Findings should be reported at the aggregate level, not tied to specific deals or salespeople. It also helps to surface wins alongside losses: understanding why you won is as strategically useful as understanding why you lost, and it makes the programme feel less adversarial. When sales teams see that the findings lead to product changes, messaging updates, and competitive support, rather than just attribution for losses, they become contributors rather than subjects.
How do I collect win-loss interview data at scale?
If you want to collect dozens of pieces of feedback but lack the capacity to conduct each interview personally, take a look at Skimle Ask. It is an AI interviewer that asks smart follow-up questions and works through either a voice or text interface.
