Commercial due diligence in 2026: how AI is changing qualitative primary research

How AI is changing the synthesis of expert calls and customer reference interviews in commercial due diligence — and what it means for deal teams in practice.


AI is changing commercial due diligence by making it possible to synthesise 20 to 30 expert calls and customer reference interviews in hours rather than days. Tools like Skimle process interview transcripts, surface themes, and maintain an audit trail that stands up to investment committee scrutiny — without the manual coding that previously made qualitative primary research the bottleneck on a tight deal timeline. AI interviewing tools also open a new data collection path: running structured qualitative interviews with dozens or hundreds of customers or market participants at the cost and speed of a survey, but with the depth of a real interview.

In commercial due diligence, the bottleneck has never been conducting interviews. Experienced deal teams are good at getting access and running expert calls. The bottleneck is what happens next: taking 30 transcripts, each 10,000 words, and turning them into a coherent, defensible account of what the market actually thinks about the target company, its competitive position, and the growth thesis. Manually, this takes a senior analyst three to four days. On a compressed M&A timeline, those days are often not available.

This guide covers where AI-assisted analysis fits into CDD primary research, what it changes, and what it does not.

What qualitative primary research looks like in CDD

A typical commercial due diligence engagement involves three categories of qualitative primary research:

Expert network calls. Structured conversations with industry experts, former executives, and sector specialists. These validate or challenge the market thesis and competitive landscape. A CDD engagement might involve 15 to 30 expert calls, each transcribed and needing to be synthesised into consistent findings.

Customer reference interviews. Calls with existing and former customers of the target. These test customer satisfaction, switching risk, competitive alternatives, and the durability of the target's value proposition. Customer reference interviews are the highest-signal source for assessing revenue quality.

Management interviews. Conversations with the target's leadership team, which test the quality of management, the coherence of the strategy, and the credibility of the financial plan. These are usually conducted by the most senior members of the deal team.

The synthesis challenge is largest for expert and customer calls — the volume is high, the data is unstructured, and the timeline is short.

The traditional synthesis workflow and its limits

The traditional approach:

  1. Each analyst who conducted calls writes individual call summaries (30–60 minutes per call)
  2. The team reviews all summaries together in a readout meeting
  3. A senior team member synthesises the summaries into a narrative (half day to a full day)
  4. The narrative is revised through an iterative drafting process

This works, but at a cost. Individual summaries are inconsistent — different analysts emphasise different things, use different terminology, and apply different levels of detail. The synthesis meeting produces collective recall, not systematic analysis. The resulting narrative reflects the team's impressions as much as the data.

More importantly, with 30 transcripts on a 10-day timeline, there is often not enough time for a thorough read of all the data. Important signals get missed. The summary documents what was heard in the calls, not what is in the transcripts.

What AI-assisted analysis changes

Structured AI analysis inverts the workflow. Instead of analysts summarising and synthesising individually, the tools process the transcripts directly and produce a thematic structure that the team reviews and interprets.

In practice, with Skimle:

  1. Upload all expert call transcripts to a project (PDF, DOCX, or TXT)
  2. Run the automatic thematic analysis — typically completes in minutes for 30 transcripts
  3. Review the category hierarchy the analysis produces — major themes with supporting insights and quotes
  4. Use metadata variables to segment findings by expert type, expert tenure, geographic market, or any other attribute
  5. Refine, merge, or split categories as needed based on the team's domain knowledge
  6. Export the structured findings for the investment memo or report

The output is a thematic structure where every finding is backed by specific quotes from specific transcripts — an audit trail that investment committees can interrogate. "Seven of twelve customer interviews mentioned declining support responsiveness" is a defensible claim with the evidence two clicks away.
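Frequency claims of this kind rest on a simple tally: count the distinct transcripts that mention each theme, optionally filtered by a metadata segment such as respondent type. A minimal illustrative sketch of that logic — the data structure and function names here are hypothetical, not Skimle's actual API:

```python
from collections import defaultdict

# Hypothetical coded findings: each entry links a theme to a supporting
# quote, its source transcript, and metadata (here, respondent type).
findings = [
    {"theme": "Support responsiveness", "source": "customer_03", "type": "customer",
     "quote": "Ticket resolution times have doubled since last year."},
    {"theme": "Support responsiveness", "source": "customer_07", "type": "customer",
     "quote": "We escalate far more often than we used to."},
    {"theme": "Pricing pressure", "source": "expert_02", "type": "expert",
     "quote": "New entrants are undercutting on price."},
]

def theme_frequency(findings, segment_key=None, segment_value=None):
    """Count distinct source transcripts per theme, optionally within a segment."""
    sources = defaultdict(set)
    for f in findings:
        if segment_key and f.get(segment_key) != segment_value:
            continue
        sources[f["theme"]].add(f["source"])  # de-duplicate repeat mentions
    return {theme: len(s) for theme, s in sources.items()}

print(theme_frequency(findings, "type", "customer"))
# → {'Support responsiveness': 2}
```

Because each count is derived from identifiable source transcripts, every frequency statement in the memo can be traced back to the underlying quotes — which is exactly what makes the claim defensible in front of a committee.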

This does not replace analyst judgement. The AI identifies what was said; the analyst interprets what it means. "Seven customers mentioned declining support responsiveness" is an observation. "This pattern, combined with the management team's acknowledgement of a recent CS restructuring, suggests a deteriorating NPS trajectory that the financial model does not capture" is analysis. The second requires a human with domain knowledge. The first no longer does.

What AI-assisted CDD analysis actually produces

The practical outputs for a commercial due diligence primary research synthesis:

Market perception themes. What do experts believe about the market structure, growth rate, and competitive dynamics? How consistent is this with the management presentation?

Competitive positioning. What position does the target occupy in the minds of experts and customers? Is this consistent with the management team's self-description? Where are the gaps?

Customer satisfaction and switching risk. What do customers like and dislike? Are satisfaction themes concentrated in particular segments or geographies? Are there switching indicators — mentions of evaluating alternatives, contractual flexibility sought, relationship health concerns?

Growth thesis validation. Which elements of the management growth thesis are validated by primary research, which are questioned, and which are not mentioned at all?

Risk themes. What operational, competitive, or market risks emerge from the qualitative data that are not prominent in the management presentation?

The expert call investment: getting more from what you spend

Expert network calls cost €500 to €2,500 each. For a 20-call programme, that is €10,000 to €50,000 of primary research investment before any internal time is counted.

The value of that investment depends entirely on the quality of synthesis. A 20-call programme that produces a two-page memo of impressions returns a fraction of the value of a programme that produces a structured thematic analysis with frequency data and direct quote support.

See the expert network calls guide for how to structure the call programme, and the guide on how to summarise expert interviews for the five-step synthesis approach.

Bridging qualitative and quantitative: AI interviews at CDD scale

Traditional CDD primary research faces a structural constraint. Deep qualitative interviews — the kind that surface real opinions, unprompted concerns, and specific examples — do not scale. A 20-call expert programme takes two to three weeks to schedule and execute. A 50-customer reference programme is beyond what most deal timelines allow. So deal teams make do with 15 to 20 calls and accept that the sample may not be representative.

Quantitative surveys solve the scale problem but lose the depth. A 200-customer survey tells you what proportion prefer what, but not why. For investment theses that hinge on customer satisfaction durability, competitive switching risk, or the credibility of a management growth narrative, "60% gave a score of 7 or above" is not sufficient evidence.

Skimle Ask sits between these two modes. It runs structured AI-led interviews — each 8 to 15 minutes — at the scale of a survey. Each participant receives the same core questions, but the AI follows up on substantive answers: if a customer mentions a specific concern about pricing, the AI probes for context; if they mention a competitor, it asks what they value about them. The output is a full conversational transcript for each respondent, not a tick-box response.

For CDD, this opens primary research options that were previously out of reach on a deal timeline:

Broad customer voice programmes. Send Skimle Ask to 80 or 100 customers of the target within a week. Analyse the transcripts thematically to understand satisfaction patterns, switching risk signals, and competitive positioning — with the frequency data that investment committees expect and the verbatim quotes that make findings credible.

Market participant surveys. Interview a broad sample of market participants — potential customers, channel partners, industry practitioners — to validate the market size and growth thesis with qualitative depth that a standard survey cannot provide.

Competitive customer interviews. With appropriate access, run structured interviews with customers of the target's key competitors to understand where the competitive gaps are real versus perceived.

Because Skimle Ask transcripts feed directly into Skimle's thematic analysis, the collection and synthesis steps are integrated. A 100-respondent customer voice programme can be fielded, completed, and analysed within a deal week — producing findings with both the scale of survey research and the depth of qualitative interviews.

Where CDD qualitative analysis still requires human expertise

AI processes text. It does not evaluate credibility. An expert who spent two years at a company 15 years ago is a different data source from one who left six months ago. A customer who is in contract renewal is a different data source from one who just signed. A management team interview answer that contradicts a publicly available filing means something different from one that contradicts only an informal prior conversation.

These contextual judgements require analysts who know the source and the situation. Good CDD analysis combines the speed and coverage of AI synthesis with the judgement of experienced deal professionals who can weight, challenge, and interpret what the data is showing.

The qualitative data analysis tools comparison covers the full landscape of tools available for professional research contexts.

Want to make your next CDD primary research programme faster and more defensible? Get in touch with the Skimle team to discuss how structured AI analysis fits into your deal workflow, or start with a free trial.

About the author

Olli Salo is a former Partner at McKinsey & Company, where he spent 18 years helping clients understand their markets, develop winning strategies, and improve their operating models. He has conducted over 1,000 client interviews and published more than 10 articles on McKinsey.com and elsewhere. LinkedIn profile

