You run an NPS survey every quarter. The results come back: a score of 34, down three points from last time. Someone builds a slide. There is a bar chart. Leadership asks whether 34 is good or bad. A benchmark is found. The meeting moves on.
Somewhere in the same spreadsheet, there are 600 open-text comments. Customers telling you, in their own words, exactly why they scored the way they did. What frustrated them, what delighted them, what they tried to do and could not. That data sits untouched. Next quarter, you will run the same survey and the same thing will happen again.
This is where most NPS programmes lose the bulk of their value. The score is a headline. The verbatim is the story. This post is about how to read that story properly. If your feedback goes beyond NPS surveys to include support tickets, app reviews, and other sources, our guide on analysing customer feedback with Skimle covers the broader workflow — much of what applies there applies here too.
Why the verbatim is where the real signal lives
The NPS score tells you roughly how satisfied your customers are in aggregate. It cannot tell you why. Two customers can both score you a 7 and have completely different reasons: one thought the product was fine but the onboarding was slow, the other thought the onboarding was great but a key feature they needed was missing. Both are passives. The score treats them identically.
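The arithmetic behind the score is worth having to hand, since the same segment cut-offs drive the analysis later. A minimal Python sketch using the standard thresholds (9-10 promoter, 7-8 passive, 0-6 detractor):

```python
def segment(score: int) -> str:
    """Standard NPS segmentation of the 0-10 scale."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def nps(scores: list[int]) -> float:
    """Net Promoter Score: percentage of promoters minus percentage of detractors."""
    segs = [segment(s) for s in scores]
    return 100 * (segs.count("promoter") - segs.count("detractor")) / len(segs)

# 50% promoters and 16% detractors yields the score of 34 from the opening example.
```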
The verbatim comments are different. When customers write a free-text response, they tell you what actually drove their score. They name the product area, the specific interaction, the moment where their experience tipped positive or negative. This is qualitative data in its most direct form: customers explaining themselves in their own words.
The problem is that qualitative data at this scale is hard to work with. When you have 600 comments, you cannot just read them and trust your impression. Human memory and attention are unreliable. You will notice the emotionally vivid responses, the ones that confirm what you already suspected, and the last ten you read. The patterns that matter statistically, the ones that affect 15% of your customer base consistently, can stay invisible.
Qualitative analysis done rigorously requires a systematic approach: a way of moving from raw comments to structured themes that reflects the whole dataset, not just the parts that caught your eye.
What most teams actually do (and what they miss)
There are a few common approaches to NPS verbatim analysis. They are all better than nothing. They are also all significantly worse than they need to be.
Reading through and summarising manually
Someone, usually an analyst or a CX manager, reads through the comments and writes up a summary. "Customers mentioned pricing a lot. There were complaints about the mobile app. A few people praised the support team." This approach captures something, but it is deeply susceptible to bias. The comments that feel memorable or striking shape the summary more than the comments that are statistically common. Quiet patterns, the ones spread across dozens of similar but not identical comments, rarely make it into the write-up.
Manual reading also does not scale. If your NPS programme runs quarterly across multiple regions, products, or customer segments, you quickly end up with more data than any individual can absorb properly. The analysis either gets rushed or skipped.
Word clouds and frequency counts
Survey tools often offer built-in word cloud or keyword frequency views. These are appealing because they are instant. They are also, largely, useless for decision-making.
Word clouds count words, not meaning. "Great" and "not great" look identical if you are just counting the word. "Slow" might mean slow to load, slow to respond, or slow to onboard. The frequency count tells you that people are using a word; it tells you nothing about what they mean by it. And the most important things customers say are often expressed in different words by different people, which frequency counts will scatter across dozens of low-count entries rather than surfacing as a coherent theme.
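The failure mode is easy to demonstrate. A few illustrative lines of Python doing exactly what a word cloud does, counting words with no sense of meaning:

```python
from collections import Counter

comments = [
    "great onboarding, great support",
    "the onboarding was not great",
    "the export is slow",
    "slow to get a response from support",
]

# Word-cloud logic: split each comment into words and count occurrences.
counts = Counter(word for c in comments for word in c.lower().split())

print(counts["great"])  # 3: two positive mentions and one negative, counted identically
print(counts["slow"])   # 2: two unrelated problems merged into a single entry
```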
Excel filtering and pivot tables
More sophisticated teams will tag comments manually in a spreadsheet: promoter/passive/detractor, product category, topic area. This is closer to proper thematic analysis in spirit. The execution usually falls short because the tagging categories are invented on the fly, applied inconsistently across different people doing the coding, and too broad to be useful. "Product feedback" and "service feedback" are not categories. They are buckets.
The deeper problem is that these approaches treat the verbatim as an afterthought to the score, something to scan for confirmation rather than something to analyse for discovery. The question being asked is "what do the comments say about this score?" rather than "what are the comments actually telling us?"
The case for systematic thematic analysis
Thematic analysis is the methodological framework that qualitative researchers use to work with open text at scale. The basic idea is straightforward: you read through data carefully, identify patterns, group patterns into themes, and build a structured account of what the data contains. What makes it rigorous is the discipline of applying that process systematically, covering the whole dataset, using consistent criteria for what counts as a theme, and being explicit about how you got from raw data to findings.
For NPS verbatim analysis, this means:
- Reading through enough comments to understand the range of things customers are talking about
- Defining a theme structure that captures the actual topics in the data, rather than the ones you expected to find
- Coding every comment against that structure, not just a sample (see the sketch after this list)
- Writing summaries for each theme that reflect what customers in that group actually said, including the range and the variation, not just the most common version
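The end product of that coding step is a structured dataset rather than a pile of text. A minimal, entirely hypothetical illustration of the shape in pandas:

```python
import pandas as pd

# Hypothetical coded output: one row per comment-theme pairing,
# with the score segment carried along as metadata.
coded = pd.DataFrame([
    {"id": 101, "segment": "detractor", "theme": "onboarding friction",
     "comment": "Setup took three weeks and two support calls"},
    {"id": 102, "segment": "promoter", "theme": "support quality",
     "comment": "Support resolved my issue within the hour"},
    {"id": 103, "segment": "passive", "theme": "missing features",
     "comment": "Fine overall, but still no SSO"},
])

# Theme prevalence computed over the whole dataset, not an impression of it.
print(coded["theme"].value_counts(normalize=True))
```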
This sounds like a lot of work. It is, if you do it manually. A proper analysis of 600 NPS comments, done by hand, might take 20 to 30 hours: even at two to three minutes per comment, the coding alone accounts for most of that, before you add creating a category framework, writing summaries, and checking for consistency. For most teams, that is not realistic.
But the methodology itself is sound. The question is whether you can apply it without the time cost. As we discuss in the broader context of analysing open text responses at scale, the right AI tools can do the heavy lifting of pattern recognition while keeping you in control of the interpretation. Our customer feedback analysis guide walks through this end-to-end with a real dataset, showing exactly how the import, theme discovery, and metadata exploration steps work in practice.
How metadata transforms NPS verbatim analysis
Here is something most NPS verbatim analysis misses completely: the comments do not exist in isolation. Every comment comes from a customer with context. They gave a particular score. They use a particular product. They are in a particular region, or market segment, or tenure band. They submitted this response in a particular month.
That metadata is what turns a flat list of comments into a structured dataset you can interrogate.
Consider a simple example. Your overall NPS score is 34. Your thematic analysis of verbatim comments shows that 18% of comments mention onboarding friction. That is a useful finding. Now you add the metadata layer: you find that 43% of detractor comments mention onboarding friction, versus 4% of promoter comments. Suddenly you have a strong hypothesis about what is driving your NPS score, not just a theme.
Or: you find that onboarding friction comments are heavily concentrated among customers who signed up in the last three months, suggesting that a recent product change or process change is creating a problem that did not exist before. Or they cluster in one region, pointing toward a localisation or support issue rather than a product problem.
Using metadata variables in qualitative analysis is the difference between knowing what customers are saying and understanding what it means for your business. Score segment (promoter, passive, detractor) is the most obvious metadata variable for NPS analysis, but it is just the start. Product, time period, customer segment, region, and channel can all become dimensions along which your verbatim themes suddenly look very different.
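With a coded table like the sketch above, the segment cut behind the onboarding example is a one-liner in pandas. The file and column names here are assumptions, not a required schema:

```python
import pandas as pd

# Hypothetical coded export: one row per comment-theme pairing.
coded = pd.read_csv("nps_coded.csv")

# For each theme, the share of each segment's comments that mention it.
by_segment = pd.crosstab(coded["theme"], coded["segment"], normalize="columns")

print(by_segment.loc["onboarding friction"])
# e.g. detractor 0.43, passive 0.12, promoter 0.04 -- the pattern described above
```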
Score segment analysis: what are your detractors actually saying?
The most immediate application of metadata in NPS verbatim analysis is cutting the themes by score segment. It is conceptually simple and surprisingly rare in practice.
Promoters and detractors are not just customers at opposite ends of a scale. They often have qualitatively different relationships with your product. Promoters may have found a use case that works well for them and want more depth or integrations. Detractors may be hitting a fundamental experience failure. Passives often represent unmet potential: customers who liked the product enough to stick around but never found the thing that would make them enthusiastic.
Analysing the verbatim separately by score segment tells you which themes are genuinely associated with high or low satisfaction, versus which themes come up across all segments regardless of score. A theme that appears equally among promoters and detractors probably reflects something customers care about but have mixed experiences with. A theme concentrated heavily among detractors is a candidate for your top priority list.
Time period: spotting trends and anomalies
If you run NPS surveys regularly, the date of submission is a piece of metadata worth taking seriously. When you analyse verbatim themes across time, you can see whether issues are persistent or temporary, whether they are improving or getting worse, and whether anomalies in the score correspond to identifiable events in the product or service experience.
A spike in comments about a specific feature coinciding with a product release, a cluster of service quality complaints tracking a period of high growth, a sustained rise in pricing concerns over several quarters: these patterns are invisible if you only look at individual survey cohorts. Over time, they tell a story about the trajectory of the customer experience.
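Assuming each coded response carries its submission date, the quarterly trend for any theme is a short groupby. Again illustrative only, with hypothetical file and column names:

```python
import pandas as pd

coded = pd.read_csv("nps_coded.csv", parse_dates=["date"])

# Share of comments mentioning a given theme, per quarter.
pricing_trend = (
    coded.assign(quarter=coded["date"].dt.to_period("Q"))
         .groupby("quarter")["theme"]
         .apply(lambda themes: (themes == "pricing concerns").mean())
)

print(pricing_trend)  # a sustained rise across quarters is the pattern worth escalating
```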
Product, region, and segment: finding where the problems live
Aggregate NPS scores hide as much as they reveal. A score of 34 across your whole customer base might be a 52 in one product line and a 19 in another. The verbatim themes will be different too, and understanding those differences is what makes the analysis actionable.
When you break verbatim themes down by product, region, or customer segment, you move from "customers have concerns about reliability" to "reliability concerns are concentrated in the enterprise segment using the data export feature in the EU region." The first statement requires further investigation before anyone knows what to do. The second statement is a brief for a product team.
This is the practical value of proper qualitative analysis at scale: not just finding themes, but finding where they live and what they correlate with.
A practical workflow for NPS verbatim analysis with Skimle
Here is how to run NPS verbatim analysis end to end in Skimle. The customer feedback walkthrough covers the same workflow with a full example dataset if you want to see it before setting up your own project.
Prepare and import your data
Export your NPS data as a CSV with every column intact: response ID, score, score segment (promoter/passive/detractor), free-text comment, product, date, region, and any other fields you collect. Do not strip columns before importing.
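If the platform's export needs tidying first, a short script is usually enough. A sketch with assumed column names; adjust to whatever your survey tool actually produces:

```python
import pandas as pd

raw = pd.read_csv("survey_export.csv")  # hypothetical export filename

# Derive the score segment if the export only contains the raw 0-10 score.
raw["segment"] = pd.cut(
    raw["score"],
    bins=[-1, 6, 8, 10],
    labels=["detractor", "passive", "promoter"],
)

# Keep every column: id, score, segment, comment, product, date, region, ...
raw.to_csv("nps_for_import.csv", index=False)
```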
In Skimle's import screen, you map each column to its role. The response ID becomes the document title. The free-text comment column becomes the content Skimle analyses. Everything else — score, segment, product, date, region — becomes metadata attached to each document. That mapping is what makes the analysis genuinely powerful: every theme and insight Skimle surfaces later carries its metadata context, so you can always ask "which customers said this, and what score did they give?" and get a direct answer rather than a guess.
One practical note: include very short responses ("Good", "N/A", "Nothing to add"). They are genuine data points. Excluding them silently skews your theme distribution toward customers who wrote more, which tends to over-represent dissatisfied customers.
The import and export workflow is worth setting up as a repeatable process from the start, especially if you run NPS surveys quarterly. Each new cohort can be added incrementally to the same project without rebuilding the analysis from scratch.
Let Skimle identify the theme structure
Once the data is imported, Skimle reads all the comments and builds a theme structure from the bottom up. It does not start from a category list you define in advance. It discovers what is actually in the data.
In a typical NPS verbatim dataset this produces themes you expected — onboarding friction, pricing perception, product reliability, support quality — and things you did not. A cluster of comments about a specific workflow that keeps failing. A pattern of confusion around a feature that the product team assumed was self-explanatory. Quiet frustrations that no individual comment makes obvious but that appear consistently across dozens of responses. These are the findings that keyword search and manual reading consistently miss, and they are often the most actionable ones.
Asking ChatGPT to "summarise these 600 comments" gives you a paragraph but, as we have explored elsewhere, no way to verify what was included or left out. Skimle's structured approach means every theme links directly to the comments that generated it. When a product lead asks "where does this come from?", you can show them. That two-way transparency, from high-level theme back to individual source comment, is what makes the findings credible rather than just plausible.
Review and refine the themes
Skimle's initial theme structure is a starting point. Spend fifteen to twenty minutes reviewing it before moving on. The questions to ask: are these categories distinct enough to be useful? Are any too broad? Is anything missing?
This is where your product knowledge does work the AI cannot. If Skimle has grouped "slow loading times" and "the app crashed during export" into a single performance theme, you may want to split them — one is a UX problem, the other is a reliability issue with a different owner. If it has separated "pricing is too high" from "pricing is confusing", you might decide those are really the same underlying problem expressed differently and merge them.
In Skimle you can merge themes, split them, rename them, and read the underlying comments for any category to verify that the grouping makes sense. Refine until the structure reflects meaningful distinctions for your business, not just for the data in the abstract.
Explore themes by score segment, product, and time
With the theme structure confirmed, Skimle lets you cut the analysis by any metadata dimension. Start with score segment: how does each theme distribute across promoters, passives, and detractors? A theme concentrated in detractor comments is a different priority from one that appears equally across all three groups.
Then cut by product, region, time period, or customer segment. This is where the aggregate picture breaks apart into something actionable. "Reliability complaints represent 14% of all comments" is a finding. "Reliability complaints represent 38% of detractor comments from enterprise customers in the past two months, up from 9% in the prior quarter" is a brief for a product team.
For each significant pattern, read the underlying quotes directly in Skimle. The distribution tells you where to look; the quotes tell you what is actually happening. Pull three to five representative verbatims per major theme — these are what you will use in the presentation, because a well-chosen quote lands harder than a percentage in a bar chart.
Generate the report and turn themes into recommendations
Skimle generates a structured report across the full dataset: top themes, their prevalence, representative quotes, and a view of where the analysis points. Use this as your working document, not the final output.
The deliverable your team needs is not a list of themes. It is a set of prioritised recommendations with evidence. For each significant theme, the write-up should cover: what the theme is, how prevalent it is, which segment or period it concentrates in, what customers specifically said, and what action you are recommending. Not "customers have concerns about pricing" but "pricing concerns appear in 22% of detractor comments, predominantly from enterprise customers, who consistently reference competitor pricing rather than absolute cost — recommendation: targeted value communication for enterprise renewal conversations."
That level of specificity is only possible when the analysis has been done systematically. Skimle gives you the structure to get there without the manual effort that would otherwise make it impractical.
Making NPS verbatim analysis a regular practice
The teams that get the most from NPS verbatim analysis are the ones who treat it as a routine rather than a project. Running the analysis once after a major survey gives you a point-in-time view. Running it after every survey cohort, with consistent metadata and consistent methodology, gives you a longitudinal picture.
That longitudinal view is where some of the most valuable insights emerge. You can see whether a theme that appeared six months ago has been addressed (the frequency should drop). You can see whether a new theme is emerging before it becomes a crisis. You can track whether your NPS improvement efforts are actually changing the things customers talk about, not just the score.
Qualitative data analysis tools vary considerably in how well they support this kind of ongoing workflow. The key things to look for are the ability to add new documents incrementally without rebuilding the analysis from scratch, consistent metadata handling across cohorts, and the ability to compare theme distributions across time periods.
This is also an argument for democratising access to the analysis within your organisation. When verbatim analysis requires a specialist researcher and several days of work, it happens once a year at best. When it is fast enough that a product manager or CX lead can run it themselves after each survey cycle, it becomes part of how the organisation actually uses its customer data.
Ready to turn your NPS verbatim comments into structured, actionable themes? Try Skimle for free and bring your next CSV of open-text responses into a rigorous analysis workflow. From import to themes to metadata breakdowns, the whole process takes hours rather than days.
Want to go deeper on the methodology? Read our guide on thematic analysis methodology, explore how metadata variables transform qualitative analysis, and see how Skimle handles customer feedback end to end.
About the authors
Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published over a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. His research focuses on organisational strategy, innovation, and qualitative methodology. Google Scholar profile
Olli Salo is a former Partner at McKinsey & Company, where he spent 18 years helping clients understand their markets and themselves, develop winning strategies, and improve their operating models. He has conducted over 1,000 client interviews and published over 10 articles on McKinsey.com and beyond. LinkedIn profile
Frequently asked questions
How do I get started with NPS verbatim analysis if I have never done it before?
Start by exporting your NPS survey data as a CSV, making sure to include the free-text comment column alongside score, date, and any other metadata you have (product, region, customer segment). Import the CSV into a qualitative analysis tool that can handle open text systematically. Let the tool identify an initial theme structure from the data, then review the themes to make sure they reflect meaningful distinctions for your business. Cut the results by score segment first (promoter, passive, detractor) to see which themes are associated with satisfaction and which with dissatisfaction. That first cut usually produces the most immediately actionable findings.
How do I handle very short or unhelpful verbatim responses like "Good" or "N/A"?
Include them in your dataset but do not try to force them into your theme structure. Most qualitative analysis tools will recognise these as low-information responses and handle them separately or flag them. They are genuine data points: a customer who scores 9 and writes "Good" is telling you they have no strong views beyond general satisfaction, which is a valid finding in itself. Excluding them silently would skew your theme distribution toward customers who wrote more, who are often the more dissatisfied ones.
How many NPS verbatim comments do I need for meaningful thematic analysis?
There is no strict minimum, but below around 50 to 100 comments it becomes harder to distinguish systematic patterns from individual variation. With 200 or more comments, thematic analysis typically produces reliable and stable results. Above that, the main constraint is time (for manual analysis) or tool capability (for AI-assisted analysis). Most organisations running regular NPS surveys will have well above this threshold, which is part of why systematic analysis matters more than casual reading.
How do I prioritise which themes to act on after the analysis?
Start with the intersection of frequency and score-segment skew. A theme that appears in 30% of detractor comments is a higher priority than one that appears in 5%, all else being equal. Within high-frequency themes, look at whether the issue is concentrated in a specific product, region, or customer segment, because concentrated problems tend to be more fixable than diffuse ones. Then assess severity from the verbatim itself: a theme where customers describe significant disruption to their workflow deserves more urgency than one where they are requesting a nice-to-have improvement.
How do I present NPS verbatim findings to stakeholders who only care about the score?
Frame the findings as explanations of the score rather than alternatives to it. "Our NPS is 34. Here is what is driving it, and here is what we can change" is more persuasive than presenting thematic analysis as a separate exercise. Use the verbatim quotes strategically: one well-chosen quote from a representative customer will land harder than a percentage in a bar chart. And connect each theme to a proposed action with a plausible path to improving the score, so the analysis feels actionable rather than descriptive.
