AI text analysis - Don't use 'motorised Swiss knives' for serious qualitative analysis

AI can help with analysis, but the rush to replace rigorous methods with the wrong tools risks more harm than good. As one head of AI transformation put it, 'big AI' is trying to sell the same tool for every use case.


For decades, quantitative researchers have had powerful computational tools at their fingertips. Excel for data manipulation, SPSS for statistical analysis, R and Python for advanced modelling... the toolkit for number-crunching grew exponentially more sophisticated as computers became more powerful. Meanwhile, people working with qualitative data such as interviews were left with... highlighter pens, Post-it notes, and, if they were working in academic circles, legacy QDA software that offered little more than glorified word counting, word clouds, and digital versions of colour-coded highlighting.

When ChatGPT and similar tools became widely available in 2023-2025, the qualitative research community saw what looked like the answer to decades of tool envy. Here, finally, was AI that could read text, identify patterns, and generate sophisticated summaries. Suddenly everyone was talking about AI-assisted text analysis. The promise was intoxicating: feed your interview transcripts into ChatGPT, ask it to "analyse the themes," and watch insights magically appear. Academic researchers, consultants, and market researchers rushed to experiment with these new AI analysis tools.

Researchers began uploading interview transcripts and asking variations of "What are the main themes in this data?" Consultants fed client interviews into Claude and asked it to "analyse the key insights." Market researchers pasted customer feedback into Gemini and requested "a thematic analysis."

And the AI complied! It would dutifully produce bulleted lists of themes, write elegant summaries, and even generate quotes to support each theme. In demos, the outputs looked professional, read well, and felt like they saved enormous amounts of time.

There was just one problem: this is not how rigorous qualitative analysis works. The results were not actually analyses; they were shallow summaries lacking two-way transparency, structure, and deeper insight. The term "AI slop" seemed to describe them well.

The big AI companies were good at pushing one-size-fits-all solutions

The AI industry and its enterprise sales teams played a big role in fuelling the gold rush. The dominant narrative around AI tools for knowledge work focuses on "copilots", AI assistants that sit alongside you and offer suggestions while you work in familiar applications. Microsoft Copilot, Gemini for Google Workspace, Claude for Enterprise and so on are all variations on the same theme: take a general-purpose chatbot and embed it into existing workflows. Need to write an email? Ask your copilot. Need to summarise a document? Ask your copilot. Need to analyse qualitative data? Well... ask your copilot, presumably.

We recently spoke with the Head of AI transformation at one large financial institution, who called this the "motorised Swiss knife" approach. The AI players have invented a powerful new general-purpose tool and are trying to sell it as the solution for everything. But a Swiss knife, even with an engine, is not the right tool for a surgeon, a master chef, a woodcutter, or a fine art carver. Each domain has specialised tools and workflows, developed over years through the experimentation and judgement of craftspeople.

The Swiss knife AI works reasonably well for some tasks like drafting emails, summarising individual documents, or answering factual questions. But it falls apart on more advanced tasks: complex analytical work that requires systematic methodology, transparent evidence chains, and reproducible results.

The problem is not the underlying AI engines. Models like GPT-5, Claude Opus 4.5, and Gemini 3 are remarkably capable. The problem is trying to put those powerful general engines and chatbot interfaces into everything without considering the actual needs of the experts.

To get real value from AI for serious knowledge work, you need more than a chatbot assistant. You need purpose-built harnesses that channel the AI's capabilities into methodologically sound processes.

Why you cannot simply "analyse" data with vanilla AI solutions

The fundamental issue is that asking a chatbot to "analyse your qualitative data" conflates several distinct analytical steps into a single black-box operation. As we explored in depth in our analysis of whether ChatGPT can analyse qualitative data, AI chat interfaces fail on multiple critical dimensions:

  • No systematic coding. Rigorous thematic analysis, following methodologies like Braun and Clarke's approach, requires systematic reading of all data, generating initial codes, searching for themes, reviewing themes, and defining them. ChatGPT does none of these steps, going directly from raw data to plausible-sounding themes without the crucial intermediate work.

  • No transparency or auditability. When you ask ChatGPT "What are the themes?", you get answers, but you have no way to trace from each theme back to the specific quotes that support it, or from a given passage forward to the theme it belongs to. The analytical process is opaque. You cannot check the AI's work, verify its interpretations, or demonstrate to others how you reached your conclusions.

  • Inconsistent results. Ask ChatGPT the same analytical question twice, and you will get different answers. This variability makes systematic comparison across datasets impossible and undermines the reproducibility that separates rigorous analysis from impressionistic interpretation.

  • Hallucination risk. Perhaps most dangerously, chatbots sometimes generate supporting quotes that sound plausible but do not actually appear in your data. For qualitative research that requires faithfulness to participants' actual words, this is unacceptable.

The appeal of chatbot analysis is understandable: it is fast, produces polished-looking outputs, and has almost no learning curve. But speed and polish are not the same as rigour. As we have discussed, the ease of getting AI-generated "analyses" can actually undermine our analytical capabilities, creating an illusion of insight while bypassing the deep engagement with data that produces genuine understanding.

What AI actually offers: better tools, not replacement methods

The solution is not to reject AI for text analysis. That would be throwing away genuinely transformative capabilities. Instead, we need to reframe what AI brings to qualitative research.

Think of it this way: when Excel was introduced, quantitative researchers did not abandon statistical theory. They used Excel to execute statistical analyses faster and more accurately, but the underlying methods—hypothesis testing, regression analysis, significance testing—remained unchanged. Excel made the mechanics easier; it did not replace the analytical framework or the craft skills needed to get the right answers out.

AI can do the same for qualitative research. The rigour of approaches like thematic analysis, grounded theory, or phenomenological analysis remains essential. But AI can make executing some steps dramatically more efficient and scalable, because LLMs now enable computers to work with words, concepts, and, to a degree, meaning (however contested that claim may be), in addition to purely quantitative data.

What this looks like in practice:

Instead of asking AI to "analyse your data," you use AI to systematically read each paragraph, identify meaningful passages, code them according to clear criteria, track which quotes link to which codes, maintain transparency between themes and supporting evidence, and enable you to review and refine the analytical structure. You use the features LLMs provide as part of a systematic, structured workflow modelled after how experts approach text analysis in real life.
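To make "transparency between themes and supporting evidence" concrete, here is a minimal sketch of such a data model. All names are hypothetical illustrations, not Skimle's actual implementation: the point is simply that every theme is built from codes, and every code from verbatim, source-anchored quotes, so the chain of evidence can always be walked backwards.

```python
from dataclasses import dataclass, field

@dataclass
class Quote:
    """A verbatim passage, anchored to its source document and paragraph."""
    doc_id: str
    paragraph: int
    text: str

@dataclass
class Code:
    """An analyst-reviewed label together with the quotes that evidence it."""
    label: str
    quotes: list = field(default_factory=list)

@dataclass
class Theme:
    """A higher-level pattern built out of codes, never directly from raw text."""
    name: str
    codes: list = field(default_factory=list)

    def audit_trail(self):
        """Trace the theme back to every supporting quote, with its location."""
        return [(c.label, q.doc_id, q.paragraph, q.text)
                for c in self.codes for q in c.quotes]

# Example: one interview snippet coded and rolled up into a theme
q = Quote("interview_01", 12, "We never hear back after submitting feedback.")
c = Code("communication breakdown", [q])
t = Theme("Trust erosion", [c])
```

A chatbot's bulleted theme list has no equivalent of `audit_trail()`; a structured representation like this is what makes verification and iterative refinement possible at all.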

A proper tool for qualitative text analysis needs to:

  • Enforce systematic processing. Rather than trying to guess the answer in one shot, the system should follow a proven workflow with structured phases, for example familiarisation, coding, theme development, review, and refinement.
  • Maintain transparent linkages. Every theme must link clearly to the codes that support it. Every code must link to the specific quotes it represents. This creates an audit trail that allows verification and demonstrates rigour.
  • Enable comparison and refinement. Analysts need to see patterns across multiple interviews, compare how different participants addressed the same topic, and refine their categorical structure iteratively. This requires structured data representation, not conversational responses.
  • Produce professional outputs. The analysis needs to export into formats that consultants, researchers, and market researchers actually use—PowerPoint presentations, Word reports, Excel tables, and specialised formats like REFI-QDA for academic work.
  • Preserve analytical control. The human analyst must remain in charge, able to accept, reject, or modify AI suggestions. The AI accelerates the work but does not dictate conclusions. After all, AI systems cannot take accountability, which means they cannot really make decisions.
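The hallucination risk discussed earlier points to one concrete safeguard a purpose-built harness can enforce that a chatbot cannot: before any AI-suggested quote enters the analysis, check that it actually appears word-for-word in the source transcript. A minimal sketch, with whitespace normalisation as the only tolerated fuzziness (the function names are illustrative, not a real product API):

```python
import re

def normalise(text: str) -> str:
    # Collapse runs of whitespace so line breaks in transcripts
    # don't cause false negatives on genuine verbatim quotes.
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_is_verbatim(quote: str, transcript: str) -> bool:
    """Accept an AI-suggested quote only if it appears verbatim in the source."""
    return normalise(quote) in normalise(transcript)

transcript = """Interviewer: How do you feel about the new process?
Participant: Honestly, it slows us down. We spend hours on approvals."""

quote_is_verbatim("it slows us down", transcript)             # verbatim: accepted
quote_is_verbatim("the process wastes our time", transcript)  # paraphrase: rejected
```

A simple gate like this rejects fabricated or paraphrased "quotes" automatically, preserving faithfulness to participants' actual words.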

The future of AI-assisted qualitative analysis

We are still in the early days of AI-assisted text analysis. The technology is advancing rapidly, but our understanding of how to harness it properly for rigorous qualitative work is still developing.

What is clear is that the path forward does not lie in simply asking chatbots to "analyse" our data. That approach sacrifices the methodological rigour that makes qualitative research valuable. We cannot just accept the Swiss knife offered by the AI vendors, shrug our shoulders, and try to force it onto all our use cases.

At Skimle we believe strongly that the value comes from making smart tools based on tried and tested qualitative analysis workflows and expertise, and then introducing the capabilities AI provides in a careful and considered way, without taking the human out of the loop. AI should enable smart experts to produce 200%-quality outputs (and save time on the manual parts), not just mass-produce 80%-quality slop at high speed.


Ready to experience rigorous AI-assisted qualitative analysis? Try Skimle for free and see how purpose-built AI tools can transform your text analysis while maintaining methodological soundness.

Want to understand the methodology? Read our complete guide to thematic analysis and learn how to analyse interview transcripts systematically.

About the authors

Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published over a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. His research focuses on organisational strategy, innovation, and qualitative methodology. Google Scholar profile

Olli Salo is a co-founder at Skimle and former Partner at McKinsey & Company where he spent 18 years helping clients understand the markets and themselves, develop winning strategies and improve their operating models. He has done over 1000 client interviews and published over 10 articles on McKinsey.com and beyond. LinkedIn profile