Last week we published a blog post from Henri and Farah called "What if? Reanalyzing our Qualitative Study with Skimle AI", in which they tested how well Skimle could analyse interview data they had used in their recent article published in Organization Science. They compared Skimle's findings with their own to discover where AI could help, and where researcher judgement was still essential and irreplaceable.
The article sparked lively discussion in different channels, as one would expect for something touching researchers and the way they do research. We have previously shared our views on how to use AI in qualitative research and on quality as the differentiator in the era of AI, and warned against simplistic use of AI chatbots for qualitative analysis, but this new discussion prompted fresh thoughts and compelled us to write further on the topic.
The case against: Using AI in analysis isn't acceptable, full stop.
Let us first summarise the case against using AI in qualitative analysis and for keeping to the traditional methods of manual coding. The classic approach, often done by hand or using established software like MAXQDA, NVivo or ATLAS.ti, is what most researchers grew up with as the only option. When faced with reports of people performing analysis with the help of AI systems, they raise a number of arguments against it.
Generative AI models are not doing actual qualitative analysis. The core argument against AI-powered qualitative analysis is that large language models don't actually understand what they're analysing: they're performing sophisticated pattern matching on text. When a researcher conducts thematic analysis, they engage in an interpretive process that involves:
- Deep contextual understanding: A human researcher reading an interview transcript about workplace burnout understands the cultural context, organisational dynamics, and unspoken power structures behind what's said. They notice what isn't said. They recognise when a participant's tone shifts, when they become defensive, or when they're performing socially desirable responses. An AI model sees only tokens and statistical relationships between words.
- Iterative interpretation: Qualitative analysis is fundamentally iterative. You read transcripts, develop preliminary codes, return to the data with new understanding, revise your interpretations, notice contradictions, and gradually build theoretical frameworks. This process is cognitive and reflexive, meaning you change as you analyse, and your changing understanding shapes subsequent analysis. AI models apply the same algorithm to every piece of text, incapable of this kind of intellectual evolution.
- Abductive reasoning: Unlike deductive or inductive reasoning, qualitative researchers engage in abduction—the creative process of generating new explanatory frameworks from surprising findings. When you notice an unexpected pattern in your data, you don't just categorise it; you ask "what underlying phenomenon would explain this?" and develop novel theoretical insights. AI models can identify patterns, but they cannot engage in the creative leap required to theorise why those patterns exist in ways that challenge existing frameworks.
As one critic noted, AI is doing "glorified word counting" dressed up as analysis. It can tell you that the phrase "work-life balance" appears frequently in your interview data and even cluster similar concepts, but it cannot tell you that what participants mean by work-life balance fundamentally differs between gig workers and salaried employees, or that their use of this corporate buzzword often masks deeper critiques of capitalism that they're reluctant to voice directly.
One of the most powerful aspects of qualitative research is that different humans, with different theoretical frameworks and lived experiences, will interpret the same data differently... and these differences are productive. Your identity, your disciplinary training, your political commitments, and your personal history all shape how you see patterns in data. But AI models apply standardised algorithms across all data. They impose a false uniformity on interpretation, flattening the rich diversity of analytical approaches into whatever patterns their training data has encoded. When everyone uses the same AI tool, we lose the intellectual diversity that makes qualitative research valuable. Research becomes homogenised, predictable, and less capable of generating genuinely novel insights.
Using AI reduces real thinking. The more insidious problem is what AI does to the researcher's cognitive processes. When we outsource analytical work to AI, we don't just save time—we fundamentally alter how we engage with data and ideas. Research on cognitive offloading shows that when we delegate thinking tasks to external systems, we reduce our own capacity to perform those tasks. Just as GPS navigation has been shown to reduce our spatial memory and wayfinding abilities, using AI for thematic analysis weakens our capacity for close reading, pattern recognition, and interpretive insight. A researcher who relies on AI to summarise their interviews never develops the deep, embodied understanding that comes from reading transcripts multiple times, sitting with ambiguity, and struggling through interpretive challenges.
AI tools generate outputs (themes, summaries, visualisations and so on) that look like analysis. This creates "the illusion of explanatory depth": we believe we understand something because we have a clean output in front of us. But understanding doesn't come from having a summary; it comes from the process of creating that summary. When AI does this work, researchers get the product without the process, the answer without the learning. A PhD student who uses AI to summarise their dissertation interviews might finish faster, but they haven't actually developed expertise in their subject matter. They've collected themes about, say, teacher burnout, but they haven't internalised the nuances, contradictions, and emotional textures that come from deep engagement with participant voices. They can write about the themes AI identified, but they cannot speak with authority about what those themes mean because they haven't done the intellectual work of interpretation.
Some of the most important discoveries in qualitative research come from unexpected moments—when you notice a strange phrase, a surprising contradiction, or an outlier case that doesn't fit your emerging framework. These moments happen during slow, attentive reading and re-reading of data. They require boredom, confusion, and patience. They emerge from the kind of deep attention that shallow AI tools, by just spitting out an answer, actively discourage.
The erosion of scholarly rigour. Finally, there is the question of how AI use in analysis undermines the very foundations of scholarly credibility. When you present findings from AI-analysed data, can you really claim you "know" your data? Can you answer detailed questions about edge cases, contradictions, and alternative interpretations? Can you defend your analytical choices? Traditional qualitative methods require researchers to be intimately familiar with every piece of data, to justify every coding decision, and to demonstrate reflexivity about how their biases shaped interpretation. AI analysis shortcuts this process, producing researchers who can describe themes but cannot engage in the kind of deep, nuanced discussion that marks true expertise. This is particularly problematic in academic contexts where peer reviewers and examiners expect evidence of rigorous, reflexive engagement with data. Many organisations have clarified that AI tools can't be listed as authors because they can't take responsibility for the work; it is the researchers who must stand behind the findings.
In essence, the argument against AI in qualitative analysis is that it produces outputs that are not valuable in the first place, and in the process destroys human cognitive capabilities and the entire credibility and rigour of qualitative research. If we value deep understanding over fast outputs, intellectual growth over efficiency, and interpretive diversity over standardisation, then AI-powered analysis represents a fundamental threat to the values and practices that make qualitative research meaningful.
Our counter-point: Done right, AI augments thinking and improves the quality of analysis
Let's start by noting that the possibility of doing things wrong is inherent in any tool. With Word you can write nonsense, with Excel you can build senseless tables (it's been called the most dangerous app on the planet by Forbes...), and with R you can dress up anything as quantitative analysis.
With AI, we're all seeing the avalanche of "AI slop" being posted or passed on as work, created by people who think their clever prompts produce novel outputs worth the admiration of others. And there are people trying to use basic ChatGPT for qualitative analysis and rapidly discovering it doesn't work that well.
But the fact that AI can be misused should not prevent us from putting it to good use. From our point of view, AI can greatly help qualitative analysis if the following preconditions are met.
Condition #1: Humans retain accountability over the output
The accountability for quality remains with the expert using the tool. This holds for legal accountability, but even more so at the level of mindset. Humans need to retain full ownership of their process and outputs without resorting to any "the computer told me so" attributions.
Condition #2: Time saved in specific steps is invested in going deeper or broader, not on holidays
In practice there is always a finite amount of effort that can be spent on a given topic, be it academic or applied research. The question is where that effort generates the most value. Before AI, coding the data and other manual steps necessarily took significant time. Tools like Skimle make that phase shorter, which frees up time to dig deeper, for example by analysing the resulting categories further or conducting more interviews. In other cases, the value might come from broadening the scope or researching additional questions.
Condition #3: The AI tool provides two-way transparency and full control
Humans can only take ownership and dig deeper into the data if the AI tool provides the necessary transparency and control. Simple AI chatbots or tools like Tailwind do not generate, show, or allow editing of any interim data structure, and because of that there is no way to verify the analysis or edit specific details. AI-assisted qualitative analysis requires a stable process with full two-way transparency for it to be defensible.
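To illustrate what "two-way transparency" means in practice, here is a minimal sketch of the kind of interim data structure that makes it possible. The names and fields are our illustrative assumptions, not Skimle's actual schema.

```python
from dataclasses import dataclass

# A sketch of an interim data structure for two-way transparency.
# Field names are illustrative assumptions, not Skimle's actual schema.

@dataclass
class CodedSegment:
    document: str  # source file, e.g. "interview_07.docx"
    quote: str     # the verbatim excerpt that was coded
    code: str      # researcher-editable label, e.g. "role ambiguity"
    theme: str     # the higher-level category the code belongs to

def insight_to_sources(segments: list[CodedSegment], theme: str) -> list[CodedSegment]:
    """Trace a theme back to every underlying quote (insight -> source)."""
    return [s for s in segments if s.theme == theme]

def source_to_insights(segments: list[CodedSegment], document: str) -> set[str]:
    """List the themes a given document contributed to (source -> insight)."""
    return {s.theme for s in segments if s.document == document}
```

The point of such a structure is that every theme can be traced back to verbatim quotes, every document can be traced forward to the themes it shaped, and the researcher can inspect and edit any link in between.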
Condition #4: The AI tool is built on actual qualitative analysis approaches
Imagine you had your research assistant rapidly read through all your notes in one go, and then asked her a question. That is essentially what happens when you use basic one-shot LLM prompts to "analyse data". You get something back that mimics an analysis, but if you look deeper you often discover it is glossing over important details, hallucinating additions, and overall not really analysing the data in depth. It is not real qualitative analysis, just as you would not call that chat with your assistant qualitative analysis.
To be useful, AI should follow proven and tested qualitative analysis flows and frameworks, for example thematic analysis or grounded theory. AI can be used to automate individual steps in the flow, but there needs to be enough structure to ensure the AI is indeed doing analysis and not just mimicking it.
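To make the contrast with one-shot prompting concrete, here is a minimal sketch of a structured flow loosely modelled on thematic analysis. The llm() helper is a hypothetical placeholder for any language model call, and the steps are illustrative, not Skimle's actual implementation.

```python
# A sketch contrasting a one-shot prompt with a structured thematic-analysis
# flow. llm() is a hypothetical placeholder for any language model call;
# the steps are illustrative, not Skimle's actual implementation.

def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError

def one_shot(transcripts: list[str]) -> str:
    # A single opaque answer: nothing to inspect, verify, or edit.
    return llm("Summarise the key themes in:\n" + "\n".join(transcripts))

def structured_flow(transcripts: list[str]) -> dict[str, list[str]]:
    # Step 1: split the material into segments small enough to code.
    segments = [s for t in transcripts for s in t.split("\n\n") if s.strip()]
    # Step 2: code each segment, keeping the segment-to-code link.
    coded = [(seg, llm(f"Give a short descriptive code for:\n{seg}"))
             for seg in segments]
    # Step 3: group the codes into candidate themes.
    code_list = "\n".join(code for _, code in coded)
    themes = llm(f"Group these codes into themes, one per line:\n{code_list}").splitlines()
    # Step 4: assign each coded segment to a theme, keeping it traceable.
    theme_list = "\n".join(themes)
    themed: dict[str, list[str]] = {theme: [] for theme in themes}
    for seg, code in coded:
        best = llm(f"Which of these themes best fits the code '{code}'?\n{theme_list}")
        themed.setdefault(best, []).append(seg)
    # Step 5: a human reviews and edits the codes and themes before reporting.
    return themed
```

The difference is not the model but the structure: the one-shot version returns a single unverifiable answer, while the structured flow produces interim artefacts (codes, themes, segment-to-theme links) that a human can inspect and correct at every step.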
Condition #5: The AI tool follows ethical and security guardrails
Finally, any AI use needs to stay within the bounds of ethical and security guardrails. For example, where data is stored and where computations run needs to be clearly documented, and both need to happen in trusted environments (e.g., inside the EU for European data). The use of AI features such as inferring variables or assessing the sentiment of quotes needs to be documented transparently, and the possible biases explicitly mitigated. The guardrails will differ based on where and how the research is conducted.
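As one hypothetical illustration, such guardrails could be documented in machine-readable form alongside the project. All keys and values below are made-up examples, not Skimle settings.

```python
# A hypothetical example of documenting guardrails in machine-readable form.
# All keys and values are made up for illustration; they are not Skimle settings.
GUARDRAILS = {
    "data_residency": "eu-west",       # where data is stored and processed
    "inference_region": "EU",          # model calls stay inside the EU
    "ai_features_in_use": ["variable_inference", "sentiment_scoring"],
    "bias_mitigation": "documented per feature, with human review",
    "retention_days": 90,              # how long raw transcripts are kept
}
```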
The reality of 2026 is that millions of knowledge workers are using vanilla chatbot AI to perform "analysis" of documents, which in reality yields a black-box, often hallucinated answer with no underlying analysis or structure. Very little thinking is possible in such a workflow. Skimle uses actual thematic analysis steps rather than one-shot answers, and provides full two-way transparency (source to insight; insight to source) and control (edit categories and coding). It helps the user take control of the dataset and exercise their thinking and judgement.
What do you think?
Navigating this debate is a hot topic in 2026, both for individual academics considering how to use AI for qualitative analysis and for senior executives of large organisations deciding how to position their companies with respect to AI.
We believe the stance of ignoring AI and remaining in full-on artisan mode for analysis is not sustainable. Across human history, access to better tools has meant the ability to produce better-quality results; tools are the physical embodiment of human innovation. But tools can also be used recklessly, and as with any power tool, the conditions for success need to be in place.
Ready to analyse your qualitative data with both speed and rigour? Try Skimle for free and experience systematic AI-assisted analysis with full two-way transparency from every insight back to source data.
Want to learn more about qualitative analysis methods? Read our guides on thematic analysis methodology, how to conduct effective interviews, and choosing the right qualitative analysis tools.
About the Authors
Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published over a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. His research focuses on organizational strategy, innovation, and qualitative methodology. Google Scholar profile
Olli Salo is a former Partner at McKinsey & Company where he spent 18 years helping clients understand the markets and themselves, develop winning strategies and improve their operating models. He has done over 1000 client interviews and published over 10 articles on McKinsey.com and beyond. LinkedIn profile
