Death of SaaS... or the renaissance of better software?

SaaS valuations are crashing and commentators are calling it the 'SaaSpocalypse'. We think the story is more nuanced - and for well-designed software built on genuine domain knowledge, this moment is an opportunity, not a threat.


The financial press has a new favourite word: SaaSpocalypse. In January 2026, the S&P North American software index posted its biggest monthly decline since October 2008. When Anthropic unveiled Claude Cowork on 3 February, $285 billion in software market capitalisation evaporated in a day -- Thomson Reuters fell 16%, legal software providers 12-20%. By mid-February, approximately $1 trillion in enterprise software value had been destroyed.

So: is SaaS dead?

We run a SaaS company. You might expect us to be defensive or even dead. We are neither :)

Yes, some SaaS is suffering and dying. For the rest of us, this feels like a generational opportunity to build software.


The exaggerated threat: customers coding their own tools

The most dramatic and frequently repeated version of the "SaaS is dead" narrative is that AI coding tools will let customers build their own bespoke software, replacing off-the-shelf subscriptions entirely.

We think this will remain a marginal phenomenon. Shipping a nice front end and landing page is maybe 2% of the work of running a production software product. The other 98% - security, compliance, edge cases, uptime, integrations, ongoing improvements - is where the challenges start, and here exciting tools like Lovable start to feel less loveable.

Most organisations simply do not want to be in the software business. Simple, commodity-tier tools with no real differentiation may well be replaced by AI-generated scripts. Everything else is not going away just because someone in the organisation fires up a coding agent and makes a prototype version of the tool.


The three threats that are actually real

What should concern SaaS incumbents -- and what explains the market's reaction -- is a different set of dynamics.

First: generic AI tools now replace some SaaS directly. If a product's core functionality is generating a document, summarising text, or answering questions from a knowledge base, a general-purpose AI tool already does this adequately. The market cap losses are sharpest for companies with thin value layers built on commodity capabilities.

Second: AI lets new entrants build competing products dramatically faster. If a startup with deep domain expertise and a few coders using Claude Code can now build a working product that previously took five years to develop, the competitive moat around legacy players shrinks significantly. Being first to market with a large feature set that others had to replicate was once a genuine defence; it is becoming less so.

Third: AI enables fundamentally different product architectures. It is not just that competitors can build the same thing faster -- they can build something structurally better. As Bain has argued, "for incumbents, bolting AI onto legacy systems won't be enough. This is a platform shift, not a feature war."

This platform shift is particularly dramatic for qualitative data analysis.

For most of computing history, computers were good at numbers and bad at text. They could store and search text, but they could not understand it. This is why qualitative analysis remained a largely manual process for decades: there was no computational alternative to a human reading a passage and deciding what it meant. Simple word-matching tools were useless at detecting nuance or inferring meaning. Word clouds were the pinnacle of computational text analysis.

That has changed fundamentally. Large language models can now read an interview transcript and identify key themes with a degree of accuracy that would have seemed implausible five years ago. With proper workflows, AI can generate a first-pass categorisation of 300 survey responses faster than a researcher could read the first ten.

This is the moment qualitative data analysis joins the computational era in full. And it has arrived precisely when the dominant players are companies with twenty-year-old codebases built around manual, document-by-document workflows.


Why incumbents struggle when software gets smarter: the example of academic qualitative analysis tools

Consider what happens when a legacy qualitative tool tries to add AI. The architecture was built around a manual coding workflow: a researcher reads a document, highlights a passage, assigns a code, repeats. The entire data model, user interface, and export format were designed around this process. Years were spent perfecting the manual coding of each text segment, because that was what researchers spent 95% of their time doing in these tools.

Adding AI to this architecture means adding little AI sprinkles on top of the existing paradigm. ATLAS.ti's "AI Coding" works by coding one segment at a time, which at scale produces sprawling codebooks with no cohesion. MAXQDA's Tailwind is a secondary bolt-on platform that uses AI to summarise documents and generate surface-level themes - lacking two-way transparency and with practically no real integration into the main MAXQDA software. While these platforms have genuine strengths built over twenty years, their AI additions feel like exactly what they are: features grafted onto an architecture designed before AI was a realistic part of the workflow.

Researchers who try these AI-assisted features tend to reach the same conclusion: the outputs are surface-level. You get theme labels and some thematic content, but not an underlying coded project with precise quotes mapped back to source documents.

These tools cannot change their core data model without breaking everything they have already built -- but their current model cannot take full advantage of what AI now makes possible.


Where Skimle sits in this picture

The great SaaS unbundling is destroying value in software that was always thin: tools that automated mechanical processes without genuine understanding of the domain they served. For software built on deep domain knowledge, AI makes the product dramatically better without making it replaceable. The question shifts from "can I automate this workflow?" to "can I build something that genuinely understands this domain?" That is a harder bar to clear - and one that generic AI tools, by definition, cannot clear.

We are aware of all three threats described above and face them with clear eyes.

Generic AI replacement is the one we keep the closest eye on. The value in Skimle is not in running a single AI call over a document: it is in the full workflow from audio transcription and structured import through categorisation, manual refinement, and metadata pattern analysis - with two-way transparency between every insight and its source text throughout. That is not something a general-purpose AI delivers. But we do not take it for granted, and it shapes what we build.

Skimle is not a generic "chat with your documents" tool or "AI please analyse these interviews" prompt. It is built from a specific understanding of how qualitative analysis actually works - how categories should be structured, how to maintain an audit trail from insight to source, what a defensible coding scheme looks like for publication. Henri has published over a dozen peer-reviewed qualitative studies in top journals. Olli has conducted over a thousand professional research interviews. That depth of domain knowledge is encoded into the product at every level, from the data model to the REFI-QDA export format.

Threats two and three - faster development and new capabilities unlocked by AI - are on our side. We built Skimle in an era where AI is a first-class part of the system, not a bolt-on. The incumbents we compete with are twenty-year-old products. We can move faster than they can, and we started from a different place.

There is a fourth trend on our side: the Jevons paradox. As the cost and complexity of doing high-quality qualitative analysis falls, demand for it grows. We're already seeing this play out with our consulting and market research customers, for whom proper in-depth analysis was previously impossible because of tight timelines. With tools like Skimle, they can now rigorously analyse their interview notes, due diligence reports, open-text answers or any other qualitative data. We're also seeing people embrace open-text questions much more in surveys, as analysing the answers no longer means days of pain in front of a computer.

This leads us to believe that this is the moment. Computers can now understand text. The existing tools were not built for that world. Skimle is.


Ready to try qualitative analysis built for the AI era? Start with Skimle for free and see what it means to work with a tool designed around AI from the ground up, not one that added it later.

Want to understand what Skimle actually does differently? Read our guide on how Skimle's end-to-end workflow handles qualitative data, how our manual editing tools let you retain full methodological control, and how metadata analysis surfaces patterns you would otherwise miss.


About the authors

Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published over a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. His research focuses on organisational strategy, innovation, and qualitative methodology. Google Scholar profile

Olli Salo is a co-founder at Skimle and former Partner at McKinsey & Company, where he spent 18 years helping clients understand their markets and themselves, develop winning strategies, and improve their operating models. He has conducted over 1,000 client interviews and published over 10 articles on McKinsey.com and beyond. LinkedIn profile