HR surveys - moving from meaningless numbers to deep insights using AI interviewers

Annual engagement surveys produce scores, not understanding. AI interviewers now make it possible to gather rich qualitative insights from hundreds of employees at the cost and speed of a survey.


Every spring, or autumn, or whenever the corporate calendar dictates, the same email arrives in inboxes across the organisation. "Your opinion matters. Please take 10 minutes to complete our employee engagement survey."

Ten minutes becomes twenty. Twenty becomes thirty. The questions range from "On a scale of 1 to 5, how well do you understand the company's strategic direction?" to "My manager supports my professional development" (strongly agree to strongly disagree). By the end there is a little box for open comments, which most people skip because they are already exhausted, or because they do not believe anything will change.

Six weeks later a presentation lands in the executive team meeting. Engagement is 7.8 out of 10, up from 7.4 last year. Purpose scores are strong. Well-being is down. This department is green. That team is bright red.

Everyone nods. Some executives feel vindicated. Others feel concerned because their teams show fewer green scores. The question that hangs in the air, often unasked: what is really behind these numbers, and even more importantly, what can we actually do about it? Since the narrative behind the colours is absent, random anecdotes and gut feelings fill the void. "Yeah, it's been a tough year given the recession... let's hope next year is easier." That concludes the session.

This is the central problem with the way most organisations survey their people. The numerical survey is designed for scale, to measure and compare known quantities, not to create new or deeper insight. And without understanding, the scores become an end in themselves rather than a means to better decisions.

While there is value in benchmarking and in tracking trends over time, many HR professionals feel frustrated that they cannot live up to the expectation of genuinely explaining what is happening at scale and advising managers on exactly what could be done to make things better.

Why numbers tell you what, but not why

There is nothing wrong with knowing that engagement is 7.8. It is useful data. You can track it over time, compare it across teams, and identify where things are going wrong. But a score is the beginning of the investigation, not the conclusion.

When engagement drops in a particular business unit, what you actually need to know is whether people are frustrated with their manager, overwhelmed by workload, unclear about the strategy, worried about job security, or something else entirely. The survey that flagged the problem cannot tell you the answer, because it was not designed to ask follow-up questions, probe for context, or capture the nuance of how people actually feel. And surveys that try to include questions for every possible root cause tend to become so long and tedious that they themselves become a source of frustration.

The open-text comment box at the end of a long survey is a partial solution. But in practice, by the time a respondent reaches it, survey fatigue has set in. Responses tend to be short, vague, or absent altogether. The format does not invite honest, thoughtful reflection, and many employees have learned, through experience, that the box is where concerns go to be ignored.

Research by Gallup consistently finds that only around 23% of employees globally feel genuinely engaged at work, yet most organisations' engagement survey scores look considerably higher. The gap suggests that what people tick on a survey and what they actually think and feel can diverge substantially. Employees game the scores, either positively (wanting to seem engaged) or negatively (using the survey as a protest vote), and neither produces actionable data.

Meanwhile, survey fatigue is a growing problem. Response rates have been declining for years. Organisations report increasing difficulty getting people to engage meaningfully with annual surveys, and when they do respond, the quality of responses is often low. This is not because employees do not have views. It is because the format does not feel like it is designed to surface those views in a useful way.

The alternative that never scaled

Organisations have always known that interviews are better than surveys for understanding what people actually think. A good interview, whether conducted by a skilled HR professional, an internal consultant, or an external party, can get to the heart of an issue in thirty minutes that a survey would circle around for thirty questions without ever touching.

The challenge is obvious: interviews do not scale. A company with 1,000+ employees cannot interview all of them. Even with a 5% sample, getting 50 responses means a significant investment of interviewer time, scheduling effort, and the analysis work that follows. Traditional qualitative analysis, including reading transcripts, identifying themes, coding passages, and writing up findings, is a skilled and time-consuming process. For a project covering 50 interviews, that analysis alone can take weeks.

Then there is the credibility problem. Qualitative findings have sometimes struggled to command the same authority as quantitative data in boardroom settings. "Our analysis suggests that employees feel uncertain about the company's direction" can be met with scepticism. How many people said that? How do you know it is representative? How are you sure you are not reading what you expected to find?

These are fair questions. Historically they were difficult to answer in a fully transparent way. Qualitative analysis involved human judgement at every step, which is both its strength and its vulnerability.

So organisations settled for the trade-off: quantitative surveys for breadth, qualitative work for depth, but never both at scale. The annual survey delivered the numbers. Ad hoc focus groups or deep-dives provided colour. And the real understanding of what was driving the scores remained elusive.

What has changed

Two developments have shifted this equation significantly.

The first is AI-powered interviewing. It is now possible to deploy a conversational AI agent that conducts structured interviews with employees at scale. The agent follows an interview guide, but unlike a rigid survey form, it can ask follow-up questions based on what the respondent says, probe for examples, and manage the pacing of the conversation. Respondents answer in their own words, at a time that suits them, without needing a human interviewer to be present. Answers can be given in chat through the familiar channels people use day to day, or with voice to make answering even easier.

This means an organisation can now collect rich qualitative data from hundreds of employees, not just the sample that fits into a focus group.

The second development is systematic qualitative analysis at scale. Once you have hundreds of interview responses, you face the same analysis challenge that has always made qualitative research expensive and slow. Reading hundreds of transcripts, identifying recurring themes, cross-referencing insights across respondent groups, and writing up findings is not something one person can do quickly or reliably.

Modern AI analysis tools can now process large sets of qualitative data systematically, identifying themes and sub-themes across documents, linking each finding back to the specific passages that support it, and making the analysis transparent and editable. This addresses the credibility problem directly: every insight traces back to a verbatim quote, and the entire coding structure can be reviewed and adjusted by a human analyst.

Together, these two developments make it possible to conduct real qualitative research with the reach of a large-scale survey.

We built Skimle Ask to combine the breadth of quantitative research with the depth of qualitative analysis

Skimle Ask is an AI interviewer built into the Skimle platform. Here is how a typical HR project using it works.

Creating the interview guide. You start by describing your research objective to Skimle's AI assistant, for example, "We want to understand what is driving the recent decline in engagement scores in our operations division." The AI drafts a structured interview guide covering the key areas you want to explore. You review and refine it, adjusting questions, changing emphasis, and adding topics specific to your context. For practical guidance on what makes a good interview guide, see our article on how to write the perfect interview guide. The whole process takes minutes and Skimle Ask guides you all the way.

Collecting responses. Once the guide is ready, you share a link with employees. They open it when convenient, on any device, and begin a conversation with Skimle Ask. The AI interviewer introduces the purpose of the survey and works through the topics in the guide. When a respondent gives a detailed answer, it follows up with a relevant probe. When an answer is vague, it asks for a specific example. When the conversation is running long, it prioritises remaining questions.

Employees can complete the interview whenever they have a few minutes, whether at their desk, at home in the evening, or during a commute, and they can pause and continue as needed. There is no need to coordinate schedules with a human interviewer. Anonymity can be guaranteed, which typically increases the honesty of responses considerably.

Analysing the data. Once responses have been collected, they feed directly into Skimle's analysis engine. The AI workflow reads each response systematically, identifies the insights it contains, and codes them against a category structure that it builds from the data itself. You can also define your own categories in advance if you want to focus on specific themes.

The result is a structured view of what employees said, organised by theme, with the full text of the original responses available at every step. Every theme can be traced back to the specific quotes that support it. You can see which employees mentioned a particular concern, how frequently it came up, and how it was expressed in their own words.

You can filter and compare by any variable, for instance comparing responses from different departments, seniority levels, or tenure groups, to understand where patterns are consistent and where they diverge. This kind of cross-cutting analysis, which used to require weeks of manual work, is described in more detail in our guide to discovering themes using metadata variables.
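
To make the idea concrete, here is a minimal illustrative sketch of this kind of cross-cutting tally. This is not Skimle's actual API or data format; the themes, departments, and field names are hypothetical, invented only to show how coded insights with metadata can be counted by group:

```python
# Illustrative only: hypothetical coded-interview data, not Skimle's API.
from collections import Counter

# Each coded insight links a theme to a respondent's metadata and a quote.
coded_insights = [
    {"theme": "unclear role boundaries", "department": "logistics",
     "quote": "Nobody knows who owns scheduling now."},
    {"theme": "unclear role boundaries", "department": "logistics",
     "quote": "My old tasks went to two different teams."},
    {"theme": "workload", "department": "warehouse",
     "quote": "We lost two people and kept the same targets."},
    {"theme": "unclear role boundaries", "department": "warehouse",
     "quote": "Approvals bounce between supervisors."},
]

# Count how often each theme appears within each department.
by_department = Counter(
    (item["department"], item["theme"]) for item in coded_insights
)

for (dept, theme), count in sorted(by_department.items()):
    print(f"{dept}: {theme} ({count} mentions)")
```

The point of keeping the quote alongside each coded item is exactly the traceability described above: any count can be unfolded back into the verbatim passages that produced it.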

The analysis is also fully editable. You can modify any AI-generated coding, merge or split categories as your understanding of the data develops, and add manual observations. Our article on combining AI analysis with manual workflows explains how this works in practice.

Producing outputs. From the analysis, you can generate a written report, a slide deck for an executive presentation, or an Excel export for further work. Reports include the themes identified, the evidence behind each one, and selected verbatim quotes that bring the findings to life. This is the kind of material that a senior leader or board can engage with, not just acknowledge. See more about Skimle's export formats and how they fit into different workflows.

What this means for the quality of HR insights

The practical implication is significant. An organisation that previously had to choose between broad quantitative data and deep qualitative understanding no longer faces that trade-off.

Consider a company running a post-restructuring people survey. Traditionally, this might be a 40-question quantitative survey, followed several months later by a series of focus groups if the results raise enough concern. The survey produces scores. The focus groups produce themes. By the time the themes are understood and acted on, the organisation has moved on to the next challenge.

With Skimle Ask, the same project can combine both: first a few Likert-style quantitative questions to give a baseline for tracking and comparison, followed by deep qualitative questions. The analysis takes days rather than months. The findings arrive while the issues are still fresh and actionable.

For an HR team that needs to demonstrate the value of people data to the business, this matters. Executives who have grown sceptical of engagement scores that never quite connect to anything concrete are much more likely to engage with findings that tell them specifically what is causing a problem and where it is concentrated. "The decline in the operations division is driven by frustration with unclear role boundaries following the reorganisation, particularly among team leads in logistics" is a finding you can act on. "Engagement is down 0.4 points" is not.

Beyond the annual survey: other uses for AI-assisted HR research

Once qualitative research is practical to run at scale, it opens up a range of uses that previously would not have been cost-effective.

  • Team pulse checks. Rather than waiting for the annual survey cycle, a team leader or HR business partner can run a quick qualitative pulse check on a specific topic: how a recent change has landed, how the team feels about a new way of working, or what is energising or frustrating people right now. The whole thing can be set up and run within a day.

  • Exit interviews. Traditional exit interviews are logistically difficult and the resulting data is rarely analysed systematically. With an AI interviewer, every departing employee can complete a structured exit interview at their convenience, and the responses can be analysed across all leavers to identify patterns: which teams are most affected, what themes come up most often, and whether the issues vary by seniority or function. For organisations that want to understand and reduce attrition, this is considerably more useful than the occasional conversation that gets recorded in a spreadsheet somewhere.

  • Organisational design and change projects. Consultants and HR teams leading transformation work often need to understand quickly what employees think about a proposed change, how ready they feel for it, and what concerns need to be addressed. Running structured qualitative interviews at scale is exactly what this requires. The kind of work described in our guide on how to conduct effective business interviews can now be done far more efficiently. Skimle is also purpose-built for this kind of consultant and organisational development work.

  • Onboarding feedback. Understanding the experience of new joiners in their first 90 days, in their own words rather than through a rating scale, gives organisations much more useful information for improving their onboarding programmes and identifying early warning signs of disengagement.

  • Idea generation. When an organisation wants genuine input from employees, from how to improve a process to what to do for a company event, an open conversational format produces better responses than a multiple-choice form. People give more considered answers when they feel the question is genuinely open and they have space to explain their thinking.

  • 360-degree and upward feedback. Structured qualitative feedback from colleagues and direct reports, at scale, has been largely impractical to collect and analyse manually. With AI interviewing and systematic analysis, it becomes feasible as a regular part of performance and development processes.

In each of these cases, the core logic is the same: people have more to say than a rating scale can capture, and now there are tools that make it practical to hear it.

The credibility question

One concern that often comes up is whether AI-conducted interviews can produce the same quality of insight as human-led ones. This deserves a direct answer.

For some situations, a skilled human interviewer will always produce richer results. An experienced qualitative researcher or consultant brings judgement, empathy, and the ability to read between the lines in ways that an AI cannot fully replicate. For sensitive conversations, for complex one-off projects, or where the relationship between interviewer and respondent matters, human interviewers remain the better choice.

But for the large-scale, structured data collection that HR teams typically need, AI interviewing is a genuine step forward. The AI is consistent across respondents. It does not unconsciously vary its probing based on the seniority of the person it is talking to. It conducts the 300th interview with the same attention as the first. And because every response is recorded verbatim and every theme traces back to specific quotes, the analysis is more transparent than most human-led qualitative work ever manages to be.

The approach Skimle takes is built on the principle of two-way transparency: every insight connects back to a specific quote, and every quote connects back to the context in which it was given. That makes the findings auditable in a way that a summary written by a consultant after a series of interviews simply cannot be.

AI does not replace good qualitative research. What it does is make the kind of thorough, transparent, analysable qualitative data collection that has been the gold standard in research finally accessible to organisations that previously could not afford to do it at scale.

From scores to understanding

The annual engagement survey is not going away. Quantitative tracking has genuine value, and comparison over time is important. But for organisations that want to move from knowing what the scores are to understanding what is driving them and what to do about it, qualitative research at scale is now a practical option.

The organisations that make that shift will be better placed to act on what they learn from their people, and to demonstrate to the business that HR insights are worth paying attention to.


Ready to try AI-assisted qualitative research with your employees? Start with Skimle for free and see how it feels to gather rich qualitative insights at the scale and speed of a survey.

Want to understand what Skimle actually does differently? Read our guide on how Skimle's end-to-end workflow handles qualitative data, how our manual editing tools let you retain full methodological control, and how metadata analysis surfaces patterns you would otherwise miss.


About the authors

Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published over a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. His research focuses on organisational strategy, innovation, and qualitative methodology. Google Scholar profile

Olli Salo is a co-founder at Skimle and former Partner at McKinsey & Company, where he spent 18 years helping clients understand their markets and themselves, develop winning strategies, and improve their operating models. He has conducted over 1,000 client interviews and published over 10 articles on McKinsey.com and beyond. LinkedIn profile
