How to analyse 360 feedback: moving from report to development priorities

A practical guide to analysing 360-degree feedback: how to extract development priorities from qualitative comments and avoid the pitfalls of generic reports.


To extract genuine development priorities from 360 feedback: read all qualitative comments before looking at scores, code open-text responses for recurring behavioural themes, separate high-frequency themes from outliers, and connect the patterns to the individual's role demands. When running 360s at scale across an organisation, Skimle can aggregate qualitative comments across dozens of participants and surface the themes that written reports leave buried — making it possible to identify development patterns at team or leadership level, not just for individuals. And if the qualitative data you are collecting is thin to begin with, Skimle Ask can replace comment boxes with structured AI interviews that probe for specific examples, producing far richer input before analysis even starts.

The standard 360 feedback report is a masterpiece of data that goes nowhere. Twelve pages of bar charts comparing self-ratings to peer ratings to manager ratings. A word cloud. A list of raw comments in random order. The recipient reads it, feels vaguely reassured or vaguely defensive, and files it. Three months later, the development conversation happens, nobody can remember what the report said, and the whole thing repeats next year.

The problem is not the data. 360 feedback, when collected well, contains genuinely useful signal about how someone shows up to the people around them. The problem is the analysis — or rather, the absence of it.

What good 360 analysis actually requires

A 360 report is not an analysis. It is a data dump. Analysis means interpreting the data, identifying what matters most, and turning observations into a prioritised development agenda.

For a single individual's 360, good analysis involves:

  1. Reading all the qualitative comments — not skimming, not just reading the positive ones, and not just reading the comments about the lowest-scoring competency.
  2. Coding comments for behavioural themes — identifying the specific behaviours described (positively or negatively), not just the competency categories the tool uses.
  3. Distinguishing signal from noise — a comment from one peer about a specific incident may or may not be representative. A theme that appears across five different raters, described in different words but pointing to the same behaviour, is strong signal (a minimal tallying sketch follows this list).
  4. Connecting to role demands — the same behaviour can be a strength or a development area depending on the role. "Drives decisions quickly without broad consultation" is a feature if you are a startup founder and a bug if you are leading a cross-functional transformation.
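
To make the frequency logic in steps 2 and 3 concrete, the promised sketch below shows the tallying step, assuming comments have already been coded. It is illustrative only: the records, theme labels, and variable names are hypothetical, and the coding itself is the hard part, whether done by hand or with a tool such as Skimle.

    from collections import defaultdict

    # Hypothetical coded comments: (rater_id, rater_group, behavioural_theme).
    coded_comments = [
        ("r1", "peer",          "cuts people off before they finish"),
        ("r2", "peer",          "cuts people off before they finish"),
        ("r3", "direct_report", "cuts people off before they finish"),
        ("r4", "direct_report", "shares decision rationale too late"),
        ("r5", "stakeholder",   "cuts people off before they finish"),
    ]

    # Count distinct raters and rater groups per theme: convergence across
    # both is what turns a theme into strong signal.
    raters, groups = defaultdict(set), defaultdict(set)
    for rater_id, group, theme in coded_comments:
        raters[theme].add(rater_id)
        groups[theme].add(group)

    for theme in raters:
        print(f"{theme}: {len(raters[theme])} raters, {len(groups[theme])} rater groups")

A theme backed by four raters across three groups and a theme raised once look identical in a raw comment list; the tally makes the difference visible at a glance.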

For organisations running 360s at scale, there is an additional layer: aggregating themes across many participants to understand whether development needs are individual, team-level, or organisational.

Why most 360 qualitative data is thin before analysis begins

There is a problem upstream of analysis that rarely gets discussed: the qualitative data collected by standard 360 tools is often too thin to be genuinely useful.

Most platforms give each rater a small text box per competency and a character limit. Under time pressure, raters write "good communicator" or "needs to delegate more" and move on. These responses are not wrong — they are just shallow. They describe an impression rather than a behaviour, and they give the recipient nothing concrete to act on.

The richer the qualitative input, the more useful the analysis. A rater who describes a specific situation — what the person did, how it landed, what the impact was — produces data that can actually drive a development conversation. But comment boxes do not prompt for that level of detail.

Skimle Ask as a 360 data collection tool. Skimle Ask is Skimle's AI interviewer, designed to run structured conversational interviews at scale. Applied to 360 feedback, it replaces the comment box with a short AI-led conversation: the rater is asked about specific behaviours, the AI follows up on vague responses ("Can you give an example of a situation where you saw this?"), and the full conversation becomes a structured qualitative transcript.

The difference in data quality is significant. Instead of "tends to dominate meetings," you get: "In the monthly leadership team meeting, when other people raised concerns about the timeline, they were usually cut off before finishing their point — the conversation moved on before the concern was fully heard." That is a development conversation waiting to happen.

Because Skimle Ask produces structured transcripts that feed directly into Skimle's thematic analysis, the collection and analysis steps integrate naturally. Run Skimle Ask interviews with each rater, aggregate the transcripts into a project, and analyse for themes across the full rater group — including filtering by rater type (peer, direct report, stakeholder) to see whether different groups see different patterns. The always-on customer research guide covers how this kind of continuous AI interview programme works in practice, with the same principles applying to internal feedback programmes.

For organisations running 360s annually across a large leadership population, replacing comment boxes with short Skimle Ask conversations materially changes what is possible at the analysis stage — because the raw material is specific, behavioural, and structured from the start.

Reading qualitative comments first

Most people read the scores first and then look at comments to explain the scores. This is backwards.

Scores are summaries. Comments are the data. When you read scores first, you are already primed to interpret comments in terms of "why did I score low here?" rather than "what is the person trying to tell me?"

Start with every qualitative comment. Read them all in one sitting, without filtering. As you read, note the themes you are seeing — not categorising formally yet, just tracking what keeps coming up. By the time you have read everything, you will have a feel for the data that no score can give you.

This is slow for one person's 360. For an organisation running 50 or 100 360s annually, reading every comment across all participants is not feasible manually. This is where structured analysis tools become valuable. Skimle's thematic analysis can process the qualitative comments from a batch of 360 reports, surface recurring themes across participants, and identify where patterns are concentrated — by team, level, function, or any other metadata variable. See discovering themes using metadata variables for how that works.

Coding comments for behavioural themes

Behavioural themes are more specific than competency categories. "Communication" is a competency category. "Gives feedback that is specific and actionable rather than vague" or "avoids difficult conversations until they become crises" are behavioural themes. Behavioural specificity is what makes development conversations productive.

When coding comments, look for:

Frequency and consistency. A behavioural theme mentioned by four different raters, in different words, from different perspectives (peer, direct report, stakeholder) is strong evidence. A theme mentioned by one rater once might be accurate or might be idiosyncratic — it deserves attention but not the same weight.

Direction and valence. Is the theme positive (something to sustain and leverage) or developmental (something to work on)? And is the developmental theme about a skill gap (they do not know how), a habit gap (they know how but do not do it consistently), or a mindset issue (they have a belief that gets in the way)?

Specificity of incidents. Comments that reference specific situations ("in the Q3 planning process, decisions were made without consulting the people who had to implement them") are more credible than vague generalisations. They also make better development examples.
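
One way to keep these three lenses honest is to record them explicitly for every comment rather than holding them as impressions. The schema below is a hypothetical sketch, not a standard: the field names simply mirror the checklist above.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CodedComment:
        rater_id: str
        rater_group: str         # "peer", "direct_report", "stakeholder"
        theme: str               # behavioural theme, not a competency category
        valence: str             # "strength" or "developmental"
        gap_type: Optional[str]  # "skill", "habit", or "mindset" if developmental
        incident: Optional[str]  # the specific situation cited, if any

    example = CodedComment(
        rater_id="r4",
        rater_group="direct_report",
        theme="decisions made without consulting implementers",
        valence="developmental",
        gap_type="habit",
        incident="Q3 planning decisions made without consulting the implementers",
    )

Comments with a populated incident field are the ones worth quoting in the development conversation; rows where it stays empty are the "good communicator" class of response.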

Separating signal from noise

360 feedback contains noise. Some raters are generous to everyone. Some are harsh. Some have a personal agenda. Some have only seen the person in one context. Good analysis discounts outliers and trusts convergence.

Practical tests for signal vs noise:

  • Does the theme appear in feedback from multiple rater groups (peers, direct reports, stakeholders) or only one? A small sketch of this convergence check follows the list.
  • Does the theme appear in comments from raters who seem otherwise balanced and credible?
  • Is there a pattern in who notices this theme — for example, only people the person manages, which might indicate a specific gap in the leadership relationship?
  • Does the person themselves recognise the theme, at least partially? Total blind spots exist, but they are rarer than people assume.
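
The first two tests lend themselves to a mechanical first pass; the last two remain judgment calls. As a hedged sketch of that first pass, with thresholds that are illustrative rather than recommended:

    def classify_theme(n_raters: int, n_groups: int) -> str:
        # Illustrative cut-offs only: sensible thresholds depend on how many
        # raters the 360 had and how the rater groups were composed.
        if n_raters >= 3 and n_groups >= 2:
            return "signal: converges across raters and rater groups"
        if n_raters >= 3:
            return "partial: several raters, but only one group sees it"
        return "unclear: single source, weigh against rater credibility"

    print(classify_theme(n_raters=5, n_groups=3))  # strong convergence
    print(classify_theme(n_raters=1, n_groups=1))  # one person's view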

Writing development priorities, not development suggestions

The output of 360 analysis should be two to four specific development priorities, each defined clearly enough that the person knows what behaviour to change.

A bad development priority: "Improve communication." A good development priority: "Build the habit of sharing decision rationale in real time — when a decision is made that affects the team, explain the reasoning immediately rather than expecting people to infer it."

Good development priorities are:

  • Behavioural — they describe an action, not an attribute
  • Contextualised — they specify the situations where the change matters most
  • Achievable in 12 months — not "become a visionary leader" but "run monthly retrospectives with your team"
  • Connected to impact — they explain why this behaviour matters for the person's effectiveness or career

The demystifying thematic analysis guide covers the general approach to moving from coded themes to actionable findings, which applies here.

Running 360 analysis at organisational scale

For HR and L&D teams running 360s across a leadership population, the aggregate picture is as important as the individual one. Are there development themes that appear consistently across a function? Is there a pattern in the feedback received by first-time managers versus experienced ones? Are there themes that correlate with engagement or performance data?

This kind of analysis is only possible if the qualitative 360 data is processed systematically rather than left as raw comments in individual reports. Skimle's HR and people teams features are designed for exactly this: upload the qualitative comment data from your 360 cycle, tag documents by participant level/function/tenure, and analyse for themes across the full population. If you are using Skimle Ask for data collection, the transcripts flow directly into the same project — no reformatting or manual upload required.
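
Skimle handles this aggregation natively, but the underlying cross-tabulation is easy to picture. Below is a generic pandas sketch, with hypothetical column names standing in for whatever your coded 360 export or document tags contain.

    import pandas as pd

    # Hypothetical coded-comment export for a leadership population.
    df = pd.DataFrame([
        {"participant": "p01", "function": "Sales", "level": "first_time",
         "theme": "avoids difficult conversations"},
        {"participant": "p02", "function": "Sales", "level": "experienced",
         "theme": "shares decision rationale too late"},
        {"participant": "p03", "function": "Product", "level": "first_time",
         "theme": "avoids difficult conversations"},
    ])

    # How many participants in each function show each theme?
    print(df.pivot_table(index="theme", columns="function",
                         values="participant", aggfunc="nunique", fill_value=0))

Swap "function" for "level" and the first-time-versus-experienced-manager question above becomes a one-line query.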

Skimle's output — a structured map of behavioural themes with frequency data and supporting quotes — gives L&D leaders the evidence base to design development programmes around real needs rather than assumptions.

Ready to get more from your 360 cycle? Try Skimle for free and see what your qualitative comment data is telling you.

About the authors

Henri Schildt is a Professor of Strategy at Aalto University School of Business and co-founder of Skimle. He has published over a dozen peer-reviewed articles using qualitative methods, including work in Academy of Management Journal, Organization Science, and Strategic Management Journal. Google Scholar profile

Olli Salo is a former Partner at McKinsey & Company, where he spent 18 years helping clients understand their markets, develop winning strategies, and improve their operating models. He has conducted over 1,000 client interviews and published over 10 articles on McKinsey.com and beyond. LinkedIn profile

