How to win with AI in 2026: pointers for senior executives

The AI landscape is constantly evolving. Here's what CEOs and senior executives need to focus on in 2026 to prepare their organizations for the years ahead.


The rabbit problem

Two farmers on an island had finished their work for the day and were chatting on the porch, as they often did to pass the time in the evening. "Have you noticed the pair of rabbits in the forest?" asked one. "Yes, saw them too. Looked cute!" replied the other, and then both went home.

The next day, there were four rabbits peeking out from the forest. And then eight. The first farmer, the more cautious one, started to notice signs of nibbling in the orchard and tracks in the field. "Do you reckon we should do something about this?" he asked the more carefree farmer. "Don't think so, it's just a few rabbits" was the relaxed reply. That night, they saw sixteen rabbits in total.

At 32 rabbits, the cautious farmer bought supplies and started to build a fence around his farm. It was hard work on top of all the other work of running the farm, and money was tight. The carefree farmer meanwhile was thriving and spent his days in a mix of leisure and planting more apple trees to secure an even better harvest next season. The 64 rabbits in the forest were not really a problem.

A few days later there were 128 rabbits running around the island...
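For the quantitatively inclined, the fable's arithmetic is simple daily doubling, and a few lines of Python (an illustrative toy, not a population model - the day counts are assumptions for the sake of the fable) show how quickly it runs away:

```python
# Illustrative sketch of the fable's arithmetic: the rabbit
# population doubles every day, starting from a single pair.

def rabbits_on_day(day: int, initial: int = 2) -> int:
    """Population after `day` doublings, starting from `initial` rabbits."""
    return initial * 2 ** day

# Day 0 is the original pair; a week later the island holds 256 rabbits.
print([rabbits_on_day(d) for d in range(8)])
# [2, 4, 8, 16, 32, 64, 128, 256]

# Three weeks in, the count already exceeds four million.
print(rabbits_on_day(21))
# 4194304
```

The point of the sketch is the shape of the curve, not the numbers: every day of hesitation doubles the size of tomorrow's problem.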

Are the rabbits coming?

The big question in AI right now is how this fable continues.

Will the "summer of love vibes of AI" continue and help the rabbits multiply? Will we reach ever greater levels of autonomy and general intelligence, with agentic AI systems able not just to beat us on individual benchmarks, but also to do useful tasks in real life? Will the challenges in understanding context, developing a model of the world, having accessible memory, interacting with digital and physical systems, and so on be overcome at the same exponential speed we saw in the development of core large language model capabilities? Will we keep scaling compute, improving algorithms and "unhobbling" the systems so that we keep improving by multiple orders of magnitude every year, and reach AGI in 2027 as predicted in the famous Situational Awareness essay from 2024?

Or was this it, and are we actually in the largest-ever hype and stock market bubble, with a "cold and dark AI winter" soon on our doorstep? Will people start to challenge whether AI outputs are actually useful and grow increasingly concerned about AI slop? Will organisations start to run out of scalable use cases where AI could actually deliver significant value? Will AI agents prove too naive and volatile to be trusted with anything serious? Will all of this slowing down then feed into an investor panic, reducing investments in AI across the value chain as market caps drop rapidly?

Writing this in early 2026, the overall feeling seems to be that while AI is advancing rapidly, the current paradigm of Large Language Models is not bringing the radical, world-changing innovations many were hoping for. AI agents seem uncomfortably close to the boring 1990s Robotic Process Automation tools, even if they are more versatile. You could call this "AI autumn" - not chilly yet, but certainly less hype than before. The current reality is that while there are some areas where AI has already had proven impact (most notably in software engineering), the actual overall impact on the economy and society has been limited. Naturally this view is also being challenged, with some even calling it AI denialism and arguing that anyone not in awe of the pace of AI improvements still happening is simply in the "denial stage of grief" for humans losing their uniqueness.

So the jury is still out on the rabbit problem.

But as a senior executive or CEO, you can't wait for the jury to come back in - you face a choice about what to do right now.

  • Should you take the route of the carefree farmer, who correctly assesses the current situation: still only 256 rabbits spread across a large island, not really an issue beyond some clear marks of nibbling in the carrots?
  • Or should you be more cautious like the first farmer, and start preparing now for what could rapidly happen if the trend continues?

Managing that tension is a calibration game: how fast, how much, and when. Calibration itself is hardly new; what is new is the fog around what AI and agents are capable of, how their maturity curve and "impact horizon" unfolds, and which enterprise capabilities must wrap around them. Is this a false alarm and are we just seeing another hype cycle, or are the rabbits coming for real?

I argue that in 2026 the best course of action for senior executives is to take an overall stance of pragmatic optimism. The stock market might be in the middle of a huge bubble, and AI progress might grind to a halt over the coming year. However, even the tools already available enable significant value creation. Make sure to tap into what is possible today, and then treat anything new and shiny as an additional bonus. Those who take a pessimistic stance in 2026 are taking a huge risk. The technology won't stand still, and company-wide change takes time. You need to start preparing now for what's coming in 2-5 years.

In short, start eating the rabbits today AND build the fence for tomorrow. What does that mean in practice? Here are five pointers for senior executives on how to win with AI in 2026:

Pointer 1: Make sure your organisation is at the starting line

If you go a bit broad-brush on human history, you can identify a few distinct eras of technology and their accompanying operating models:

  • Artisan and agricultural era (prior to 1800s). Small teams of farm workers or artisans producing standard outputs with slowly changing methods handed down from master to apprentice.
  • Industrial era (1800s to 2000s). Large functional bureaucracies separating frontline from management and engineering, using rigid waterfall planning cycles to mass-produce goods.
  • Digital era (early 2000s onwards). Flat organisations with cross-functional teams of knowledge workers aligned to outcomes (e.g., product), using iterative agile models to create digital experiences.

These operating models have evolved in sync with changes in technology and culture. When tools were rare, humans could only harness the energy of their own bodies, and societies were mainly local, so the farm- or workshop-based operating model made sense. Once the industrial revolution enabled mass production and the harnessing of mechanical and chemical energy, it made sense to organize around large, capital-intensive production assets like factories.

The industrial organization was built around a top-down hierarchy separating the people responsible for changing and managing the production process from those on the front line. To maximize efficiency, detailed instructions were transmitted down the hierarchy, with different functions in their own line organizations. Information Technology was a separate function, responsible for storing records and supporting enterprise resource planning by maintaining backend systems - first with punch cards, later with modern ERP systems.

When faced with the digital age, industrial organizations struggled to cope. While they could bring in digital tools to help the frontline, and off-the-shelf enterprise IT tools (like Outlook) for collaboration, the industrial operating model falls short in terms of the core value creation logic. In the digital era, value is created through rapidly developing and iterating digital products and automated processes instead of manual work at, for example, the branches or the assembly line. Faced with digitalization, the first instinct of industrial-era companies was to follow the factory playbook and attempt to buy or build digital solutions as large one-off projects (as you would invest in a new factory), but that proved too slow, complex and prone to failure, as shown by all the failed IT megaprojects.

More nimble digital organizations realized they needed to rethink their entire value creation logic around the new era of digital technology. Those who just took their manual forms and put them on the company website as printable digital files lost to digital natives who built streamlined, digital-first products and experiences. Companies like Facebook did not simply digitize a college yearbook; they created a whole new industry and new sources of revenue. This required embracing a product mindset and structuring the entire change organization around customer journeys, products and platforms. Having small, empowered cross-functional teams with end-to-end goals allowed constant iteration and speed, which is key to winning in the digital era, as I wrote in my McKinsey article "Impact of Agility". Enabling these agile teams in turn called for deeper changes to the backbone of structure, processes, talent and culture - for example putting in place a Quarterly Business Review cycle instead of rigid annual budgeting, flattening the organization to 3-5 layers instead of 7+, increasing the share of "doers" at the expense of "talkers", purposefully changing the culture to emphasise entrepreneurial mindsets and accountability, and so on.

Today there is a proliferation of AI tools available to companies. The simplest ones augment individual capabilities - like basic ChatGPT for brainstorming, or dedicated tools for taking notes in meetings. These personal tools are not without risks, but their widespread adoption over the past few years has shown they are relatively easy to incorporate into any type of organization. Likewise, companies are finding it easy to deploy ready-made AI solutions from their existing SaaS providers (e.g., Salesforce) or from dedicated AI companies like our own Skimle for qualitative analysis tasks.

These tools can be used by any type of organization, but the application and impact potential vary.

  • Industrial paradigm companies end up using AI tools to improve cumbersome coordination processes between silos (e.g., automated meeting notes, faster email drafting) or to automate functional processes (e.g., HR or sales), but struggle to apply AI in cross-functional business processes and products. This might explain the frustration and lack of tangible impact from AI, with reports from e.g., MIT claiming 95% of AI pilots are not showing any ROI.

  • Digital companies with an agile and value-centric operating model, on the other hand, can augment their existing cross-functional teams with AI to power their processes, products and customer journeys. Since the digital & agile operating model is structured around units responsible for real customer outcomes, there is a natural owner for turning the promise of AI into reality. The fact that agile companies can use a hybrid setup (for example a chapters-and-squads model, where AI is both centralized in terms of building capabilities and governance, and localised in terms of value creation and use cases) enables them to develop AI in a way that is both fast AND safe.

If the AI systems we see today were the pinnacle of AI development, there would be no real need to call for a new operating model paradigm beyond digital. Today individuals have access to high-productivity tools like Skimle, and teams can develop more versatile products using AI-augmented tools like Claude Code, but fundamentally the way of organizing would not need to change. Despite people talking about agents and showing how they "replaced the entire marketing department with ten agents", you can see those claims as hype about better tools rather than something fundamentally new. Humans remain tightly in control and clearly accountable for the outcomes of each task.

But the technology around AI agents is evolving rapidly towards AI systems with more and more independence. This enables taking over full domains with agentic processes - for example, replacing customer servicing with a ground-up, AI-first approach in which multiple interacting agents serve all customer needs across channels. Once agents can operate independently for an entire workday and learn, they can easily be deployed as bespoke frontline agents, or even work side by side with humans in teams responsible for managing and driving change in the organization. In that kind of agentic organisation, one would expect, for example:

  • Even flatter organisation with smaller cross-functional teams of business and technical experts owning entire domains of products, segments or journeys - for example having a group of 20 people responsible for the mortgage product in a bank instead of 100s of people in dozens of teams
  • Ownership of hardcoded guardrails and agentic skills by each function (chapter), while deployment and configuration owned by the teams - imagine e.g., a compulsory "testing bot" owned by Quality function and using standard approach to test code produced by all teams
  • Turbocharged speed of development (e.g., "vibe coding") resulting in turbocharged governance cycles for prioritisation and resourcing - e.g., biweekly cycles instead of Quarterly Business Reviews to ensure slow governance is not holding the company back
  • Expecting knowledge workers to develop agentic supervision and orchestration skills including maintaining a high bar for judging AI system output - translating to a world where even entry level humans are managing teams of dozens of agents

All of that sounds fantastic, even if a bit sci-fi.

The reality is less fancy. I've recently had multiple CEO discussions culminating in the realization that their company is actually a few eras behind the upcoming AI era... The biggest challenge for them is to rapidly move from an industrial-era organisation (5-12 layers and heavy silos, annual planning cycles, fixed roles and annual target setting, a slow-moving culture where customers are often forgotten, ...) to an agile operating model (a network of high-performing cross-functional teams, quarterly prioritisation and weekly delivery of value, a modern people model fostering intrinsic motivation and growth, a can-do culture, ...). Unless they are able to orient themselves around value and have teams able to deploy customer-facing changes rapidly, there will be no chance of turning AI into business outcomes. They will be stuck with huge change management exercises where the pinnacle is getting people to use AI to craft beautiful emails to people in other departmental silos (who in turn use AI to summarize them...).

If your company is still working in the industrial era, or you are not happy with your digital & agile operating model, 2026 is the year to first panic a bit, and then fix it!

Pointer 2: Put many eggs into few baskets

If 2025 was the year of broad education and scanning for use case ideas across the company, the year 2026 should be about selecting a few domains and doubling down to turn ideas into value. As a CEO or senior executive, you should be able to understand in depth how AI is applied in those specific areas to produce value.

Practically this means that instead of 50 pilot projects each with sub-20k budgets, invest real money and attention in 3-5 "lighthouse" initiatives that:

  • Address high-value business problems (real topline opportunities, or significant cost efficiencies in existing processes)
  • Have executive sponsorship and dedicated teams
  • Include the trio of technology, organisation and business changes, hand-in-hand
  • Have clear success metrics tied to critical business outcomes and competitive advantage - real "skin in the game" not hobbies
  • Can scale horizontally to other parts of the business, at least in some aspect

Examples of focused domains:

  • AI-first customer servicing: Replace entire call center operation for some segment or product with ground-up AI design (not just adding chatbots to existing processes)
  • Automated expert analysis: Transform how your consultants or analysts synthesize large volumes of information using AI-native tools and workflows for qualitative analysis
  • AI-augmented software engineering: Increase developer productivity by 2-10x using tools like Claude Code for developing and testing code.
  • Agentic sales operations: Let AI agents handle prospecting, qualification, and initial relationship building

The key is depth over breadth. Most CEOs can't speak intelligently about their AI strategy because they have 50 shallow initiatives instead of 5 deep ones. They end up giving philosophical presentations about the potential of AI (with slides made by Nano Banana), quoting figures about the share of AI-enabled employees (read: those who have logged in once to the expensive enterprise AI chatbot), and desperately hoping their employees don't figure out how clueless they are (their kids already have).

Taking the "deep and narrow" route is different. As you start to go deep into these lighthouses, you will discover most are actually difficult to execute in practice. You will face thorny obstacles in terms of technology (for example, agentic AI systems being too naive or stochastic for real life use cases), customer expectations (e.g., hesitancy of customers to accept AI judgement or outputs), business logic (e.g., high costs for AI processing without a way to recoup them; or competitors eating away your moat more rapidly than ever), internal adoption (e.g., resistance to changing ways of working) and so on. As you overcome these obstacles, you will start to learn as an organisation what works and what does not.

2026 is not yet about where you are in terms of AI, or even your velocity of developing. It's about how fast your company can accelerate. This means picking a few areas and using them to learn.

Pointer 3: Human and AI together is the recipe for 2026

There is no shortage of maturity models for the capabilities of AI. To simplify and generalize across them, consider these five levels:

  • Level 1: Individual augmentation. Tools that augment individual tasks (e.g., meeting note taker, brainstorming chatbot). Large productivity gains for specific tasks, limited overall organizational impact.
  • Level 2: Workflow automation. AI systems that can handle complete workflows (e.g., analysis of expert input for due diligence to produce report). Can boost speed 10x in applicable areas while improving quality.
  • Level 3: Domain-specific agents. AI systems built for specific business domains using SaaS solutions or bespoke applications (e.g., AI-first customer servicing platforms). Can cut the majority of the current costs of operating the process.
  • Level 4: Cross-functional agents. AI agents working side-by-side with humans on complex cross-domain workflows with high levels of decision-making capability. Dramatically reduce costs and speed up change, but require deep changes to the operating model.
  • Level 5: Autonomous generic agents. AI systems that can configure themselves to new domains through observation and learning, potentially combined with physical robotics. "Zero-cost labor" with wide applicability across a long tail of unique workflows - unlocking the dream of a "€1 billion company with just one employee".

Each of these levels unlocks massive opportunities in terms of cost reduction or value creation. The core of the "rabbit problem" is being able to forecast, and rapidly follow, the frontier of what is possible. At the moment, I would argue that Level 1 is proven, with AI systems outperforming human benchmarks in a variety of fields - and if not today, then in a few weeks there will be a model that surpasses even expert PhD-level performance in specific testing suites. However, the hard work is still underway when it comes to the more complex requirements of Levels 2 to 5. There are promising tools like Lovable for coding applications or Legora for legal documents, and broad tools like Claude Code for software engineering and beyond, but the race is still on to improve them to meet the high quality bar required for real-life use.

In 2025 people started to increasingly notice that when AI produces outputs, they are "almost right" – seemingly correct at first glance, but often flawed in some ways. The challenge is that AI systems have a "spiky profile" of capabilities that throws us off. They're eloquent writers, fast task completers, and broadly knowledgeable. But they're also naive, overconfident, and forgetful in ways that smart humans would never be. Having "80% correct" output causes humans to distrust the systems and can be too risky for companies to stand behind.

This means that using AI to do things "good enough" at 80% quality but 10x faster is the wrong approach. It creates "AI slop" that erodes trust and brand value. We saw this with Deloitte's AI-generated report for the Australian government, where they had to refund $290,000 and suffered significant brand damage for their attempt at "vibe consulting".

The right approach for AI systems in 2026: Using AI and humans together to produce something at "200% quality AND 5x faster" than possible before. This means:

  • AI handles the heavy lifting (processing large volumes, pattern recognition, first drafts)
  • Humans provide judgment, verification, and strategic direction
  • Systems are designed for transparency and traceability
  • Quality control is built into the workflow, not bolted on at the end

It also means ensuring the workflows are deterministic where they need to be, and that AI tools are given only appropriate levels of power and autonomy - instead of trying to "jump levels" of maturity. If you have a naive and unsafe AI agent, do not pretend it's a Level 4 or 5 system and hand it the keys to the entire company. If you do, you will quickly run out of money, like the Wall Street Journal discovered when it experimented with having Claude AI run its office vending machine.

Pointer 4: Anchor on value creation not vanity metrics

My fourth pointer, perhaps the most actionable one, is about measurement and target setting. If you're measuring things like:

  • "% of people using AI tools"
  • "# of use cases identified"
  • "$ spent on AI investments"
  • "# of AI models deployed"

...you're measuring the wrong things. These are activity metrics, not value metrics. They are easy to game, tell little about the actual maturity and can cause the organisation to celebrate their progress even if in reality they are not getting any benefits from AI and not even on the path towards doing so.

Smarter executives should put targets on real value. They should spend time hands-on with individual use cases in different units, and obsess over metrics like the ones below:

  • Value delivered from AI use cases (finance-validated run rate impact)
  • Time saved on specific tasks (in real life not on paper)
  • Quality improvements (error rates, customer satisfaction, churn propensity, rework intensity etc.)
  • Cost per transaction or output, adjusted for quality costs above
  • Employee satisfaction and perceptions on improved customer and efficiency outcomes
  • Revenue impact (new products, faster time-to-market of iterations)
  • Cost savings (FTE reduction, process efficiency - again with real traceability to AI)
  • Customer outcomes (NPS, retention, lifetime value)
  • Operating leverage (e.g., revenue per employee)

The key is connecting AI initiatives to business outcomes, not just technology deployment. This requires treating AI investments like any other business investment: with clear hypotheses, success metrics, and regular review against outcomes.

In 2026 you will likely not move the needle on the big enterprise-wide metrics (e.g., operating leverage) - instead, set targets for narrow slices of the business, with an emphasis on early learning and scaling. Can we look at the metrics and conclude that we are on the path towards capturing real value from AI, and moving in an accelerating way?

Pointer 5: Be ready to move fast as technology matures

AI is a rapidly moving field and the final pointer for this 2026 list is to stay at the frontier. As a pragmatic optimist, you know there is plenty of value to be captured from all the things already possible to do, but equally an upside for new sources of AI value if technology keeps improving at the pace the "full-on AI summer optimists" are envisioning.

This means you want to build an agenda balancing preparedness for tomorrow with practical value capture today. For example, you could consider:

  • "Drink your own champagne": Spend serious time learning tools like Claude Code (agentic system for not only coding but also e.g., writing emails based on accessing your filesystem), Deep research (go beyond simple ChatGPT queries to have the AI conduct real search and result summarization), Lovable (prototyping applications using prompting) and Skimle (summarise documents and surface themes across them).

  • Fully embrace Level 1: Make standard AI tools available and encourage "Bring Your Own AI" from a pre-approved list. Expect productivity gains of 50% in front-end software engineering.

  • Define the value agenda: Bring together bold thinkers to work on a disruptive AI strategy. Look 2-3 years ahead at how the company could look with domain-specific agents (Levels 3-4) or even generic agents (Level 5) creating "zero-cost labor."

  • Organize to value: Adopt a more agile operating model with structure aligned to customer value, faster workflows, and future-ready talent. This is a no-regret move critical to get settled before the pace of change accelerates.

  • Launch agentic lighthouses: Put together your best internal talent and partners to tackle the highest-value use cases. You might e.g., build and deploy dozens of micro-agents on top of existing tech platforms or in other areas replace entire legacy software stacks with AI-first approaches using open-source technology stacks.

  • Prepare tech and data for scaling: Discover where architecture, data structures and quality are holding you back; accelerate data transformation. Clarify decision-making rights, interfaces, and processes to enable automation.

  • "Agentic organisation": Start organizing the agents themselves. Which agents are needed? Where do we deploy teams of agents vs. individual agents? What decision-making rights does each agent get? What goals do we set? How do we manage performance? This mirrors classic organizational questions but applied to a hybrid human-agent workforce.

  • Upskill for managing agents: Employees shift from doing tasks to managing agents. Incorporate "agent leadership" into performance reviews.

This means you are prepared for 2027

The technology landscape continues evolving. If we're in the exponential-curve scenario with the rabbits, 2027 is the year we would see Level 5 capabilities emerge: generic agents that can learn new domains through observation, potentially combined with humanoid robotics. It is also the year by which the earlier Levels 2 to 4 should be proven across domains.

You're prepared because you did the hard work in 2026:

  • Your operating model is agile and value-focused
  • Your teams know how to work with AI agents
  • Your data and systems are structured for automation
  • Your culture embraces experimentation and learning
  • Your governance frameworks are clear and tested

While laggards are just noticing they should stop patting themselves on the back for having done "AI enablement workshops for 80% of employees" and start panicking, you're capturing compound benefits and opening up entirely new business models.

The choice before you

The rabbit fable at the beginning of this article isn't just about AI, it's about exponential change and how leaders respond to it. The carefree farmer saw the facts as they stood and optimized for the present. The cautious farmer saw the trend line and prepared for the future.

We don't know for certain which farmer's approach is right for AI in 2026. The exponential growth could continue, plateau, or even reverse if for example token costs start to creep up. The rabbits might overrun the island, or they might stay manageable.

But here's what we do know:

  • The cost of being wrong is asymmetric: If you prepare and AI doesn't accelerate, you still get benefits from better operating models, proven Level 1-2 capabilities, and organizational agility. If you don't prepare and AI does accelerate, you face existential risk.

  • Organizational change takes longer than technological change: Even if it takes five years before anything radically new emerges from AI, companies need to start preparing in 2026. The bottleneck isn't technology; it's adoption capability.

  • The leaders are pulling away: Companies that treat AI strategically are already seeing productivity gains in specific domains. Those who treat it as "IT's problem" are falling behind.

Thus my advice to take the stance of pragmatic optimism. Focus execution on proven capabilities (Levels 1-2) while preparing your organization for what might come (Levels 3-5). Invest deeply in a few domains rather than broadly in many. Prioritize quality over speed. Measure value over activity.

Start building your fence now while also developing an appetite for rabbit meat. The rabbits might not come. But if they do, you'll be glad you prepared.


About the author: Olli Salo is co-founder of Skimle, an AI-native qualitative analysis platform. He spent 18 years as a Partner at McKinsey & Company, where he co-authored research on organizational transformation in the agentic age, including The Change Agent: Goals, Decisions, and Implications for CEOs in the Agentic Age, and contributed to other articles like The agentic organization: Contours of the next paradigm for the AI era, from which this blog post draws. Olli left McKinsey in November 2025 to build an AI-native company hands-on. You can connect with him on LinkedIn.