
Why 88% of CEOs Aren't Winning with AI — And What the 12% Do Differently

March 2026

According to PwC's 29th Global CEO Survey, published in January 2026, only 12% of CEOs say AI has delivered both cost and revenue benefits. A further 56% report no significant financial benefit whatsoever.

Read that again. More than half of business leaders who have invested in AI have nothing to show for it.

This is not a fringe finding. PwC surveyed nearly 5,000 CEOs across 109 countries. The Conference Board's 2026 C-Suite Outlook found 41% of executives now name ROI measurement as their top AI priority, ahead of strategy, ahead of hiring, ahead of everything else. MIT's GenAI Divide research puts the enterprise failure rate at 95%, defined as no measurable financial return within six months of deployment.

The AI experiment is running. The returns are not.

But 12% of businesses are winning. Genuinely. Measurably. Across both cost and revenue. The question is not whether AI delivers. It clearly does for some. The question is what those businesses are doing that the rest are not.

The Problem Is Not the Technology

Before diagnosing what the 12% do differently, it is worth being precise about why the majority fail. The instinct is to blame the tools: hallucinating models, integration complexity, the pace of change. That instinct is wrong.

The businesses that are failing are not failing because the AI is not capable enough. They are failing for three reasons that have nothing to do with technology.

01

They Measure Activity, Not Outcomes

Most businesses evaluate AI on the wrong metrics. Teams report hours saved, documents summarised, emails drafted faster. These numbers look good in a steering committee. They are almost meaningless as business measures.

Workday's January 2026 survey of more than 3,200 employees illustrates the problem precisely. Eighty-five percent of employees report saving one to seven hours per week using AI. Encouraging, until you examine what happens next. Forty percent of those gains are immediately lost to rework: correcting AI errors, verifying outputs, rewriting content that was not fit for purpose. The productivity gain exists on paper. It does not exist in business performance.

This is what some researchers now call the AI tax on productivity. You invest in efficiency. You get overhead in return.
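The arithmetic behind that tax is worth making explicit. A rough sketch using the Workday figures above; the four-hour midpoint is an assumption for illustration, since Workday reports a one-to-seven-hour range rather than a single number:

```python
# Rough illustration of the "AI tax" using the Workday figures cited above.
# The 4-hour midpoint is an assumption, not a figure from the survey.
hours_saved_per_week = 4.0   # assumed midpoint of the reported 1-7 hour range
rework_fraction = 0.40       # share of gains lost to correcting and verifying AI output

net_hours_per_week = hours_saved_per_week * (1 - rework_fraction)
print(f"Gross saving: {hours_saved_per_week:.1f} h/week per employee")
print(f"Net after rework: {net_hours_per_week:.1f} h/week per employee")
```

On these assumptions, a four-hour weekly saving shrinks to 2.4 hours before any licence or training cost is counted.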

The underlying failure is measurement design. If you define success as “teams using AI tools,” you will achieve it. You will also achieve nothing commercially useful. AI that is not connected to business KPIs from day one produces activity, not value.

02

They Deploy Technology Before Designing Process

The standard AI adoption pattern runs something like this: a senior leader sees a compelling demonstration, approves a tool purchase, asks IT to deploy it, and instructs teams to use it. Three months later, adoption is patchy, results are unclear, and the project quietly loses momentum.

The missing step is workflow redesign. Technology does not improve a broken process. It accelerates it.

The businesses failing at AI have largely done one of two things: bolted AI onto existing workflows without changing them, or treated AI as an isolated efficiency tool rather than an integrated part of how decisions get made and work gets done. In both cases, the return is minimal because the system around the AI has not changed.

Capability without integration is a demo, not a deployment.

03

They Treat AI as an Experiment Indefinitely

Pilots serve a purpose. They answer the question: can this technology work here? That question should have a time limit.

The majority of businesses have been running AI pilots since 2023 or 2024. Many are still running them. The pilot never graduates to a permanent operating capability because no one has defined what graduation looks like. There is no explicit transition from “we are testing this” to “this is now how we work.”

This is not a technology problem. It is a governance problem. Without agreed success criteria, a named decision-maker, and a deadline, pilots persist as cover for indecision.

What the 12% Do Differently

The businesses achieving durable returns from AI share three characteristics. None of them are about which tools they chose.

01

They Define “Winning” Before They Deploy Anything

The 12% begin with an outcome, not a technology. Before selecting a tool or commissioning a pilot, they identify a specific business process, attach it to a measurable commercial outcome, and define what success looks like in financial terms: revenue impact, margin improvement, cost reduction, each expressed as a number with a timeframe.

This means AI investments are evaluated the same way any capital allocation is evaluated: did it deliver the outcome we committed to? If the answer is no, the investment is redesigned or stopped. If yes, it is scaled.
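The gate itself is almost mechanical. A minimal sketch of the scale-or-stop rule described above; the function name and figures are hypothetical, not drawn from the survey data:

```python
# Minimal sketch of outcome-gated evaluation: scale what hit the committed
# number, redesign or stop what did not. Names and figures are hypothetical.
def next_step(committed: float, delivered: float) -> str:
    return "scale" if delivered >= committed else "redesign or stop"

# Hypothetical pilot: committed to a 250,000 annual cost reduction, delivered 180,000.
print(next_step(committed=250_000, delivered=180_000))
```

The logic is trivial by design: the hard part is committing to the number up front, not evaluating against it afterwards.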

The discipline is not sophisticated. It is the same discipline applied to every other business decision, extended to AI.

02

They Redesign Workflows, Not Just Augment Them

Winning businesses do not ask “how can AI help my team do what they currently do?” They ask “if AI is now part of this process, how should the process be redesigned?”

That is a materially different question. The first produces marginal efficiency. The second produces structural improvement.

A financial analyst in one of these businesses might spend 20% of their time in 2026 on tasks that consumed 80% of their time in 2023. AI has not made them slightly faster at data processing. It has changed the nature of the role. The value they now deliver is in interpretation, strategic recommendation, and client judgement. The AI handles the inputs.

This level of redesign requires process thinking before technology deployment. It also requires change management. The people in those roles need to understand what is changing and why. Businesses that skip this step are the ones finding 40% of their AI time savings disappearing to rework.

03

They Treat AI as an Operating Capability, Not a Collection of Experiments

Cisco's research with Omdia found that 80% of executives believe their company's survival will depend on agentic AI by 2027. The businesses positioned to act on that are not the ones still running pilots. They are the ones that have already moved through the experimental phase, established governance, and made AI a permanent part of how they operate.

The distinction the 12% have made is treating AI investment with the same organisational seriousness as any other operating infrastructure. It is not a project. It is not owned by the technology team. It is a capability embedded across the business, with clear ownership, measurement, and accountability at executive level.

Why This Is Harder in the Mid-Market, and Why You Have the Advantage

Mid-market businesses face specific pressures that larger enterprises do not. There is no dedicated AI team. Budget is constrained. Whoever is leading AI implementation is also running their core function. The vendor landscape is overwhelming. Over 10,000 AI products now exist, with no independent framework for evaluating them.

The result is that mid-market businesses often end up with a collection of tools, no coherent strategy, and ROI that is impossible to attribute.

But the mid-market also has structural advantages that enterprises cannot replicate. Decision cycles are faster. There is less legacy technology to work around. Teams are smaller, which means change management is genuinely achievable rather than a multi-year programme. And the competitive gap is real. A mid-market business that gets AI right in 2026 moves ahead of peers who are still experimenting.

The window for first-mover advantage in mid-market AI is narrowing. But it has not closed.

What to Do Next

If you are reading this and recognise your business in the 88%, the starting point is not another tool. It is a clear answer to three questions:

Which specific business outcome are you trying to improve, expressed as a number?

Not “improve efficiency”: revenue per head, cost per transaction, customer retention rate.

Which process directly influences that outcome?

Not “operations broadly.” The specific workflow where AI intervention would change the result.

What does success look like in 90 days, and who is accountable for delivering it?

If those three questions have clear, agreed answers, you have the foundation for AI investment that goes somewhere. If they do not, more technology will not help.
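For teams that want to make that test concrete, the three questions reduce to a completeness check. A hypothetical sketch; the structure and field names are illustrative, not a prescribed template:

```python
# Hypothetical readiness check for the three questions above: a case is only
# ready when every answer is concrete and a named person is accountable.
from dataclasses import dataclass

@dataclass
class InvestmentCase:
    outcome_metric: str        # e.g. "revenue per head", not "improve efficiency"
    target_value: float        # the number the outcome must reach
    process: str               # the specific workflow AI will change
    success_at_90_days: str    # what success looks like in 90 days
    accountable_owner: str     # the named person accountable for delivery

    def is_ready(self) -> bool:
        answers = [self.outcome_metric, self.process,
                   self.success_at_90_days, self.accountable_owner]
        return all(a.strip() for a in answers) and self.target_value > 0
```

A case with any blank answer, or no target number, fails the check, which is the point: more technology does not help until the answers exist.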

The 12% are not better resourced. They are better organised. That is entirely within reach.

The Albison Group works with mid-market businesses that are serious about turning AI investment into business performance. If this article raised questions about your current AI approach, start here.