This isn't a market report. It's what we heard — directly — from the people responsible for making GenAI work inside large enterprises, cross-referenced against the best external research available.
The pattern is consistent across our roundtables, client engagements, and the data from McKinsey, Gartner, and BCG: the technology works. The organizations don't — yet.
The Uncomfortable Numbers
The gap between those numbers tells the story: nearly everyone is experimenting, hardly anyone has redesigned how they work, and trillions of dollars in value remain uncaptured. HBR's framing is blunt: "Most AI initiatives fail not because the models are weak, but because organizations aren't built to sustain them."
At a closed-door session in January 2026 — 12 enterprise AI leaders in the room — we asked: "What killed your last AI initiative?" Not a single person said technology. The answers: no named owner, legal blocked it, "we're waiting for the platform team," and the most honest: "the person who championed it left."
This maps exactly to HBR's Nov 2025 finding that AI projects fail because organizations lack the "roles, responsibilities, and routines" to sustain them — not because models underperform.
The 70–80% Accuracy Trap
Every company we work with hits the same wall. The AI works well enough to demo. Leadership gets excited. Then it goes to real users — and they stop trusting it within a week.
70–80% accuracy is the uncanny valley of enterprise AI. Right often enough that you start relying on it. Wrong often enough that you get burned. Users revert to the old way, and the pilot quietly dies.
HBR (Feb 2026) adds another layer: AI adoption stalls because employees' anxiety about relevance, identity, and job security drives "surface-level adoption" — they use the tool but don't change the process. Accuracy problems give them the excuse they need to stop.
"Most stalled projects don't fail due to poor results. They fail because people want to keep their authority over final decisions — even when AI would produce better outcomes."
But here's the flip side: once accuracy crosses 90%, AI consistently outperforms humans at the same task. The remaining gaps usually stem from missing data — incomplete technical specs, unstructured product data — not from model limitations.
An industrial equipment manufacturer had engineers spending 3-5 hours per inspection report — writing 14-15 page technical documents manually. We built a voice-to-report system: engineers speak while they inspect, AI generates the full report in 10 minutes. Trained on thousands of documented failure modes, it cross-references observations against known issues.
Over 80% of stakeholders adopted it — despite knowing the data had quality gaps — because the human-in-the-loop fallback was designed in from day one.
Where AI Lands Fastest — And Where It Doesn't
The most counterintuitive finding from our roundtables, confirmed by BCG's 2025 research: AI adoption success correlates with how process-driven the department is — not how tech-savvy. BCG found that the top 6% of AI performers ("future-built firms") redesign workflows around AI rather than bolting it onto existing ones.
Adoption Speed by Department
Based on our roundtable observations and client engagements. Process-driven departments adopt fastest because procedures, measurable outcomes, and lower ego stakes reduce resistance.
"If you want to drain a lake, don't ask the frogs."
The Implication
- Start where resistance is lowest and data is richest. Supply chain, after-sales, inspection.
- Build internal proof before touching sales or CX. You need 2-3 wins peers can reference.
- Don't mandate adoption — create pull. When one department saves 1,000 hours/year, the next one comes asking.
Three Things That Predict Success
Gartner attributes 85% of AI project failures to poor data quality. HBR points to missing organizational structures. BCG shows that only 6% of companies are "future-built" enough to capture real AI value. Our experience condenses all of this into three binary predictors:
1. Data You Can Actually Access
Not "we have data." Can your team access real production data within two weeks? Manufacturing product data lags 15 years behind consumer systems. Companies with 60 years of trial reports think they have data. They have paper.
2. A Named Owner With Budget
One person. Not a committee. Not "shared ownership." Two failure modes: the Knife Fight (departments competing for AI budget) and the Hot Potato (nobody wants blame when the pilot doesn't scale).
3. The Willingness to Kill
Define failure before you start: "If we don't see X by week 4, we stop." Killing a POC in 6 weeks for €30-50K is a win — you just saved €500K on something that wouldn't have worked. Gartner's 30% abandonment rate would be much lower if companies killed earlier instead of extending timelines hoping things improve.
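Treated as code, the three predictors reduce to a hard AND: fail any gate and the idea is a red light, no matter how exciting the demo. A minimal sketch of that triage, with hypothetical field names and invented example initiatives:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    data_accessible_in_two_weeks: bool  # predictor 1: real production data, fast
    named_owner_with_budget: bool       # predictor 2: one person, not a committee
    kill_criteria_defined: bool         # predictor 3: "if not X by week 4, we stop"

def passes_gates(idea: Initiative) -> bool:
    """All three predictors are binary gates: one miss is a red light."""
    return (idea.data_accessible_in_two_weeks
            and idea.named_owner_with_budget
            and idea.kill_criteria_defined)

ideas = [
    Initiative("voice-to-report", True, True, True),
    Initiative("dynamic pricing assistant", True, False, True),  # no named owner
    Initiative("sales copilot", False, True, True),              # data locked in PDFs
]
investable = [i.name for i in ideas if passes_gates(i)]
print(investable)  # ['voice-to-report']
```

The point of making the gates boolean rather than scored is that it removes negotiation: a "partial owner" or "data coming next quarter" is a no, not a 0.5.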
The Blockers Nobody Puts in the Deck
"People want to keep their authority over final decisions even when AI would produce better outcomes — especially in negotiations and dynamic pricing."
HBR (Feb 2026) confirms: employees' anxiety about relevance and identity drives surface-level adoption. The pricing manager who's done the job for 20 years won't let an algorithm override them — not because it's wrong, but because accepting it means admitting the last 20 years could have been more efficient.
A global chemical company wanted an AI sales tool to cover 150+ products across 15 application areas — in a POC budgeted for 20 days. This is an expectations problem, not a technology problem. And it starts in the boardroom.
Across every roundtable, one pattern was universal: IT departments are the most conservative blockers of AI innovation. Not because they're wrong about security — because their incentive structure rewards preventing problems, not enabling innovation.
The winning pattern: build the POC outside IT's gatekeeping (using cloud-based tools with proper data controls), prove it works, then bring IT in to productionize.
Three Examples of What's Working
These aren't representative of all enterprise AI. They represent specific conditions where the three predictors aligned — accessible data, a named owner, and a clear kill point. The patterns are instructive, not guaranteed.
Voice-to-Report: From 5 Hours to 10 Minutes
Service engineers inspect industrial equipment using voice. AI generates complete technical reports, cross-references observations against thousands of documented failure modes, and flags checks the engineer might have missed.
Why it worked: Narrow scope, real data from day one, clear metric (hours saved), named owner, and human-in-the-loop designed in — not bolted on after.
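The human-in-the-loop piece is the part worth sketching: when the system can't confidently match an observation to a known failure mode, it flags the line for the engineer instead of guessing. A toy version, assuming a simple fuzzy match against a failure-mode catalog (catalog entries, threshold, and function names are invented for illustration, not the client's system):

```python
from difflib import SequenceMatcher

FAILURE_MODES = {  # illustrative entries only
    "bearing noise under load": "Bearing wear (replace within 200h)",
    "oil film on housing seal": "Seal degradation",
}

CONFIDENCE_FLOOR = 0.7  # below this, route to the engineer rather than guess

def match_observation(obs: str):
    """Return the best-matching diagnosis and a similarity score in [0, 1]."""
    best, score = None, 0.0
    for known, diagnosis in FAILURE_MODES.items():
        s = SequenceMatcher(None, obs.lower(), known).ratio()
        if s > score:
            best, score = diagnosis, s
    return best, score

def build_report(observations: list[str]) -> list[str]:
    lines = []
    for obs in observations:
        diagnosis, score = match_observation(obs)
        if score >= CONFIDENCE_FLOOR:
            lines.append(f"{obs}: {diagnosis}")
        else:
            # the fallback designed in from day one: hand it back to the human
            lines.append(f"{obs}: FLAGGED FOR ENGINEER REVIEW")
    return lines

report = build_report(["bearing noise under load", "strange smell near intake"])
print(report)
```

The design choice that drove adoption is visible in the branch: the system never fabricates a diagnosis below the confidence floor, which is why users kept trusting it despite known data-quality gaps.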
Natural Language Product Matching
Sales and distributor teams describe customer applications in everyday terms. AI matches against product specifications and returns best-fit recommendations with talking points. The challenge: product data only existed in unstructured PDFs — no Excel, no database.
Why it worked: Started narrow (one product family), designed for users who don't remember product names (distributors, not product managers). The champion said it "exceeded expectations" at the first design review.
Conversational Product Finder — 12 Minutes of Engagement
Replaces faceted search with natural language. Users describe what they need. AI guides them through a conversation and generates personalized recommendations. Average engagement: 12-13 minutes — vs. under 30 seconds for most web sessions.
Why it worked: Answer pages, not chatbots. Users don't want back-and-forth chat — they want curated, expert-quality responses. The AI remembers context across the conversation and doesn't ask the same question twice.
From Tools to Agents: Where This Is Heading
McKinsey, 2025: 62% of organizations are experimenting with AI agents, 23% are scaling them in at least one function. BCG AI Radar, Jan 2026: AI agents already account for 17% of total AI value, projected to reach 29% by 2028. The shift is real — but most companies aren't ready for what it means.
Three Modes of AI — Most Companies Only Build One
- Reactive: User asks, AI answers. Chatbots, search, Q&A. What most companies build today.
- Proactive: AI reaches out with relevant context. "This customer hasn't reordered in 45 days — historically, that predicts churn." Not waiting to be asked.
- Ambient: Invisible, background intelligence. Auto-adjusts, learns patterns, optimizes without being asked.
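The difference between reactive and proactive is who initiates. A toy sketch of the proactive pattern, using the 45-day reorder gap from the churn example above (the threshold, customer names, and dates are invented):

```python
from datetime import date

REORDER_GAP_DAYS = 45  # illustrative threshold: gap that historically predicts churn

def proactive_alerts(customers: dict[str, date], today: date) -> list[str]:
    """Reactive AI waits to be asked. A proactive system scans for
    signals on its own schedule and surfaces them unprompted."""
    alerts = []
    for name, last_order in customers.items():
        gap = (today - last_order).days
        if gap > REORDER_GAP_DAYS:
            alerts.append(f"{name}: no reorder in {gap} days (historically predicts churn)")
    return alerts

customers = {
    "Acme Industrial": date(2025, 12, 1),   # 62 days before 'today'
    "Borealis Chemie": date(2026, 1, 20),   # 12 days before 'today'
}
print(proactive_alerts(customers, today=date(2026, 2, 1)))
```

Ambient mode is the same loop without the alert: the system acts on the signal (e.g. adjusts a replenishment forecast) instead of asking a human to.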
The companies building shared memory across touchpoints now will have a structural advantage that compounds: their AI gets smarter with every interaction. The ones that wait will play catch-up against systems with months or years of learned context.
Four Frameworks Built From These Patterns
Red Light / Green Light
3 binary gates + 3 multipliers. From 40 AI ideas to 4 investable initiatives.
The Six Strategic Decisions
Sequential. Each unlocks the next. Most companies skip to #2 or #4.
6-Week POC-to-Profit
Proof of profit, not proof of concept. €30-75K instead of €500K.
Operating Model for GenAI
Three layers + the Domino Strategy for scaling through least resistance.
Pick the framework that fits your situation.
30 minutes. Your use cases, your org, your data. I'll map the right framework to your situation and tell you what the first move is.
Let's Look at Your Top 3 Use Cases →

Or go deeper: a half-day prioritization workshop with 6-8 of your stakeholders. You leave with scored initiatives, named owners, and a 6-week blueprint. Not a PowerPoint — an executable plan.