MING Labs · GenAI Field Intelligence

What We Heard in the Room

18 months of closed-door roundtables with heads of AI, CDOs, and transformation leads at Fortune 500 and Mittelstand companies across pharma, chemicals, industrial tech, financial services, energy, and medical devices. Combined with external research from McKinsey, Gartner, HBR, and BCG.
Sebastian Mueller, MING Labs · April 2026 · 12 min read

This isn't a market report. It's what we heard — directly — from the people responsible for making GenAI work inside large enterprises, cross-referenced against the best external research available.

The pattern is consistent across our roundtables, client engagements, and the data from McKinsey, Gartner, and BCG: the technology works. The organizations don't — yet.

Part 01

The Uncomfortable Numbers

What the data says — from our rooms and from external research
  • 30%+ of GenAI projects abandoned after POC (Gartner, 2025)
  • 40%+ of agentic AI projects will be cancelled by end of 2027 (Gartner via HBR, Oct 2025)
  • 6% of companies fully trust AI agents for core processes (HBR / BCG survey, Dec 2025)
  • 78% of companies now use AI, up from 55% in 2023 (McKinsey State of AI, 2025)
  • 21% have redesigned workflows for GenAI; most just bolt it on (McKinsey, 2025)
  • $2.6–4.4T in annual economic potential of GenAI globally (McKinsey Global Institute)

The gap between those numbers tells the story: nearly everyone is experimenting, hardly anyone has redesigned how they work, and trillions of dollars in value remain uncaptured. HBR's framing is blunt: "Most AI initiatives fail not because the models are weak, but because organizations aren't built to sustain them."

From Our Roundtables

At a closed-door session in January 2026 — 12 enterprise AI leaders in the room — we asked: "What killed your last AI initiative?" Not a single person said technology. The answers: no named owner, legal blocked it, "we're waiting for the platform team," and the most honest: "the person who championed it left."

This maps exactly to HBR's Nov 2025 finding that AI projects fail because organizations lack the "roles, responsibilities, and routines" to sustain them — not because models underperform.

Closed-door roundtable, Jan 2026 (12 enterprise AI leaders — pharma, chemicals, tech, financial services) · HBR, "Most AI Initiatives Fail," Nov 2025
Part 02

The 70–80% Accuracy Trap

The gap where trust dies and projects stall

Every company we work with hits the same wall. The AI works well enough to demo. Leadership gets excited. Then it goes to real users — and they stop trusting it within a week.

70-80% accuracy is the uncanny valley of enterprise AI. Right often enough that you start relying on it. Wrong often enough that you get burned. Users revert to the old way, and the pilot quietly dies.
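The arithmetic behind the trap is simple to check. A minimal sketch (illustrative numbers, not client data): even at 80% per-task accuracy, a user running ten tasks a day almost never gets an error-free day, which is exactly the experience that erodes trust.

```python
def p_error_free(accuracy: float, tasks: int) -> float:
    """Probability of completing `tasks` independent tasks with zero errors."""
    return accuracy ** tasks

# At 80% accuracy, ~11% of ten-task days are error-free; at 95%, ~60%.
for acc in (0.80, 0.95):
    print(acc, round(p_error_free(acc, 10), 3))
```

The independence assumption is a simplification, but it shows why a 10-15 point accuracy gain changes the lived experience far more than the headline number suggests.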

HBR (Feb 2026) adds another layer: AI adoption stalls because employees' anxiety about relevance, identity, and job security drives "surface-level adoption" — they use the tool but don't change the process. Accuracy problems give them the excuse they need to stop.

"Most stalled projects don't fail due to poor results. They fail because people want to keep their authority over final decisions — even when AI would produce better outcomes."
— Roundtable participant, Head of AI at a Fortune 500 chemical company

But here's the flip side: once accuracy crosses 90%, AI consistently outperforms humans at the same task. The remaining gaps usually stem from missing data — incomplete technical specs, unstructured product data — not from model limitations.

Example: Industrial Inspection

An industrial equipment manufacturer had engineers spending 3-5 hours per inspection report — writing 14-15 page technical documents manually. We built a voice-to-report system: engineers speak while they inspect, AI generates the full report in 10 minutes. Trained against thousands of documented failure modes, cross-references observations against known issues.

Over 80% of stakeholders adopted it — despite knowing the data had quality gaps — because the human-in-the-loop fallback was designed in from day one.

Live in production. 1.5 months discovery, 2 months POC.
Part 03

Where AI Lands Fastest — And Where It Doesn't

The adoption gradient nobody talks about

The most counterintuitive finding from our roundtables, confirmed by BCG's 2025 research: AI adoption success correlates with how process-driven the department is — not how tech-savvy. BCG found that the top 6% of AI performers ("future-built firms") redesign workflows around AI rather than bolting it onto existing ones.

Adoption Speed by Department

  • Fast: Supply Chain, After-Sales, Quality / QA
  • Medium: Finance, R&D, Customer Experience
  • Slow: Sales, Legal, Executive

Based on our roundtable observations and client engagements. Process-driven departments adopt fastest because procedures, measurable outcomes, and lower ego stakes reduce resistance.

"If you want to drain a lake, don't ask the frogs."
— Roundtable participant, on why governance functions block AI adoption

The Implication

  • Start where resistance is lowest and data is richest. Supply chain, after-sales, inspection.
  • Build internal proof before touching sales or CX. You need 2-3 wins peers can reference.
  • Don't mandate adoption; create pull. When one department saves 1,000 hours/year, the others ask to be next.
Part 04

Three Things That Predict Success

Consistent across our engagements and external research

Gartner attributes 85% of AI project failures to poor data quality. HBR points to missing organizational structures. BCG shows that only 6% of companies are "future-built" enough to capture real AI value. Our experience condenses all of this into three binary predictors:

  • 01 · Data You Can Actually Access: Can your team access real production data within 2 weeks? (binary gate)
  • 02 · A Named Owner With Budget: One person. Not a committee. Not "shared ownership." (binary gate)
  • 03 · The Willingness to Kill: "If we don't see X by week 4, we stop." Define failure before you start. (binary gate)
All three present → high probability of production. Any one missing → project dies in pilot.

1. Data You Can Actually Access

Not "we have data." Can your team access real production data within two weeks? Manufacturing product data lags 15 years behind consumer systems. Companies with 60 years of trial reports think they have data. They have paper.

2. A Named Owner With Budget

One person. Not a committee. Not "shared ownership." Two failure modes: the Knife Fight (departments competing for AI budget) and the Hot Potato (nobody wants blame when the pilot doesn't scale).

3. The Willingness to Kill

Define failure before you start: "If we don't see X by week 4, we stop." Killing a POC in 6 weeks for €30-50K is a win — you just saved €500K on something that wouldn't have worked. Gartner's 30% abandonment rate would be much lower if companies killed earlier instead of extending timelines hoping things improve.
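The three predictors can be read as a literal go/no-go check. A minimal sketch (the field names are ours, not a client artifact): all three must hold, or the pilot is predicted to stall.

```python
from dataclasses import dataclass

@dataclass
class PocGate:
    """The three binary predictors from Part 04."""
    data_accessible_in_2_weeks: bool   # real production data, not "we have data"
    named_owner_with_budget: bool      # one person, not a committee
    kill_criterion_defined: bool       # "if we don't see X by week 4, we stop"

    def go(self) -> bool:
        # Binary gates: any single miss predicts death in pilot.
        return all([self.data_accessible_in_2_weeks,
                    self.named_owner_with_budget,
                    self.kill_criterion_defined])

print(PocGate(True, True, False).go())  # False: no kill criterion, no go
```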

Part 05

The Blockers Nobody Puts in the Deck

What people say in closed rooms — not on conference stages
The Ego Barrier

"People want to keep their authority over final decisions even when AI would produce better outcomes — especially in negotiations and dynamic pricing."

HBR, Feb 2026 confirms: employees' anxiety about relevance and identity drives surface-level adoption. The pricing manager who's done the job for 20 years won't let an algorithm override them — not because it's wrong, but because accepting it means admitting the last 20 years could have been more efficient.

Closed-door roundtable, Jan 2026 · HBR "Why AI Adoption Stalls," Feb 2026
The Expectation Gap

A global chemical company wanted an AI sales tool to cover 150+ products across 15 application areas — in a POC budgeted for 20 days. This is an expectations problem, not a technology problem. And it starts in the boardroom.

Client engagement, 2025
The IT Immune System

Across every roundtable, one pattern was universal: IT departments are the most conservative blockers of AI innovation. Not because they're wrong about security — because their incentive structure rewards preventing problems, not enabling innovation.

The winning pattern: build the POC outside IT's gatekeeping (using cloud-based tools with proper data controls), prove it works, then bring IT in to productionize.

Observed across multiple closed-door sessions, 2025-2026
Part 06

Three Examples of What's Working

Not everything works. These are specific patterns where we've seen real results.

These aren't representative of all enterprise AI. They represent specific conditions where the three predictors aligned — accessible data, a named owner, and a clear kill point. The patterns are instructive, not guaranteed.

Example: Industrial Inspection

Voice-to-Report: From 5 Hours to 10 Minutes

Service engineers inspect industrial equipment using voice. AI generates complete technical reports, cross-references observations against thousands of documented failure modes, and flags checks the engineer might have missed.

  • 3-5h saved per inspection
  • 10 min for a 14-15 page report
  • 3.5 mo from zero to POC
  • 80%+ stakeholder adoption

Why it worked: Narrow scope, real data from day one, clear metric (hours saved), named owner, and human-in-the-loop designed in — not bolted on after.
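The two behaviors that made the human-in-the-loop design work can be sketched in a few lines. This is a toy illustration of the pattern, not the production system: the failure-mode list, function names, and checklist are invented for the example.

```python
# Hypothetical failure-mode catalog; the real system matched against
# thousands of documented failure modes.
FAILURE_MODES = {
    "bearing noise": "FM-102: premature bearing wear",
    "oil discoloration": "FM-215: coolant contamination",
}

def cross_reference(observations):
    """Match free-text observations against known failure modes."""
    hits = []
    for obs in observations:
        for symptom, mode in FAILURE_MODES.items():
            if symptom in obs.lower():
                hits.append((obs, mode))
    return hits

def flag_missed_checks(observations, checklist):
    """Surface checklist items never mentioned -- the human-in-the-loop cue."""
    mentioned = " ".join(observations).lower()
    return [item for item in checklist if item not in mentioned]

observations = ["Heard bearing noise on the main shaft", "Seals look intact"]
print(cross_reference(observations))                      # matches FM-102
print(flag_missed_checks(observations, ["bearing", "seals", "vibration"]))
```

The second function is the trust mechanism: instead of silently generating the report, the system tells the engineer what it thinks is missing and lets them decide.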

Example: AI Sales Enablement

Natural Language Product Matching

Sales and distributor teams describe customer applications in everyday terms. AI matches against product specifications and returns best-fit recommendations with talking points. The challenge: product data only existed in unstructured PDFs — no Excel, no database.

Why it worked: Started narrow (one product family), designed for users who don't remember product names (distributors, not product managers). Champion said it "exceeded expectations" at first design review.
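The core mechanic is similarity search between an everyday-language description and product specifications. A self-contained sketch under stated assumptions: real systems embed text extracted from the PDFs with a learned model; here a bag-of-words vector and cosine similarity stand in for that step, and the product names and specs are made up.

```python
import math
from collections import Counter

PRODUCTS = {  # illustrative catalog, not the client's
    "Adhesive A-200": "high temperature bonding metal automotive",
    "Sealant S-310": "flexible waterproof sealing outdoor joints",
    "Coating C-550": "corrosion protection marine steel surfaces",
}

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def match(query, top_k=2):
    """Return the top_k best-fit products for a natural-language description."""
    q = vectorize(query)
    scored = [(cosine(q, vectorize(spec)), name) for name, spec in PRODUCTS.items()]
    return [name for _, name in sorted(scored, reverse=True)[:top_k]]

print(match("something to bond metal parts that get very hot"))
```

Note the design intent: the query is phrased the way a distributor talks, not the way a product manager names SKUs.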

Example: Customer Experience

Conversational Product Finder — 12 Minutes of Engagement

Replaces faceted search with natural language. Users describe what they need. AI guides them through a conversation and generates personalized recommendations. Average engagement: 12-13 minutes — vs. under 30 seconds for most web sessions.

Why it worked: Answer pages, not chatbots. Users don't want back-and-forth chat — they want curated, expert-quality responses. The AI remembers context across the conversation and doesn't ask the same question twice.
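The "doesn't ask the same question twice" behavior is classic slot filling: track which pieces of context are already known and only ask for what's missing. A minimal sketch (slot names are hypothetical):

```python
REQUIRED_SLOTS = ["use_case", "environment", "budget"]  # illustrative

class ProductFinderSession:
    """Dialog state that remembers answers across the conversation."""

    def __init__(self):
        self.slots = {}

    def record(self, slot, value):
        self.slots[slot] = value

    def next_question(self):
        for slot in REQUIRED_SLOTS:
            if slot not in self.slots:
                return f"Tell me about your {slot}."
        return None  # all context gathered; generate recommendations

s = ProductFinderSession()
s.record("use_case", "sealing outdoor joints")
print(s.next_question())  # asks about environment, never re-asks use_case
```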

Part 07

From Tools to Agents: Where This Is Heading

And what it means for your strategy

McKinsey, 2025: 62% of organizations are experimenting with AI agents, 23% are scaling them in at least one function. BCG AI Radar, Jan 2026: AI agents already account for 17% of total AI value, projected to reach 29% by 2028. The shift is real — but most companies aren't ready for what it means.

Three Modes of AI — Most Companies Only Build One

  • Reactive: User asks, AI answers. Chatbots, search, Q&A. What most companies build today.
  • Proactive: AI reaches out with relevant context. "This customer hasn't reordered in 45 days — historically, that predicts churn." Not waiting to be asked.
  • Ambient: Invisible, background intelligence. Auto-adjusts, learns patterns, optimizes without being asked.
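The proactive mode is often the easiest to start with, because it can be a plain rule over data you already have. A toy version of the reorder-gap trigger from the example above (the 45-day threshold comes from that example; the dates are invented):

```python
from datetime import date

CHURN_GAP_DAYS = 45  # historically predictive gap, per the example above

def churn_risk(last_order: date, today: date) -> bool:
    """Flag a customer whose reorder gap exceeds the churn threshold."""
    return (today - last_order).days > CHURN_GAP_DAYS

print(churn_risk(date(2026, 1, 1), date(2026, 3, 1)))  # True: 59-day gap
```

The AI layer adds value on top of the rule: drafting the outreach, explaining the pattern, ranking which flagged customers matter most.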

The companies building shared memory across touchpoints now will have a structural advantage that compounds: their AI gets smarter with every interaction. The ones that wait will play catch-up against systems with months or years of learned context.

Your Toolkit

Four Frameworks Built From These Patterns

Not theory — what we use with clients. Each one addresses a specific failure mode documented above.

Pick the framework that fits your situation.

30 minutes. Your use cases, your org, your data. I'll map the right framework to your situation and tell you what the first move is.

Let's Look at Your Top 3 Use Cases →

Or go deeper: a half-day prioritization workshop with 6-8 of your stakeholders. You leave with scored initiatives, named owners, and a 6-week blueprint. Not a PowerPoint — an executable plan.