At a closed-door roundtable — 12 enterprise AI leaders from pharma, chemicals, tech, and financial services — I asked what killed their last AI initiative. Not a single person said technology. The answers: no named owner, legal blocked it, "we're waiting for the platform team," and the most honest: "the person who championed it left." These three gates exist because of those answers.
Red Light / Green Light
Three binary gates that kill or pass an idea. Three multipliers that rank what survives. If any gate is red, the idea stops — no matter how exciting the boardroom demo was.
The Gates — Pass or Kill
The Multipliers — Score 1-5
- Business Impact: revenue-tied or >$500K/yr cost reduction vs. an unclear efficiency gain
- Org Readiness: a process-driven department with a champion vs. a freestyle culture
- Time to Value: a working POC in 6 weeks vs. 12+ months to first output (the full triage is sketched in code below)
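To make the triage concrete, here's a minimal sketch in Python. The gate names are illustrative (lifted from the roundtable answers above, not an official checklist), and it assumes the three multiplier scores simply multiply into one priority number; swap in your own rubric.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    # Gates: binary, all must be green or the idea dies.
    # Names are illustrative, drawn from the roundtable failure modes.
    has_named_owner: bool
    cleared_by_legal: bool
    free_of_platform_dependency: bool
    # Multipliers: each scored 1-5 per the rubric above.
    business_impact: int   # 5 = revenue-tied or >$500K/yr, 1 = unclear gain
    org_readiness: int     # 5 = process-driven dept with a champion
    time_to_value: int     # 5 = working POC in 6 weeks, 1 = 12+ months

def triage(ideas: list[Idea]) -> list[tuple[str, int]]:
    """Kill anything with a red gate; rank survivors by multiplied score."""
    survivors = [
        i for i in ideas
        if i.has_named_owner and i.cleared_by_legal and i.free_of_platform_dependency
    ]
    scored = [(i.name, i.business_impact * i.org_readiness * i.time_to_value)
              for i in survivors]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Multiplying instead of averaging is the point: an idea scoring 5 × 5 × 5 (125) ranks far above one scoring 5 × 2 × 2 (20), so one weak dimension drags the whole idea down.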
The Six Strategic Decisions
1. Where Do We Play?
Internal ops, customer experience, or new revenue? 70% of wins start internal — then use that credibility for customer-facing moves.
2. Build, Buy, or Assemble?
This is where the most money gets wasted. We see companies spend $2M building what a configured GPT-4 pipeline does out of the box. The winning pattern: assemble — best-of-breed tools, configured for your workflow.
3. Who Owns It?
Unclear ownership kills more initiatives than bad technology. The winning model: a thin central team (2-3 people) sets standards; execution happens in the business units.
4. Governance Model?
Start with guardrails, not policies. First 2-3 POCs surface 80% of governance questions. Separate "internal" (lightweight) from "customer-facing" (rigorous).
5. How Do We Measure?
Adoption (week 4) → Efficiency (week 8) → Business Impact (3-6 months). No adoption by week 4? Stop. This staged gate is sketched in code after Decision 6.
6. How Do We Scale?
Only 20% of pilots scale, and it's a change management problem, not a technology problem. Scale through process-driven departments first.
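The staged measurement gate from Decision 5, as a minimal sketch. The week numbers come straight from the cadence above; the threshold values are placeholders to replace with your own.

```python
def measurement_gate(week: int, adoption_rate: float,
                     hours_saved_per_week: float = 0.0,
                     annual_value_eur: float = 0.0) -> str:
    """Staged go/no-go checks: adoption by week 4, efficiency by week 8,
    business impact from roughly month 3. Thresholds are placeholders."""
    if week >= 4 and adoption_rate < 0.30:       # no adoption by week 4? Stop.
        return "KILL: nobody is using it"
    if week >= 8 and hours_saved_per_week <= 0:  # efficiency must show by week 8
        return "PIVOT: adopted, but not saving time"
    if week >= 12 and annual_value_eur <= 0:     # impact window: 3-6 months
        return "REVIEW: no business impact yet"
    return "CONTINUE"

# measurement_gate(week=4, adoption_rate=0.10) -> "KILL: nobody is using it"
```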
At a closed-door session, two heads of AI from a specialty chemicals firm and a medical devices company independently described the same pattern: teams skipping to Decision 2 ("let's pick a tool!") without making Decision 1 ("where do we play?"). Result: 12 disconnected experiments, none with enough focus to reach production. One reorganized around this sequence and went from 12 pilots to 3 funded programs — all three shipped.
The 6-Week POC-to-Profit Blueprint
Why 6 weeks: Short enough that organizational antibodies can't kill it. Long enough to build something real with actual data. And it matches procurement thresholds — most departments can approve the spend without board sign-off.
Scope Lock
One use case, one owner, one metric. Define kill criteria: "if X doesn't happen by Week 4, we stop." Max 3 people in the room.
Data + Architecture
Connect to real data — not sample, not synthetic. Build the simplest version that touches that data. Data that's 80% right is enough; the POC surfaces which 20% matters.
Build + Iterate
Functional prototype, 2 rounds of user testing with 3-5 actual users (not stakeholders). This is where you set accuracy expectations: "wrong 20-30% of the time, here's the human fallback."
Measure
Run in parallel with the existing process for 5 days. Hard numbers: time saved, errors caught, cost vs. value. 2 hours/week × 10 people across ~50 working weeks = 1,000 hours/year, worth €75-150K at fully loaded rates (worked out below).
Decide
Scale, Pivot, or Kill. One room, one owner, one hour. Killing in 6 weeks for €50K is a win — you saved €500K.
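The value math from the Measure step, spelled out. Two assumptions are doing the work: roughly 50 working weeks per year, and a fully loaded cost of €75-150 per hour.

```python
hours_per_person_per_week = 2   # measured in the 5-day parallel run
people = 10
working_weeks = 50              # assumption: ~50 working weeks per year

annual_hours = hours_per_person_per_week * people * working_weeks  # = 1,000
low_rate, high_rate = 75, 150   # assumption: fully loaded € cost per hour

print(f"{annual_hours:,} hours/year ≈ "
      f"€{annual_hours * low_rate:,}-€{annual_hours * high_rate:,}")
# -> 1,000 hours/year ≈ €75,000-€150,000
```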
An industrial equipment manufacturer had engineers spending 3-5 hours per inspection report — writing 14-15 page technical documents by hand. We built a voice-to-report system: engineers speak while they inspect, and the AI generates the full report in 10 minutes. Trained on thousands of documented failure modes, it cross-references spoken observations against known issues and prompts for checks the engineer might miss. 1.5 months of discovery, 2 months of POC, then live in production. Over 80% of stakeholders adopted it — despite knowing the data had quality gaps — because the human-in-the-loop was designed in from day one.
| | Traditional | 6-Week Blueprint |
|---|---|---|
| Time to first result | 6-18 months | 6 weeks |
| Investment before learning | €200K-€2M | €30-75K |
| Kill cost if it fails | Catastrophic | Manageable |
The Operating Model
Strategy — "What and Why"
Cadence: quarterly. 1-2 people. C-level sponsor + someone who actually understands AI. Set portfolio, allocate budget, make kill decisions. If this group meets weekly, something is broken.
Enablement — "How and With What"
Cadence: continuous. 2-3 people. Not a department. Evaluate tools, set standards, build context infrastructure (the layer most companies miss — your AI needs CRM data, brand guidelines, competitive positioning, or it produces generic output nobody approves; sketched in code below). Train users continuously, not in one-off workshops. Companies spend 10× more on AI platforms than on training. A $50/month Copilot license with zero training is a $50/month waste.
Execution — "Build and Run"
Cadence: 2-week sprints. In the business units. Squad model: 1 owner + 1 builder + 2-3 users. Execute POCs, operate scaled solutions, feed learnings back. The people closest to the problem build the solution.
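What "context infrastructure" looks like in practice: a thin layer that assembles company-specific context in front of every model call. A minimal sketch; the source functions and their contents are hypothetical placeholders, not a prescribed stack.

```python
# Placeholder sources: in practice these would query your CRM, document
# store, and strategy docs. Names and return values are purely illustrative.
def fetch_crm_summary(account_id: str) -> str:
    return f"Account {account_id}: renewal due Q3, two open support tickets."

def load_brand_guidelines() -> str:
    return "Tone: direct, no superlatives. Write 'AI', never 'A.I.'."

def load_competitive_positioning() -> str:
    return "We win on time-to-value, not feature count."

def build_context(account_id: str) -> str:
    """Assemble the company-specific context that keeps model output
    from being generic boilerplate nobody approves."""
    sections = {
        "CRM": fetch_crm_summary(account_id),
        "Brand guidelines": load_brand_guidelines(),
        "Positioning": load_competitive_positioning(),
    }
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections.items())

# Prepend the context block to whatever task the user hands the model.
prompt = build_context("acme-001") + "\n\n## Task\nDraft the renewal email."
```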
The Domino Strategy
Each win unlocks the next department through peer proof.
1. Supply Chain: process-driven, data-rich
2. Operations: internal proof
3. Customer Service: two wins → confidence
4. Sales: highest resistance; unlocked by peer proof
5. Enterprise: the board sees the pattern
At our AGX session, someone said: "If you want to drain a lake, don't ask the frogs." That was about governance and compliance functions — they block AI because it threatens their gatekeeping role. The Domino Strategy exists because of this reality: start where resistance is lowest, build proof, then use that proof to unlock the departments with the most organizational antibodies. When supply chain saves 1,000 hours/year, operations asks to be next. That's organic scaling — more powerful than any top-down mandate.
These frameworks work when they're applied to your specific situation.
Not abstract — your data, your org structure, your use cases.
Start with 30 minutes: I'll run Framework 01 on your top use cases and tell you what the first move is.
Let's Look at Your Top 3 Use Cases →