MING Labs · The GenAI Playbook

Four Frameworks for
Enterprise GenAI

From prioritization to production. Built from 50+ enterprise engagements across pharma, chemicals, industrial tech, energy, financial services, and manufacturing.
Framework 01

Red Light / Green Light

From 40 AI ideas to 4 investable initiatives in two weeks

Three binary gates that kill or pass an idea. Three multipliers that rank what survives. If any gate is red, the idea stops — no matter how exciting the boardroom demo was.

The Gates — Pass or Kill

Data Access
✓ Data exists and is accessible within 2 weeks
✗ Data in silos requiring 6+ months of integration
Decision Owner
✓ One person with budget authority says "I own this"
✗ Ownership unclear, shared, or requires committee
Accuracy Threshold
✓ Works at 70-80% accuracy with a human-in-the-loop fallback
✗ Requires 99%+ accuracy with no human review

At a closed-door roundtable — 12 enterprise AI leaders from pharma, chemicals, tech, and financial services — I asked what killed their last AI initiative. Not a single person said technology. The answers: no named owner, legal blocked it, "we're waiting for the platform team," and the most honest: "the person who championed it left." These three gates exist because of those answers.

The Multipliers — Score 1-5

Business Impact

Revenue-tied or >$500K/yr cost reduction vs. unclear efficiency gain

Org Readiness

Process-driven department with a champion vs. freestyle culture

Time to Value

Working POC in 6 weeks vs. 12+ months to first output

Gates (all 3) × Impact × Readiness × Time = Priority Score
Any red gate = zero. Top 4 get investment. We run this as a half-day workshop with 6-8 stakeholders.
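The gate-and-multiplier scoring can be sketched in a few lines of Python (the idea names and scores below are illustrative, not drawn from any engagement):

```python
def priority_score(gates, impact, readiness, time_to_value):
    """Red Light / Green Light score.

    gates: booleans for the three pass/kill gates (data access,
    decision owner, accuracy threshold). Multipliers are scored 1-5.
    """
    if not all(gates):
        return 0  # any red gate kills the idea outright
    for m in (impact, readiness, time_to_value):
        if not 1 <= m <= 5:
            raise ValueError("multipliers are scored 1-5")
    return impact * readiness * time_to_value

# Illustrative portfolio: rank the survivors, fund the top ideas
ideas = {
    "invoice triage":   ((True, True, True), 4, 3, 5),   # passes all gates
    "churn prediction": ((True, False, True), 5, 5, 5),  # no named owner
}
ranked = sorted(ideas, key=lambda k: priority_score(*ideas[k]), reverse=True)
```

Note the asymmetry this encodes: multipliers can only rank an idea, never rescue one, because a single failed gate zeroes the score before the multipliers are even consulted.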
Work Through This Framework
Open the Prioritization Canvas — fillable, printable, BMC-style worksheet
Framework 02

The Six Strategic Decisions

Sequential. Each unlocks the next. Skip one and you're building on sand.
1
Where Do We Play?

Internal ops, customer experience, or new revenue? About 70% of wins start internal — then use that credibility for customer-facing moves.

⚠ "All three" = diluted resources across disconnected experiments.
2
Build, Buy, or Assemble?

Where the most money gets wasted. We see companies spend $2M building what a configured GPT-4 pipeline does out of the box. The winning pattern: assemble — best-of-breed tools, configured for your workflow.

⚠ "Build" because it feels strategic, or "buy Microsoft" because it feels safe.
3
Who Owns It?

Unclear ownership kills more initiatives than bad technology. Winning model: a thin central team (2-3 people) sets standards — execution stays in the business units.

⚠ A "Center of Excellence" that becomes a bottleneck with 15 people and a VP.
4
Governance Model?

Start with guardrails, not policies. First 2-3 POCs surface 80% of governance questions. Separate "internal" (lightweight) from "customer-facing" (rigorous).

⚠ 6 months on an "AI policy" before shipping a single use case.
5
How Do We Measure?

Adoption (Week 4) → Efficiency (Week 8) → Business Impact (3-6 months). No adoption by Week 4? Stop.

⚠ "We trained a model with 95% accuracy" means nothing if nobody uses it.
6
How Do We Scale?

Only 20% of pilots scale. It's a change management problem, not technology. Scale through process-driven departments first.

⚠ Assuming scaling = "just deploy it to more people."

At a closed-door session, two heads of AI from a specialty chemicals firm and a medical devices company independently described the same pattern: teams skipping to Decision 2 ("let's pick a tool!") without making Decision 1 ("where do we play?"). Result: 12 disconnected experiments, none with enough focus to reach production. One reorganized around this sequence and went from 12 pilots to 3 funded programs — all three shipped.

Work Through This Framework
Open the Strategic Decisions Canvas — document each decision with rationale
Framework 03

The 6-Week POC-to-Profit Blueprint

Proof of profit, not proof of concept. €30-75K instead of €500K.

Why 6 weeks: Short enough that organizational antibodies can't kill it. Long enough to build something real with actual data. Matches procurement — most departments can approve without board sign-off.

Week 1
Scope Lock

One use case, one owner, one metric. Define kill criteria: "if X doesn't happen by Week 4, we stop." Max 3 people in the room.

Can't lock scope in Week 1? The initiative isn't ready.
Week 2
Data + Architecture

Connect to real data — not sample, not synthetic. Build the simplest version that touches real data. 80% data is enough; the POC surfaces which 20% matters.

Output: Working skeleton — ugly but functional.
Weeks 3–4
Build + Iterate

Functional prototype, 2 rounds of user testing with 3-5 actual users (not stakeholders). This is where you set accuracy expectations: "wrong 20-30% of the time, here's the human fallback."

Week 5
Measure

Run in parallel with the existing process for 5 days. Hard numbers: time saved, errors caught, cost vs. value. 2 hours/week × 10 people × 50 weeks = 1,000 hours/year = €75-150K.
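The Week 5 value arithmetic, made explicit (the 50 working weeks and the €75-150/hour loaded rate are assumptions implied by the figures, not client data):

```python
# Back-of-envelope value from the 5-day parallel run
hours_saved = 2 * 10 * 50          # 2 h/week × 10 people × 50 working weeks
rate_low, rate_high = 75, 150      # assumed loaded cost per hour, EUR
annual_value = (hours_saved * rate_low, hours_saved * rate_high)
# hours_saved == 1000; annual_value == (75000, 150000), i.e. €75-150K/year
```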

Week 6
Decide

Scale, Pivot, or Kill. One room, one owner, one hour. Killing in 6 weeks for €50K is a win — you saved €500K.

An industrial equipment manufacturer had engineers spending 3-5 hours per inspection report — writing 14-15 page technical documents by hand. We built a voice-to-report system: engineers speak while they inspect, AI generates the full report in 10 minutes. Trained against thousands of documented failure modes, cross-references spoken observations against known issues, prompts for checks the engineer might miss. 1.5 months discovery, 2 months POC, live in production. Over 80% of stakeholders adopted it — despite knowing the data had quality gaps — because the human-in-the-loop was designed in from day one.

Traditional vs. the 6-Week Blueprint
Time to first result: 6-18 months → 6 weeks
Investment before learning: €200K-€2M → €30-75K
Kill cost if it fails: catastrophic → manageable
Plan Your POC
Open the POC-to-Profit Planner — 6-week timeline with metrics and Scale/Pivot/Kill gate
Framework 04

The Operating Model

Where 80% of successful POCs die. Not technology failure — operating model failure.
Strategy — "What and Why"
Quarterly

1-2 people. C-level sponsor + someone who actually understands AI. Set portfolio, allocate budget, make kill decisions. If this group meets weekly, something is broken.

Enablement — "How and With What"
Continuous

2-3 people. Not a department. Evaluate tools, set standards, build context infrastructure (the layer most companies miss — your AI needs CRM data, brand guidelines, competitive positioning, or it produces generic output nobody approves). Train users continuously, not one-off workshops. Companies spend 10× more on AI platforms than training. A $50/month Copilot license with zero training is a $50/month waste.

Execution — "Build and Run"
2-week sprints

In the business units. Squad model: 1 owner + 1 builder + 2-3 users. Execute POCs, operate scaled solutions, feed learnings back. The people closest to the problem build the solution.

The Domino Strategy

Each win unlocks the next department through peer proof.

Supply Chain (process-driven, data-rich) →
Operations (internal proof) →
Customer Service (two wins build confidence) →
Sales (highest resistance, unlocked by peer proof) →
Enterprise (the board sees the pattern)

At our AGX session, someone said: "If you want to drain a lake, don't ask the frogs." That was about governance and compliance functions — they block AI because it threatens their gatekeeping role. The Domino Strategy exists because of this reality: start where resistance is lowest, build proof, then use that proof to unlock the departments with the most organizational antibodies. When supply chain saves 1,000 hours/year, operations asks to be next. That's organic scaling — more powerful than any top-down mandate.

Design Your Operating Model
Open the Operating Model Canvas — three layers, Domino sequence, anti-pattern checklist

These frameworks work when they're applied to your specific situation.

Not abstract — your data, your org structure, your use cases.

The Natural Next Step

Most clients start with a half-day prioritization workshop (Framework 1). 6-8 stakeholders in a room. You leave with your top 4 initiatives scored and ranked, a named owner for each, and a 6-week blueprint for the first one. Not a PowerPoint — an executable plan.

Start with 30 minutes: I'll run Framework 01 on your specific situation and tell you what the first move is.

Let's Look at Your Top 3 Use Cases →