Install

    openclaw skills install bookforge-product-process-dysfunction-diagnosis

Diagnose why product efforts fail despite using Scrum, Agile, or roadmaps. Use when a team ships on time but customers don't adopt features, when leadership asks why there's no innovation despite an Agile process, when a new product leader needs to identify root causes quickly, or when someone asks 'are we doing waterfall disguised as Agile?' Also use when someone says 'we follow the process but nothing lands', 'sales keeps driving our roadmap', 'design is always scrambling to catch up', 'our engineers just build what they're told', or 'customers never adopt what we build.' Scores 10 root causes of product failure across startup, growth, and enterprise stages and produces a prioritized dysfunction report. For culture-wide assessment (innovation vs. execution), use product-culture-assessment. For team-level behaviors and velocity, use product-team-health-diagnostic.

Use this skill when you are:
Preconditions: you have at least one of:
Agent: Before scoring, clarify the company stage — approximately how many engineers are there, and has the company found product/market fit? Stage determines which failure patterns are most likely and which remediations are appropriate.
The 10 root causes below come directly from Cagan's analysis of the waterfall model (Figure 6.1) practiced by the majority of companies: Ideas → Business Case → Roadmap → Requirements → Design → Build → Test → Deploy. Each root cause is a structural problem in this pipeline, any one of which can derail product efforts.
The two inconvenient truths that make the pipeline especially dangerous:
1. At least half of product ideas simply won't work — customers don't use them, or they don't deliver the expected value.
2. Even the ideas that do have potential typically require several iterations before they deliver business value.
These truths mean the pipeline is not merely inefficient — it is structurally set up to waste the majority of engineering effort.
WHY: The three stages have distinct failure patterns. Applying enterprise remediation to a startup wastes time; applying startup advice to an enterprise ignores real structural constraints.
Classify the company into one of three stages based on engineer count and product/market fit status:
| Stage | Profile | Primary Failure Risk |
|---|---|---|
| Startup | Fewer than ~25 engineers; pre-product/market fit | Burning runway on wrong product; not doing discovery at all |
| Growth | 25 to several hundred engineers; scaling a proven product | Teams don't see the big picture; technical debt accumulates; leadership mechanisms stop scaling |
| Enterprise | Hundreds of engineers; established product and brand | Stakeholders protect existing business; innovation atrophies; design by committee; lack of empowerment |
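The classification above can be sketched as a small helper (a minimal illustration, not part of the skill itself; the 25- and 300-engineer thresholds approximate "fewer than ~25" and "several hundred", and the `has_pmf` flag is an assumed input):

```python
def classify_stage(engineer_count: int, has_pmf: bool) -> str:
    """Classify company stage per the table above.

    Thresholds are approximate: 300 stands in for "several hundred"
    and should be tuned to the organization being assessed.
    """
    if engineer_count < 25 and not has_pmf:
        return "startup"
    if engineer_count < 300:
        return "growth"
    return "enterprise"
```

Note that in this sketch a sub-25 team that has already found product/market fit falls through to "growth", since the startup failure risks above assume pre-fit.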
Stage-specific signals:
Startup:
Growth:
Enterprise:
For each root cause, score based on the evidence available. WHY: The 10 causes are not equally easy to spot — some are visible in artifacts (roadmaps, sprint boards), others require observing team behavior. Scoring each separately prevents a single visible problem from obscuring others.
Scoring rubric:
| Score | Meaning |
|---|---|
| 2 | Active dysfunction — the root cause is clearly present and driving poor outcomes |
| 1 | Partial — the root cause is present but partially mitigated |
| 0 | Absent — the team has addressed or does not exhibit this dysfunction |
Root cause 1: Sales/stakeholder-driven ideas
Description: Ideas for new product features originate primarily from sales ("we need this to close a deal"), executives, or internal stakeholders rather than from customer insight, data analysis, or the product team's own discovery work.
Detection signals:
Why it matters: This is not the source of the best product ideas. It also destroys team empowerment — the team is there only to implement, not to solve problems. The team becomes mercenaries rather than missionaries.
Stage context:
Root cause 2: Unknowable business cases
Description: Prioritization is gated on a business case that requires knowing two things that cannot actually be known before building: how much revenue/value the idea will generate, and how much it will cost to build.
Detection signals:
Why it matters: We cannot know either input to the business case at this stage. Revenue depends entirely on how good the solution turns out to be — some ideas generate nothing at all (this is confirmed by A/B testing data). Cost depends on the actual solution, which hasn't been designed. The business case process creates false certainty while adding overhead without reducing the risk of building the wrong thing.
Root cause 3: Roadmap as commitment
Description: The product roadmap is treated as a commitment to stakeholders about what will be built and when, rather than as a prioritized hypothesis about how to create value.
Detection signals:
Why it matters: Most roadmap items are features and projects, not outcomes. The roadmap model predictably leads to teams shipping things that don't meet objectives — orphaned projects that were completed but didn't move the needle. Projects are output; product is about outcome.
Root cause 4: PM as project manager
Description: The PM role is primarily about gathering requirements, writing user stories, and tracking delivery — project coordination — rather than discovering what customers need and finding the best solution.
Detection signals:
Why it matters: This is 180 degrees from the reality of modern product management. Gathering requirements and documenting them for engineers is project management. Product management is discovering what customers need, finding a solution that works for the business, and working collaboratively with design and engineering to bring it to life.
Root cause 5: Design brought in too late
Description: UX and product design are engaged after requirements are defined, treating design as execution rather than as a discovery and problem-solving function.
Detection signals:
Why it matters: When design enters after requirements are set, the fundamental problem-solution fit has already been locked in. Design can only "put a coat of paint on the mess." The UX designers know this is not good, but they try to make it as nice as possible given the constraints. The real value of design — finding the right solution before anything is built — is lost entirely.
Root cause 6: Engineers excluded from ideation
Description: Engineers only see product work when it arrives at sprint planning as a defined requirement or design spec, excluding them from the discovery and ideation process.
Detection signals:
Why it matters: Engineers are typically the best single source of innovation in a product team. By using engineers only for delivery, the organization gets approximately half their value. Engineers are aware of technical capabilities and constraints that enable entirely new product possibilities that neither PMs nor designers would think of — but only if they are in the room when problems are being explored.
Root cause 7: Agile for delivery only
Description: Agile methods (Scrum, Kanban) are applied exclusively to the engineering delivery phase, while the upstream process — idea sourcing, business cases, roadmaps, requirements — remains a waterfall pipeline.
Detection signals:
Why it matters: Teams using Agile in this way are getting approximately 20% of the actual value and potential of Agile methods. The core Agile benefit — fast feedback loops to learn and adapt — requires that discovery and definition are also iterative, not just delivery. What you are actually seeing is Agile for delivery, but the rest of the organization and context is waterfall.
Root cause 8: Project-centric, not product-centric
Description: The organization funds, staffs, and measures projects rather than products — treating product work as a series of discrete initiatives with start and end dates rather than as ongoing discovery and improvement.
Detection signals:
Why it matters: Projects are output; product is about outcome. A project-centric model predictably leads to orphaned launches — something was shipped but didn't meet its objectives, and no one owns improving it. There is simply no way to build strong products without the ability to iterate and improve based on real-world data.
Root cause 9: Customer validation at the end
Description: Customer feedback and validation only happen after the product is built — during user acceptance testing, beta programs, or post-launch — rather than during discovery before investment is made.
Detection signals:
Why it matters: This is the biggest flaw of the waterfall process. All the risk is at the end. The key principle in Lean methods is to reduce waste, and one of the biggest forms of waste is to design, build, test, and deploy a feature or product only to find out it is not what was needed. Many teams believe they are applying Lean principles while following exactly this pattern — trying out ideas in one of the most expensive, slowest ways possible.
Root cause 10: Opportunity cost ignored
Description: The cost of not doing alternative work — the opportunity cost — is never accounted for when evaluating the work the team is currently doing.
Detection signals:
Why it matters: While the team is busy executing a process that generates a high rate of waste, the biggest loss is usually not the wasted effort itself — it is the opportunity cost of what the organization could have and should have been doing instead. That time and money cannot be recovered. The value of discovering what not to build — and redirecting to higher-value work — is invisible until it is calculated.
Sum the scores across all 10 root causes:
Total dysfunction score = sum of scores (0–20 scale)
| Total Score | Severity | Interpretation |
|---|---|---|
| 16–20 | Critical | Fundamental process transformation required; the process is waterfall disguised as Agile at a systemic level |
| 11–15 | High | Multiple serious dysfunctions; significant innovation and adoption risk |
| 6–10 | Moderate | Several isolated dysfunctions; targeted fixes will produce meaningful improvement |
| 1–5 | Low | Minor process gaps; incremental tuning sufficient |
| 0 | Healthy | Process is largely sound |
Automatic escalation: Root causes 5 (late design), 6 (engineers excluded), and 9 (late customer validation) each represent structural risks that can independently invalidate an entire product process. If any of these three score 2, escalate the overall severity by one level regardless of total score.
WHY: These three are the core of what distinguishes product discovery from project execution. A team that scores well on the other seven but has all three of these active is still fundamentally not doing product development — they are doing project delivery.
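The scoring, severity bands, and escalation trigger above can be expressed together as one function (a sketch only; the dict-of-scores input shape and the function name are assumptions, not part of the skill):

```python
ESCALATION_CAUSES = {5, 6, 9}  # late design, excluded engineers, late validation

# (floor, label) pairs from the severity table, checked top-down.
SEVERITY_BANDS = [(16, "Critical"), (11, "High"), (6, "Moderate"),
                  (1, "Low"), (0, "Healthy")]
LEVELS = ["Healthy", "Low", "Moderate", "High", "Critical"]

def assess(scores):
    """Map {root-cause number (1-10): rubric score 0/1/2} to (total, severity)."""
    total = sum(scores.values())
    severity = next(label for floor, label in SEVERITY_BANDS if total >= floor)
    # Automatic escalation: causes 5, 6, or 9 at score 2 bump severity one level.
    if any(scores.get(c) == 2 for c in ESCALATION_CAUSES):
        severity = LEVELS[min(LEVELS.index(severity) + 1, len(LEVELS) - 1)]
    return total, severity
```

For example, a team scoring 1 on every cause lands at 10/20 (Moderate); if cause 5 is a 2 instead, the total becomes 11 (High) and the escalation trigger bumps it to Critical.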
WHY: The same root cause has different remediation paths depending on company stage. A startup cannot adopt enterprise-scale discovery rituals; an enterprise cannot simply "empower the team" without structural change.
Startup remediations (pre-product/market fit):
Growth-stage remediations (scaling a proven product):
Enterprise remediations (consistent innovation challenge):
WHY: This is the most common process misdiagnosis. Teams believe they are Agile because they use Scrum, but the upstream process that feeds the sprint is waterfall. Naming this explicitly is essential for any remediation conversation.
The pattern is present if ALL of the following are true:
Diagnostic test — ask these three questions:
Report label: If the pattern is detected, explicitly state: "This process is waterfall-disguised-as-agile. The Agile practices in use (sprints, standups, retrospectives) apply to approximately 20% of the product lifecycle — the delivery phase. The upstream process remains sequential and output-focused."
Structure the output as:
## Product Process Dysfunction Report
**Organization/Team:** [name]
**Company Stage:** [startup / growth / enterprise]
**Assessment Date:** [date]
---
### Overall Dysfunction Score: [X/20] — [SEVERITY]
**Waterfall-Disguised-as-Agile:** [Yes / No / Partial]
| Root Cause | Score | Severity | Key Signal Observed |
|------------|-------|----------|---------------------|
| 1. Sales/stakeholder-driven ideas | X | [label] | [1-sentence evidence] |
| 2. Unknowable business cases | X | [label] | [1-sentence evidence] |
| 3. Roadmap as commitment | X | [label] | [1-sentence evidence] |
| 4. PM as project manager | X | [label] | [1-sentence evidence] |
| 5. Design brought in too late | X | [label] | [1-sentence evidence] |
| 6. Engineers excluded from ideation | X | [label] | [1-sentence evidence] |
| 7. Agile delivery only (20% value) | X | [label] | [1-sentence evidence] |
| 8. Project-centric not product-centric | X | [label] | [1-sentence evidence] |
| 9. Customer validation at the end | X | [label] | [1-sentence evidence] |
| 10. Opportunity cost ignored | X | [label] | [1-sentence evidence] |
---
### Two Inconvenient Truths Assessment
| Truth | Impact Given Current Process |
|-------|------------------------------|
| At least 50% of ideas won't work | [How many ideas are being built without pre-validation? What is the estimated waste?] |
| Good ideas need several iterations | [Does the process allow for post-launch iteration? Is there team ownership that enables it?] |
---
### Automatic Escalation Triggers
[List any of causes 5, 6, 9 scoring 2 and their escalation impact]
---
### Waterfall-Disguised-as-Agile Analysis
[Result of the three diagnostic questions. If pattern detected, state it explicitly.]
---
### Stage-Appropriate Remediation Plan
**Company Stage:** [startup / growth / enterprise]
Ordered by: (1) escalation triggers first, (2) highest score, (3) cross-cause impact
| Priority | Root Cause | Current State | Target State | Stage-Specific Action |
|----------|-----------|---------------|--------------|----------------------|
| 1 | ... | ... | ... | ... |
---
### Summary
[3–5 sentences: what the process looks like from the outside, what is actually broken, the primary waste being generated, and the single most important change to make first]
Before delivering the report:
WHY: The most common failure of a dysfunction diagnosis is that it becomes a laundry list that overwhelms rather than a prioritized argument for change. The report must end with a clear "start here."
Full root cause detail with extended detection signals and remediation patterns:
references/root-cause-reference.md
This skill is licensed under CC-BY-SA-4.0. Source: BookForge — Inspired: How to Create Tech Products Customers Love, by Marty Cagan.
This skill is standalone. Browse more BookForge skills: bookforge-skills