Install
openclaw skills install bookforge-retention-phase-intervention-selector

Diagnose which retention phase (initial / medium / long-term) is failing for a cohort, then prescribe the interventions appropriate to that phase. The core insight: a week-1 drop-off and a month-6 decline have completely different root causes and require completely different treatments. Mixing them up wastes quarters of experiment cycles.
Research by Frederick Reichheld of Bain & Company found that a 5% increase in customer retention rates increases profits by 25 to 95 percent. Retention is not a vanity metric — it is a leverage point. But only if you know which lever to pull.
Use this skill when:
Prerequisite signals: Your product has cleared product-market fit (stable retention curve exists for at least one cohort segment). If the retention curve never stabilizes, run product-market-fit-readiness-gate first — this skill cannot fix a pre-PMF product.
Before starting, collect:
Read both input files. Extract:
Why: Phase boundaries are product-type-specific. The same period label ("Month 1") maps to a different phase depending on the product: a Month 1 drop in a mobile app is medium-term, while a Month 1 drop in a SaaS product is still initial. Getting this wrong leads to the wrong intervention type.
Map the product type to phase boundary definitions:
| Product Type | Initial Phase Ends | Medium Phase Ends | Long-Term Begins |
|---|---|---|---|
| Mobile app | ~Day 1 | ~Week 2–4 | Month 1+ |
| Social network | ~Week 1–2 | ~Month 1–2 | Month 3+ |
| SaaS (subscription) | ~Month 1 or first quarter | ~Month 3–6 | Month 6 / Quarter 3+ |
| E-commerce | ~Day 90 (first 90 days) | ~Month 4–9 | Month 10+ / Year 2 |
| Consumer marketplace | ~Week 2 | ~Month 1–3 | Month 4+ |
These are reference benchmarks (sourced from Lean Analytics sector data and the Balfour three-phase model). Calibrate to your own cohort behavior: the initial phase ends where your churn rate begins to stabilize, not at a fixed calendar point.
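As a minimal sketch, the reference boundaries can be expressed as a lookup. The day values and product-type keys below are illustrative translations of the table, not fixed constants; replace them with your calibrated boundaries.

```python
# Illustrative phase boundaries in days, adapted from the table above.
# Calibrate to where YOUR churn rate stabilizes, not these defaults.
PHASE_BOUNDARIES_DAYS = {
    # product_type: (initial_phase_ends, medium_phase_ends)
    "mobile_app":     (1, 30),    # ~Day 1, ~Week 2-4
    "social_network": (14, 60),   # ~Week 1-2, ~Month 1-2
    "saas":           (30, 180),  # ~Month 1, ~Month 3-6
    "ecommerce":      (90, 270),  # first 90 days, ~Month 4-9
    "marketplace":    (14, 90),   # ~Week 2, ~Month 1-3
}

def phase_for(product_type: str, day: int) -> str:
    initial_end, medium_end = PHASE_BOUNDARIES_DAYS[product_type]
    if day <= initial_end:
        return "initial"
    if day <= medium_end:
        return "medium"
    return "long-term"

# The mistake called out above: the same Day 30 drop sits in a
# different phase depending on product type.
print(phase_for("mobile_app", 30))  # medium
print(phase_for("saas", 30))        # initial
```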
Why: Applying a NUX optimization experiment to a 6-month cohort decline is a common and expensive mistake. The boundaries enforce discipline about what type of problem is being solved.
Construct the retention curve from the CSV. If no charting tool is available, build a text table:
| Period | W1 | W2 | W4 | W8 | W12 | W16 |
|---|---|---|---|---|---|---|
| Jan cohort | 68% | 52% | 41% | 39% | 38% | 37% |
| Feb cohort | 71% | 55% | 44% | 38% | 35% | 32% |
| Mar cohort | 65% | 41% | 29% | 22% | 19% | 17% |
Identify:
Why: The shape of the curve tells you which phase is responsible. Reading curve shape is more diagnostic than reading a single retention number. Cohort-to-cohort degradation is a separate failure mode that must be flagged before prescribing interventions.
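The plateau check can be scripted. This is a sketch under an assumed `cohort,period,retention` CSV layout (adapt the column names and the tolerance to your own export):

```python
import csv
import io

# Assumed layout: one row per (cohort, period) with retention as a fraction.
SAMPLE = """cohort,period,retention
Jan,W8,0.39
Jan,W12,0.38
Jan,W16,0.37
Mar,W8,0.22
Mar,W12,0.19
Mar,W16,0.17
"""

def load_curves(text):
    """Build {cohort: {period: retention_fraction}} from the CSV export."""
    curves = {}
    for row in csv.DictReader(io.StringIO(text)):
        curves.setdefault(row["cohort"], {})[row["period"]] = float(row["retention"])
    return curves

def plateaued(curve, last_periods, tolerance=0.02):
    """Flat tail: every step across the final periods moves <= tolerance."""
    vals = [curve[p] for p in last_periods]
    return all(abs(b - a) <= tolerance for a, b in zip(vals, vals[1:]))

curves = load_curves(SAMPLE)
print(plateaued(curves["Jan"], ["W8", "W12", "W16"]))  # True: 39 -> 38 -> 37 flattens
print(plateaued(curves["Mar"], ["W8", "W12", "W16"]))  # False: still falling
```

A flattening tail means a retained segment exists; a tail that never flattens is itself a diagnostic signal (see the fitness-app example later in this skill).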
Apply the following decision rules:
Initial phase broken:
Medium phase broken:
Long-term phase broken:
Resurrection candidate:
Document the diagnosis in one clear sentence: "The [cohort name] cohort shows [phase] phase failure, characterized by [specific curve observation], which indicates [root cause hypothesis]."
Why: Naming the phase locks in the intervention type before generating experiment ideas. Without this gate, teams generate a mix of initial- and long-term tactics and run them simultaneously, making it impossible to isolate what worked.
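The naming gate can be approximated in code. This is a sketch, not the skill's actual decision rules: it names the phase that loses the largest share of the users who entered it, given calibrated boundary indices into the curve.

```python
def diagnose_phase(retention, initial_end, medium_end):
    """retention: ordered fractions, index 0 = full cohort (1.0).
    initial_end / medium_end: indices where each calibrated phase ends.
    Returns the phase that loses the largest share of its entrants."""
    lost = lambda a, b: 1 - retention[b] / retention[a]
    drops = {
        "initial": lost(0, initial_end),
        "medium": lost(initial_end, medium_end),
        "long-term": lost(medium_end, len(retention) - 1),
    }
    return max(drops, key=drops.get)

# SaaS worked example: 60% active in Month 2, 28% in Month 4, 18% in Month 6.
print(diagnose_phase([1.00, 0.60, 0.28, 0.18], initial_end=1, medium_end=2))
# -> medium: that phase loses ~53% of its entrants,
#    vs 40% (initial) and ~36% (long-term)
```

Relative loss per phase, rather than absolute percentage points, is what keeps a large-but-acceptable initial drop from drowning out a medium-phase failure.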
Before prescribing interventions, check whether aggregate metrics are hiding the cohort-level problem.
Compute: for the three oldest cohorts in the dataset, what fraction of each cohort is still active at its current age? If those cohorts have decayed far below the headline retention number while overall user counts are stable or growing, early adopters are churning and new user volume is masking the defections.
Signal: total users are flat or growing, but the Jan and Feb cohorts are at 15% retention by Month 6 while the May cohort is at 38% at Month 2.
If churn masking is detected: Flag it explicitly in the diagnosis. It changes the urgency framing — the company's retention health is worse than aggregate metrics suggest, and the gap will widen as acquisition slows.
Why: Teams that skip this check continue to interpret stable DAU as a retention success. The cohort-level picture reveals compounding LTV damage that will not appear in top-line metrics until acquisition slows or stops.
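A minimal sketch of the masking check, under assumed shapes: `cohorts` as `{name: {period: retention_fraction}}` with oldest cohorts first, `users_trend` as the month-over-month change in total users, and an illustrative 20% retention floor.

```python
# Matches the signal described above: Jan/Feb at ~15% by Month 6
# while the newest cohort sits at 38% at Month 2 and totals grow.
COHORTS = {
    "Jan": {2: 0.44, 6: 0.15},
    "Feb": {2: 0.42, 6: 0.16},
    "Mar": {2: 0.40, 6: 0.18},
    "May": {2: 0.38},
}

def churn_masking(cohorts, users_trend, floor=0.20, n_oldest=3):
    """Masking: the oldest cohorts have churned below `floor` while
    total user counts are flat or growing (users_trend >= 0)."""
    oldest = list(cohorts)[:n_oldest]  # dicts preserve insertion order
    deep_churn = all(min(cohorts[c].values()) < floor for c in oldest)
    return deep_churn and users_trend >= 0

print(churn_masking(COHORTS, users_trend=0.20))  # True: growth hides defections
```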
Select experiments from the appropriate intervention set:
The initial period is an extension of the activation experience. Interventions mirror activation tactics:
Run activation-funnel-diagnostic in parallel — initial retention and activation share root causes and experiment types.

For the medium phase, the goal is to make the product the default choice for the need it serves.
Long-term retention requires both maintaining the existing experience and expanding it over time.
Prong 1 — Optimize existing features:
Prong 2 — Introduce new features at a staged cadence:
Why two prongs: Long-term churn has two distinct drivers — diminishing returns from the current experience, and failure to discover new value. Optimizing existing features alone does not attract users back once they have mentally "finished" the product. New features alone create confusion without proper onboarding.
If the diagnosis is long-term phase failure and the team has been shipping features at high velocity, run the feature bloat check:
Detection criteria:
If feature bloat is detected: Recommend a feature audit. The Marketing Science Institute research (Thompson, Hamilton, Rust 2005) found that maximizing feature count for initial appeal decreases customer lifetime value — users are overwhelmed and the core value is obscured. The intervention is not more features but clearer progressive disclosure of existing ones.
Why: Feature velocity feels like progress. Without this check, a team can invest significant engineering effort in long-term retention experiments that actually make retention worse.
If a meaningful percentage of a cohort is dormant (users who were active in an early period but have not been active for several periods, without having formally canceled):
Why: Dormant users already know the product, which makes them cheaper to reactivate than acquiring a new user from scratch. But win-back campaigns without root cause diagnosis have low conversion rates and can create a "desperate" brand perception if poorly calibrated.
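Measuring the dormant share can be sketched as follows, assuming a hypothetical activity log of `{user_id: set of active period indices}`; the early-period and quiet-window definitions are illustrative.

```python
def dormant_share(activity, current_period, quiet_periods=3):
    """Fraction of the cohort active in an early period (0 or 1) but
    silent for the last `quiet_periods` periods."""
    def is_dormant(periods):
        active_early = bool(periods & {0, 1})
        active_recently = any(p > current_period - quiet_periods for p in periods)
        return active_early and not active_recently
    return sum(map(is_dormant, activity.values())) / len(activity)

log = {
    "u1": {0, 1, 2},        # early user, silent since period 2 -> dormant
    "u2": {0, 1, 5, 7, 8},  # still active
    "u3": {4, 5},           # joined late; not an early-period user
}
print(round(dormant_share(log, current_period=8), 2))  # 0.33
```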
Write two documents:
retention-phase-diagnosis.md
## Cohort Analyzed
[Cohort identifier, time grain, date range]
## Product Type and Phase Boundaries
[Product type → phase boundary table as applied to this product]
## Retention Curve Summary
[Text or table representation of curve shape]
## Phase Diagnosis
[Named phase: initial / medium / long-term / multiple]
[One-sentence characterization of curve behavior]
[Root cause hypothesis]
## Churn Masking Check
[Result: detected / not detected]
[Evidence if detected]
## Feature Bloat Check (long-term only)
[Result: risk present / not present]
[Evidence]
## Dormant User Population
[Percentage of cohort dormant, if applicable]
retention-intervention-plan.md
## Phase-Appropriate Interventions
[Ordered list of 5–8 specific experiments, each with:]
- Experiment name
- Hypothesis
- Target metric
- Implementation notes
## Resurrection Plan (if applicable)
[Interview protocol, win-back experiment candidates]
## Anti-Patterns to Avoid
[Churn masking / feature bloat warnings as applicable]
## Dependency Skills
[Links to activation-funnel-diagnostic, monetization-experiment-planner as appropriate]
Retention is phased — tactics do not transfer across phases. NUX optimization applied to a month-6 cohort does not address why long-term users disengage. Diagnosis before prescription is non-negotiable.
Cohort analysis is mandatory — point-in-time retention hides the pattern. Aggregate DAU or monthly active users can be stable or growing while early cohorts are in decline. Always diagnose from the cohort curve, not the headline number.
Acquisition hides churn — compute churn on a fixed cohort. Strong new user growth masks early-adopter defections. A company can be losing its core users while total counts look healthy. Cohort-level churn must be tracked independently.
Habit formation is not a trick — variable rewards work only for products that deserve habit. The Hook Model works when the product genuinely solves a recurring need. Applying engagement loop mechanics to a product that does not merit repeat use accelerates churn by irritating users, not retaining them.
Feature bloat is the long-term retention killer. Adding features at high velocity maximizes initial appeal but decreases customer lifetime value. More features do not equal more retention unless they are discoverable and relevant to existing use cases.
Resurrection is cheaper than new acquisition — but requires root cause first. Win-back campaigns without understanding why users left have low conversion rates and poor brand optics. Interview before you campaign.
Situation: A fitness tracking app shows that 65% of new users are active on Day 1, but only 22% return on Day 7, and only 14% return on Day 14. The curve never plateaus.
Product type: Mobile app. Initial phase = Day 1. Medium phase = Days 2–14. Long-term = Week 3+.
Diagnosis: Initial phase failure. 78% churn within the first week indicates users are not reaching a meaningful experience of core value before disengaging. The curve's failure to plateau confirms there is no retained user segment.
Churn masking check: Total downloads growing 20% month-over-month, which was masking the cohort-level pattern in dashboard reviews.
Interventions prescribed:
Run activation-funnel-diagnostic to identify the specific funnel step with the highest drop-off.

Not prescribed: Habit formation experiments, new feature development, or win-back campaigns — all medium- and long-term phase tactics.
Situation: A project management SaaS shows 60% of Month 1 users still active in Month 2, but only 28% in Month 4 and 18% in Month 6. The initial drop is acceptable for a SaaS product; the Month 2–4 drop is steep.
Product type: SaaS. Initial phase = Month 1. Medium phase = Months 2–5. Long-term = Month 6+.
Diagnosis: Medium phase failure. Users are successfully onboarded (strong Month 1 retention) but are not forming a habit of returning. The tool is not yet their default project management environment.
Churn masking check: Not detected — new sales are flat, so the cohort-level pattern is visible in aggregate.
Interventions prescribed:
Not prescribed: NUX redesign (initial phase is healthy), feature bloat audit (feature velocity is not the issue at Month 2–4).
This skill is licensed under CC-BY-SA 4.0. Distilled from Hacking Growth by Sean Ellis and Morgan Brown. Source book is copyright of its respective authors. This skill contains no verbatim excerpts — only synthesized frameworks, process adaptations, and original analysis.
product-market-fit-readiness-gate — Run this first. Retention curve stability is the second PMF signal; this skill assumes PMF has been achieved.
clawhub install bookforge-product-market-fit-readiness-gate
north-star-metric-selector — Retention improvements need a single metric to optimize toward. Select the retention-appropriate North Star before designing experiments.
clawhub install bookforge-north-star-metric-selector
activation-funnel-diagnostic — Initial retention and activation share root causes. Run in parallel when initial phase failure is diagnosed.
clawhub install bookforge-activation-funnel-diagnostic
monetization-experiment-planner — The natural next step once retention is stable. Increasing lifetime value requires a retained user base first.
clawhub install bookforge-monetization-experiment-planner