Install
openclaw skills install bookforge-activation-funnel-diagnostic

Use this skill to diagnose where in an activation funnel users drop off and to decide between removing friction or adding "positive friction" (guided steps) to...

Use this skill when users are signing up but not coming back — the classic activation gap. Specifically run it when:
Prerequisite: The aha moment must be defined. If it is not, run north-star-metric-selector first — the aha moment is the activation target, and diagnosing a funnel without knowing its destination produces useless results.
Before starting, confirm you have or can locate:
| Input | Required | Expected Format |
|---|---|---|
| funnel-metrics.csv | Required | Columns: step_name, users_entered, users_completed, channel (optional) |
| activation-flow.md | Required | Prose or numbered list describing each onboarding step |
| survey-responses.md | Optional | User verbatim responses at drop-off points, or email/interview notes |
| Aha moment definition | Required | One sentence: the moment users first experience core product value |
If the aha moment is not confirmed, ask: "What is the single action or outcome that makes this product feel indispensable to a new user?" Do not proceed until you have an answer.
Ask the growth PM to state the aha moment in one sentence. If they cannot, surface a working hypothesis from the activation flow doc ("completing first X" or "seeing first Y") and ask them to confirm or correct it.
Why: The aha moment is the activation target — every funnel step is evaluated by how well it moves users toward that moment. Optimizing a funnel without a defined endpoint means you may be improving steps that lead nowhere near core value. The aha moment is defined through research; it is never assumed.
Typical aha moment patterns:
Open funnel-metrics.csv and confirm the required columns are present: step_name, users_entered, users_completed. The channel column is optional, but if it is present it must be carried through to the segmentation step.
Compute for each step:
conversion_rate = users_completed / users_entered (multiply by 100 to report as a percentage)
drop_off_count = users_entered - users_completed
drop_off_rate = 1 - conversion_rate
Flag any step where drop_off_rate > 0.40 (over 40% of entering users do not complete the step) as a high-priority investigation point.
Why: Raw user counts obscure the conversion shape. Computing rates per step reveals where the funnel narrows most sharply. The highest drop-off step — not the first step, not the last — is the highest-leverage point for experimentation. Treating all steps equally wastes experiment budget on low-impact changes.
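The per-step computation above can be sketched in Python. The sample CSV is illustrative only, not real funnel data:

```python
import csv
import io

# Illustrative funnel data; replace with the contents of funnel-metrics.csv.
SAMPLE = """step_name,users_entered,users_completed
App download,1000,1000
Account created,1000,680
First action,680,279
Aha moment,279,64
"""

def funnel_rates(csv_text, flag_threshold=0.40):
    """Compute per-step conversion and drop-off; flag steps losing >40% of entrants."""
    rows = []
    for r in csv.DictReader(io.StringIO(csv_text)):
        entered = int(r["users_entered"])
        completed = int(r["users_completed"])
        conversion = completed / entered          # fraction, 0..1
        rows.append({
            "step": r["step_name"],
            "conversion_pct": round(conversion * 100, 1),
            "drop_off_count": entered - completed,
            "flag": (1 - conversion) > flag_threshold,  # high-priority investigation point
        })
    return rows

for row in funnel_rates(SAMPLE):
    print(row)
```

With this sample, "First action" and "Aha moment" are flagged (drop-off rates of roughly 59% and 77%), while "Account created" at a 32% drop-off is not.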
Read activation-flow.md. For each step in the funnel metrics, map it to the corresponding description in the flow doc. Note:
Why: Funnel data shows where users drop off; the flow doc shows what they are being asked to do at that point. The combination reveals the gap between what the product asks and what users are willing to do. A high drop-off rate on a "create account" step means something different than a high drop-off on "configure your first workflow" — the flow doc supplies the context that the CSV cannot.
Construct a markdown table with steps as rows. If the channel column exists in the CSV, add columns for each channel. Compute per-channel conversion rates for each step.
| Step | Overall Conv% | Organic | Paid | Referral | Social |
|-------------------|---------------|---------|------|----------|--------|
| App download | 100% | 100% | 100% | 100% | 100% |
| Account created | 68% | 74% | 51% | 81% | 62% |
| First action | 41% | 48% | 29% | 57% | 38% |
| Aha moment | 23% | 31% | 14% | 38% | 21% |
Flag any channel-step combination whose conversion rate is less than half the cross-channel average for that step (2× worse than average). These are broken-channel signals.
Why: Averaging across channels hides broken acquisition paths. A paid channel that converts at half the rate of organic at the "first action" step indicates a language or expectation mismatch — the ad promised something the onboarding does not deliver. Fixing the onboarding for everyone does not solve a channel-specific mismatch; it dilutes the fix. Channel segmentation before diagnosis is not optional.
Name the single step with the highest absolute drop_off_count. This is the primary intervention target. If two steps are close, pick the one earlier in the funnel — fixing it compounds downstream.
State it explicitly:
Why: The highest drop-off step represents the most users who gave up before experiencing the product's core value. Every user who drops here is a user the acquisition spend paid to reach but failed to convert. This step is where the diagnosis concentrates.
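The target-selection rule above can be sketched as follows. The 5% "close" band is an assumption of this sketch — the text says only "if two steps are close" — and the step names and counts are hypothetical:

```python
# Steps must be listed in funnel order, top to bottom.
STEPS = [
    {"step": "Account created", "drop_off_count": 320},
    {"step": "First action",    "drop_off_count": 401},
    {"step": "Aha moment",      "drop_off_count": 215},
]

def primary_target(steps, close_band=0.05):
    """Return the step with the highest absolute drop-off count;
    on a near-tie (within close_band of the max), prefer the earlier step."""
    worst = max(s["drop_off_count"] for s in steps)
    for s in steps:  # funnel order, so the first match is the earliest step
        if s["drop_off_count"] >= (1 - close_band) * worst:
            return s["step"]

print(primary_target(STEPS))
```

With these counts, "First action" is the primary intervention target; if "Account created" had lost 390 users instead of 320, the near-tie rule would select it because it sits earlier in the funnel.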
Use two sources in priority order:
Source A — Survey data (if available). Read survey-responses.md. Look for recurring themes: confusion about what to do next, missing information, unexpected requirements, unclear value, distrust, technical problems. Cluster responses by theme. Do not project your own assumptions onto them.
High-signal question patterns to look for in the data:
Source B — Structural inference (if no survey data). Examine the flow doc description of the high-drop-off step. Ask:
Why: Funnel data is behavioral; it shows that users drop, not why. Survey data is the only direct source of the reasoning behind behavior. Inferring from flow structure is second-best but necessary when survey data does not exist. The book's clearest lesson from the HubSpot Sidekick case: teams that assumed they understood drop-off causes (poor product education) ran 11 failed experiments. The real cause (users needed a trigger to act, not more explanation) only emerged from deeper data analysis and user feedback.
Apply this formula to evaluate the drop-off step:
DESIRE – FRICTION = CONVERSION RATE
Diagnosis routes:
Route A — Remove friction. Apply when:
Remove-friction tactics: single sign-on (Facebook/Google/LinkedIn login); fewer required fields; deferred account creation (let users start using the product before signing up); pre-filling known information; clearer copy and error messages.
Route B — Add positive friction. Apply when:
Positive friction tactics: a learn flow — guided steps that show users what the product does while getting them to take small actions (interest selection, profile setup, first content creation); progress indicators; questionnaires that both collect data and create commitment; gamification (missions, milestones, earned rewards) where the rewards have clear relevance to core value.
The counterintuitive rule: More steps in onboarding is not always worse. Pinterest's addition of a topic-selection screen increased activation 20%. Twitter's learn flow — which required new users to follow accounts and set up a profile before arriving at a feed — produced users with a live feed on first visit instead of an empty one. The question is never "how many steps?" but "does each step help users arrive at the aha moment with greater confidence and context?"
Why: DESIRE and FRICTION are independent variables. A product with strong desire (early adopters, strong referrals) can tolerate high friction — users push through. A product reaching mainstream users or users who came through a lower-intent channel needs low friction at the exact same steps. The formula makes the diagnostic explicit: if desire is high and conversion is still low, friction is the problem. If desire is low, adding guided steps to help users understand value is the fix — removing friction alone will not help users who do not yet see why they should complete the step.
Produce a ranked list of 3–6 experiment candidates targeting the highest-drop-off step. Each entry includes:
Prioritize low-effort, fast-signal experiments first. A simple copy change or form-field removal can be tested in days; a full learn flow redesign cannot. Start small — the HubSpot Sidekick team ran 11 failed experiments before finding the trigger message that moved the needle.
Why: An experiment list without prioritization creates a queue that teams work through in arbitrary order. Low-effort experiments run faster, generate learnings sooner, and compound. If a low-effort fix solves the problem, the high-effort rebuild was never needed. Ranking by effort and signal speed is the minimum viable prioritization for activation experiments.
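A minimal ranking sketch, sorting lowest effort first and fastest expected signal second. The candidate names, effort scores, and day counts are hypothetical:

```python
# Hypothetical experiment candidates targeting the high-drop-off step.
CANDIDATES = [
    {"name": "Redesign learn flow",       "effort": 5, "days_to_signal": 30},
    {"name": "Remove two form fields",    "effort": 1, "days_to_signal": 5},
    {"name": "Rewrite empty-state copy",  "effort": 1, "days_to_signal": 3},
    {"name": "Add progress indicator",    "effort": 3, "days_to_signal": 14},
]

# Lowest effort first; ties broken by fastest expected signal.
ranked = sorted(CANDIDATES, key=lambda c: (c["effort"], c["days_to_signal"]))
for i, c in enumerate(ranked, 1):
    print(f"{i}. {c['name']}")
```

This is deliberately simpler than full ICE scoring; it is only the "minimum viable prioritization" described above.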
For full experiment scoring (ICE: Impact × Confidence × Ease), pass the candidates list to growth-experiment-prioritization-scorer.
Write two files:
activation-funnel-diagnosis.md — contains:
activation-experiment-candidates.md — contains:
growth-experiment-prioritization-scorer for ICE scoring

Why: Two separate files keep the diagnosis (what is wrong and why) distinct from the experiment backlog (what to try). The diagnosis is a durable artifact that explains the current state; the experiment list is a working backlog that will change as experiments run. Keeping them separate prevents the team from treating hypotheses as diagnoses before they are tested.
The aha moment is defined, not assumed. Diagnosing an activation funnel without a clear aha moment is optimizing toward an undefined goal. The aha moment comes from product research (must-have surveys, qualitative interviews) — not from guessing the most impressive-looking step in the onboarding flow.
Segment before optimizing — channel averages hide broken channels. A 30% average activation rate across channels may be a 50% rate in organic and 15% in paid. Fixing the onboarding for everyone does not fix the paid channel. Segmentation is not a nice-to-have; it determines whether your interventions are targeted or scatter-shot.
Remove vs. add friction is a diagnostic decision, not a preference. "Simplify everything" is a default, not a diagnosis. Sometimes more steps improve activation by ensuring users arrive at the aha moment with context and commitment. The question is always: why is this step causing drop-off — confusion/blocking (remove friction) or lack of context/commitment (add positive friction)?
Positive friction is counterintuitive and often correct for new-concept products. If your product asks users to adopt a new behavior or understand a novel concept, stripping all onboarding steps will produce users who arrive at core functionality with no idea what to do. Guided steps that teach and commit simultaneously — as Twitter's learn flow demonstrated — can generate higher activation than minimal-friction raw product access.
Survey completers, not just abandoners. People who passed a difficult step know what nearly stopped them. "What's the one thing that nearly stopped you from completing?" asked at the order confirmation or activation screen consistently produces higher response rates and more actionable qualitative data than exit surveys of people who left.
Triggers must be tested, not assumed helpful. Push notifications and email reactivation messages are among the most powerful and most abused activation tools. Deploy them only when the rationale is clear value to the user (a sale on a saved item, a relevant feature alert) — not to inflate short-term engagement statistics. Ask for notification opt-in only after users have experienced enough value to understand why they would want the messages. Test trigger timing, frequency, and copy as experiments, not as settled design.
Situation: A B2B analytics tool has 1,200 users sign up per month. Only 180 (15%) reach the aha moment (generating a first report). The team has funnel metrics but no survey data.
Process summary:
Output:
activation-funnel-diagnosis.md: confirms empty-state as root cause, paid channel mismatch, positive-friction recommendation

activation-experiment-candidates.md: 4 experiments ranked by effort

Situation: A recipe and grocery app has 8,000 weekly installs. Funnel: app open (100%), browse items (72%), add to cart (48%), enter payment info (31%), first purchase (19%). Team has exit survey responses from users who reached the cart but did not purchase.
Process summary:
Output:
activation-funnel-diagnosis.md: payment-step friction identified, two specific causes from survey data, remove-friction recommendation

activation-experiment-candidates.md: 3 experiments ranked, first two directly address surveyed reasons for abandonment

references/activation-concepts.md — aha moment definition, DESIRE–FRICTION=CONVERSION formula, positive friction definition, NUX principles, BJ Fogg behavior model

references/case-studies.md — HubSpot Sidekick segmentation case, Airbnb sign-up prompt experiments, Twitter learn flow, Pinterest topic-selection onboarding, Qualaroo 50-response tipping point

CC BY-SA 4.0 — BookForge Skills
Source book: Hacking Growth by Sean Ellis and Morgan Brown. Skills distilled from book content under fair use for transformative educational purposes. See BookForge copyright framework.
clawhub install bookforge-north-star-metric-selector — defines the aha moment this skill uses as its activation target; run first if the aha moment is not confirmed

clawhub install bookforge-growth-experiment-prioritization-scorer — apply ICE scoring (Impact × Confidence × Ease) to the experiment candidates this skill produces

clawhub install bookforge-retention-phase-intervention-selector — initial retention is a continuation of activation; users who activated but do not return are a retention problem that begins at the activation boundary

clawhub install bookforge-product-market-fit-readiness-gate — if activation rates are catastrophically low across all channels and positive-friction experiments fail, the product may not yet be must-have; this gate diagnoses whether product work should precede growth work