Ab Predictor

v2.0.0

Predict which ad hook, email subject, or social post wins before you spend a dollar. Depth 3 — confidence intervals on every prediction, rewrite surgery with...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for drivenautoplex1/ab-predictor.

Prompt preview (Install & Setup):
Install the skill "Ab Predictor" (drivenautoplex1/ab-predictor) from ClawHub.
Skill page: https://clawhub.ai/drivenautoplex1/ab-predictor
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ab-predictor

ClawHub CLI


npx clawhub@latest install ab-predictor
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (A/B hook prediction) match the provided files and runtime instructions. The SKILL.md and ab_predictor.py implement a rule-based scorer for multiple ICPs and an optional premium backend loaded from a local sibling file. The only runtime dependency declared is python3, which is reasonable for a Python CLI tool.
Instruction Scope
Runtime instructions are limited to running the included Python script with CLI flags (demo, text, variants, rewrite, cross-icp, etc.). The code will attempt to dynamically load a local optional module (content_resonance_scorer.py) from a sibling directory if present; that loaded file would be executed when present. The SKILL.md does not instruct reading unrelated system files or sending data to external endpoints.
Install Mechanism
No install spec or remote downloads — the skill is instruction-only and ships a Python file. This minimizes install-time risk; no archives or third-party package installs are declared.
Credentials
No environment variables, API keys, or credentials are required for the free tier. The premium integration is local-file based and does not declare remote credentials. The absence of requested secrets is proportionate to the described functionality.
Persistence & Privilege
The skill does not request always:true, does not require modifying other skills or global agent config, and does not ask for persistent elevated privileges. Autonomous invocation is enabled by platform default but not combined with other concerning flags.
Assessment
This skill appears internally consistent and runs locally with python3. Before installing or running:

  1. Inspect the bundled ab_predictor.py — it is the primary runtime.
  2. Be aware the script will dynamically execute a sibling file named content_resonance_scorer.py if one is placed in a content-resonance-scorer directory next to the skill. Only add such a file from a trusted source, because it will be executed.
  3. There are no remote downloads or required secrets for the free tier, but avoid running untrusted code on sensitive hosts.
  4. If you plan to use the premium backend, review that backend's code and provenance first, and run it in a sandbox if you have any doubt.
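The assessment notes that a sibling content_resonance_scorer.py is dynamically loaded when present. A minimal sketch of what that pattern typically looks like in Python (the loader below is illustrative, not the skill's actual source; only the file and directory names come from the review text):

```python
import importlib.util
from pathlib import Path

def load_premium_backend(skill_dir):
    """Import the optional premium module from a sibling directory.
    Importing EXECUTES the file's top-level code, so only place a
    trusted content_resonance_scorer.py there."""
    candidate = (Path(skill_dir).parent
                 / "content-resonance-scorer" / "content_resonance_scorer.py")
    if not candidate.is_file():
        return None  # free tier: rule-based scoring only
    spec = importlib.util.spec_from_file_location("content_resonance_scorer", candidate)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # runs the sibling file's top-level code
    return module
```

This is why vetting that file's provenance matters: anything in it runs with your user's privileges the moment it is imported.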

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

  • 🧠 Clawdis
  • Binary: python3 (any)
  • Latest version ID: vk971dfwyrc64fxnm2yefseakmx83r2a9
  • 136 downloads · 0 stars · 5 versions
  • Updated 1 month ago
  • v2.0.0 · MIT-0

A/B Hook Predictor — v2.0.0

Know which hook wins before you run the test. Depth 3 upgrade: every prediction now includes a confidence interval, specific rewrite surgery for losers (not just warning flags), and cross-ICP scoring to find the best audience for any hook.

What's New in v2.0.0

  • Confidence intervals — 91/100 (±8) instead of just 91/100. CI derived from dimension variance: concentrated signals → shakier prediction; spread signals → reliable.
  • Rewrite surgery — losing variants get specific fix instructions per ICP: not "add loss framing" but "add: 'Every month at 580 costs you $410 more than a buyer at 760.'"
  • Cross-ICP scoring (--cross-icp) — score any hook against all 5 ICP profiles to find its best audience. A hook written for crypto-mortgage buyers might score higher for veterans.
  • Quality gate (--min-score=N) — flag variants below threshold with ⛔ marker. Use in content pipelines.
  • Margin analysis — neck-and-neck (<10pt gap) vs dominant winner (>25pt) changes the recommendation: run a real test vs ship immediately.
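The variance-derived interval in the first bullet can be sketched as follows (the scaling constant k and the six dimension scores are illustrative assumptions, not the shipped calibration):

```python
import statistics

def confidence_interval(dim_scores, k=0.5):
    """Margin widens with dimension variance: a score carried by one
    or two dimensions (concentrated signal) is shakier than the same
    composite spread evenly across all dimensions."""
    return round(k * statistics.pstdev(dim_scores))

# Concentrated signal: two strong dimensions carry the score -> wide interval
print(confidence_interval([95, 90, 10, 5, 15, 20]))  # ±19
# Spread signal: every dimension contributes evenly -> tight interval
print(confidence_interval([40, 42, 38, 39, 41, 40]))  # ±1
```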

Free vs Premium

Free tier (no API key, no server needed):

  • --demo — full ranked comparison with rewrite surgery + cross-ICP table, zero API calls
  • --text — score a single piece of copy against any ICP, with --rewrite and --cross-icp support
  • --version — verify install
  • All 5 ICP profiles: crypto-mortgage, credit-repair, va-loan, realtor-partner, first-time-buyer
  • All depth-3 features (CI, rewrites, cross-ICP) run on rule-based scoring — no API key needed

Premium tier (content_resonance_scorer.py backend loaded):

  • --variants — full dimension breakdown on up to 5 variants
  • --json — structured output for agent pipelines
  • Richer CRS-backed feature scoring (TRIBE v2–calibrated weights)
  • Divergence detection between ICP-weighted score and CRS baseline
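Divergence detection, conceptually, is just a gap check between the two scores. A hedged sketch (the 15-point threshold and the return shape are assumptions, not the premium backend's actual interface):

```python
def detect_divergence(icp_score, crs_baseline, threshold=15):
    """Flag a variant when the ICP-weighted score and the CRS
    baseline disagree by more than `threshold` points."""
    gap = icp_score - crs_baseline
    return {"gap": gap, "divergent": abs(gap) > threshold}

print(detect_divergence(91, 60))  # {'gap': 31, 'divergent': True}
```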

The free tier is fully functional. Install and use immediately.

What this skill does

Takes 2-5 content variants and a target ICP, scores each against that ICP's neural weight profile, and returns a ranked list with:

  • Composite resonance score (0-100) — weighted by which psychological levers matter most to that specific buyer
  • Per-dimension flags — what's working, what's missing, specific fix suggestions
  • Ranked winner — clear call on which variant to run and why

The key insight: the same content performs differently for different buyer types. "Keep your Bitcoin" scores 91/100 for a crypto-mortgage ICP and 34/100 for a credit-repair lead. This tool makes that gap visible before you spend money testing it.
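The ICP-weighted composite described above can be sketched like this (dimension names, scores, and the normalization are illustrative; the real profiles ship in ab_predictor.py):

```python
def composite_score(dim_scores, icp_weights):
    """Weight each 0-100 dimension score by the ICP multiplier,
    then normalize back to a 0-100 composite."""
    weighted = sum(s * icp_weights.get(d, 1.0) for d, s in dim_scores.items())
    total = sum(icp_weights.get(d, 1.0) for d in dim_scores)
    return round(weighted / total)

# The same gain-heavy copy, scored under two ICP weight profiles:
scores = {"gain_framing": 90, "loss_framing": 20, "identity": 85}
crypto = {"gain_framing": 1.8, "loss_framing": 0.8, "identity": 1.7}
credit = {"gain_framing": 1.2, "loss_framing": 1.9, "identity": 1.3}
print(composite_score(scores, crypto))  # 75: gain/identity weighting rewards it
print(composite_score(scores, credit))  # 58: loss-averse profile penalizes it
```

This is the mechanism behind the 91-vs-34 gap described above: identical text, different multipliers.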

ICP profiles available

| ICP key | Buyer type | Top neural levers |
| --- | --- | --- |
| crypto-mortgage | BTC/ETH holder buying a home without selling | Gain framing, identity alignment, specificity |
| credit-repair | 500-680 score, shame-sensitive | Pacing/empathy, reframe, social proof |
| va-loan | Veteran who doesn't know their full benefit | Direct tone, identity (earned this), loss (leaving it unused) |
| realtor-partner | Agent who needs a lender they can trust | Reliability signals, social proof, B2B framing |
| first-time-buyer | First home, rate-shocked, overwhelmed | Simplicity, urgency (program deadlines), monthly payment framing |

CLI usage

# Demo: 3 hooks ranked against crypto-mortgage ICP (no API key needed)
python3 ab_predictor.py --demo

# Score one piece of copy against a specific ICP
python3 ab_predictor.py --text "You don't have to sell your BTC to buy a house." --product crypto-mortgage

# Score with rewrite surgery (specific fix instructions if below 70)
python3 ab_predictor.py --text "Buy a home today with great rates." --product va-loan --rewrite

# Compare variants from a JSON file
python3 ab_predictor.py --variants hooks.json --product va-loan

# Compare with rewrite surgery for losers
python3 ab_predictor.py --variants hooks.json --product credit-repair --rewrite

# Find best audience for a hook across all ICPs
python3 ab_predictor.py --variants hooks.json --cross-icp

# Quality gate: flag anything below 60
python3 ab_predictor.py --variants hooks.json --product first-time-buyer --min-score 60

# JSON output for pipelines (includes CI, rewrite suggestions)
python3 ab_predictor.py --variants hooks.json --product credit-repair --json --rewrite | jq '.[0]'

# Check version
python3 ab_predictor.py --version

variants.json format:

[
  {"label": "Hook A — Direct benefit", "text": "You earned zero down. Here's how to use it."},
  {"label": "Hook B — Loss frame", "text": "Every month you wait, another home goes under contract."},
  {"label": "Hook C — Identity", "text": "Veterans are buying homes with $0 down. Here's the exact process."}
]
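A quick pre-flight check before piping a file into --variants might look like this (an illustrative helper, not part of the skill; the 2-5 range comes from the limits stated above):

```python
import json

def load_variants(path, max_variants=5):
    """Load a hooks.json file and enforce the 2-5 variant range
    --variants expects, with label/text fields present on each entry."""
    with open(path) as f:
        variants = json.load(f)
    if not 2 <= len(variants) <= max_variants:
        raise ValueError(f"expected 2-{max_variants} variants, got {len(variants)}")
    for v in variants:
        if not v.get("label") or not v.get("text"):
            raise ValueError(f"variant missing label/text: {v}")
    return variants
```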

Demo output

$ python3 ab_predictor.py --demo

A/B Resonance Comparison — ICP: crypto-mortgage
================================================

#1  Hook C — Identity + concrete          91/100  ✅ WINNER
    ✓ Identity alignment: BTC holders — specific, in-group signal
    ✓ Gain framing: "unrealized gains", "appreciate while you build equity"
    ✓ Concrete noun density: $200K-$2M, 2022, Fannie Mae
    ✓ Loss avoidance framing: "zero capital gains event, zero coins sold"
    → Add urgency signal — crypto-mortgage window may not stay open

#2  Hook A — Loss frame                   72/100
    ✓ Second-person: "You don't have to sell..."
    ✓ Specificity: "No capital gains. No missed appreciation."
    ⚠ Missing: identity signal — crypto-mortgage ICP responds to in-group language
    ⚠ Low gain framing — tells them what to avoid but not what they get
    → Add a concrete outcome: "...while your BTC keeps appreciating"

#3  Hook B — Generic                      12/100  ❌ DO NOT RUN
    ✗ Third-person tone: "We offer..." — not second-person
    ✗ Compliance violation: "rates", "pre-approvals" — forbidden words
    ✗ No identity alignment for crypto-mortgage ICP
    ✗ Zero specificity — vague platitudes only
    → Rewrite from scratch. This won't convert the crypto-mortgage buyer.

Predicted winner: Hook C by 19 points over Hook A.
Key differentiator: Identity alignment + concrete specificity trigger reward circuit activation.

Scoring dimensions by ICP

Each ICP has a different neural weight profile — same dimensions, different multipliers:

| Dimension | crypto-mortgage | credit-repair | va-loan | realtor-partner | first-time-buyer |
| --- | --- | --- | --- | --- | --- |
| Gain framing | 1.8× | 1.2× | 1.0× | 1.1× | 1.3× |
| Loss framing | 0.8× | 1.9× | 1.4× | 1.5× | 1.6× |
| Identity alignment | 1.7× | 1.3× | 1.8× | 1.6× | 1.0× |
| Urgency | 0.6× | 1.4× | 1.2× | 0.8× | 1.5× |
| Social proof | 0.9× | 1.6× | 1.1× | 1.9× | 1.4× |
| Simplicity | 0.8× | 1.7× | 1.3× | 1.0× | 1.8× |
| Second-person | 1.2× | 1.5× | 1.6× | 0.9× | 1.4× |
| Concrete nouns | 1.6× | 1.1× | 1.3× | 1.7× | 1.2× |

Why profiles differ: the weights are validated against ICP research and TRIBE v2 fMRI findings. The credit-repair buyer (shame-sensitive, loss-averse) responds differently than the crypto holder (gain-seeking, autonomy-driven). Scoring with the wrong profile runs about 40% lower on average.

Calibration note — TRIBE v2

Neural weight profiles are calibrated against TRIBE v2 (Meta's fMRI brain-response prediction model). The weight multipliers reflect predicted activation in:

  • Reward circuit (mPFC/precuneus): gain framing, identity alignment → crypto-mortgage, realtor-partner ICPs
  • Amygdala/loss circuit: loss framing, urgency → credit-repair, first-time-buyer ICPs
  • Language cortex (STG/IFG): simplicity, second-person → credit-repair, va-loan ICPs

To recalibrate with fresh data: see vault/learnings/2026-03-27-tribe-v2-colab-spec-task47.md.

Integration

# Via Telegram
@openclaw ab-predictor "Compare these hooks for a VA loan buyer: [A] / [B] / [C]"
@openclaw ab-predictor "Which hook wins for credit-repair leads? [paste variants]"

# Pipeline: score hooks, pick winner, pass to content-scorer for final polish
python3 ab_predictor.py --variants hooks.json --product va-loan --json \
  | jq -r '.[0].text' \
  | xargs -I{} python3 ../content-scorer/score_content.py "{}" --platform=facebook

# Batch: test same hooks across all 5 ICPs
for icp in crypto-mortgage credit-repair va-loan realtor-partner first-time-buyer; do
    echo "=== $icp ===" && python3 ab_predictor.py --variants hooks.json --product $icp
done

Use cases

Before running paid ads: "Which of these 3 Facebook hook variants will perform best for first-time buyers?"

Before sending email: "Score these 2 subject lines against credit-repair leads — which one opens more?"

Content calendar optimization: "Rank these 5 LinkedIn hooks for realtor-partner ICP before we schedule them"

Hook diagnosis: "My ad isn't converting credit-repair leads — score it and tell me what's wrong"

Cross-ICP testing: "Run all 5 ICPs against this hook — which buyer type will respond best?"
