Progressive Validator

v1.0.1

Multi-stage backtest validation framework — fail fast with short windows (smoke/stress/medium/full) before committing to expensive full-period backtests, saving most of the compute time otherwise lost to doomed strategies.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for tltby12341/progressive-validator.

Prompt preview (Install & Setup):
Install the skill "Progressive Validator" (tltby12341/progressive-validator) from ClawHub.
Skill page: https://clawhub.ai/tltby12341/progressive-validator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install progressive-validator

ClawHub CLI


npx clawhub@latest install progressive-validator
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the included code and SKILL.md. The validator implements stage orchestration, local result persistence, and command suggestion for an external backtest CLI (backtest-poller). Required binary (python3) is appropriate and proportionate.
Instruction Scope
SKILL.md and validator.py confine actions to suggesting commands, recording stage results, and printing status. They do not read unrelated system files or environment variables. However, the runtime instructions assume you will execute an external script (../backtest-poller/cli.py); executing that script is outside this skill and could run arbitrary code if the external CLI is untrusted.
Install Mechanism
No install spec or remote downloads. The package is delivered as code files (validator.py, config example) and a requirements.txt declaring no dependencies. Nothing is written to disk by an installer beyond the user's explicit run of the script.
Credentials
The skill requests no environment variables or credentials and the code does not access secrets. The only external dependency is the backtest-poller CLI (path is user-specified); ensure that CLI is the legitimate tool you expect before running suggested commands.
Persistence & Privilege
The skill persists validation_results.json (configurable) in the current working directory and will overwrite if present. It does not request persistent platform privileges nor set always:true. Running in an isolated/workspace directory is recommended to avoid accidental file overwrite.
Assessment
This skill appears coherent and self-contained: it orchestrates backtest stages and stores local results. Before using it, (1) inspect validator.py yourself (it is included) and confirm you are comfortable with it writing validation_results.json in your working directory, (2) only run the suggested backtest submission command if your backtest-poller CLI is the legitimate tool you installed (don’t run a sibling ../backtest-poller/cli.py from an untrusted location), and (3) run the tool in an isolated directory or under version control to avoid accidental overwrite of files. If you want extra assurance, run the script in a disposable environment or review the backtest-poller code it calls.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎯 Clawdis
Bins: python3
Latest: vk9760fncfvhnz49at89183db7n8326ta
158 downloads · 0 stars · 2 versions · updated 1mo ago
v1.0.1 · MIT-0

Progressive Validator

Stop wasting 3 hours on a backtest that was doomed from the start. This skill implements a multi-stage validation pipeline that eliminates bad strategies in 15 minutes instead of 3 hours.

When to use

  • "Validate this strategy"
  • "Run the progressive test pipeline"
  • "Is this strategy worth a full backtest?"
  • When planning the validation sequence for a new strategy variant

The Pipeline

     15 min                30 min               1 hour              3 hours
  +-----------+       +-----------+       +-----------+       +-----------+
  |   SMOKE   | pass  |  STRESS   | pass  |  MEDIUM   | pass  |   FULL    |
  |  3 months |------>|  5 months |------>| 18 months |------>|  3 years  |
  |  DD < 50% |       |  DD < 45% |       |  DD < 42% |       |  DD < 40% |
  +-----------+       +-----------+       +-----------+       +-----------+
       | fail              | fail              | fail              | fail
       v                   v                   v                   v
    REJECT              REJECT              REJECT              REJECT
   (15 min lost)      (45 min lost)       (1.5h lost)        (3h+ lost)

Without progressive validation: Every failed strategy costs 3 hours. With progressive validation: Most failures caught in 15-45 minutes.
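The fail-fast flow above can be sketched as a simple gate loop. This is a minimal illustration, not the skill's actual code: `run_backtest` is a hypothetical stand-in for submitting a stage via backtest-poller and reading back its drawdown.

```python
# Minimal sketch of the progressive gate: run cheap stages first,
# stop at the first failure. Stage order and thresholds mirror the
# diagram above; run_backtest is a hypothetical stand-in.

STAGES = [
    ("smoke_test", 0.50),   # 3 months,  ~15 min
    ("stress_test", 0.45),  # 5 months,  ~30 min
    ("medium", 0.42),       # 18 months, ~1 hour
    ("full", 0.40),         # 3 years,   ~3 hours
]

def validate(run_backtest):
    """Run stages in order; return (passed, last_stage_reached)."""
    for stage, max_dd in STAGES:
        drawdown = run_backtest(stage)
        if drawdown >= max_dd:
            return False, stage  # REJECT: later stages never run
    return True, "full"

# Example: a strategy that blows up in the stress window is rejected
# after ~45 minutes of compute instead of 3+ hours.
passed, stage = validate(lambda s: 0.60 if s == "stress_test" else 0.30)
```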

Validation Stages

Stage 1: Smoke Test

  • Period: 2024-01-01 to 2024-03-31 (3 months)
  • Time: ~15-20 minutes
  • Threshold: Drawdown < 50%
  • Purpose: Catch compilation errors, logic bugs, and catastrophic structural flaws
  • What it covers: Q1 2024 (includes major tech rallies)

Stage 2: Stress Test

  • Period: 2024-02-01 to 2024-06-30 (5 months)
  • Time: ~25-30 minutes
  • Threshold: Drawdown < 45%
  • Purpose: Test survival during the hardest market conditions
  • What it covers: 2024 H1 — historically the worst "meat grinder" period for options strategies

Stage 3: Medium

  • Period: 2024-01-01 to 2025-06-30 (18 months)
  • Time: ~45-60 minutes
  • Threshold: Drawdown < 42%
  • Purpose: Validate across bull/bear transitions and seasonal effects
  • What it covers: Full 2024 volatility + 2025 early recovery

Stage 4: Full Period

  • Period: 2023-01-01 to 2026-01-31 (3 years)
  • Time: ~2-3 hours
  • Threshold: Drawdown < 40%, Sharpe >= 2.0, Profit >= 300%
  • Purpose: Final acceptance test — benchmark against proven strategies
  • What it covers: Complete market cycle including 2023 AI rally, 2024 correction, 2025 recovery

Usage

Configure windows

Define your validation windows in config:

BACKTEST_WINDOWS = {
    "smoke_test": {
        "start": "2024-01-01",
        "end": "2024-03-31",
        "max_dd": 0.50,
        "expected_time": "15-20 min",
        "purpose": "Eliminate garbage fast",
    },
    "stress_test": {
        "start": "2024-02-01",
        "end": "2024-06-30",
        "max_dd": 0.45,
        "expected_time": "25-30 min",
        "purpose": "Survive worst conditions",
    },
    "medium": {
        "start": "2024-01-01",
        "end": "2025-06-30",
        "max_dd": 0.42,
        "expected_time": "45-60 min",
        "purpose": "Bull/bear transition stability",
    },
    "full": {
        "start": "2023-01-01",
        "end": "2026-01-31",
        "max_dd": 0.40,
        "expected_time": "2-3 hours",
        "purpose": "Final benchmark acceptance",
    },
}
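One property worth checking before a run: the dates should parse and `max_dd` should tighten at every stage, otherwise a later stage would be easier to pass than an earlier one. A quick sanity check over this config (a sketch; the smoke → stress → medium → full ordering is assumed):

```python
from datetime import date

# The BACKTEST_WINDOWS config from above, abridged to the fields checked.
BACKTEST_WINDOWS = {
    "smoke_test": {"start": "2024-01-01", "end": "2024-03-31", "max_dd": 0.50},
    "stress_test": {"start": "2024-02-01", "end": "2024-06-30", "max_dd": 0.45},
    "medium": {"start": "2024-01-01", "end": "2025-06-30", "max_dd": 0.42},
    "full": {"start": "2023-01-01", "end": "2026-01-31", "max_dd": 0.40},
}

ORDER = ["smoke_test", "stress_test", "medium", "full"]

def check_windows(windows):
    """Verify dates parse, start precedes end, and max_dd tightens per stage."""
    prev_dd = 1.0
    for stage in ORDER:
        w = windows[stage]
        start, end = date.fromisoformat(w["start"]), date.fromisoformat(w["end"])
        assert start < end, f"{stage}: start must precede end"
        assert w["max_dd"] < prev_dd, f"{stage}: max_dd must tighten"
        prev_dd = w["max_dd"]
    return True
```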

Run each stage

Prerequisite: This skill coordinates validation stages. Actual backtest submission is handled by the backtest-poller skill (cli.py). Ensure that skill is installed in a sibling directory (referenced below as ../backtest-poller) before running these commands.

# Stage 1: Smoke
# (using backtest-poller skill's cli.py)
python3 ../backtest-poller/cli.py submit \
  --main-file strategy.py --name "M31_smoke"

# Check what to run next:
python3 validator.py next M31 strategy.py

# Record the result after smoke completes:
python3 validator.py record M31 smoke_test --status passed --drawdown 0.32 --sharpe 2.1

# Stage 2: Stress (only if smoke passed)
python3 ../backtest-poller/cli.py submit \
  --main-file strategy.py --name "M31_stress"

python3 validator.py record M31 stress_test --status passed --drawdown 0.38 --sharpe 2.0

# Stage 3: Medium (only if stress passed)
python3 ../backtest-poller/cli.py submit \
  --main-file strategy.py --name "M31_medium"

python3 validator.py record M31 medium --status passed --drawdown 0.35 --sharpe 2.3

# Stage 4: Full (only if medium passed)
python3 ../backtest-poller/cli.py submit \
  --main-file strategy.py --name "M31_full"

python3 validator.py record M31 full --status passed --drawdown 0.30 --sharpe 2.5 --profit 3.2

Skip Rules

Not every change needs to start from Smoke:

Change type                                     Start from
Entry logic changed                             Smoke (Stage 1)
Structural change (position sizing, survival)   Smoke (Stage 1)
Profit management only                          Medium (Stage 3)
Date/parameter tweak                            Same stage as before
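The table above can be encoded as a small lookup. This is an illustrative sketch only — the change-type labels are made up here and are not a real validator.py API:

```python
# Map a change type to the stage a validation run should start from.
# Labels are illustrative; "same" means resume at the previous stage.
SKIP_RULES = {
    "entry_logic": "smoke_test",       # Stage 1
    "structural": "smoke_test",        # position sizing, survival
    "profit_management": "medium",     # Stage 3
    "parameter_tweak": "same",         # same stage as before
}

def starting_stage(change_type, previous_stage="smoke_test"):
    # Unknown change types fall back to the safest choice: start over.
    stage = SKIP_RULES.get(change_type, "smoke_test")
    return previous_stage if stage == "same" else stage
```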

Early-Stop Integration

This skill works alongside the backtest-poller skill (a separate package). The backtest-poller's early-stop feature monitors drawdown in real time and deletes the backtest run if the threshold is exceeded after 20% progress — no need to wait for full completion of a doomed run. This validator tracks which stages passed or failed locally, so you always know where to resume.
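The early-stop rule described above amounts to a simple check inside the poller's monitoring loop. This is a sketch of the behavior as described, not backtest-poller's actual code:

```python
def should_early_stop(progress, drawdown, max_dd, min_progress=0.20):
    """Abort a run once it is past 20% progress and over the DD threshold."""
    return progress >= min_progress and drawdown > max_dd

# A run 30% through its window with 48% drawdown against a 45% stress
# threshold is killed immediately rather than left to finish.
```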

Dependency: Install the backtest-poller skill to enable submit/early-stop functionality. This validator does not submit backtests itself.

Time Savings Example

Testing 5 strategy variants, 3 of which are bad:

Approach                  Time
Full backtest only        5 x 3h = 15 hours
Progressive validation    3 x 15min + 1 x 45min + 1 x 3h = ~4.5 hours

Savings: ~70% of compute time.

Rules

  • Never skip stages without justification. The skip rules table above defines the only valid exceptions. If entry logic or survival structure changed, you must start from Smoke.
  • A strategy must pass a stage before advancing. Do not promote a strategy to the next stage if the current stage resulted in early-stop or failure.
  • Do not modify stage thresholds mid-validation. Changing max_dd between stages invalidates the progressive guarantee. Decide thresholds before starting.
  • One strategy variant per validation run. Do not change the strategy code between stages — the point is to validate the same code across increasingly demanding windows.
  • Record every result, even failures. Use python3 validator.py record <strategy> <stage> --status passed|failed to persist outcomes. Unrecorded results break the next and status commands.
