Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Strategy Workflow

Comprehensive strategy development workflow from ideation to validation. Use when creating trading strategies, running backtests, parameter optimization, or...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
1 · 335 · 0 current installs · 0 all-time installs
by Dan Repaci (@ahuserious)
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
high confidence
Purpose & Capability
The stated purpose (strategy development, backtests, distributed optimization) plausibly requires SSH, GPUs, tmux, Python, and access to remote storage and databases. Those tools and credentials are referenced repeatedly in the instructions but are not declared in the registry metadata; the skill should explicitly list its required environment variables (SSH key and host, Vast.ai/API credentials, DB URL) and required binaries.
Instruction Scope
SKILL.md instructs the agent to run system-level scripts (start_swarm_watchdogs.sh, launch tmux loops), run pgrep/kill/relaunch logic, read/write many persistent state/log files under workspace/docs, scp/ssh to remote hosts, and ingest private docs — all of which grant broad read/write and network access. The instructions are prescriptive about autonomous, always-on remediation and continuous communication; that scope is large and not constrained to the stated purpose.
Install Mechanism
This is an instruction-only skill with no install spec or code files to execute. That reduces direct installation risk, but the runtime commands it tells the agent to run will invoke external scripts (which are not bundled) and system tools on the host.
Credentials
The skill declares no required env vars or primary credential, yet the workflow expects SSH/scp access (HOST/PORT), potential cloud provider credentials (Vast.ai or vendor APIs), and optional DB connection strings (Postgres RDBStorage). The absence of declared credentials is a mismatch and could lead an agent to request or use secrets ad-hoc.
Persistence & Privilege
Although registry flags do not force always-on inclusion, the skill's instructions push for always-on, autonomous watchdogs that persist state, auto-heal, and relaunch processes. That behavior would write persistent logs/state, modify system process state, and perform network operations — a high privilege footprint that should be explicitly declared and limited.
What to consider before installing
This skill's instructions expect system-level and network privileges (ssh/scp, tmux, process management, persistent state files, optional DB and vendor ingest), but the package declares none of the required credentials or binaries. Before installing or running:

1) Request the actual scripts referenced (start_swarm_watchdogs.sh, launch_parallel.sh, optimize_strategy.py, validation.py, etc.) and review them line by line; do not run them blindly.
2) Require explicit declarations of the required environment variables and keys (SSH private key, HOST/PORT, Vast.ai or cloud API keys, Postgres URL), and provide them only in isolated test environments.
3) Prefer running first in a sandboxed VM or container with no sensitive host mounts or network access.
4) Disable autonomous execution until you can audit all control-plane code; run manually until you have confirmed the intended behavior.
5) If you need this functionality, ask the author to add explicit metadata (required env vars, binaries) and to include or link the referenced scripts and installation instructions.

These inconsistencies justify caution: the workflow could be legitimate, but it currently asks for broad, implicit privileges without transparency.

Like a lobster shell, security has layers — review code before you run it.

Current version: v0.1.0
latest: vk975s4e0vnbqkapd366ft7ghq981zcvb

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Strategy Workflow

Comprehensive strategy development workflow for quantitative trading, from hypothesis to validated production deployment.

Overview

This skill provides a complete framework for developing, testing, and validating trading strategies. It supports:

  • Hypothesis-driven strategy development
  • Multi-GPU backtesting on Vast.ai
  • Bayesian hyperparameter optimization with Optuna
  • Walk-forward validation and out-of-sample testing
  • Automated tearsheet generation

Entry Points

Control Plane (Swarm Orchestration)

Always-on watchdog loops that manage hardware utilization and self-healing:

bash scripts/start_swarm_watchdogs.sh

For local environments, set explicit paths:

VENV_PATH=/path/to/.venv/bin/activate \
RESULTS_ROOT=/path/to/backtests \
STATE_ROOT=/path/to/backtests/state \
LOGS_ROOT=/path/to/backtests/logs \
bash scripts/start_swarm_watchdogs.sh

Work Plane (Parallel Execution)

Unified wrapper that starts control plane and launches parallel work:

scripts/backtest-optimize --parallel

Multi-GPU, multi-symbol execution:

cd WORKFLOW && ./launch_parallel.sh

Single-Symbol Pipeline

For focused optimization on a single asset:

scripts/backtest-optimize --single --symbol SYMBOL --engine native --prescreen 50000 --paths 1000 --by-regime

Strategy Development

1. Hypothesis Formulation

Define your strategy hypothesis in measurable terms:

  • What market inefficiency are you exploiting?
  • What is the expected holding period?
  • What are the entry/exit conditions?
  • What is the target risk-adjusted return?

2. Feature Selection

Identify relevant features for signal generation:

  • Price-based (OHLCV, returns, volatility)
  • Technical indicators (EMA, RSI, Bollinger Bands)
  • Multi-timeframe features (MTF resampling)
  • Volume analysis (PVSRA, VWAP)
  • Market microstructure (order flow, spread)
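As a concrete example of the indicator bullet above, here is a minimal plain-Python sketch of Wilder's RSI; the function name and interface are illustrative, not part of the skill:

```python
def rsi(closes, period=14):
    """Wilder's RSI from a list of close prices (no pandas; illustrative sketch)."""
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    if len(gains) < period:
        raise ValueError("need at least period + 1 closes")
    # Seed with simple averages, then apply Wilder's recursive smoothing.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # no down moves in the window
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```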

3. Signal Generation

Convert features into actionable signals:

  • Directional bias (trend following, mean reversion)
  • Entry conditions (threshold crossings, pattern recognition)
  • Exit conditions (take-profit, stop-loss, trailing stops)
  • Position sizing rules
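The first three bullets can be made concrete with a toy EMA-crossover signal. A minimal stdlib sketch, assuming +1/-1/0 signal encoding (names and spans are illustrative):

```python
def ema(values, span):
    """Exponential moving average, seeded with the first value."""
    alpha = 2.0 / (span + 1.0)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def crossover_signals(closes, fast=12, slow=26):
    """+1 when the fast EMA crosses above the slow EMA, -1 on a cross below, else 0."""
    f, s = ema(closes, fast), ema(closes, slow)
    sigs = [0]
    for i in range(1, len(closes)):
        above_now, above_prev = f[i] > s[i], f[i - 1] > s[i - 1]
        if above_now and not above_prev:
            sigs.append(1)
        elif not above_now and above_prev:
            sigs.append(-1)
        else:
            sigs.append(0)
    return sigs
```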

4. Position Sizing

Implement risk-aware position sizing:

  • Fixed fractional
  • Kelly criterion
  • Volatility-adjusted
  • Regime-dependent scaling
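Two of the sizing rules above fit in a few lines. A hedged sketch (the leverage cap and parameter names are illustrative, not from the skill):

```python
def kelly_fraction(win_rate, win_loss_ratio):
    """Kelly criterion for uneven payoffs: f* = (b*p - q) / b,
    where b is the average win / average loss ratio."""
    p, q, b = win_rate, 1.0 - win_rate, win_loss_ratio
    return (b * p - q) / b

def vol_target_size(equity, target_vol, realized_vol, max_leverage=2.0):
    """Volatility-adjusted sizing: scale notional so portfolio vol
    approximates the target, with a leverage cap as a guardrail."""
    if realized_vol <= 0:
        return 0.0
    lev = min(target_vol / realized_vol, max_leverage)
    return equity * lev
```

In practice Kelly is usually fractioned (e.g. half-Kelly) to reduce drawdown sensitivity to estimation error.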

Backtesting

Pre-Flight Validation

MANDATORY before every optimization run:

python validation.py --check-all --data-path DATA_PATH --symbol SYMBOL

Validation checks:

  • Data >= 90 days with no gaps/NaN
  • Min trades >= 30 for statistical significance
  • MTF resampling implemented correctly
  • No look-ahead bias

Multi-GPU Execution on Vast.ai

Deploy to cloud GPU instances for large-scale parameter sweeps:

# Copy workflow files
scp -P PORT workflow_files root@HOST:/root/WORKFLOW/

# Run optimization
ssh -p PORT root@HOST "cd /root/WORKFLOW && python optimize_strategy.py \
  --data-path /root/data --symbol SYMBOL --mode aggressive \
  --prescreen 5000 --paths 200 --engine gpu"

Prescreening with Vectorized Backtests

Phase 0: GPU-accelerated parameter screening:

  • Generate N random parameter combinations
  • Batch evaluate on GPU
  • Filter by minimum trades (30+)
  • Return top K by Sharpe ratio

Performance baseline (RTX 5090, 730d lookback, 250k combos): ~4s per mode.
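A CPU sketch of this screening step using NumPy (a GPU version would swap in CuPy or a torch backend). The array layout and NaN-padding convention are assumptions for illustration:

```python
import numpy as np

def prescreen(returns, min_trades=30, top_k=10):
    """returns: (n_combos, n_trades) array of per-trade returns,
    rows padded with NaN where a combo produced fewer trades.
    Filters combos below the trade minimum, ranks survivors by Sharpe."""
    n_trades = np.sum(~np.isnan(returns), axis=1)
    mean = np.nanmean(returns, axis=1)
    std = np.nanstd(returns, axis=1)
    sharpe = np.where(std > 0, mean / std, -np.inf)
    sharpe = np.where(n_trades >= min_trades, sharpe, -np.inf)  # enforce min trades
    order = np.argsort(sharpe)[::-1][:top_k]  # descending Sharpe
    return order, sharpe[order]
```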

Full Backtests with NautilusTrader

Phase 1: Event-driven backtesting for top candidates:

  • High-fidelity simulation with realistic execution
  • Slippage and commission modeling
  • Multi-asset portfolio backtests

Parameter Optimization

Optuna for Hyperparameter Search

Phase 2: Bayesian optimization with warm-start from prescreening:

import optuna

study = optuna.create_study(
    direction="maximize",
    sampler=optuna.samplers.TPESampler(seed=42),
    pruner=optuna.pruners.MedianPruner()
)

study.optimize(objective, n_trials=1000)
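The study call above assumes an objective function is already defined. A hypothetical sketch; the parameter names, ranges, and backtest_sharpe are illustrative stand-ins, not part of the skill:

```python
def backtest_sharpe(params):
    """Stand-in for running a real backtest; returns a toy score."""
    return 1.0 / (1.0 + abs(params["ema_fast"] * 3 - params["ema_slow"]))

def objective(trial):
    # Hypothetical search space for illustration only.
    params = {
        "ema_fast": trial.suggest_int("ema_fast", 5, 50),
        "ema_slow": trial.suggest_int("ema_slow", 20, 200),
        "stop_loss": trial.suggest_float("stop_loss", 0.005, 0.05, log=True),
    }
    if params["ema_fast"] >= params["ema_slow"]:
        return float("-inf")  # reject invalid combos (or raise optuna.TrialPruned)
    return backtest_sharpe(params)
```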

Grid Search vs Bayesian Optimization

Method            Use Case
Grid Search       Small parameter space, exhaustive coverage needed
Random Search     Large space, quick exploration
Bayesian (TPE)    Efficient optimization, exploitation/exploration balance
CMA-ES            Continuous parameters, smooth objective

Pruning Strategies

  • MedianPruner: Prune if worse than median of completed trials
  • PercentilePruner: Prune bottom X% of trials
  • HyperbandPruner: Multi-fidelity optimization
  • SuccessiveHalvingPruner: Aggressive early stopping
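Pruning only works if the objective reports intermediate values. A sketch of that pattern, assuming one score per walk-forward fold (fold_score is a stand-in for a real per-fold backtest; real code raises optuna.TrialPruned rather than RuntimeError):

```python
def walkforward_objective(trial, fold_score, n_folds=5):
    """Report the running mean after each fold so a pruner
    (e.g. MedianPruner) can stop unpromising trials early."""
    total = 0.0
    for fold in range(n_folds):
        total += fold_score(fold)
        trial.report(total / (fold + 1), step=fold)  # intermediate value
        if trial.should_prune():
            raise RuntimeError("pruned")  # stand-in for optuna.TrialPruned()
    return total / n_folds
```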

Distributed Optimization

For large-scale runs, use persistent storage:

# JournalStorage for multi-process
storage = optuna.storages.JournalStorage(
    optuna.storages.JournalFileStorage("journal.log")
)

# RDBStorage for distributed clusters
storage = optuna.storages.RDBStorage("postgresql://...")

Walk-Forward Validation

Rolling Window Validation

Slide the training/test window through time:

[Train 1][Test 1]
    [Train 2][Test 2]
        [Train 3][Test 3]

Parameters:

  • train_window: Training period length
  • test_window: Out-of-sample test length
  • step_size: Window advancement increment
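Given those three parameters, generating the splits is a few lines. A sketch over integer bar indices:

```python
def rolling_splits(n, train_window, test_window, step_size):
    """Yield (train_indices, test_indices) pairs sliding through n samples.
    The test window always immediately follows its training window."""
    start = 0
    while start + train_window + test_window <= n:
        train = range(start, start + train_window)
        test = range(start + train_window, start + train_window + test_window)
        yield train, test
        start += step_size
```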

Anchored Walk-Forward

Expand training window while sliding test window:

[Train 1      ][Test 1]
[Train 1 + 2      ][Test 2]
[Train 1 + 2 + 3      ][Test 3]

Use when historical regime diversity improves model robustness.
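The anchored variant differs from the rolling one only in that the training start stays fixed at the first sample. A standalone sketch:

```python
def anchored_splits(n, initial_train, test_window):
    """Anchored walk-forward: training always starts at index 0 and
    grows by one test window per step, while the test window slides."""
    train_end = initial_train
    while train_end + test_window <= n:
        yield range(0, train_end), range(train_end, train_end + test_window)
        train_end += test_window
```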

Epoch Selection Criteria

Intelligent selection of training periods:

  • Regime-aware: Match training regimes to expected deployment conditions
  • Volatility-adjusted: Include both high and low volatility periods
  • Event-inclusive: Ensure major market events are represented
  • Recency-weighted: Emphasize recent data while maintaining diversity

Out-of-Sample Testing

Final validation phase:

  • Hold out 20-30% of data for final OOS test
  • No parameter tuning on OOS data
  • Monte Carlo stress testing
  • Regime-conditional performance analysis
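One way to run the Monte Carlo step: bootstrap the trade order and inspect the tail of the max-drawdown distribution. A stdlib-only sketch (the percentile and path count are illustrative choices):

```python
import random

def bootstrap_drawdown_ci(trade_returns, n_paths=1000, seed=42):
    """Resample trade order with replacement and return the
    95th-percentile max drawdown across simulated equity paths."""
    rng = random.Random(seed)
    drawdowns = []
    for _ in range(n_paths):
        sample = [rng.choice(trade_returns) for _ in trade_returns]
        equity, peak, max_dd = 1.0, 1.0, 0.0
        for r in sample:
            equity *= (1.0 + r)       # multiplicative equity curve
            peak = max(peak, equity)
            max_dd = max(max_dd, 1.0 - equity / peak)
        drawdowns.append(max_dd)
    drawdowns.sort()
    return drawdowns[int(0.95 * n_paths)]
```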

SLOs and Guardrails

Utilization Targets

  • CPU utilization target: >= 70%
  • GPU utilization target: >= 70%
  • No silent GPU fallback for GPU sweeps

Hardware Watchdog Hooks

Enforced by:

  • hooks/hardware_capacity_watchdog.py
  • scripts/process_auditor.py

Capacity Monitoring

Control plane loops monitor:

  • Worker health and liveness
  • Progress artifact freshness
  • Resource utilization
  • Job queue depth

Self-healing actions:

  • Automatic worker restart on crash
  • Fill lanes for underutilized resources
  • Cooldown guardrails to prevent thrashing
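The cooldown guardrail can be as simple as rate-limiting restarts. A minimal sketch (the class name and default are illustrative; the actual watchdogs live in the referenced scripts, which are not bundled with this skill):

```python
import time

class RestartGuard:
    """Allow a worker restart only if the previous restart was more than
    `cooldown` seconds ago, preventing restart thrashing."""

    def __init__(self, cooldown=60.0, clock=time.monotonic):
        self.cooldown = cooldown
        self.clock = clock      # injectable for testing
        self._last = None

    def may_restart(self):
        now = self.clock()
        if self._last is not None and now - self._last < self.cooldown:
            return False        # still cooling down
        self._last = now
        return True
```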

Tearsheet Generation

Generate QuantStats-style performance reports:

scripts/generate-tearsheet STRATEGY_NAME \
  --trades /path/to/trades.csv \
  --capital 10000 \
  --output ./tearsheets

See tearsheet-generator skill for detailed visualization options.

Multi-Provider Orchestration

PAL MCP Integration

Attach PAL as an MCP server for research/consensus across multiple model providers:

  • Config template: config/mcp/pal.mcp.json.example
  • Docs: docs/reference/PAL_MCP_INTEGRATION.md
  • Providers: OpenRouter, OpenAI, Anthropic, xAI, local models

Resources

Documentation

Project References

  • config/workflow_defaults.yaml - Default configuration
  • config/model_policy.yaml - Model policy (advisory)
  • docs/guides/SWARM_OPTIMIZATION_RUNBOOK.md - Detailed runbook
  • hooks/pipeline-hooks.md - Hook contracts
  • docs/reference/VECTORBT_GRAPH_INGEST.md - VectorBT PRO integration

Results Structure

Backtests/optimizations/{SYMBOL}/{MODE}/
  best_sharpe/
    config.json      # Best Sharpe configuration
    metrics.json     # Performance metrics
  best_returns/
  lowest_drawdown/
  best_winrate/
  all_trials.json    # All Optuna trials
  phase0_top500.json # Prescreening results

Files

3 total
