Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AutoSignals

v0.1.0

Monitors and controls the AutoSignals autonomous research loop.

by RunByDaVinci (@clawdiri-ai)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for clawdiri-ai/autosignals-davinci.

Prompt preview (Install & Setup):
Install the skill "AutoSignals" (clawdiri-ai/autosignals-davinci) from ClawHub.
Skill page: https://clawhub.ai/clawdiri-ai/autosignals-davinci
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install autosignals-davinci

ClawHub CLI


npx clawhub@latest install autosignals-davinci
Security Scan

VirusTotal: Suspicious
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name/description (monitor and control an autonomous AutoSignals loop) matches the instructions: checking status, starting/stopping the loop, viewing logs, and inspecting best signals. The listed files (signals.py, backtest.py, run.py, etc.) align with the described functionality. There are no unrelated credentials or binaries requested.
Instruction Scope
The SKILL.md contains explicit shell commands that access a hard-coded local path (/Users/clawdiri/Projects/autosignals/) and run scripts (start.sh, status.sh, monitor.sh), read files (best_score.json, experiments.jsonl), and run git show on commits. These actions are coherent with a local monitoring skill, but they grant the skill authority to read and execute arbitrary files at that path — review those scripts and code before running. It also references WhatsApp alerts and agent spawning (LLM agents) but provides no configuration details for alerts or agent credentials; this is a descriptive note rather than unexplained access to external services.
Install Mechanism
Instruction-only skill with no install spec and no code files to write to disk. This is the lowest-risk install posture; nothing is downloaded or extracted by the skill itself.
Credentials
The skill declares no required environment variables or credentials. While SKILL.md mentions possible future integrations (WhatsApp, Alpaca, Finnhub), none are required by the current instructions. That is proportionate to the current on-disk monitoring/control role.
Persistence & Privilege
always is false and disable-model-invocation is false (normal). The skill does not request persistent marketplace privileges. The main residual risk is that the instructions ask the agent to execute local scripts and inspect local files — that is expected but means the user should ensure those scripts are trusted.
Assessment
This skill is coherent: it expects to control and monitor a local AutoSignals repository at /Users/clawdiri/Projects/autosignals/. Before using it, manually inspect the referenced directory and scripts (start.sh, status.sh, monitor.sh, run.py, backtest.py, prepare.py, signals.py, and any pid/log files) to ensure they don't perform unexpected network calls, exfiltrate data, or run privileged operations. Because the SKILL.md runs shell commands against your filesystem, run it only on a machine where you trust the project files. If you plan to let the agent operate autonomously, consider: limiting the agent's permissions, running it under an unprivileged user, enabling network egress controls, and adding explicit configuration for alerting endpoints (WhatsApp/Alpaca) so no secrets are stored or used implicitly. If you need this to be generic (not tied to another user's home path), update the paths to point to your repository before invoking any commands.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97amvx1mjptmf4dpxajzv2znx83c88w
100 downloads · 0 stars · 1 version
Updated 1mo ago
v0.1.0
MIT-0

AutoSignals - Autonomous Trading Signal Optimization

Monitors and controls the AutoSignals autonomous research loop.

What It Is

AutoSignals is an adaptation of Karpathy's autoresearch pattern for trading signal optimization. An autonomous loop runs continuously, spawning sub-agents to modify signals.py, backtesting changes, and keeping improvements.

Architecture:

  • signals.py — The ONE file agents can modify (factor weights, thresholds, indicators, scoring)
  • backtest.py — Fixed evaluation engine (5-year backtest, composite score metric)
  • prepare.py — Data download (S&P 500 + held tickers)
  • program.md — Instructions for research agents
  • run.py — Autonomous loop controller
  • experiments.jsonl — Full experiment log

Location: /Users/clawdiri/Projects/autosignals/
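The skill does not ship signals.py itself, but the described surface (factor weights, thresholds, and scoring concentrated in one file) might look roughly like this hypothetical sketch, using the baseline 40/35/25 weights listed further down. Every name here is illustrative, not the real file:

```python
# Hypothetical sketch of the modifiable surface signals.py exposes;
# the real file lives in the AutoSignals repo, not in this skill.
WEIGHTS = {"insider": 0.40, "earnings": 0.35, "sector_rotation": 0.25}
THRESHOLD = 0.5  # hypothetical entry threshold

def score(factors: dict) -> float:
    """Combine factor readings (each assumed in [0, 1]) into one score."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

def signal(factors: dict) -> bool:
    """Emit a buy signal when the combined score clears the threshold."""
    return score(factors) >= THRESHOLD
```

Concentrating everything tunable in one module is what lets sub-agents edit a single file while backtest.py stays fixed.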

How to Use

Check Status

bash /Users/clawdiri/Projects/autosignals/status.sh

Shows:

  • Running status (PID, uptime)
  • Best composite score achieved
  • Total experiments run
  • Last 10 experiments with outcomes
  • Score trend (last 20)
  • Any errors
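The trend heuristic status.sh uses is not documented; one plausible classification of the last-20 scores, purely as an illustrative assumption, compares the newer half of the window against the older half:

```python
def score_trend(scores, window=20, tol=1e-6):
    """Classify recent composite scores as improving/declining/flat.
    This heuristic is an assumption, not the real status.sh logic."""
    recent = scores[-window:]
    if len(recent) < 2:
        return "flat"
    half = len(recent) // 2
    older, newer = recent[:half], recent[half:]
    delta = sum(newer) / len(newer) - sum(older) / len(older)
    if delta > tol:
        return "improving"
    if delta < -tol:
        return "declining"
    return "flat"
```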

Start the Loop

bash /Users/clawdiri/Projects/autosignals/start.sh

Starts the autonomous loop in the background. Runs forever until stopped.

Stop the Loop

kill $(cat /Users/clawdiri/Projects/autosignals/autosignals.pid)

View Logs

tail -f /Users/clawdiri/Projects/autosignals/logs/autosignals.log

View Best Signals

cat /Users/clawdiri/Projects/autosignals/best_score.json

Then read the corresponding commit:

cd /Users/clawdiri/Projects/autosignals
git show <commit_hash>:signals.py
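The two steps above can be combined in one small helper. This is a sketch, not part of the skill: it assumes best_score.json stores the commit under a best_commit key (check the real file first) and that git is on PATH:

```python
import json
import subprocess

def best_signals_source(repo="/Users/clawdiri/Projects/autosignals"):
    """Return signals.py as of the best-scoring commit recorded in
    best_score.json. The 'best_commit' field name is an assumption."""
    with open(f"{repo}/best_score.json") as f:
        commit = json.load(f)["best_commit"]
    result = subprocess.run(
        ["git", "-C", repo, "show", f"{commit}:signals.py"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```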

Monitoring Script (for DaVinci heartbeats)

bash /Users/clawdiri/Projects/autosignals/monitor.sh

Returns JSON with:

  • running: bool
  • experiment_count: int
  • best_score: float
  • best_commit: str
  • trend: "improving" | "declining" | "flat"
  • errors: list of recent errors
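A heartbeat consumer can validate that output against the documented shape before acting on it. A minimal sketch (the field list comes from the description above; monitor.sh itself is not part of this skill):

```python
import json

# Field names and types as documented for monitor.sh output.
EXPECTED_FIELDS = {
    "running": bool,
    "experiment_count": int,
    "best_score": (int, float),
    "best_commit": str,
    "trend": str,
    "errors": list,
}

def parse_monitor(raw: str) -> dict:
    """Parse monitor.sh JSON output and check it has the expected shape."""
    status = json.loads(raw)
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(status.get(field), ftype):
            raise ValueError(f"unexpected value for {field!r}")
    if status["trend"] not in ("improving", "declining", "flat"):
        raise ValueError("unknown trend")
    return status
```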

Evaluation Metric

composite_score = (0.35 * sharpe_normalized) + 
                  (0.25 * (1 - max_drawdown)) + 
                  (0.20 * win_rate) + 
                  (0.20 * profit_factor_normalized)

All components normalized to [0, 1].
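As plain Python, with max_drawdown expressed as a fraction (0.15 for a 15% drawdown):

```python
def composite_score(sharpe_norm: float, max_drawdown: float,
                    win_rate: float, profit_factor_norm: float) -> float:
    """Weighted composite from the formula above; all inputs in [0, 1]."""
    return (0.35 * sharpe_norm
            + 0.25 * (1 - max_drawdown)
            + 0.20 * win_rate
            + 0.20 * profit_factor_norm)

# A perfect run (best value for every component) scores 1.0,
# up to floating-point rounding.
```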

Baseline targets:

  • Sharpe: 1.57 / 1.46 / 1.24
  • Starting weights: 40% Insider / 35% Earnings / 25% Sector Rotation

  • Good: Beat baseline
  • Great: Sharpe > 2.0, drawdown < 15%
  • Exceptional: Sharpe > 2.5, drawdown < 10%

Data

  • Price data: 5 years daily OHLCV for S&P 500 + META, GOOG, AMZN, TSLA, BTC-USD, IAU
  • Factor data: Currently mock (insider, earnings, sector). Can be enhanced with real API data.
  • Cache: /Users/clawdiri/Projects/autosignals/data/prices.parquet

Refresh data:

cd /Users/clawdiri/Projects/autosignals
source .venv/bin/activate
python prepare.py

Design Principles (from Karpathy)

  1. Single modifiable file — agents only edit signals.py
  2. Fixed evaluation — backtest.py is immutable truth
  3. Self-contained — no external API calls during backtest (cached data only)
  4. Git-tracked progress — every improvement is a commit
  5. Resilient loop — individual failures don't stop the system
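Principle 5 is worth spelling out. run.py's actual control flow is not included in this skill; a minimal sketch of a loop where a failed experiment is logged and skipped rather than fatal (both callables here are hypothetical caller-supplied hooks):

```python
import time
import traceback

def resilient_loop(run_experiment, log, max_iters=None, pause=0.0):
    """Keep running experiments; log individual failures and continue."""
    done = 0
    while max_iters is None or done < max_iters:
        try:
            run_experiment()
        except Exception:
            log(traceback.format_exc())  # record the failure, don't crash
        done += 1
        time.sleep(pause)  # the real loop would pace itself here
```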

Alert Conditions (for DaVinci)

  • Loop stopped unexpectedly → WhatsApp alert
  • No experiments in last 30 minutes (if running) → check logs
  • Error rate > 50% (last 10 experiments) → investigate
  • New best score achieved → celebrate 🎉
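These conditions translate directly into a heartbeat check. A hypothetical sketch: the status dict follows the monitor.sh fields documented above, while prev_best, minutes_since_last_experiment, and recent_outcomes are values the caller would track between heartbeats:

```python
def alerts(status, prev_best=None, minutes_since_last_experiment=0,
           recent_outcomes=()):
    """Map a monitor-style status dict to the alert conditions above."""
    out = []
    if not status["running"]:
        out.append("loop stopped unexpectedly (send WhatsApp alert)")
    elif minutes_since_last_experiment > 30:
        out.append("no experiments in last 30 minutes (check logs)")
    if recent_outcomes:
        error_rate = recent_outcomes.count("error") / len(recent_outcomes)
        if error_rate > 0.5:
            out.append("error rate > 50% over recent experiments (investigate)")
    if prev_best is not None and status["best_score"] > prev_best:
        out.append("new best score (celebrate)")
    return out
```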

When to Intervene

Hands-off:

  • Normal operation (experiments running, mix of keep/discard)
  • Gradual improvement trend
  • Low error rate

Check it out:

  • All experiments failing (agent spawn issues? data corruption?)
  • Score trend declining over 20+ experiments (overfitting? bad hypothesis?)
  • Loop stopped (crash? resource exhaustion?)

Celebrate:

  • New all-time best score
  • Sharpe > 2.0 achieved
  • Major breakthrough (e.g., 10%+ score improvement)

Future Enhancements

  • Real factor data integration (Finnhub insider API, FMP earnings, sector ETF momentum)
  • Multi-ticker portfolio optimization (vs current single-ticker signals)
  • Walk-forward validation (rolling window backtest to prevent overfitting)
  • Ensemble signals (combine multiple top-performing signal variants)
  • Risk-adjusted position sizing (Kelly criterion, volatility targeting)
  • Live paper trading integration (Alpaca API)
