Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Einstein Research — Edge Candidate Generator

v0.1.0

Generate and prioritize US equity long-side edge research tickets from EOD observations, then export pipeline-ready candidate specs for trade-strategy-pipeline.

by RunByDaVinci (@clawdiri-ai)
Security Scan

  • VirusTotal: Suspicious
  • OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The repository's code and README implement auto-detection, ticket structuring, validation, and export to a pipeline spec, which aligns with the skill description. However, SKILL.md examples use an 'edge-generator' CLI (create/prioritize/export) that does not exist in the provided files; the actual scripts are named auto_detect_candidates.py, export_candidate.py, validate_candidate.py, etc. That naming mismatch is an inconsistency the user should notice — it could be harmless (documentation drift) or indicate the skill expects a wrapper/alias not bundled here.
Instruction Scope
SKILL.md instructs running an 'edge-generator' CLI which is not present in the code bundle; an agent following the SKILL.md could attempt to run commands that don't exist. The actual scripts do perform file IO (read/write tickets and strategy dirs) and call subprocess.run to invoke external commands (LLM CLI via --llm-ideas-cmd and the pipeline validator via 'uv run' or similar). Those subprocess calls will execute arbitrary user-provided commands and parse their stdout — this is expected for integration but expands the runtime scope and risk if untrusted command strings are provided.
Install Mechanism
There is no install spec (instruction-only from registry perspective) which reduces supply-chain risk. The README suggests Python dependencies (PyYAML, pandas). Because no packaging/install steps are provided, a user or agent must install Python deps and invoke scripts directly; this is reasonable but means the agent will rely on local environment state that may vary.
Credentials
The skill declares no required environment variables, credentials, or config paths. That is proportional to the stated purpose. Caveat: several scripts accept/execute external commands (LLM CLI and 'uv' for pipeline validation) and will run whatever command the user supplies; this does not require secrets by default, but it does create an execution surface for arbitrary commands if misconfigured.
Persistence & Privilege
The skill is not always-included and does not request privileged persistence. It writes candidate artifacts to user-specified directories (strategies/, tickets/), which is normal for this functionality and limited in scope. It does not attempt to modify other skills or system-wide agent configuration.
What to consider before installing

  • Confirm the CLI you expect: SKILL.md shows an 'edge-generator' CLI (create/prioritize/export), but the package provides Python scripts (auto_detect_candidates.py, export_candidate.py, validate_candidate.py). Either the documentation is out of date or a wrapper is missing. Do not run commands shown in SKILL.md unless you verify an appropriate executable is present.
  • Inspect subprocess usage: auto_detect_candidates.py and validate_candidate.py call subprocess.run to execute external programs (LLM CLI commands and the pipeline 'uv' validator). Only supply trusted command strings and run these scripts in an isolated environment, because subprocess.run will execute whatever command is passed.
  • Run tests and lint locally in a sandbox: the bundle includes unit tests; run them in a disposable environment (virtualenv/container) to validate behavior before using on real data.
  • Review I/O paths: the scripts read and write tickets, strategies, and metadata under folders like tickets/, strategies/, and any pipeline root you provide. Ensure the output paths are safe and not shared with sensitive repos or production directories.
  • Install dependencies in a controlled environment: install the Python deps (PyYAML, pandas) in a virtualenv or container rather than system-wide.
  • If you want to use this skill with an agent, either create or verify a wrapper 'edge-generator' that maps to these scripts, or update SKILL.md to call the actual scripts, and ensure the agent will not be given arbitrary command strings to run.
  • If you cannot audit the code yourself, avoid giving this skill autonomous invocation in sensitive contexts, and do not provide untrusted commands for the LLM CLI or pipeline validator.
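The subprocess call sites flagged above can be located before trusting the bundle. A minimal audit sketch, assuming the bundle is a directory of plain `.py` files (this is a reviewer-side helper, not part of the skill):

```python
import re
from pathlib import Path

def find_subprocess_calls(root: str) -> list[tuple[str, int, str]]:
    """Scan .py files under root and report subprocess call sites.

    Returns (file path, line number, stripped source line) tuples so a
    reviewer can inspect exactly what each script executes.
    """
    pattern = re.compile(r"subprocess\.(run|Popen|call|check_output)")
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if pattern.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Running this over the unpacked skill directory gives a short list of places to review by hand, e.g. the `--llm-ideas-cmd` invocation and the 'uv run' validator call.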

Like a lobster shell, security has layers — review code before you run it.

Latest: vk974pfev7bd7mvbag5rqryjxrd83cfmz
81 downloads · 0 stars · 1 version
Updated 3w ago
v0.1.0 · MIT-0

Edge Research Ticket Generator

This skill formalizes the process of turning a trading hypothesis or anomaly into a structured, reproducible research ticket. It's the first step in the quantitative research pipeline, ensuring that ideas are well-defined and testable before any backtesting code is written.

When to Use This Skill

  • User has a trading idea or hypothesis (e.g., "I think stocks that do X tend to go up").
  • User observes a market anomaly and wants to investigate it systematically.
  • User wants to create a new candidate for the trade-strategy-pipeline.
  • Triggers: "research ticket," "new strategy idea," "test this hypothesis," "is this an edge?".

Workflow: From Idea to Pipeline-Ready Spec

Step 1: Idea Ingestion

The skill prompts the user for the core components of their idea:

  • Hypothesis: A clear, one-sentence statement of the proposed edge.
  • Entry Signal: The specific conditions that trigger a buy.
  • Exit Signal: The conditions that trigger a sell (e.g., target profit, stop-loss, time-based).
  • Universe: The group of stocks to test this on (e.g., S&P 500, Nasdaq 100).
  • Rationale: Why should this edge exist? (Behavioral, structural, etc.).
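The five ingested fields map naturally onto a small record type. A hypothetical sketch of that shape (field names chosen here for illustration; the bundled scripts may model this differently):

```python
from dataclasses import dataclass

@dataclass
class EdgeIdea:
    hypothesis: str    # one-sentence statement of the proposed edge
    entry_signal: str  # conditions that trigger a buy
    exit_signal: str   # conditions that trigger a sell (target, stop, or time-based)
    universe: str      # e.g. "sp500", "nasdaq100"
    rationale: str     # why the edge should exist (behavioral, structural, ...)

    def is_complete(self) -> bool:
        # every field must be non-empty before a ticket can be generated
        return all([self.hypothesis, self.entry_signal, self.exit_signal,
                    self.universe, self.rationale])
```

A prompt loop would keep asking for missing fields until `is_complete()` returns True, which enforces the "well-defined before backtesting" rule stated above.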

Step 2: Ticket Generation

The edge-generator CLI tool takes these inputs and creates a structured research ticket in Markdown format.

edge-generator create \
  --hypothesis "Stocks hitting a 52-week high with high volume have momentum." \
  --entry "Price > 52-week high AND Volume > 2x 50-day avg volume" \
  --exit "5-day hold OR 10% profit target OR 5% stop-loss" \
  --universe "sp500" \
  --rationale "Breakout momentum, high volume confirms institutional interest."

This generates a file like tickets/ER-2026-015_52_week_high_momentum.md.

Ticket Structure:

  • ID: ER-YYYY-NNN
  • Title: Short description of the idea.
  • Hypothesis: As provided.
  • Entry/Exit/Universe/Rationale: As provided.
  • Data Requirements: Lists the data needed (e.g., daily OHLCV, 52-week high, 50-day avg volume).
  • Priority Score: An initial score (0-100) based on uniqueness, rationale strength, and testability.
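One way the ER-YYYY-NNN identifier could be allocated is by scanning existing filenames in tickets/ for the current year's highest sequence number. A sketch under that assumption (the bundled scripts may allocate IDs differently):

```python
import re
from datetime import date
from pathlib import Path
from typing import Optional

def next_ticket_id(tickets_dir: str, today: Optional[date] = None) -> str:
    """Return the next ER-YYYY-NNN id based on existing ticket filenames."""
    year = (today or date.today()).year
    pattern = re.compile(rf"ER-{year}-(\d{{3}})")
    # collect the NNN sequence numbers already used this year
    numbers = [
        int(m.group(1))
        for p in Path(tickets_dir).glob("ER-*.md")
        if (m := pattern.search(p.name))
    ]
    return f"ER-{year}-{max(numbers, default=0) + 1:03d}"
```

With `tickets/ER-2026-014_gap_fade.md` already present, the next call would yield `ER-2026-015`, matching the example filename above.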

Step 3: Prioritization

The skill can rank all open tickets in the tickets/ directory to help decide what to research next.

edge-generator prioritize

This updates the priority scores based on factors like:

  • Novelty: How similar is this to previously tested (and failed) ideas?
  • Data Availability: Can this be tested with our current data sources?
  • Computational Cost: Is the backtest likely to be fast or slow?
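A weighted combination of those three factors could produce the 0-100 score. A minimal sketch with illustrative weights (the actual scoring in the bundle is not documented here):

```python
def priority_score(novelty: float, data_availability: float,
                   compute_cost: float) -> int:
    """Combine factor ratings (each in [0, 1]) into a 0-100 priority score.

    novelty and data_availability count for the score; compute_cost
    counts against it. The 0.45/0.35/0.20 weights are illustrative only.
    """
    raw = (0.45 * novelty
           + 0.35 * data_availability
           + 0.20 * (1.0 - compute_cost))
    # clamp to [0, 1] before scaling, in case callers pass out-of-range inputs
    return round(100 * max(0.0, min(1.0, raw)))
```

A novel, cheap-to-test idea with available data scores near 100; a derivative, expensive idea scores near 0.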

Step 4: Export to Pipeline Spec

Once a ticket is prioritized and approved for research, this skill exports it to the format required by the trade-strategy-pipeline.

edge-generator export ER-2026-015

This creates a directory pipeline-candidates/ER-2026-015/ containing:

  • strategy.yaml: The machine-readable definition of the strategy.
    version: edge-finder-candidate/v1
    name: 52-Week High Momentum
    hypothesis: Stocks hitting a 52-week high with high volume have momentum.
    entry:
      - "price > high_52w"
      - "volume > 2 * avg_volume_50d"
    exit:
      - "hold_days == 5"
      - "pct_change >= 0.10"
      - "pct_change <= -0.05"
    universe: "sp500"
    
  • metadata.json: Additional context for the pipeline runner.
    {
      "ticketId": "ER-2026-015",
      "rationale": "Breakout momentum, high volume confirms institutional interest.",
      "priority": 85,
      "dataRequirements": ["daily_ohlcv", "high_52w", "avg_volume_50d"]
    }
    
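The export step amounts to serializing those two files into a per-ticket directory. A minimal sketch using PyYAML (a README-listed dependency) and the standard json module; the real export_candidate.py may structure this differently:

```python
import json
from pathlib import Path

import yaml  # PyYAML, listed as a dependency in the README

def export_candidate(ticket_id: str, spec: dict, meta: dict,
                     out_root: str = "pipeline-candidates") -> Path:
    """Write strategy.yaml and metadata.json under out_root/<ticket_id>/."""
    out_dir = Path(out_root) / ticket_id
    out_dir.mkdir(parents=True, exist_ok=True)
    # sort_keys=False preserves the field order shown in the spec example
    (out_dir / "strategy.yaml").write_text(yaml.safe_dump(spec, sort_keys=False))
    (out_dir / "metadata.json").write_text(json.dumps(meta, indent=2))
    return out_dir
```

For the example above, `export_candidate("ER-2026-015", spec, meta)` would produce the `pipeline-candidates/ER-2026-015/` layout the backtest engine expects.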

Step 5: Handoff to Backtest Engine

The generated directory is now ready to be processed by the einstein-research-backtest-engine skill, which will execute the backtest based on the strategy.yaml spec.

Why This Is Important

  • Reproducibility: Every research effort starts with a formal, version-controlled definition.
  • Efficiency: Prevents wasted time on ill-defined ideas.
  • Systematic Process: Ensures a consistent and rigorous approach to alpha research.
  • Automation: The strategy.yaml format allows the backtesting process to be fully automated.

This skill is the gateway to the entire quantitative research pipeline, turning qualitative ideas into testable, machine-readable artifacts.
