Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

X2strategy

v0.1.1

ALAGENT X2Strategy: any research input (PDF paper, Markdown draft, DOCX report, text notes, or keyword search) → structured strategy specification → executable code → backtest → diagnosis.

by ALAGENT-HKU (@patrick-lew)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for patrick-lew/x2strategy.

Prompt preview (Install & Setup):
Install the skill "X2strategy" (patrick-lew/x2strategy) from ClawHub.
Skill page: https://clawhub.ai/patrick-lew/x2strategy
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install x2strategy

ClawHub CLI


npx clawhub@latest install x2strategy
Security Scan
Capability signals
Crypto · Can make purchases · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The repository contains a full paper2spec + spec2code pipeline (parsers, extractor, codegen, validator, backtester), which is coherent with the skill description. However, the registry metadata claims no required environment variables or binaries, while SKILL.md and the README explicitly require an LLM API key (DeepSeek/OpenRouter/OpenAI), a Python environment, and optional heavy extras (FAISS, sentence-transformers, backtrader, yfinance, akshare). This mismatch between the declared requirements and the runtime instructions is an inconsistency to be aware of.
Instruction Scope
Runtime instructions ask the agent to: 1) prompt the user for an LLM API key if none is found, 2) persist configuration and the key to a local .env (gitignored), 3) run parsing/extraction/codegen/validation/backtests (including subprocess execution of generated strategy code), and 4) auto-activate for finance papers. The pipeline will execute generated Python/backtest code as subprocesses and will scan directories for metadata. Executing generated code and storing API keys are higher-risk behaviors and should be done in an isolated environment.
Install Mechanism
No formal install spec in the registry, but README/SKILL.md instructs users to run 'uv sync' or 'pip install -e ".[codegen,agent,dev]"'.
Credentials
The skill needs an LLM API key to function (expected for LLM-driven extraction). However, the registry declares zero required env vars, while SKILL.md instructs the agent to check for and write DEEPSEEK_API_KEY, OPENROUTER_API_KEY, or OPENAI_API_KEY into a local .env. Persisting API keys to disk (even if the .env is gitignored) increases the risk of secrets at rest. The requested env access is otherwise proportional (no unrelated cloud credentials are requested), but the omission from the metadata is a red flag.
Persistence & Privilege
The skill is not marked 'always:true' and uses normal autonomous invocation. It does persist configuration (workspace path, selected model/provider, and API key) to a .env file in the skill workspace and scans the chosen library directory for metadata.json. Persisting settings and keys is expected for tooling but increases the attack surface (secrets on disk). Also, SKILL.md recommends that the agent auto-activate on any finance-paper input, a broad trigger that could cause the skill to run without an explicit 'implement' request.
What to consider before installing
  • Source verification: confirm the repository origin and maintainers (the skill metadata shows 'source: unknown' and the README references multiple GitHub org/user names). Only install from a trusted, verifiable repo.
  • Secrets handling: the skill will ask for an LLM API key and write it into a local .env (gitignored). Prefer a limited-scope or ephemeral API key, and store it in a secure secret store rather than a plaintext file where possible.
  • Sandbox execution: the skill generates Python strategy code and runs backtests as subprocesses. Run it in an isolated environment (container / VM / dedicated venv) with no access to production data or credentials.
  • Subprocess inspection: review scripts/ (analyze.py, validate_strategy.py, backtest execution paths) to ensure no unexpected network or shell commands run beyond the intended backtests and downloads (data pulls like yfinance/akshare/benchmarks are expected for this domain).
  • Dependency footprint: the agent may install heavy packages (FAISS, sentence-transformers). Be prepared for large downloads and resource usage; only enable the 'agent' extras if you need long-paper FAISS retrieval.
  • Metadata mismatch: the registry lists no required env vars but SKILL.md requires LLM keys; treat the registry fields as incomplete. Ask the maintainer (or verify the repo) for an authoritative requirements list and changelog.

If you want higher confidence: obtain the source repo URL and a commit/tag to verify, run the skill in an isolated environment first, and/or ask the maintainer to update the registry metadata to list required env vars and install steps. If you cannot sandbox, avoid supplying high-privilege or long-lived keys.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97b5mqscwvd08ccaa9p04d2nn85kkd9
51 downloads · 1 star · 2 versions · Updated 2d ago
v0.1.1
MIT-0

X2Strategy

Any research input → Strategy spec → Executable code → Backtest → Diagnosis.

Capabilities

Capability | What it does | Deep dive
paper2spec | Any document (PDF/MD/DOCX/TXT) → structured strategy specification | references/paper2spec.md
spec2code | Strategy spec → Backtrader code → validate → backtest → diagnosis | references/spec2code.md

Input format auto-detected from extension:

Format | Extension | Notes
PDF (papers) | .pdf | PyMuPDF → Mode A (direct) or Mode B (FAISS)
Markdown (drafts) | .md, .markdown | Direct text read
DOCX (reports) | .docx | python-docx (requires uv sync --extra docx)
Plain text | .txt | Direct read
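
Extension dispatch of this kind is simple to express; here is a minimal sketch (the names PARSERS and detect_format are illustrative, not the skill's actual internals, which live in scripts/parse.py):

from pathlib import Path

# Illustrative mapping only; mirrors the format table above.
PARSERS = {
    ".pdf": "pymupdf",        # Mode A (direct) or Mode B (FAISS)
    ".md": "text",
    ".markdown": "text",
    ".docx": "python-docx",   # requires the docx extra
    ".txt": "text",
}

def detect_format(path: str) -> str:
    """Pick a parser from the file extension."""
    ext = Path(path).suffix.lower()
    if ext not in PARSERS:
        raise ValueError(f"Unsupported input format: {ext}")
    return PARSERS[ext]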

Interaction Principles

You are the executor. The user is the requester.

  • Run tools silently, present results and insights in natural language.
  • Never show CLI commands (uv run python scripts/...) unless user asks.
  • Offer next actions conversationally: "Would you like me to implement the second strategy as well?"

When reporting results, focus on what you found, not how:

❌ Bad:  "I ran `uv run python scripts/analyze.py paper.pdf` and got 3 strategies."
✅ Good: "This paper contains 3 independent strategies: [1] minimum distance method, [2] ADF stationarity, and [3] Johansen cointegration. Which one should I implement?"

Use interactive tools aggressively. When your platform provides interactive question tools — vscode_askQuestions (VS Code Copilot), AskUserQuestion (Claude Code), or equivalent — use them for ALL user-facing choices. Interactive tools present clickable options, which is faster and less error-prone than asking the user to type.

Apply interactive tools to:

  • First-Run Setup choices (workspace path, API provider, key input)
  • Gate 1 confirmation (proceed / adjust settings)
  • Gate 2 action menu (implement / deep dive / compare / adjust / export / re-extract)
  • Search result selection (pick papers from a numbered list)
  • Any scenario where the user picks from options

If no interactive tool is available, fall back to numbered text menus.
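As a rough illustration of that fallback (a hypothetical helper, not part of the skill), a numbered text menu can be as simple as:

def ask_numbered(question: str, options: list[str]) -> str:
    """Minimal text-menu fallback when no interactive question tool exists."""
    print(question)
    for i, option in enumerate(options, 1):
        print(f"  {i}. {option}")
    choice = int(input("Pick a number: "))
    return options[choice - 1]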


First-Run Setup

On first use, walk through three steps. Skip any already-configured step. Persist all choices to .env (gitignored) for session stability.

Step 1 — Workspace Location

Present choice via interactive tool:

  • ./library/ (default, recommended)
  • Custom path

Write PAPER2SPEC_LIBRARY_PATH=/absolute/path to .env. Scan the directory for existing metadata.json to detect prior analyses.
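
A minimal sketch of that scan, assuming each prior analysis sits in its own subfolder under the library root (find_prior_analyses is an illustrative name, not the skill's API):

from pathlib import Path

def find_prior_analyses(library_path: str) -> list[Path]:
    """Recursively collect metadata.json files left by earlier analyses."""
    return sorted(Path(library_path).rglob("metadata.json"))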

Step 2 — LLM API Key

Check env for DEEPSEEK_API_KEY, OPENROUTER_API_KEY, OPENAI_API_KEY. If none found, present via interactive tool:

An LLM API key is required for strategy extraction and code generation. Recommended options:
  1. DeepSeek (best cost-performance, about ¥0.7 per paper) → https://platform.deepseek.com
  2. OpenRouter (one key for access to multiple models) → https://openrouter.ai/keys
Please provide your API key and tell me which provider it belongs to.

Do NOT check for or suggest ANTHROPIC_API_KEY.

Once received, write key + matching model to .env, then verify: uv run python -c "from paper2spec.llm import chat; print(chat('Say OK'))".

See references/skill-internals.md for .env format examples per provider.
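
A sketch of the detect-and-persist flow described above (the helper names and the model id are illustrative; the authoritative .env format is in references/skill-internals.md):

import os
from pathlib import Path

# Checked in this order; ANTHROPIC_API_KEY is deliberately excluded.
PROVIDER_KEYS = ("DEEPSEEK_API_KEY", "OPENROUTER_API_KEY", "OPENAI_API_KEY")

def detect_provider_key() -> tuple[str, str] | None:
    """Return the first configured (name, value) pair, or None."""
    for name in PROVIDER_KEYS:
        value = os.environ.get(name)
        if value:
            return name, value
    return None

def persist_key(env_path: Path, name: str, value: str, model: str) -> None:
    """Append the key and matching model to .env (assumed gitignored)."""
    with env_path.open("a", encoding="utf-8") as f:
        f.write(f"{name}={value}\nPAPER2SPEC_MODEL={model}\n")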

Step 3 — Python Environment

cd <skill-path>
uv sync --all-extras    # Recommended: installs everything

If uv is unavailable: pip install -e ".[codegen,agent,dev]". Otherwise, always use uv run to execute scripts (it auto-activates the correct venv).

See references/skill-internals.md for selective install options and non-uv alternatives.

Completion

Once configured, confirm naturally with examples:

✅ Setup complete. You can now ask me for tasks directly, for example:

  • "Analyze this paper" + attach a PDF file
  • "Search for papers about momentum trading"
  • "Implement this strategy based on this paper" + provide the file path
  • "I wrote a strategy draft in Markdown; extract the spec and generate code"
  • "Compare the strategy differences between these two papers"

Just tell me what you want to do, and I will handle the rest.

Routing

User Intent | Route | Action
"Analyze this paper/doc" | paper2spec | Parse + extract specs
"Search for papers about X" | paper2spec | Search → Gate 1
"Here's my strategy draft" (MD/DOCX/TXT) | paper2spec | Auto-detect format, extract
"Generate code / Implement this" | spec2code | Spec → code → validate → backtest
"Run a backtest" | spec2code | Execute strategy.py
"End to end from paper" | both | paper2spec → Gate 2 → spec2code
"Compare results with paper" | spec2code | Read backtest output + spec, compare metrics

Interaction Gates

Two mandatory human-in-the-loop (HITL) gates. Skip them only when the user says "fully automatic" / "end to end without stopping".

Always present gate choices through interactive tools when available.

Gate 1 — Input Confirmation

When: After receiving/finding input, BEFORE extraction.

Three scenarios — present via interactive tool (or numbered text menu):

Scenario A — User provided a file:

📄 Received: "Tactical Asset Allocation" (Faber, 2007)
   Format: PDF, 18 pages
   Abstract: [first 2 sentences]

I'll extract trading strategies. ~30-60s, ~$0.01.
→ Proceed with extraction?
→ Or adjust settings first? (parser mode, model, output location)

Scenario B — Search results returned:

🔍 Found 8 papers for "momentum trading strategy":
  1. ⭐ "Time Series Momentum" (Moskowitz et al., 2012) — 847 citations
  2. "Momentum Crashes" (Daniel & Moskowitz, 2016) — 523 citations
  ...
Which paper to analyze? (pick number, "1, 3" for multiple, or refine search)

Do NOT auto-analyze. Always let user pick.

Scenario C — Raw text / strategy idea:

📝 I see you've described: "[brief summary]"
   I'll structure this into a formal spec. → Proceed? → Add more details first?

Keep it light for straightforward inputs — single confirm with default-proceed.

Gate 2 — Spec Review & Action Menu

When: After extraction completes, BEFORE code generation.

Show extraction summary, then present action menu via interactive tool:

✅ Strategy Extraction Complete
📋 Paper: "Pairs Trading: Does Volatility Timing Matter?"
   Detected: 3 independent strategies

   [1] Minimum Distance Method
       • 4 indicators (spread, SMA, Z-score, distance)
       • Entry: spread Z-score > 2σ, Exit: mean reversion

   [2] Stationarity-Based (ADF Test)
       • 3 indicators, Entry: cointegrated pair + spread deviation

   [3] Cointegration (Johansen)
       • 5 indicators, Entry: Johansen test + Z-score threshold

Then 6 actions:

  1. 🚀 Implement — Generate executable code (pick strategy # or "all")
  2. 🔍 Deep dive — Explain a strategy's logic in detail
  3. 📊 Compare — Side-by-side of detected strategies
  4. ✏️ Adjust — Modify spec parameters/constraints
  5. 💾 Export only — Save specs, stop here
  6. 🔄 Re-extract — Different model or parser mode

Key behaviors:

  • "Implement" → confirm which strategy index before generating code.
  • "Deep dive" → explain, then return to the same menu.
  • After code gen + backtest → present results, offer next decision.
  • Never silently chain extraction → code generation.

Gate Bypass

If user says "end to end" / "fully automatic" / "don't stop", collapse gates into inline status:

📄 Parsing paper... ✓ (3 strategies detected)
💻 Generating code for strategy 1... ✓
📊 Running backtest... ✓
📈 Results ready — see below.

Still stop on unexpected issues (0 strategies, errors, validation failures).


Agent Pipeline Flow

1. Receive input (file / search query / text)
2. ── Gate 1: Input Confirmation ──
3. [paper2spec] Parse document, extract strategy specs
4. ── Gate 2: Spec Review & Action Menu ──
5. User selects strategy + action
6. [spec2code] For each selected strategy:
   a. Read spec.json + reference docs
   b. Generate self-contained Backtrader strategy.py
   c. Validate (AST + structural checks)
   d. Run backtest, compare metrics vs paper
7. Present results + diagnosis
8. Offer next actions

For code generation patterns: references/spec2code.md. For Backtrader patterns: references/backtrader_patterns.md.
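
Steps 6c-6d map directly onto the toolchain commands below; a minimal orchestration sketch under that assumption (run_strategy_pipeline is an illustrative name, not the skill's API):

import subprocess

def run_strategy_pipeline(slug: str, index: int = 1) -> None:
    """Validate, then backtest, one generated strategy (steps 6c-6d)."""
    strategy = f"library/{slug}/strategy_{index}.py"
    # 6c. Validate (AST + structural checks)
    subprocess.run(
        ["uv", "run", "python", "scripts/validate_strategy.py", strategy],
        check=True,
    )
    # 6d. Run the backtest. Generated code executes as a subprocess,
    # so prefer an isolated environment (see the security notes above).
    subprocess.run(["uv", "run", "python", strategy], check=True)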


Internal Toolchain

Agent-only. Run silently; present results in natural language.

# End-to-end: any document → spec
uv run python scripts/analyze.py <file> -o library/<slug>/

# Validate generated code
uv run python scripts/validate_strategy.py library/<slug>/strategy_1.py

# Run backtest
uv run python library/<slug>/strategy_1.py

# Search papers
uv run python scripts/search.py "<query>" -n 5

# Step-by-step
uv run python scripts/parse.py <file> -o content.json
uv run python scripts/extract.py content.json -o spec.json

For full flags, output formats, and library management: references/skill-internals.md


Configuration

Variable | Default | Purpose
PAPER2SPEC_LIBRARY_PATH | ./library | Output root
PAPER2SPEC_MODEL | openai/gpt-4o-mini | Default LLM (litellm-supported)
DEEPSEEK_API_KEY | (none) | DeepSeek (recommended)
OPENROUTER_API_KEY | (none) | OpenRouter (multi-model)
OPENAI_API_KEY | (none) | OpenAI direct
PAPER2SPEC_ARXIV_MIN_INTERVAL | 3.0 | Seconds between arXiv requests
PAPER2SPEC_SEARCH_MAX_RETRIES | 3 | Retry on HTTP 429/5xx

Any litellm-supported model works. The --model flag on any script overrides PAPER2SPEC_MODEL. Full config + .env examples: references/skill-internals.md
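
A sketch of how a script might read these variables with the defaults from the table (illustrative; the actual loading lives inside the skill's package):

import os

LIBRARY_PATH = os.getenv("PAPER2SPEC_LIBRARY_PATH", "./library")
MODEL = os.getenv("PAPER2SPEC_MODEL", "openai/gpt-4o-mini")
ARXIV_MIN_INTERVAL = float(os.getenv("PAPER2SPEC_ARXIV_MIN_INTERVAL", "3.0"))
SEARCH_MAX_RETRIES = int(os.getenv("PAPER2SPEC_SEARCH_MAX_RETRIES", "3"))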


References

Read on demand for implementation details:

  • references/paper2spec.md (paper2spec deep dive)
  • references/spec2code.md (code generation patterns)
  • references/backtrader_patterns.md (Backtrader patterns)
  • references/skill-internals.md (flags, output formats, .env examples, library management)

Limitations

  • Mode A truncates at 100K chars (first 90K + last 10K; see the sketch after this list). Use Mode B for papers longer than ~100 pages.
  • Tables/formulas: not yet extracted from PDFs.
  • Multi-strategy: conservative — may merge borderline-distinct strategies.
  • DOCX: paragraph text only (tables, images not preserved — use PDF for rich docs).
  • SSRN search: best-effort HTML scraping, may break on layout changes.
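
The Mode A truncation rule is easy to state precisely; a sketch, assuming the 90K/10K split applies by character count:

def truncate_mode_a(text: str, head: int = 90_000, tail: int = 10_000) -> str:
    """Keep the first 90K and last 10K characters of an over-long document."""
    if len(text) <= head + tail:
        return text
    return text[:head] + text[-tail:]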
