Research Strategy

Pass. Audited by VirusTotal on May 12, 2026.

Overview

Type: OpenClaw Skill
Name: research-strategy
Version: 1.0.0

The skill bundle is classified as suspicious because SKILL.md contains explicit prompt-injection instructions that command the AI agent to operate autonomously without user confirmation ('无需等待确认' "no need to wait for confirmation", '直接执行' "execute directly", '循环' "loop", '自动继续下一个策略' "automatically continue to the next strategy"). This bypasses critical human oversight. Additionally, the `research_workflow.py` script directly modifies Python strategy files via string replacement in its `reverse_logic` function, a high-risk operation that could introduce unintended code changes or breakage, even though its stated purpose is to adjust trading logic.

Findings (5)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Concern: High Confidence
ASI10: Rogue Agents
What this means

It may keep searching, coding, backtesting, and changing files after the user expected a single bounded task.

Why it was flagged

The skill explicitly tells the agent to spawn a sub-agent that runs autonomously, does not wait for confirmation, and loops through strategy research without a clear stop condition.

Skill content
sessions_spawn(... label="Research Agent") ... "不需要等确认,直接执行" ("no need to wait for confirmation, execute directly") ... "自动继续下一个策略" ("automatically continue to the next strategy") ... "循环直到所有策略研究完" ("loop until all strategies have been researched") ... "自主执行,无需等待确认。" ("execute autonomously, no need to wait for confirmation.")
Recommendation

Require explicit user opt-in before spawning sub-agents or background workers, add a finite iteration/time limit, and provide a clear stop command.
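As a minimal sketch of that recommendation, the loop below gates every iteration on user confirmation, enforces a hard iteration cap, and honors a stop command. The function and constant names (`run_strategy_research`, `MAX_ITERATIONS`) are illustrative and not taken from the skill itself.

```python
MAX_ITERATIONS = 5  # hard cap instead of "loop until all strategies are done"

def run_strategy_research(strategies, confirm=input):
    """Research each strategy, asking the user before every iteration."""
    results = []
    for i, strategy in enumerate(strategies):
        if i >= MAX_ITERATIONS:
            break  # finite limit: never loop unboundedly
        answer = confirm(f"Research strategy {strategy!r}? [y/N/stop] ").strip().lower()
        if answer == "stop":
            break  # explicit, always-available stop command
        if answer != "y":
            continue  # default is to skip, not to execute
        results.append(strategy)  # placeholder for the actual research step
    return results
```

The key inversion versus the skill's behavior: inaction defaults to doing nothing, and autonomy is bounded by both the cap and the confirmation prompt.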

What this means

A flawed generated strategy could be promoted into the user's trading project and later be used as if it were validated.

Why it was flagged

The script can automatically move a generated test strategy into the formal strategy directory based only on backtest metrics, with no manual approval or review gate.

Skill content
if decision == 'MOVE_TO_FORMAL':
    if move_to_formal(result['strategy']):
        log(f"✅ 已移到正式文件夹")  # "moved to the formal folder"
...
os.rename(src, dst)
Recommendation

Require manual review before promoting strategies, create backups, validate paths, and keep generated strategies isolated until explicitly approved.
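One way to implement that gate, sketched under assumed directory names (`strategies/formal`, `strategies/backup`) that are not from the skill: promotion fails without an explicit approval flag, the source path is validated, and a backup copy is taken before the move.

```python
import os
import shutil

FORMAL_DIR = os.path.join("strategies", "formal")  # illustrative paths
BACKUP_DIR = os.path.join("strategies", "backup")

def promote_strategy(src, approved=False):
    """Promote a generated strategy only after explicit approval,
    with a backup copy and basic path validation."""
    if not approved:
        raise PermissionError("manual review required before promotion")
    src = os.path.abspath(src)
    if not src.endswith(".py") or not os.path.isfile(src):
        raise ValueError(f"refusing to promote unexpected path: {src}")
    os.makedirs(BACKUP_DIR, exist_ok=True)
    os.makedirs(FORMAL_DIR, exist_ok=True)
    shutil.copy2(src, os.path.join(BACKUP_DIR, os.path.basename(src)))
    dst = os.path.join(FORMAL_DIR, os.path.basename(src))
    shutil.move(src, dst)  # instead of an unconditional os.rename
    return dst
```

Generated strategies stay isolated until a human sets `approved=True`, and the backup makes the promotion reversible.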

Concern: High Confidence
ASI05: Unexpected Code Execution
What this means

Unreviewed local code may run in the user's trading workspace and continue consuming resources or changing project state in the background.

Why it was flagged

The workflow directs background execution of local Python scripts, including backtests over newly created strategy code, while the same document says confirmation is not needed.

Skill content
python3 BackTest_Research-strategy.py &
...
python3 research_workflow.py &
Recommendation

Run generated strategy code only in a sandbox, avoid background execution by default, and ask for confirmation before each execution step.
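A minimal sketch of the safer execution pattern, assuming an illustrative helper name (`run_backtest`): the script runs in the foreground with a timeout and captured output, and only after the user confirms, rather than being detached with `&`.

```python
import subprocess
import sys

def run_backtest(script_path, confirm=input, timeout=60):
    """Run a backtest script in the foreground, time-limited,
    and only after the user explicitly confirms."""
    if confirm(f"Run {script_path}? [y/N] ").strip().lower() != "y":
        return None  # default is not to execute
    # Foreground, bounded run; output is captured instead of detaching.
    return subprocess.run(
        [sys.executable, script_path],
        capture_output=True,
        text=True,
        timeout=timeout,  # a hung backtest cannot run forever
    )
```

For stronger isolation this call would be wrapped in a container or restricted user account; the sketch only shows the confirmation and bounding, not a full sandbox.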

Concern: High Confidence
ASI08: Cascading Failures
What this means

Bad report data or a faulty generated strategy can propagate into code files and persistent project memory without containment.

Why it was flagged

A single backtest report can trigger multiple persistent changes: promoting a strategy, rewriting its logic, and recording the result into memory.

Skill content
if decision == 'MOVE_TO_FORMAL':
    ... move_to_formal(result['strategy'])
elif decision == 'REVERSE_LOGIC':
    ... reverse_logic(result['strategy'])
# 记录 ("record")
record_to_memory(result['strategy'], result, decision, reason)
Recommendation

Validate reports, stage changes for review, add rollback support, and separate experimental outputs from formal project state.
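To illustrate the staging idea: instead of one report triggering promotion, logic rewriting, and memory writes in sequence, the report is first validated and then written as a single reviewable decision artifact. The directory name, field names (`strategy`, `sharpe`, `max_drawdown`), and function name are assumptions for the sketch, not the skill's actual schema.

```python
import json
import os

PENDING_DIR = os.path.join("strategies", "pending")  # experimental outputs stay here

REQUIRED_KEYS = {"strategy", "sharpe", "max_drawdown"}  # hypothetical report schema

def stage_decision(report):
    """Validate a backtest report, then stage the decision for human review
    instead of immediately promoting, rewriting, and recording."""
    if not REQUIRED_KEYS <= report.keys():
        raise ValueError(f"malformed report, missing: {REQUIRED_KEYS - report.keys()}")
    os.makedirs(PENDING_DIR, exist_ok=True)
    path = os.path.join(PENDING_DIR, f"{report['strategy']}.decision.json")
    with open(path, "w") as f:
        json.dump(report, f, indent=2)  # one reviewable artifact per report
    return path
```

Because every proposed change lands in `strategies/pending` as data rather than applied state, a bad report is contained there and rollback is just deleting the staged file.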

What this means

Incorrect or crafted report content could become persistent context that future agent runs may trust.

Why it was flagged

The script appends report-derived strategy information directly into MEMORY.md, a persistent context file, without sanitizing or requiring review.

Skill content
entry = f"""

## {strategy_name}({timestamp})
...
"""
...
with open(MEMORY_FILE, 'a') as f:
    f.write(entry)
Recommendation

Treat MEMORY.md updates as reviewable data, sanitize report-derived text, and avoid storing executable instructions or untrusted content in persistent memory.
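A conservative sanitizer along those lines might use an allow-list, strip characters that can act as markup or instructions (backticks, `#` headings, angle brackets), collapse whitespace, and bound entry length before anything is appended to MEMORY.md. The function name and limits are illustrative.

```python
import re

def sanitize_memory_entry(text, max_len=500):
    """Reduce report-derived text to plain, bounded content before it is
    appended to persistent agent memory."""
    # Allow-list: word characters, whitespace, and basic punctuation only.
    text = re.sub(r"[^\w\s.,:%()+-]", "", text)
    text = re.sub(r"\s+", " ", text).strip()  # collapse newlines/whitespace
    return text[:max_len]                     # bound entry size
```

This does not make untrusted content safe to trust, only harder to weaponize; review of memory diffs is still the primary control.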