Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Auto Research Pipeline

v1.0.0

A native OpenClaw automated research pipeline. Starting from a single research topic, it produces a complete paper through 23 stages. Each phase is executed by an independent sub-agent (context isolation), and phases pass their outputs to one another through the filesystem. Trigger phrases: Research X, run research, literature survey, write a paper, research pipel...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Suspicious · View report →
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description align with included artifacts: prompt templates, domain definitions, and two helper scripts (literature_search.py and pipeline_state.py) are appropriate for an automated research pipeline.
Instruction Scope
Runtime instructions authorize spawning sub-agents, running the included Python scripts, performing web_search/web_fetch calls, and executing LLM-generated experiment code. These actions are consistent with the stated purpose, but the pipeline depends on external tools (memory_search, web_search, web_fetch, sessions_spawn) and on enforcing a no-network sandbox for experiment execution — the SKILL.md asserts these constraints but provides no mechanism to enforce them. It also repeatedly instructs pushing Feishu (飞书) notifications even though no Feishu config/credentials are declared.
Install Mechanism
No install spec; skill is instruction-plus-scripts only. No remote downloads or package installs are requested, which keeps disk/write footprint limited to the included files and produced artifacts under ~/.openclaw/workspace.
Credentials
The skill requests no environment variables or credentials, yet its instructions reference sending notifications to Feishu and optionally using the Semantic Scholar API with an API key. Those notification and API behaviours require tokens/config that are not declared. The pipeline also writes artifacts to the user's home directory (~/.openclaw), which is expected but worth noting. Overall, credentials and configuration needs are under-specified relative to the described runtime actions.
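If you do opt in to the Semantic Scholar key, one low-risk pattern is to read it from an environment variable at request time rather than storing it in the skill's files. A minimal sketch, assuming an `S2_API_KEY` variable name (not something the skill declares); `x-api-key` is Semantic Scholar's documented key header:

```python
import os

def s2_headers() -> dict:
    """Build request headers for the Semantic Scholar API.

    The API works without a key (rate-limited); if you supply one,
    pass it via an environment variable instead of hard-coding it.
    S2_API_KEY is an assumed variable name, not declared by the skill.
    """
    headers = {"User-Agent": "auto-research-audit/0.1"}
    key = os.environ.get("S2_API_KEY")  # set this only if you trust the endpoint
    if key:
        headers["x-api-key"] = key  # Semantic Scholar's documented key header
    return headers
```

This keeps the key out of the workspace directory the pipeline writes to, so artifacts can be shared without leaking credentials.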
Persistence & Privilege
always:false (normal). The skill writes state and artifact files under ~/.openclaw/workspace/auto-research — confined to its own workspace. It spawns sub-agents (normal for this platform) but does not request system-wide modifications or other skills' credentials. The main concern is the ability to execute arbitrary LLM-generated code during experiment stages, which increases blast radius if sandboxing or network restrictions are not enforced by the runtime environment.
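To bound that blast radius yourself, generated experiment code can at least be run in a child process with a stripped environment, a throwaway working directory, and a hard timeout. A minimal sketch of defence in depth, not a substitute for OS-level sandboxing (containers, seccomp, firewall rules):

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(script_text: str, timeout_s: int = 30) -> subprocess.CompletedProcess:
    """Run generated code in a child process with a minimal environment,
    a throwaway working directory, and a hard timeout. This limits, but
    does not prevent, filesystem and network access; real isolation
    needs OS-level sandboxing."""
    with tempfile.TemporaryDirectory() as workdir:
        path = os.path.join(workdir, "experiment.py")
        with open(path, "w") as f:
            f.write(script_text)
        return subprocess.run(
            [sys.executable, "-I", path],   # -I: isolated mode, ignores user site dirs
            cwd=workdir,                    # confine relative writes to a throwaway dir
            env={"PATH": "/usr/bin:/bin"},  # drop inherited tokens and secrets
            capture_output=True,
            text=True,
            timeout=timeout_s,              # kill runaway experiments
        )
```

Note that this does not block network access; if the runtime cannot disable networking, the review's advice to skip the experiment-execution stages still stands.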
What to consider before installing
This skill is coherent for automating a research workflow, but proceed cautiously. Key things to check before installing or running:

- Notifications: SKILL.md mentions pushing Feishu (飞书) messages but provides no Feishu token/config. Decide where notifications will go and supply credentials only if you trust that endpoint.
- Generated-code execution: The pipeline asks the LLM to generate experiment code and then executes it. Ensure your execution environment actually enforces the promised sandbox (no network, restricted file writes, timeouts). If the platform cannot guarantee sandboxing, do not run the experiment-execution stages.
- Network access: literature_search.py performs HTTP requests (arXiv, Semantic Scholar). Confirm you are comfortable with those outbound requests (rate limits, data leaving your environment). Semantic Scholar API keys are optional in the code but not declared; if you supply a key, provide it securely and only if needed.
- Data residency & secrets: Artifacts are stored under ~/.openclaw/workspace/auto-research/. If you have sensitive files or tokens on the same filesystem, verify file permissions and isolation.
- Unspecified tools: SKILL.md expects platform tools (memory_search, web_search/web_fetch, sessions_spawn). Understand what those tools send and receive, and whether they transmit your prompts or files externally.

If you decide to use it: run initial tests in an isolated environment (a throwaway account or VM), disable network access at the runtime layer if possible, and inspect any generated experiment.py before allowing execution.
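For the data-residency point, a quick way to audit the artifact directory is to scan it for files readable by other local users. A small sketch; the path matches the workspace the review describes, so adjust it if your install differs:

```python
import stat
from pathlib import Path

# Workspace path as described by the skill; adapt if yours differs.
WORKSPACE = Path.home() / ".openclaw" / "workspace" / "auto-research"

def world_readable(root: Path) -> list:
    """Return paths under root that other local users can read,
    so you can spot artifacts leaking outside your own account."""
    if not root.exists():
        return []
    return [p for p in root.rglob("*") if p.stat().st_mode & stat.S_IROTH]
```

Running `world_readable(WORKSPACE)` after a pipeline run should return an empty list; anything it reports can be locked down with `chmod go-r`.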

Like a lobster shell, security has layers — review code before you run it.

Tags: latest · paper-writing · pipeline · research

