Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Auto Research Claw

v1.0.0

Automates research by conducting literature searches, running experiments, and generating LaTeX papers from detailed research topics.

by Don A Wright Jr (@donwrightdesigns)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (autonomous literature search, experiments, LaTeX output) match the included code: LLM clients, web crawlers, arXiv/OpenAlex/SemanticScholar clients, experiment sandboxes (Docker/SSH/subprocess), Overleaf sync, multi-agent orchestration, and a CLI. However, the skill registry declares no required env vars or config, while the code and SKILL.md clearly expect LLM API keys (OPENAI_API_KEY, GEMINI_API_KEY, MINIMAX_API_KEY, etc.), possible PRM/MetaClaw credentials, npm opencode usage, and SSH key paths. The declared metadata under-represents the real capabilities and requirements.
Instruction Scope
SKILL.md tells the user to create a venv, pip install -e ., run `researchclaw setup` and `researchclaw run --auto-approve`. Those steps can install dependencies, run setup scripts, enable ‘bridge mode’ to OpenClaw internal tools (sessions, web_fetch, message), and then launch autonomous end-to-end runs that execute arbitrary generated code in local, Docker or remote SSH sandboxes. The runtime instructions therefore grant the skill broad filesystem, network, and remote-execution scope (including opportunities to read user config/SSH keys and to fetch/post data).
Install Mechanism
The registry lists no formal install spec, but SKILL.md instructs the user to pip install the local package and run `researchclaw setup`. The bundle includes hundreds of source files, Docker entrypoints, and shell scripts, so following SKILL.md will write and execute a sizable amount of code on the host. No remote download URL is declared in an automated install spec, but the manual pip/setup workflow still results in code being installed and run.
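Before following the pip/setup workflow, one option is to unpack the bundle and inventory the content that can execute code on install or at runtime. A minimal sketch, assuming an unpacked bundle directory; the suffix and filename lists are illustrative starting points, not an exhaustive policy:

```python
from pathlib import Path

# Files that can run code when installed or invoked. The names here are
# assumptions drawn from this report (setup.py, sentinel.sh, Docker
# entrypoints); extend the lists for your own review.
RISKY_SUFFIXES = {".sh", ".py"}
RISKY_NAMES = {"Dockerfile", "entrypoint.sh", "setup.py", "sentinel.sh"}

def executable_inventory(root: str) -> list[str]:
    """Return sorted relative paths of files worth auditing by hand."""
    hits = []
    for p in Path(root).rglob("*"):
        if p.name in RISKY_NAMES or p.suffix in RISKY_SUFFIXES:
            hits.append(str(p.relative_to(root)))
    return sorted(hits)
```

Running this over the unpacked zip gives a checklist to read before `pip install -e .` ever executes any of it.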
Credentials
The registry declares no required env vars, yet the config examples and README reference many API keys and secrets (OPENAI_API_KEY, GEMINI_API_KEY, PRM_API_KEY, MINIMAX_API_KEY, optional gemini_api_key, opencode npm, SSH key_path ~/.ssh/id_rsa, MetaClaw proxy/fallback_api_key, etc.). The skill legitimately needs at least an LLM API key for full functionality, but the metadata omission is a mismatch. The config also uses default paths (e.g. ~/.ssh/id_rsa) which, if used, could expose private keys to remote experiment execution; this needs user review and explicit consent.
Persistence & Privilege
always:false, so the skill is not force-included, but it is written to run autonomously (disable-model-invocation: false). The codebase contains server/dispatcher modules, Docker/SSH executors, and overleaf/web sync which, when run, will open network connections and may spawn background components. Combined with underdeclared environment needs and the `--auto-approve` run pattern, this raises the blast radius if run without isolation.
Scan Findings in Context
[base64-block] expected: A base64 image/data block was detected (used in README badges). This pattern is commonly benign (embedded images) but the scanner flagged it as a prompt-injection pattern; review any large embedded blocks or decoded payloads before trusting them.
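To triage such a block yourself, decode it and check the leading magic bytes: a README badge should decode to a common image format, while anything else deserves a manual read. A minimal sketch; the magic-byte table covers only a few common image types:

```python
import base64
import binascii

# Leading magic bytes for common embedded-image formats.
MAGIC = {b"\x89PNG": "png", b"GIF8": "gif", b"\xff\xd8\xff": "jpeg"}

def classify_b64(block: str) -> str:
    """Decode a suspected base64 block and label it by magic bytes."""
    try:
        data = base64.b64decode(block, validate=True)
    except binascii.Error:
        return "not-base64"
    for magic, kind in MAGIC.items():
        if data.startswith(magic):
            return f"image/{kind}"
    return "unknown: inspect decoded bytes manually"
```

Anything that comes back "unknown" should be read in full before the skill is trusted.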
What to consider before installing
This package is feature-rich and appears to implement what it claims, but there are several mismatches and powerful runtime behaviors you should treat carefully:

1. Source provenance: 'Source: unknown' and no homepage; prefer software from a known repo or signed releases.
2. Secrets and keys: it expects LLM API keys and may reference many env vars (OPENAI_API_KEY, GEMINI_API_KEY, PRM_API_KEY, MINIMAX_API_KEY); do not expose more secrets than necessary, and do not point it at ~/.ssh/id_rsa unless you understand the consequences.
3. Run in isolation: if you want to try it, install and run it inside a disposable VM/container with no access to your real SSH keys, sensitive files, or organization networks.
4. Inspect setup scripts: open `researchclaw setup`, `sentinel.sh`, and any install hooks before running them; they can install npm packages or Docker images.
5. Disable bridging/autoconnect features initially: set the openclaw_bridge.* and metaclaw_bridge.* integrations to false and set opencode.auto=false before any autonomous runs.
6. Avoid --auto-approve until you have audited the config and run a dry local test; prefer manual approval and small time/resource budgets.
7. If you need stronger assurance, ask the owner for a canonical repository link, a signed release tarball, or a trimmed skill that exposes only the minimal capabilities you require.

Finally, if you lack the ability to audit the code, do not run the skill with access to sensitive environments or credentials.
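The bridge/autoconnect hardening described above can be sketched as a config transform applied before any run. This assumes a dict-shaped config and an `enabled` flag per bridge section; the section names mirror those in this report (openclaw_bridge, metaclaw_bridge, opencode.auto), but the skill's real schema may differ, so verify against its docs:

```python
def harden(config: dict) -> dict:
    """Return a copy of the config with autonomous/bridge features off."""
    cfg = dict(config)
    for section in ("openclaw_bridge", "metaclaw_bridge"):
        sec = dict(cfg.get(section, {}))
        sec["enabled"] = False  # assumed flag name for the integration toggle
        cfg[section] = sec
    opencode = dict(cfg.get("opencode", {}))
    opencode["auto"] = False  # opencode.auto=false, as recommended above
    cfg["opencode"] = opencode
    cfg["auto_approve"] = False  # assumed config mirror of the --auto-approve flag
    return cfg
```

Loading the config, passing it through a transform like this, and writing it back gives a conservative baseline before any manual-approval test run.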
researchclaw/prompts.py:1298
Dynamic code execution detected.
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.
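The prompts.py finding above is a dynamic-code-execution pattern, which you can locate yourself before installing. A minimal sketch using Python's `ast` module to flag direct exec/eval/compile calls; it will not catch indirect or obfuscated invocations, so treat it as a first pass, not a verdict:

```python
import ast

# Builtins that execute dynamically constructed code.
DANGEROUS = {"exec", "eval", "compile"}

def dynamic_exec_sites(source: str) -> list[int]:
    """Return line numbers of direct exec/eval/compile calls in source."""
    lines = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS):
            lines.append(node.lineno)
    return sorted(lines)
```

Running this over each file the inventory surfaced turns the scanner's single finding into a reviewable list of call sites.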

Like a lobster shell, security has layers — review code before you run it.

latest: vk9738qa6z2d37zvhdm57v3exbn83wf9z

