Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Deep Research v7

v7.0.0

Deep-research skill for domain investigation, literature review, and survey research. Triggers automatically when the user says "do a survey", "deep-research XX", "literature review", "research the latest progress in XX", "help me investigate XX", or "academic research". Supports multiple data sources including arXiv, PubMed, PMC, and Google Scholar; automatically downloads PDFs, parses full text, and generates three...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
high confidence
Purpose & Capability
The declared purpose (literature/survey research, PDF download, parsing, report generation) matches the included scripts, but the package also lists an 'openai' dependency and appears to expect LLM calls and external-search integrations while declaring no required environment variables or credentials. Several files import or read from absolute paths under /root/.openclaw/workspace/research-claw/..., which implies the skill expects or attempts to access workspace resources outside its own directory. That cross-path access is not explained by SKILL.md and is disproportionate to the stated 'survey' trigger.
Instruction Scope
SKILL.md instructs running many scripts that fetch web pages, download PDFs, and parse full text; the integration docs and scripts explicitly reference cookie usage, 'bypass login' techniques, and batch web scraping. The runtime instructions reference files like v9_papers_filtered.json and write into /root/.openclaw/workspace paths. The instructions therefore direct access to arbitrary URLs, potentially paywalled content (with cookie hints), and local workspace files outside the skill's own directory: a broader scope than the manifest declares.
Install Mechanism
There is no install spec (instruction-only), which lowers supply-chain risk. However, the bundle includes many executable Python scripts that will be run directly. No external archives or remote installers are fetched at install time, but runtime actions can perform network IO and file writes.
Credentials
clawhub.yaml lists dependencies including 'openai', and SKILL.md and the scripts call tools that likely invoke LLMs, yet the skill declares no required environment variables (e.g., OPENAI_API_KEY) or credentials (Semantic Scholar keys, other API tokens, or cookies). Scripts and docs reference using cookies to access protected sites, but no credential inputs are declared. Requiring web scraping and LLM access while declaring no secrets is incoherent and an operational risk: credentials might be requested or used ad hoc at runtime.
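One way to verify the undeclared-credentials claim yourself is to scan the bundle's scripts for environment-variable reads and compare them against what the manifest declares. The sketch below is a rough heuristic, not part of the scan report; the directory layout and variable names are illustrative.

```python
# Heuristic audit: find environment variables a skill's Python scripts read
# but the manifest never declares. Regex-based, so it will miss dynamic
# lookups -- treat the output as a starting point for manual review.
import re
from pathlib import Path

# Matches os.environ["NAME"], os.environ.get("NAME"), and os.getenv("NAME").
ENV_READ = re.compile(
    r"os\.(?:environ(?:\.get)?\s*[\[\(]|getenv\s*\()\s*['\"]([A-Z0-9_]+)['\"]"
)

def undeclared_env_vars(skill_dir, declared=()):
    """Return env-var names read by scripts under skill_dir but not declared."""
    found = set()
    for script in Path(skill_dir).rglob("*.py"):
        found.update(ENV_READ.findall(script.read_text(errors="ignore")))
    return sorted(found - set(declared))
```

Running this against a skill whose manifest declares nothing, but whose scripts read OPENAI_API_KEY, would surface exactly the mismatch described above.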
Persistence & Privilege
The skill is not set always:true (good), but many scripts read and write absolute agent-workspace locations (/root/.openclaw/workspace/...), and they insert that workspace path onto sys.path to import ResearchTools from a different project. The skill will therefore access other skills' or global workspace files and could read or modify files outside its own scope. This cross-skill/workspace access is a privilege concern.
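The import pattern flagged here typically looks like the following sketch. This is a hypothetical reconstruction, not the skill's actual code: the workspace path and the ResearchTools name come from the review, everything else is illustrative.

```python
# Hypothetical reconstruction of the cross-workspace import pattern
# described in the review; the real script contents are not reproduced here.
import sys

# Absolute path outside the skill's own directory (path cited in the review).
EXTERNAL_WORKSPACE = "/root/.openclaw/workspace/research-claw"

# Prepending an external project onto sys.path makes its modules importable,
# which transitively gives this skill read access to that project's code
# and any data those modules load.
sys.path.insert(0, EXTERNAL_WORKSPACE)

# import ResearchTools  # would now resolve against the external project
```

This is why the pattern is a privilege concern: the skill's effective scope becomes whatever is reachable from the external path, not just its own bundle.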
What to consider before installing
Before installing or running this skill:
- Review the code first: the skill contains many executable Python scripts that perform arbitrary web fetches, PDF downloads, and parsing, and call a ResearchTools bridge. Inspect scripts/research_claw_bridge.py and the cookie-management code to see what remote endpoints and credentials they use and whether any data is transmitted to unexpected hosts.
- Credentials and API keys: the bundle depends on 'openai' and mentions LLM usage but declares no required environment variables (e.g., OPENAI_API_KEY) or other API credentials. If you run it, supply only minimal, scoped keys in an isolated environment; do not expose high-privilege keys on your main machine.
- Network and paywalled content: INTEGRATION.md and the scripts mention using cookies to bypass logins and scrape paywalled content. Decide whether you are comfortable with that behavior and confirm it complies with terms of service and local laws.
- Isolation: run the skill in a sandbox or disposable environment (container/VM) with no access to sensitive host files. Pay attention to the paths the scripts write to (they use /root/.openclaw/workspace/... by default) and change output directories to a safe location before running.
- Cross-skill imports: scripts insert absolute workspace paths and import code from outside the skill; verify those paths are expected and review the referenced external code. That import pattern can let the skill read other skills' data.
- If you must use it: 1) read scripts/research_claw_bridge.py and cookie_manager.py first; 2) search the code for 'requests.post' calls or custom endpoints to see where data is sent; 3) run with limited permissions and network monitoring; 4) prefer temporary, scoped credentials for any API keys.
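The "search for 'requests.post' or custom endpoints" check suggested above can be scripted. The sketch below is a rough grep-style heuristic, not a complete taint analysis; the patterns and directory layout are illustrative.

```python
# Heuristic scan: list lines in a skill's scripts that make outbound
# network calls or embed literal URLs, so a reviewer can see where data
# might be sent before running anything.
import re
from pathlib import Path

# Common HTTP call sites plus literal http(s) URLs.
NETWORK_CALL = re.compile(
    r"(requests\.(?:get|post|put)|urlopen|httpx\.\w+|https?://[^\s'\"]+)"
)

def network_sites(skill_dir):
    """Return (filename, line number, matched pattern) for each hit."""
    hits = []
    for script in Path(skill_dir).rglob("*.py"):
        lines = script.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, 1):
            m = NETWORK_CALL.search(line)
            if m:
                hits.append((script.name, lineno, m.group(1)))
    return hits
```

Any hit pointing at a host you do not recognize is a reason to keep the skill quarantined until you understand what is transmitted there.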
Given these mismatches (undeclared credential needs, absolute workspace access, and scraping-with-cookie hints), treat the skill as suspicious until you verify the code and runtime behavior in an isolated environment.

Like a lobster shell, security has layers — review code before you run it.

latest · vk97ef2ymc92d80ndjyqqdjx2p983qfak

