Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Literature Reviewer Skill

v3.0.0

Performs a systematic Chinese- and English-language literature survey (Literature Survey) on a paper topic supplied by the user. Uses an 8-stage workflow and supports mainstream databases including CNKI, Web of Science, and ScienceDirect. No API configuration is required; literature metadata is gathered through browser automation. Outputs a Markdown document containing GB/T 7714-2015 citations, titles, and abstracts...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill claims to perform browser-automated searches across CNKI, Web of Science, ScienceDirect, and similar databases, which is consistent with the included agent templates and JS scraping snippets. However, the registry metadata declares no required skills, dependencies, or install spec, while AGENTS.md and SKILL.md explicitly rely on a 'browser' skill (Playwright) and optionally a 'docx' skill; this dependency is never declared in the manifest. The README also contains natural-language installer instructions telling an AI to git-clone an external GitHub repo into local skills directories (modifying the agent environment). The mismatch between the claimed zero-configuration setup and the actual need for browser automation, and possibly for write access to skills directories, is incoherent and worth flagging.
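The mismatch described above can be checked mechanically. A minimal sketch, assuming a hypothetical manifest field listing required skills and the "the 'x' skill" phrasing the docs use; neither is the registry's actual schema:

```python
import re

def undeclared_skill_deps(declared, doc_text):
    """Return skills the docs reference that the manifest never declares.

    `declared` mimics a hypothetical manifest field such as
    {"requires": ["browser"]}; `doc_text` is SKILL.md/AGENTS.md prose
    that names dependencies with phrases like "the 'browser' skill".
    """
    referenced = set(re.findall(r"'([\w-]+)' skill", doc_text))
    return sorted(referenced - set(declared))

docs = "This workflow relies on the 'browser' skill and optionally the 'docx' skill."
print(undeclared_skill_deps([], docs))                   # ['browser', 'docx']
print(undeclared_skill_deps(["browser", "docx"], docs))  # []
```

A registry-side linter along these lines would have surfaced the undeclared 'browser' dependency before publication.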
Instruction Scope
SKILL.md and the agent templates instruct the agent to drive browsers (browser_navigate, browser_fill_form, browser_evaluate), extract the DOM, open detail pages, and batch-download PDFs. They also include JS that interacts with CNKI selectors, opens new windows, and clicks download buttons. This stays within the declared functional scope (web scraping for literature) but grants broad runtime powers: network access to many external domains, interaction with whatever browser session and cookies exist (potentially reusing logged-in credentials), and writing many session files to disk. The README's suggested 'natural language install' (having the AI clone a GitHub repo into skills directories) would let the agent fetch and execute external code. The instructions also advise strategies to evade rate limits (longer intervals) and to batch downloads, which increases the risk of violating site policies or triggering captchas. Overall, the runtime scope is broad and not fully limited to the stated goal.
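One way to narrow that runtime scope is to gate every navigation against an allowlist of the databases the skill claims to use. A sketch of such a guard; the host list is illustrative, not taken from the skill:

```python
from urllib.parse import urlparse

# Base domains the skill's stated purpose actually requires.
ALLOWED = {"cnki.net", "webofscience.com", "sciencedirect.com"}

def navigation_allowed(url: str) -> bool:
    """Permit navigation only to the declared literature databases."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in ALLOWED)

print(navigation_allowed("https://www.sciencedirect.com/search"))  # True
print(navigation_allowed("https://attacker.example/exfil"))        # False
```

Wiring a check like this in front of the browser tool turns "network access to many external domains" into access to three.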
Install Mechanism
The skill package as provided has no formal install spec (the lowest-risk option), but the README contains explicit 'natural language install' instructions telling an AI to git-clone a public GitHub repo into the host tool's skills directory. That encourages fetching and installing external code at runtime. Although the packaged files appear to include the relevant scripts, the cloning instructions and references to external repos are a potential supply-chain risk: cloning arbitrary repositories at the agent's direction can introduce unreviewed code. No signed release URLs or vetted package-manager entries are used. The JS/Python snippets included in SKILL.md are plain (no obfuscation), but the guidance to modify skills directories remains a concern.
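If cloning external code cannot be avoided, at least require that the instruction pins an exact commit, so the fetched code can be diffed against a reviewed snapshot. A sketch of such a check; the commit-pinning convention is an assumption, not a ClawHub feature:

```python
import re

def clone_risk(instruction: str) -> str:
    """Classify an install instruction's supply-chain risk.

    'unpinned' means a bare `git clone` that fetches whatever the remote
    branch currently serves; 'pinned' means a full 40-hex commit SHA is
    named, so the checkout is reproducible and reviewable.
    """
    if "git clone" not in instruction:
        return "no-clone"
    return "pinned" if re.search(r"\b[0-9a-f]{40}\b", instruction) else "unpinned"

print(clone_risk("git clone https://github.com/example/skill.git ~/.skills/"))  # unpinned
```

Under this rule, the README's install instruction would be rejected as 'unpinned' until it names a specific commit.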
Credentials
The skill declares no required env vars or credentials, which is appropriate for a scraper-focused skill. However, it implicitly depends on an available browser-automation environment and on browser sessions/cookies for access to some sites; the agent will therefore act with whatever credentials and cookies the browser profile exposes (e.g., institutional access, personal login cookies). The README's installation guidance also instructs cloning into various tool-specific skills directories (Kimi, MaxClaw, Claude Code, etc.), which requires write access to those paths. These implicit requirements are not surfaced in the manifest and could expose unrelated credentials or alter other skills: a disproportionate, under-declared scope.
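Before handing the skill a browser context, it is worth auditing exactly which domains' cookies that profile would expose. A sketch over Playwright-style cookie records; the profile contents here are invented:

```python
def exposed_cookie_domains(cookies):
    """List the domains whose cookies a shared browser profile hands to
    any automation running inside it. `cookies` follows the shape of
    Playwright's BrowserContext.cookies() output: dicts with a 'domain' key."""
    return sorted({c["domain"].lstrip(".") for c in cookies})

profile = [
    {"name": "sessionid", "domain": ".library.university.example"},
    {"name": "auth_token", "domain": "mail.example.com"},
    {"name": "sso", "domain": ".mail.example.com"},
]
print(exposed_cookie_domains(profile))  # ['library.university.example', 'mail.example.com']
```

If the report lists anything beyond the literature databases, run the skill in a fresh, logged-out browser profile instead.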
Persistence & Privilege
The skill is not marked always:true and requests no elevated platform flags. However, the documentation and README encourage cloning the repo into the host's skills directory and restarting the AI tool, which is an instruction to persistently modify the agent installation. At runtime the skill also writes session directories and checkpoint files under sessions/{session_id}/; that is reasonable for a workflow, but combined with instructions that modify the skills directory, it implies the skill could change the agent environment. This persistence and installation behavior is not declared in the registry metadata and should be treated cautiously.
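Since the workflow writes under sessions/{session_id}/, a simple guard can confirm that every write stays inside that tree before any file is created. A minimal sketch; the helper name is hypothetical, not part of the skill:

```python
from pathlib import Path

def safe_session_path(session_id: str, filename: str, root: str = "sessions") -> Path:
    """Resolve a write target and refuse anything escaping the sessions tree."""
    base = Path(root).resolve()
    target = (base / session_id / filename).resolve()
    if base not in target.parents:
        raise ValueError(f"write outside {root}/ blocked: {target}")
    return target

print(safe_session_path("s01", "checkpoint.json"))
# safe_session_path("s01", "../../etc/passwd") would raise ValueError
```

Resolving both paths before comparing catches traversal via `..` in either the session id or the filename.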
Scan Findings in Context
no_regex_findings (expected): pre-scan injection signals: none detected. The absence of regex hits does not imply safety; SKILL.md contains explicit browser-automation JS and instructions to clone external repos, behavioral risks that pattern scanners do not always catch.
What to consider before installing
This skill appears to implement what it claims (browser-driven multi-database literature search and synthesis), but watch for the following before installing or running it:

- Do not let an assistant auto-run the README's 'natural language install' instruction that tells it to git-clone a GitHub repo into your skills directory. That would let the agent fetch and install external code without manual review.
- The skill relies on browser automation and acts within the active browser context; it can access pages your browser is logged into (cookies, institutional access). If you run it, be aware it may see session cookies or initiate downloads tied to your accounts.
- The workflow writes session files and may batch-download PDFs. Consider running it in a sandboxed environment, verify the file paths it uses (sessions/...), limit where it can write, and review the files afterward.
- Its network behavior (scraping multiple publishers, batch downloads, opening many pages) can trigger captchas or violate website terms; expect pauses and manual captcha handling as described.
- Recommended precautions: manually review the included scripts (scripts/*.py and the JS snippets) before running; avoid executing any automatic 'git clone' or install commands suggested by the README; run first in an isolated VM or container; disable automatic skill-install actions by the agent and require explicit consent for network or filesystem operations; and confirm you are allowed to scrape the target sites, respecting their rate limits.
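On the rate-limit point, pacing requests with a randomized interval (rather than tuning intervals to evade limits) keeps the scraper within polite bounds. A sketch; the interval values are illustrative:

```python
import random
import time

def polite_pause(base: float = 5.0, jitter: float = 2.0, rng=random) -> float:
    """Sleep between base and base+jitter seconds between page fetches,
    so the traffic looks like a patient reader rather than a burst scraper."""
    delay = base + rng.uniform(0.0, jitter)
    time.sleep(delay)
    return delay
```

Passing a seeded `random.Random` as `rng` makes the pacing reproducible when testing the workflow.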

Like a lobster shell, security has layers — review code before you run it.

latest · vk979wzw0119ng6a1mat8rp6t7h82jpwb

