Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Lobster Crawler Skill

v0.7.0

Targeted crawler for book and short-drama content from sites such as Webnovel/ReelShorts, with support for content rating and DingTalk broadcasts.

Security Scan
VirusTotal: Pending
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description (crawler + DingTalk broadcast) align with most of the code (Scrapy spiders, RSS, broadcast module). However, the declared required binary 'uv' and the install block are incoherent: the install block declares kind: uv with package 'curl_cffi' and bins: ['uv'], but installing curl_cffi would not create a 'uv' binary. The repo also bundles an LLM loop (scripts/claude_loop.sh, prompts/claude_loop_prompt.txt, CLAUDE.md) that is not strictly necessary for a crawler; this increases the runtime footprint beyond the stated simple crawler + broadcast purpose.
Instruction Scope
SKILL.md's runtime instructions focus on using 'uv run' to run the CLI (crawl/list/status/broadcast/rss), which is coherent. But repository docs (CLAUDE.md, agent.md, scripts/claude_loop.sh and prompts) instruct an agent to run continuous LLM loops, to read and update repo docs, and to persist project memory into ~/.claude/projects/... — that asks the agent to write to a global home path and to run an external 'claude' binary. Those behaviours (writing to the user's home directory, running an LLM loop) go beyond crawling and are not declared in SKILL.md.
Install Mechanism
The declared install block is inconsistent: kind: uv, package: 'curl_cffi', bins: ['uv'] — this does not make sense (curl_cffi is a Python library, not an installer that yields a 'uv' binary). SKILL.md uses 'uv venv' and 'uv run', implying a dependency on a tool named 'uv', but the install metadata doesn't install that tool. The repo otherwise uses standard Python dependencies via requirements.txt (pip). This mismatch suggests either broken install metadata or sloppy packaging; treat automatic install as risky.
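For contrast, a coherent install block for this skill would declare the tool that actually provides the 'uv' binary, and leave Python libraries such as curl_cffi to requirements.txt. A hypothetical sketch — field names are copied from the declared block and may not match the registry's full schema:

```yaml
# Hypothetical corrected install metadata (illustrative only).
# 'uv' is a standalone tool, not a pip package, so the install step
# must provide uv itself; Python deps like curl_cffi stay in
# requirements.txt and are installed afterwards inside the venv.
install:
  kind: uv
  bins: [uv]
# then, at runtime:
#   uv venv
#   uv pip install -r requirements.txt
```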
Credentials
The registry declares a single required env var, DINGTALK_WEBHOOK (primaryEnv), which matches the broadcast feature. However, the code (src/broadcast/dingtalk.py) also reads DINGTALK_SECRET for HMAC signing, and that variable is not declared in requires.env. Additional optional envs appear in config logic (DB_PATH, LOG_LEVEL), and .env references appear in docker-compose. The skill also includes scripts and docs that reference ~/.claude memory paths and require an external 'claude' CLI — those introduce implicit credentials/configuration and external network usage that are not declared.
Persistence & Privilege
The skill is not marked always:true (good). Nevertheless, repository docs instruct agents to persist memory into a global ~/.claude/projects/ path, and scripts/claude_loop.sh creates .claude/out and .claude/logs inside the repo and calls the 'claude' binary. The combination of an LLM loop, automated write-to-home instructions, and webhook broadcasting increases the blast radius if run autonomously. This behavior is not explained in the high-level SKILL.md and is outside the crawler's minimal needs.
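Before allowing any autonomous run, you can audit which files in the checkout reference the home-directory memory path or the external 'claude' binary. A quick illustrative scan — the function name and regex are assumptions based on the findings above, not part of the skill:

```python
import pathlib
import re

# Patterns flagged in the scan: writes under ~/.claude and invocations
# of the external 'claude' binary. Adjust to taste.
SUSPECT = re.compile(r"~/\.claude|\bclaude\b")


def audit(repo: pathlib.Path) -> list[tuple[str, int, str]]:
    """Return (relative path, line number, stripped line) for each suspect reference."""
    hits = []
    for path in sorted(repo.rglob("*")):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        for lineno, line in enumerate(text.splitlines(), 1):
            if SUSPECT.search(line):
                hits.append((str(path.relative_to(repo)), lineno, line.strip()))
    return hits
```

Run it against the skill's checkout and review every hit (expect at least scripts/claude_loop.sh, CLAUDE.md, and the prompts) before deciding what to disable.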
What to consider before installing
This package appears to be a functioning crawler + DingTalk broadcaster, but I found several red flags you should address before installing or running it:

- Install metadata mismatch: the skill requires a 'uv' CLI, but the install block lists package 'curl_cffi' and claims it will create 'uv' — that is inconsistent. Do not run any automatic install step until this is clarified. Prefer creating a Python venv and running 'pip install -r requirements.txt' yourself in an isolated environment.
- Environment variables: you must supply DINGTALK_WEBHOOK for broadcasts; the code also reads DINGTALK_SECRET (for signed webhooks), but that is not declared. If you supply a secret, ensure it is the intended value. Review any .env files before use.
- Hidden agent/LLM behavior: repository docs and scripts instruct running an LLM loop (claude_loop.sh) and persisting agent memory under ~/.claude/projects/... — these actions are unrelated to simple crawling and grant the project the ability to read/write outside the repo and to repeatedly invoke an LLM. Only run these parts if you trust the publisher and understand what will be written and sent.
- Run in isolation: test in a disposable environment (container or VM), with network restricted if necessary. Inspect and, if needed, remove or disable scripts/claude_loop.sh and the CLAUDE.md steps that write to your home directory before allowing autonomous runs.
- Verify robots/ethics: review target sites' robots.txt and legal terms — the repository itself has conflicting notes about obeying robots.txt.

If you want, I can list the exact files and lines that reference the problematic install block, the DINGTALK_SECRET usage, and the ~/.claude memory writes so you can inspect them before proceeding.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97519rvdad0d91bhek9jmfhyn83apch

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🦞 Clawdis
OS: macOS · Linux
Any bin: uv
Env: DINGTALK_WEBHOOK
Primary env: DINGTALK_WEBHOOK

Install

Kind: uv
Bins: uv
Command: uv tool install curl_cffi
