Skill v1.1.0
ClawScan security
Competitor Radar · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Review · Mar 15, 2026, 1:50 AM
- Verdict: Review
- Confidence: medium
- Model: gpt-5-mini
- Summary: The skill's code and runtime instructions mostly fit its stated purpose, but there are several mismatches and privacy/exfiltration risks you should understand before installing.
- Guidance: This skill largely does what it says, but review these before installing:
  - Required secrets: deliver.py expects Slack/Telegram/Discord/Twilio tokens and optionally GITHUB_TOKEN; the registry metadata incorrectly lists no env vars. Only provide tokens with scopes you trust (e.g., Slack bot tokens limited to a single channel if possible).
  - Data retention: the skill stores HTML/text snapshots and digests under its data/ directory and does not delete them. If you track companies with private pages, be cautious.
  - LLM prompts: diff.py embeds raw diff previews into 'llm_prompt' text intended for the agent/Claude, which means scraped page content could be sent to an external model. Review whether you are comfortable sending that data.
  - Cron and scraping: the skill sets up scheduled runs (weekly and daily alerts) that will perform network requests automatically. Confirm you want automated scraping of the listed targets and that you comply with the target sites' policies.
  - License/activation: the skill enforces a paid tier via scripts/license.py and directs users to a Gumroad link. Inspect scripts/license.py to see how license validation is performed and whether any token or key is transmitted externally.

  If you decide to proceed: run the install steps manually in a constrained environment, inspect scripts/license.py and scrape.py for external endpoints (a quick audit sketch follows), provide minimal-scoped delivery tokens, and consider running the skill in an isolated user account or container so its persistent snapshots remain separated from other data.
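One quick way to act on the endpoint advice above: dump every URL string that appears in the skill's Python files before installing. A minimal sketch, assuming you are in the skill's checkout directory; the regex is deliberately loose.

```python
# Pre-install audit sketch: list every URL mentioned in the skill's
# Python files (scripts/license.py and scrape.py included) so you can
# see where it might phone home before running anything.
import re
from pathlib import Path

URL_RE = re.compile(r"https?://[^\s\"')>\]]+")

for path in sorted(Path(".").rglob("*.py")):
    hits = sorted(set(URL_RE.findall(path.read_text(errors="replace"))))
    if hits:
        print(path)
        for url in hits:
            print(f"  {url}")
```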
Review Dimensions
- Purpose & Capability
  - ok · Name/description (competitor tracking, diffing, and delivering digests) match the included scripts (scrape, diff, jobs, github_tracker, digest_builder, deliver, alert). Required binaries (python3, curl, jq) are plausible for the stated work; Playwright is referenced in the README/requirements for JS-heavy scraping and is a reasonable dependency for a scraper (see the snapshot sketch after this list).
- Instruction Scope
  - concern · SKILL.md tells the agent to crawl pages, capture and retain HTML/text snapshots under the skill's data/ directory, run diffs that embed raw diff previews in LLM prompts, and create cron jobs for scheduled runs. Those behaviors are expected for this tool, but the instructions and code will store potentially sensitive scraped content indefinitely and, via the generated prompts, encourage sending raw snapshots to an external LLM (OpenClaw/Claude) for interpretation. That is a privacy/exfiltration surface the meta fields do not highlight (see the prompt sketch after this list).
- Install Mechanism
  - note · This is an instruction-only skill (no install spec in the registry), but it ships a requirements.txt and a README that ask the user to run 'pip install -r requirements.txt' and 'playwright install chromium'. That places install responsibility on the user; no archive downloads or obscure URLs appear in the manifest. Risk is moderate because the user installs Playwright and the Python packages manually (see the sandboxed-install sketch after this list).
- Credentials
  - concern · Registry metadata declares no required env vars, but the code and README clearly expect multiple credentials (SLACK_BOT_TOKEN or SLACK_WEBHOOK_URL, TELEGRAM_BOT_TOKEN/CHAT_ID, DISCORD_WEBHOOK_URL, TWILIO_ACCOUNT_SID/TWILIO_AUTH_TOKEN, optional GITHUB_TOKEN). These relate directly to delivery and GitHub rate limits, so their presence is understandable, but the mismatch (metadata says none) is an incoherence. Note also that diffs produce LLM prompts containing raw HTML/text; if your agent sends those prompts to an external model, scraped content (possibly proprietary) will be transmitted (see the preflight sketch after this list).
- Persistence & Privilege
  - note · The skill writes persistent state and snapshots into its own data/ directory and creates cron jobs via the OpenClaw cron system. It does not set 'always: true' or request system-wide config edits beyond its own directory. Storing all snapshots indefinitely is a privacy/retention concern, but one consistent with the tool's purpose (see the pruning sketch after this list).
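Snapshot sketch (Purpose & Capability): roughly the kind of JS-rendered capture the scrape script plausibly performs with Playwright's sync API. The data/ path and file naming are illustrative assumptions, not the skill's actual code.

```python
# Hypothetical snapshot capture for a JS-heavy page via Playwright.
from pathlib import Path
from playwright.sync_api import sync_playwright

def snapshot(url: str, name: str, out_dir: Path = Path("data/snapshots")) -> Path:
    out_dir.mkdir(parents=True, exist_ok=True)
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        html = page.content()  # full rendered HTML, retained on disk indefinitely
        browser.close()
    dest = out_dir / f"{name}.html"
    dest.write_text(html, encoding="utf-8")
    return dest
```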
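Prompt sketch (Instruction Scope): an illustration, not the skill's actual diff.py, of why the generated 'llm_prompt' is an exfiltration surface. The function and parameter names are made up; the point is that scraped text flows verbatim into a prompt destined for an external model.

```python
# Hypothetical reconstruction of a diff-to-prompt step.
import difflib

def build_llm_prompt(old_text: str, new_text: str, limit: int = 4000) -> str:
    diff = "\n".join(difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(),
        fromfile="previous", tofile="current", lineterm="",
    ))
    # Whatever was scraped, private or proprietary included, ends up
    # verbatim in the prompt, truncated only by length.
    return "Summarize what changed on this competitor page:\n\n" + diff[:limit]
```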
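Sandboxed-install sketch (Install Mechanism): one way to keep the manual install contained is to run the README's two commands inside a throwaway virtualenv. The commands come from the README; the venv path is arbitrary, and on Windows the binaries live under Scripts/ rather than bin/.

```python
# Run the skill's install steps inside a dedicated virtualenv so its
# dependencies stay out of your global Python environment.
import subprocess
import venv

venv.create("skill-venv", with_pip=True)
subprocess.run(["skill-venv/bin/pip", "install", "-r", "requirements.txt"], check=True)
# Downloads Chromium into a user-local cache for Playwright:
subprocess.run(["skill-venv/bin/playwright", "install", "chromium"], check=True)
```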
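Preflight sketch (Credentials): a check of which expected variables are actually set, so the metadata mismatch does not surprise you at runtime. Names are taken from this review; the exact Telegram chat-id variable name is an assumption.

```python
# Report which of the delivery credentials the code expects are present.
import os

EXPECTED = [
    "SLACK_BOT_TOKEN", "SLACK_WEBHOOK_URL",
    "TELEGRAM_BOT_TOKEN", "TELEGRAM_CHAT_ID",  # chat-id name assumed
    "DISCORD_WEBHOOK_URL",
    "TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN",
    "GITHUB_TOKEN",  # optional; raises GitHub rate limits
]

for var in EXPECTED:
    print(f"{var}: {'set' if os.environ.get(var) else 'missing'}")
```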
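Pruning sketch (Persistence & Privilege): since the skill never deletes its snapshots, retention is on you. A hypothetical cleanup pass, assuming snapshots live under the skill's data/ directory; adjust the path and age to taste.

```python
# Delete snapshot files older than MAX_AGE_DAYS from the data/ tree.
import time
from pathlib import Path

MAX_AGE_DAYS = 30
cutoff = time.time() - MAX_AGE_DAYS * 86400

for f in Path("data").rglob("*"):
    if f.is_file() and f.stat().st_mtime < cutoff:
        f.unlink()
        print(f"pruned {f}")
```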
