Back to skill
Skill v1.0.1

ClawScan security

Price Monitor · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Apr 30, 2026, 6:17 AM
Verdict
benign
Confidence
high
Model
gpt-5-mini
Summary
The skill's requirements and instructions are consistent with a price‑monitoring scraper: it only needs a browser capability and local file storage and does not request unrelated secrets or install code, though it asks the agent to perform network scraping and to optionally set up scheduled jobs and notifications.
Guidance
This skill is coherent for price tracking, but review a few points before installing:

- The agent will perform network requests to every URL you ask it to track. Only give it product links you trust and expect it to visit; avoid internal or sensitive URLs.
- If you allow the agent to set up scheduled scans (cron), it will create recurring network activity on your behalf; confirm you want that automation.
- Alerts may be sent via the agent's messaging/session facilities (e.g., Telegram or Discord channels); verify which messaging permissions the agent has in your workspace so alerts aren't routed to unintended recipients.
- Scraping can trigger CAPTCHAs and IP blocks, or violate a platform's terms of service; prefer official APIs where possible (the docs note Amazon's Product Advertising API as an alternative).
- If you plan to connect Google Sheets, SMTP, or other services, provide only dedicated credentials with minimal scope.

If you want, I can point out exact lines in SKILL.md or the references that you might edit or remove to reduce persistence or messaging behavior before installing.

Review Dimensions

Purpose & Capability
ok
Name and description match the requested actions: the SKILL.md and references describe scraping product pages, extracting prices, storing history, generating reports, and sending alerts. The declared runtime requirement (python3 in metadata) and the included lightweight helper scripts align with this purpose.
Instruction Scope
note
The instructions explicitly tell the agent to visit arbitrary product URLs, use a web browser tool, extract structured and DOM data, append logs to scripts/price_history.jsonl, and set up alerts. This is expected for a scraper, but it means the agent will fetch any URL the user supplies, including non-ecommerce URLs. The instructions also direct the agent to use platform-specific anti-scrape workarounds and to notify via Telegram sessions_send; these are within scope but grant the agent network access and the ability to send messages from its own session. Be aware that scans perform live network requests and may hit CAPTCHAs or blocks.
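The append-only log this section refers to is a JSONL file, where each scan adds one JSON object per line. A minimal sketch of that pattern follows; the field names (ts, url, price, currency) are assumptions for illustration, not the skill's actual schema:

```python
import json
import time
from pathlib import Path

HISTORY = Path("scripts/price_history.jsonl")  # path named in the skill's instructions

def append_price(url: str, price: float, currency: str = "USD") -> None:
    """Append one price observation as a single JSON line (field names assumed)."""
    record = {
        "ts": int(time.time()),
        "url": url,
        "price": price,
        "currency": currency,
    }
    HISTORY.parent.mkdir(parents=True, exist_ok=True)
    # Open in append mode so prior history is never rewritten.
    with HISTORY.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

append_price("https://example.com/product/123", 19.99)
```

Because each line is an independent record, reviewing what the agent has logged is as simple as reading the file line by line and parsing each line as JSON.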
Install Mechanism
ok
There is no install spec, and the shipped code files are small help/CLI stubs only; nothing is downloaded or written during install. This is the lowest-risk install profile.
Credentials
note
The skill declares no required environment variables or external credentials, which is proportionate. The docs mention optional integrations (Google Sheets, SMTP, Amazon Product Advertising API) that would require credentials if the user connects them, but none are required by default. The claim "Telegram (default) ... No extra setup needed" may be misleading depending on the agent/platform capabilities: the agent may use its own messaging/session facilities rather than raw Telegram API keys.
Persistence & Privilege
note
The skill does not request always:true and is user-invocable only. However, it includes explicit instructions for setting up scheduled scans (cron, via openclaw cron add or the user's crontab) which, if executed, create persistent recurring activity. This persistence is reasonable for scheduled monitoring, but it is a capability the user should consciously approve before the agent configures it.
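For the user-crontab path, the recurring activity described above would look something like the entry below. The script path, interval, and log location are assumptions for illustration (the openclaw cron add syntax is not shown here); the point is that a standard crontab line is what creates the persistence, and removing it is what stops it:

```shell
# Hypothetical crontab entry: run a price scan every 6 hours.
# Path and schedule are assumptions; edit or delete via `crontab -e`
# to stop the recurring network activity.
0 */6 * * * /usr/bin/python3 "$HOME/skills/price-monitor/scripts/scan.py" >> "$HOME/.price_monitor.log" 2>&1
```

Redirecting stdout and stderr to a log file keeps a local record of each scheduled run, which makes the agent's recurring behavior auditable after the fact.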