Skill v1.0.0
ClawScan security
Google Reviews Pain Detector · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Mar 5, 2026, 8:58 PM
- Verdict: suspicious
- Confidence: high
- Model: gpt-5-mini
- Summary: The skill mostly does what it claims (scraping reviews and scoring leads) but contains hardcoded local paths, implicit dependencies, and hidden file-modification behavior that do not match the declared requirements and merit caution.
- Guidance: This skill appears to implement the advertised scraping and pain-word detection, but it is not self-contained: it expects a local 'scrapling' virtualenv at a hardcoded absolute path and accesses a hardcoded MASTER_LEAD_LIST file in /Users/wlc-studio/. Before installing or running: (1) Verify the scrapling dependency exists and inspect that code (StealthyFetcher can run a headless browser and bypass protections). (2) Back up the MASTER_LEAD_LIST file and confirm you want this skill to read/modify it; the metadata does not declare this required path. (3) Review the rest of detector.py (saving logic) for any network callbacks or hidden uploads (the provided file was truncated). (4) Consider running the script in a sandboxed or isolated environment to avoid large-scale scraping or unintended modifications. (5) Be aware scraping Google/Maps/Yelp may violate terms of service or trigger rate-limiting; if you only need one-off lookups, run per-business rather than --list to reduce blast radius.
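Point (2) of the guidance, backing up MASTER_LEAD_LIST before the skill can touch it, can be sketched in a few lines. The filename and location below are placeholders (the report only shows a truncated path), so substitute the actual lead-list path from detector.py:

```python
import shutil
from pathlib import Path

def backup_lead_list(path: Path) -> Path:
    """Copy the lead list to a sibling .bak file so any rows the skill
    appends can be diffed against the original and rolled back."""
    backup = path.with_name(path.name + ".bak")
    shutil.copy2(path, backup)  # copy2 preserves timestamps/permissions
    return backup

# Placeholder — replace with the real MASTER_LEAD_LIST location.
lead_list = Path.home() / "MASTER_LEAD_LIST.csv"
```

Diffing the .bak against the live file after a run makes any silent append immediately visible.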
Review Dimensions
- Purpose & Capability
- note · Name/description (detect pain words in reviews) match the code's behavior (it scrapes Google/Yelp/Maps and searches review sites). However, the script depends on a local 'scrapling' virtualenv at an absolute user path and reads/writes a master lead list at a hardcoded path (/Users/wlc-studio/...). Those local path dependencies are not declared in the skill metadata (requires.env/config paths = none) and are specific to the pack author's environment.
- Instruction Scope
- concern · SKILL.md instructs scanning single businesses or the full master list and saving HOT leads. The runtime instructions plus code will perform headless stealth scraping of Google Search and Google Maps (via StealthyFetcher with Cloudflare solving), follow external review links, and (per SKILL.md/flags) can append back to the MASTER_LEAD_LIST. That gives the skill the ability to perform bulk scraping and to read and modify a local lead file, behavior that is not emphasized in the metadata and may be unexpected by users.
- Install Mechanism
- note · No install spec (instruction-only), which reduces installer risk. But the script forcibly boots a site-packages path for a local virtualenv (/Users/wlc-studio/…/scrapling/.venv/lib/python3.14/site-packages) and exits if it cannot import Scrapling. The lack of an explicit dependency declaration or a standard install step is a usability and coherence issue: the skill relies on an out-of-band local package.
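The path-booting behavior described here follows a common pattern, sketched below. The directory string is a placeholder (not the author's actual path), and `scrapling` is the module the report says the script requires:

```python
import importlib.util
import sys

# Placeholder for the hardcoded virtualenv dir the script prepends.
SCRAPLING_VENV_SITE = "/path/to/scrapling/.venv/lib/python3.14/site-packages"

def boot_vendored_dependency(site_dir: str, module: str) -> bool:
    """Prepend a hardcoded site-packages directory to sys.path and
    report whether the target module is now importable. Per the review,
    detector.py exits outright when its equivalent check fails."""
    if site_dir not in sys.path:
        sys.path.insert(0, site_dir)
    return importlib.util.find_spec(module) is not None
```

Because the prepended directory shadows normally installed packages, inspecting what actually lives at that path matters as much as verifying the import succeeds.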
- Credentials
- concern · Metadata declares no required env vars or config paths, yet the code uses hardcoded filesystem locations (SCRAPLING_VENV_SITE and MASTER_LEAD_LIST) and will read and (per SKILL.md) append to that master lead list. This is a mismatch: the skill silently requires access to specific local files and directories that hold potentially sensitive lead data.
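One way a self-contained version could close this mismatch is to declare the location rather than hardcode it. A sketch, assuming a hypothetical MASTER_LEAD_LIST environment variable (not something the skill currently supports):

```python
import os
from pathlib import Path

def resolve_lead_list() -> Path:
    """Resolve the lead-list file from a declared env var and fail
    loudly, instead of silently assuming another user's home directory."""
    raw = os.environ.get("MASTER_LEAD_LIST")
    if raw is None:
        raise RuntimeError("Set MASTER_LEAD_LIST to your lead-list file.")
    path = Path(raw).expanduser()
    if not path.is_file():
        raise FileNotFoundError(path)
    return path
```

Declaring the variable in the skill metadata would also let the marketplace surface the file-access requirement to users before install.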
- Persistence & Privilege
- ok · The skill does not request always:true and is user-invocable only. It does not attempt to modify other skills or system-wide configurations in the visible code. Its principal effect on the system is reading/parsing and potentially appending to the MASTER_LEAD_LIST file.
