Zoomin Docs Portal Scraper Tool
Analysis
The skill appears to be a straightforward documentation scraper, with expected but noticeable risks from manual Playwright installation, headless-browser navigation of user-supplied URLs, and local file summarization helpers.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
pip install playwright
playwright install chromium
The skill asks the user to install an external Python package and Chromium browser binaries manually, without version pinning or an install spec.
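A pinned install spec would make the dependency reproducible. A minimal sketch, assuming a requirements file is added to the skill (the version shown is illustrative, not taken from the skill itself):

```
# requirements.txt — version is illustrative, pin to the release actually tested
playwright==1.44.0
```

Installation would then become `pip install -r requirements.txt` followed by `playwright install chromium`, so every user resolves the same package version.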
all_urls = [line.strip() for line in f if line.strip()]
...
page.goto(url, wait_until="domcontentloaded", timeout=30000)
The script launches a browser and visits every URL from the user-supplied file, with no host allowlist. This is expected for a scraper, but the URL file should contain only intended documentation hosts.
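A host allowlist would reduce the blast radius of a poisoned URL file. A minimal sketch of pre-filtering the URL list before it reaches the browser; the allowlist contents and function name are hypothetical, not part of the skill:

```python
from urllib.parse import urlparse

# Hypothetical allowlist — the skill itself performs no host filtering.
ALLOWED_HOSTS = {"docs.example.com"}

def filter_allowed(urls, allowed=ALLOWED_HOSTS):
    """Keep only URLs whose hostname is on the allowlist."""
    kept = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if host in allowed:
            kept.append(url)
    return kept

urls = ["https://docs.example.com/guide", "https://attacker.example.net/x"]
print(filter_allowed(urls))  # only the docs.example.com URL survives
```

The filtered list can then be passed to the existing `page.goto` loop unchanged.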
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
content = f.read()
...
summary = content[:500].strip() + "..."
...
print(json.dumps(results))  # Print to stdout for OpenClaw to capture
An included helper can read file paths passed to it and print summaries into agent-visible output. It is not shown running automatically, but it can expose local file contents if used on the wrong files.
