v1.0.2

Zoomin Docs Portal Scraper Tool

ClawScan verdict for this skill: Benign. Analyzed May 1, 2026, 5:47 AM.

Analysis

The skill appears to be a straightforward documentation scraper, with expected but noticeable risks from manual Playwright installation, headless browsing of user-supplied URLs, and local file summarization helpers.

Guidance

This skill is reasonable to install if you need Playwright-based scraping of dynamic documentation pages. Install it in a dedicated virtual environment, review the URL list before running, choose a safe output directory, and avoid using the included analyzer on files that may contain private information.

Findings (3)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Abnormal behavior control

Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.

Agentic Supply Chain Vulnerabilities
Severity: Low · Confidence: High · Status: Note
SKILL.md
pip install playwright
playwright install chromium

The skill asks the user to install an external Python package and Chromium browser binaries manually, without version pinning or an install spec.

User impact: The scraper depends on external package and browser downloads, so installation should be done in a trusted virtual environment.
Recommendation: Use a dedicated virtual environment, consider pinning Playwright versions, and install from trusted package sources.
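One way to follow the pinning recommendation is a small requirements file installed inside the virtual environment. The version number below is only an illustration, not the version the skill was scanned against:

```
# requirements.txt -- hypothetical pin; choose the version you have vetted
playwright==1.49.0
```

With this in place, `pip install -r requirements.txt` followed by `playwright install chromium` reproduces the skill's setup with a fixed package version (the browser download itself remains unpinned).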
Tool Misuse and Exploitation
Severity: Low · Confidence: High · Status: Note
scripts/scrape_zoomin.py
all_urls = [line.strip() for line in f if line.strip()]
...
page.goto(url, wait_until="domcontentloaded", timeout=30000)

The script launches a browser and visits each URL from the user-supplied file, with no host allowlist. This is expected for a scraper but should be used only with intended URLs.

User impact: If the URL list contains unintended or untrusted targets, the skill will still attempt to browse them from the user's environment.
Recommendation: Review the URL file before running and limit it to documentation pages you are authorized to scrape.
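The host-allowlist gap noted above can be narrowed with a small pre-flight filter over the URL list. This is a sketch under assumptions: the skill itself ships no such check, and `ALLOWED_HOSTS` is a hypothetical placeholder for the portals you are actually authorized to scrape.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; replace with the documentation hosts you
# actually intend to scrape.
ALLOWED_HOSTS = {"docs.example.com"}

def filter_urls(urls):
    """Keep only URLs whose hostname is on the allowlist."""
    kept = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if host in ALLOWED_HOSTS:
            kept.append(url)
    return kept

urls = [
    "https://docs.example.com/guide/intro",
    "https://evil.example.net/page",
]
print(filter_urls(urls))
```

Running such a filter over `all_urls` before the `page.goto` loop means an unintended host in the URL file is dropped rather than visited.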
Sensitive data protection

Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.

Memory and Context Poisoning
Severity: Low · Confidence: Medium · Status: Note
scripts/analyze_docs_batch.py
content = f.read()
...
summary = content[:500].strip() + "..."
...
print(json.dumps(results)) # Print to stdout for OpenClaw to capture

An included helper can read file paths passed to it and print summaries into agent-visible output. It is not shown running automatically, but it can expose local file contents if used on the wrong files.

User impact: If invoked on sensitive local files, snippets of those files could be placed into the agent conversation or logs.
Recommendation: Use the analyzer only on intended scraped documentation files and avoid passing files that contain secrets or private data.
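To keep the analyzer pointed only at scraped output, a caller could confine inputs to the scraper's output directory before reading anything. A minimal sketch, assuming the output directory is named `scraped_docs` (the real directory name is whatever you chose when running the scraper); `Path.is_relative_to` requires Python 3.9+:

```python
from pathlib import Path

# Hypothetical output directory; substitute the directory you chose
# when running the scraper.
DOCS_DIR = Path("scraped_docs").resolve()

def is_safe_input(path_str: str) -> bool:
    """Return True only if the path resolves inside the scraped-docs tree."""
    candidate = Path(path_str).resolve()
    return candidate.is_relative_to(DOCS_DIR)

# A traversal attempt such as "scraped_docs/../.ssh/id_rsa" resolves
# outside DOCS_DIR and is rejected.
```

Gating the analyzer's file arguments through a check like this keeps snippets of unrelated local files out of agent-visible output.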