Skill v1.0.9

ClawScan security

test_skill · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Mar 11, 2026, 10:42 AM
Verdict
benign
Confidence
medium
Model
gpt-5-mini
Summary
The skill's files, install/run instructions, and requested resources are consistent with a web crawler for BBC and other news sites. Nothing in the package indicates covert data exfiltration or unrelated privilege requests, though you should still review third-party dependencies and run installs in an isolated environment.
Guidance
This package appears to be a coherent web crawler. Before installing or running it:

1. Run pip installs and Playwright browser installs in a virtualenv or sandbox (not as root) to avoid system package conflicts.
2. Review the requirements (especially 'crawl4ai') and verify their provenance and any credentials they might require.
3. Be mindful of legal and ethical rules: respect robots.txt and site terms, and avoid aggressive crawling by using delays and domain restrictions.
4. If you need higher assurance, inspect the full universal_crawler_v2.py (the provided file was truncated) and run the code in an isolated network environment to observe outbound connections made by dependencies.
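Point 3 of the guidance (respect robots.txt and use delays) can be sketched with Python's standard-library robots.txt parser. This is an illustrative helper, not code from the skill itself; the robots.txt body below is made up for the example, and a real crawler would fetch it from the target site.

```python
from urllib import robotparser

# Hypothetical robots.txt content for illustration only.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Crawl-delay: 2
"""

def make_parser(robots_txt: str) -> robotparser.RobotFileParser:
    """Build a parser from robots.txt text already in memory."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp

rp = make_parser(ROBOTS_TXT)

# Check whether a given user agent may fetch a URL.
print(rp.can_fetch("my-crawler", "https://example.com/news/story"))  # True
print(rp.can_fetch("my-crawler", "https://example.com/private/x"))   # False

# Honor the site's requested delay between requests (seconds).
print(rp.crawl_delay("my-crawler"))  # 2
```

A polite crawler would call `can_fetch` before each request and sleep for at least the `crawl_delay` value between requests to the same domain.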
Findings
[NO_FINDINGS] expected: Static regex scan found no flagged patterns. That aligns with the package being a normal crawler; absence of findings does not guarantee safety—third-party packages and dynamic behavior (Playwright downloads) should still be reviewed.
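To make the finding concrete, here is a toy sketch of the kind of regex-based static scan the verdict describes. The pattern list is hypothetical and far smaller than what a real scanner would use; it is only meant to show why such a scan can miss dynamic behavior in dependencies.

```python
import re

# Hypothetical pattern list, for illustration only.
SUSPICIOUS_PATTERNS = {
    "eval_call": re.compile(r"\beval\s*\("),
    "env_read": re.compile(r"os\.environ"),
    "raw_socket": re.compile(r"\bsocket\.socket\s*\("),
}

def scan_source(text: str) -> list[str]:
    """Return the names of all patterns that match anywhere in the text."""
    return sorted(name for name, pat in SUSPICIOUS_PATTERNS.items()
                  if pat.search(text))

print(scan_source("data = eval(user_input)"))   # ['eval_call']
print(scan_source("resp = requests.get(url)"))  # []
```

Note the limitation: code pulled in at runtime by a dependency (or a Playwright browser download) never passes through a scan like this, which is why the report still recommends dynamic review.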

Review Dimensions

Purpose & Capability
ok · Name/description (BBC-focused universal crawler with anti-scraping fallbacks) match the included code and scripts: a multi-method crawler (crawl4ai, playwright, requests), deduping, image download, and Markdown output. Minor inconsistencies (README mentions Python 3.8+, SKILL.md says 3.9+) do not change purpose.
Instruction Scope
ok · SKILL.md instructs only installing Python deps and running the crawler with CLI flags. It does not direct the agent to read unrelated local files or environment secrets, nor does the code send collected data to unexpected endpoints (it crawls target sites and writes local files). The crawler will perform network requests to target websites, as expected.
Install Mechanism
note · No platform install spec declared in registry, but the repository includes install.py / install_dependencies.sh, which run pip install -r requirements.txt and 'python -m playwright install chromium'. Dependencies are fetched via pip and Playwright's browser install (standard mechanisms). Note: crawl4ai is a third-party package (no pinned source) and Playwright will download browser binaries from the web; verify packages and run installs in an isolated environment.
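One way to spot-check the provenance of an unpinned dependency like crawl4ai is to compare a downloaded archive's SHA-256 against the digest published on PyPI's "Download files" page. A minimal sketch, in which the file name and expected digest are placeholders rather than real crawl4ai values:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, streamed in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for a downloaded sdist; the digest is the known SHA-256 of b"hello".
archive = Path("example-archive.tar.gz")
archive.write_bytes(b"hello")
expected = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
print(sha256_of(archive) == expected)  # True
```

For routine installs, pip's --require-hashes mode with pinned digests in requirements.txt automates the same check at install time.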
Credentials
ok · The skill declares no required environment variables, credentials, or config paths. The code does not read secrets or request unrelated credentials. Dependencies may later require credentials (e.g., if optional third-party services are used), so check upstream package docs.
Persistence & Privilege
ok · The skill is not always-enabled and does not request elevated platform privileges. It writes lock files and output data under its working directory only. No modifications to other skills or global agent settings are present.