Skill · v1.3.0
ClawScan security
Web Search · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Mar 10, 2026, 2:03 AM
- Verdict: benign
- Confidence: medium
- Model: gpt-5-mini
- Summary: The skill's code, dependencies, and runtime instructions match its stated purpose (multi-engine web search via scraping and Playwright). No obvious requests for credentials or system access unrelated to that purpose were found, but there are supply-chain and network-exposure considerations to review before installation.
- Guidance: This skill appears to be what it claims (a scraper-based web search), but before installing:
  1. Review the full web_search.py source (the provided file was partially truncated here) to ensure there are no hidden network callbacks or unexpected data exfiltration.
  2. Evaluate the trustworthiness of the third-party dependencies (crawl4ai, baidusearch, uv) and consider pinning versions or auditing those packages.
  3. Run the skill in an isolated or sandboxed environment, because it will fetch arbitrary URLs and Playwright will download and execute a browser.
  4. Avoid giving it sensitive internal URLs to crawl (SSRF / internal-data exposure risk); one possible pre-screening approach is sketched after this list.
  5. If you need stronger guarantees, request the upstream homepage/repo or a signed release so you can verify provenance.
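For guidance point 4, the sketch below shows one way a host application could pre-screen URLs before handing them to the skill's crawl action. The helper and its policy are illustrative assumptions, not part of the skill: it resolves each hostname with the standard library and rejects anything that maps to a private, loopback, link-local, or reserved address.

```python
# Hypothetical pre-flight URL screen for a crawl action (not part of the skill).
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_public_url(url: str) -> bool:
    """Allow only http(s) URLs whose host resolves exclusively to public addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve every address the hostname maps to (A and AAAA records).
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # treat unresolvable hosts as unsafe
    for info in infos:
        # Strip any IPv6 zone id (e.g. fe80::1%eth0) before parsing.
        addr = ipaddress.ip_address(info[4][0].split("%")[0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True

# Example: the cloud metadata endpoint is link-local and gets blocked.
for candidate in ("https://example.com/docs", "http://169.254.169.254/latest/meta-data"):
    print(candidate, "->", "allow" if is_safe_public_url(candidate) else "block")
```

A check like this is not a complete SSRF defense on its own: redirects and DNS rebinding can still route a request to an internal host after the screen passes, so it belongs alongside, not instead of, the network-level isolation recommended in point 3.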
Review Dimensions
- Purpose & Capability (ok): The name/description (multi-engine web search without API keys) aligns with the included code and declared dependencies (baidusearch, Playwright, crawl4ai, requests). The script implements search and crawl functions and uses the listed libraries.
- Instruction Scope (note): SKILL.md instructs the agent to run scripts/web_search.py (entry point: main) to perform 'search', 'deep_search', and 'crawl' actions. Those actions necessarily fetch arbitrary web pages and may run browser automation (Playwright). The instructions do not request unrelated files or env vars, but 'crawl' allows fetching arbitrary URLs (risk of accessing internal resources / SSRF; see the URL-screening sketch above) and 'deep_search' may fetch and extract page contents. This is expected for a crawler/search skill but increases the attack surface and data exposure.
- Install Mechanism (note): No explicit install spec in the registry (instruction-only), but the package includes requirements.txt and Python code. Dependencies are standard PyPI packages. Playwright will download a Chromium browser on first run (~100 MB), as noted; this is expected but is a side effect to be aware of. Using third-party packages (crawl4ai, baidusearch, uv) is normal but introduces supply-chain risk; a version-audit sketch follows this list. setup.py references a GitHub URL, but the README.md that setup.py points to is not present in the manifest (a minor inconsistency).
- Credentials (ok): The skill does not request any environment variables, primary credentials, or config paths. The code shown does not access local secrets or unusual system configuration. The required libraries are consistent with web-scraping and search functionality.
- Persistence & Privilege (ok): Flags show no always:true or other elevated persistence. The skill is user-invocable and can be invoked autonomously (the platform default), but it does not request a permanent presence or modify other skills' configurations.
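For the supply-chain concern flagged under Install Mechanism (and guidance point 2), a small audit script can confirm that an environment actually contains the dependency versions you reviewed. This is a minimal sketch under stated assumptions: the pins below are placeholders, not values taken from the skill's requirements.txt.

```python
# Hypothetical post-install audit: compare installed package versions
# against the pins you reviewed. The "0.0.0" pins are placeholders, not
# values from the skill's requirements.txt.
from importlib.metadata import PackageNotFoundError, version

REVIEWED_PINS = {
    "crawl4ai": "0.0.0",     # placeholder
    "baidusearch": "0.0.0",  # placeholder
    "playwright": "0.0.0",   # placeholder
    "requests": "0.0.0",     # placeholder
}

def audit(pins: dict[str, str]) -> list[str]:
    """Return human-readable mismatches between reviewed pins and reality."""
    problems = []
    for name, pinned in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            problems.append(f"{name}: not installed")
            continue
        if installed != pinned:
            problems.append(f"{name}: installed {installed}, reviewed {pinned}")
    return problems

if __name__ == "__main__":
    for line in audit(REVIEWED_PINS) or ["all reviewed pins match"]:
        print(line)
```

Version pins only verify package metadata; for content-level assurance, pip's --require-hashes mode additionally checks artifact hashes against a lock file.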
