Skill v4.2.0
ClawScan security
Local Web Search · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Apr 29, 2026, 4:11 AM
- Verdict
- suspicious
- Confidence
- medium
- Model
- gpt-5-mini
- Summary
- The skill largely matches its declared local web-search/browse/verification behavior, but several mismatches and sloppy or potentially deceptive details (undeclared environment variables, an odd hard-coded default local URL, an incomplete requirements manifest, and optional secret-access paths) raise concern and warrant clarification before use.
- Guidance
- This skill appears to implement local search, page fetching, and claim verification as described, but check the following before installing or enabling it:
  - Confirm the default LOCAL_SEARCH_URL: the skill defaults to http://192.168.2.169:8081, a specific local IP (likely the author's). Set LOCAL_SEARCH_URL to your own SearXNG instance, or leave it unset, to avoid accidentally sending queries to that host.
  - Environment variables are used but not declared: the registry metadata lists no required env vars even though the code reads LOCAL_SEARCH_URL, LOCAL_SEARCH_FALLBACK_URL, LOCAL_SEARCH_PROXY, BROWSER_WORKER_URL, and proxy variables. Treat these as sensitive configuration and set them deliberately.
  - Optional Gemini mode and 1Password integration: the Gemini helper accepts GEMINI_API_KEY/GOOGLE_API_KEY or calls the 'op' CLI to read secrets if you pass op-item arguments. Only point it at those secrets if you trust the code and the runtime environment. The 'op' usage executes the 1Password CLI locally and can return credentials; it is not automatic, but it is powerful when used.
  - Missing dependency/install clarity: requirements.txt lists only google-genai, but the scripts recommend or rely on 'scrapling' and Playwright for full functionality. The install steps are not authoritative; review and install the required packages (scrapling, playwright) yourself in a controlled virtualenv before use.
  - Network behavior and local probes: the scripts probe local ports to auto-detect proxies and may invoke curl or spawn subprocesses. This is expected for a web fetcher but does reveal local network configuration. If you run this in a sensitive environment, review that behavior and consider restricting network access or running in an isolated environment.
  - Fallback behavior and BROWSER_WORKER_URL: fallback to a public search provider is disabled by default but can be enabled via LOCAL_SEARCH_FALLBACK_URL; do not set it unless you trust the fallback provider. BROWSER_WORKER_URL delegates rendering to an external worker if configured; only set it to a sidecar you control and trust.

  What would change this assessment to 'benign': explicit, accurate registry metadata listing the optional env vars and required runtime binaries; a complete requirements/install manifest that includes scrapling and Playwright, or a trusted install script; and removal or explanation of the hard-coded default local IP. If you plan to use this skill, run it in a sandbox/VM or review and run the scripts manually first, and never provide API keys or 1Password items unless you understand the invocation path.
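The first recommendation above can be turned into a pre-flight check. This is a hypothetical sketch, not code from the skill: it refuses to proceed when LOCAL_SEARCH_URL is unset or still points at the hard-coded default reported by the scanner.

```python
import os

# The default endpoint flagged by the scanner -- refuse to fall back to it.
HARD_CODED_DEFAULT = "http://192.168.2.169:8081"

def resolve_search_url(env=os.environ):
    """Return a deliberately configured LOCAL_SEARCH_URL, or raise.

    Raising instead of silently defaulting ensures queries are never
    sent to an endpoint the user did not choose.
    """
    url = env.get("LOCAL_SEARCH_URL")
    if not url or url == HARD_CODED_DEFAULT:
        raise RuntimeError(
            "LOCAL_SEARCH_URL is unset or points at the skill's hard-coded "
            "default; set it to a SearXNG instance you control."
        )
    return url
```

Running such a check before the skill's own scripts makes the misconfiguration fail loudly rather than leak queries to an unknown host.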
Review Dimensions
- Purpose & Capability
- note · Name/description align with the included scripts (search_local_web.py, browse_page.py, verify_claim.py, optional Gemini helper) — the code implements local SearXNG-based search, page fetching, and claim verification as advertised. However, there are oddities: the default LOCAL_SEARCH_URL is hard-coded to http://192.168.2.169:8081 (a specific private IP), and the registry metadata declares no required env vars while the code expects many optional environment variables (LOCAL_SEARCH_URL, LOCAL_SEARCH_FALLBACK_URL, LOCAL_SEARCH_PROXY, GEMINI_API_KEY/GOOGLE_API_KEY, BROWSER_WORKER_URL, GEMINI_OP_*). These discrepancies are unexpected and should be explained.
- Instruction Scope
- concern · SKILL.md instructs the agent to run the included Python scripts and to fetch arbitrary result URLs. The scripts perform outbound HTTP GETs to any URL supplied (browse_page.py / verify_claim.py), probe local ports (auto-detect proxies), check for local Chrome paths, and may invoke external utilities (curl, 1Password CLI 'op'). While these actions are consistent with a web-browsing skill, they give the skill the ability to fetch arbitrary remote content and to interact with local system components and CLIs — all of which are sensitive and should be explicit in the metadata and install instructions. The SKILL.md does not fully enumerate these runtime capabilities in registry metadata.
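The local-port probing mentioned above typically amounts to a short-timeout TCP connect attempt. A minimal illustrative sketch (not the skill's actual code) of what such a probe does, and therefore what it reveals about local network configuration:

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.25) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    A tool that sweeps common proxy ports with this check learns which
    local services are listening -- harmless for auto-detection, but worth
    knowing about in a sensitive environment.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```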
- Install Mechanism
- note · There is no install spec in the registry (instruction-only), which is lower risk, but the shipped scripts instruct users to install third-party Python packages and Playwright manually. requirements.txt only lists google-genai, yet the code strongly references 'scrapling' and optionally Playwright/Chromium. run_gemini_search.sh will create a venv and pip-install requirements.txt (which would not install scrapling). The absence of an authoritative install manifest for required runtime packages (scrapling, playwright) is sloppy and risks surprising a user when the scripts attempt to install/run components.
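Because requirements.txt is incomplete, a user can audit the gap themselves before running anything. A hypothetical sketch (not part of the skill) that reports which of the undeclared runtime dependencies are actually importable:

```python
import importlib.util

# Packages the scripts reference but requirements.txt does not pin.
UNDECLARED = ["scrapling", "playwright"]

def missing_packages(names):
    """Return the subset of package names that cannot be imported.

    importlib.util.find_spec returns None for a top-level module that
    is not installed, without importing anything.
    """
    return [n for n in names if importlib.util.find_spec(n) is None]
```

Running `missing_packages(UNDECLARED)` inside the target virtualenv shows exactly what the venv created by run_gemini_search.sh would be missing.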
- Credentials
- concern · Registry metadata claims no required environment variables, but the code reads and relies on many env vars (LOCAL_SEARCH_URL, LOCAL_SEARCH_FALLBACK_URL, LOCAL_SEARCH_PROXY, HTTPS_PROXY/ALL_PROXY, BROWSER_WORKER_URL, GEMINI_API_KEY/GOOGLE_API_KEY, GEMINI_OP_*). Optional Gemini mode can consume API keys or use the 1Password CLI to read secrets. The presence of a 1Password helper that invokes the 'op' CLI means the skill can access vault secrets if configured — appropriate for optional Gemini mode but it must be explicit. Overall, the number and sensitivity of env/config paths used is larger than the declared metadata indicates.
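The 1Password access path described above can be gated explicitly. This is a hedged sketch of a safer wrapper pattern, not the skill's actual helper; it assumes the standard 1Password CLI v2 form `op read op://vault/item/field` and refuses to touch the vault unless the caller opts in:

```python
import shutil
import subprocess

def read_op_secret(ref: str, allow: bool = False) -> str:
    """Resolve a 1Password secret reference via `op read`.

    Secrets are only fetched when `allow` is explicitly True and the
    `op` binary is actually installed, so vault access can never happen
    as a silent side effect of configuration.
    """
    if not allow:
        raise PermissionError("1Password access not explicitly enabled")
    if shutil.which("op") is None:
        raise FileNotFoundError("1Password CLI ('op') not installed")
    result = subprocess.run(
        ["op", "read", ref], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()
```

The point of the gate is the review finding itself: 'op' invocation is powerful but not automatic, and a wrapper like this keeps it that way.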
- Persistence & Privilege
- ok · The skill does not request always:true or claim any persistent elevated privileges. It does not modify other skills' configurations. It reads a local .project_root file for a path hint but does not write files by default. Autonomous invocation is allowed (platform default) but is not by itself a new risk here.
