Web Search Local
v0.1.1 · Local web search without an API key. Supports multi-engine search across Bing, DuckDuckGo, and Yandex, with a built-in cache and automatic failover. Use when users need to se...
by @lnblxj
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw · Benign · high confidence

Purpose & Capability
The name and description match the implementation: the code scrapes search engines and supports engine selection, automatic failover, caching, request delays, and proxy options. There are no extraneous credential or system-access requests that would be out of place in a web-search tool.
Instruction Scope
SKILL.md and the code consistently instruct the agent to run the provided CLI script, and document its flags. Instructions and code operate only on network requests and a local cache directory (~/.cache/web-search-local); they do not attempt to read unrelated system files or external secrets.
Install Mechanism
There is no declared install spec in the registry (the skill is instruction-only), but the README and docs state runtime dependencies (requests, beautifulsoup4). That mismatch (no automated install step) is not malicious but worth noting: the agent or user must install the dependencies manually.
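Since there is no automated install step, it is worth checking that the documented dependencies are actually importable before running the CLI script. A minimal sketch (the helper name is illustrative, not part of the skill):

```python
# Hedged sketch: check whether the documented runtime dependencies
# (requests, beautifulsoup4 -> imported as "bs4") are installed,
# without importing them outright.
import importlib.util

def missing_deps(mods=("requests", "bs4")):
    """Return the subset of module names that cannot be found."""
    return [m for m in mods if importlib.util.find_spec(m) is None]
```

If the returned list is non-empty, install the missing packages manually, ideally inside a virtualenv (`pip install requests beautifulsoup4`).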
Credentials
The skill requests no environment variables or credentials, and writes only to its own cache directory in the user's home. Proxy configuration is supported via CLI/API functions, which is appropriate for a network client. No unrelated secrets (AWS keys, tokens) are required.
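Proxy support is a normal pattern for a requests-based HTTP client. As a sketch of what such an option typically feeds into (`build_proxies` is an illustrative helper, not the skill's actual API):

```python
# Hedged sketch: map a single proxy URL onto the requests-style
# proxies mapping that a network client would pass to its HTTP calls.
def build_proxies(proxy_url=None):
    """Return a requests-compatible proxies dict, or None if no proxy is set."""
    if not proxy_url:
        return None
    # One proxy URL is applied to both schemes, a common convention.
    return {"http": proxy_url, "https": proxy_url}

# A client would then use it roughly like:
#   requests.get(url, proxies=build_proxies("http://127.0.0.1:8080"))
```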
Persistence & Privilege
The skill does not request persistent 'always' inclusion, does not modify other skills, and stores only its own cache files under ~/.cache/web-search-local. It does not alter system-wide agent configuration or other skills' settings.
Assessment
This skill appears coherent for local web-scraping search. Before installing:
(1) Be aware it performs live HTTP(S) requests to public search engines (Bing, DuckDuckGo, Yandex) and may trigger CAPTCHAs or be blocked on some networks.
(2) It stores cached search results under ~/.cache/web-search-local; inspect or clear them with --cache-clear if local storage is a concern.
(3) Install the documented Python dependencies (requests, beautifulsoup4) manually or in a virtualenv.
(4) Verbose/debug logs may include queries and timing info; avoid sensitive queries if you do not want them logged.
(5) If you require stronger isolation, run the script in a sandboxed container or VM.
For greater assurance, review the full scripts/search.py for any additional network calls or behavior not covered in the docs (the provided file looks consistent with the documented functionality).

Like a lobster shell, security has layers: review code before you run it.
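For point (2) above, the cache can also be inspected or removed outside the skill's own CLI. A minimal sketch, assuming the documented cache path (function names are illustrative, not the skill's API):

```python
# Hedged sketch: inspect and clear ~/.cache/web-search-local by hand,
# with the same effect as the documented --cache-clear flag.
from pathlib import Path
import shutil

CACHE_DIR = Path.home() / ".cache" / "web-search-local"

def list_cache():
    """Return the cached entries, or an empty list if no cache exists."""
    return sorted(CACHE_DIR.iterdir()) if CACHE_DIR.exists() else []

def clear_cache():
    """Delete the whole cache directory if present."""
    if CACHE_DIR.exists():
        shutil.rmtree(CACHE_DIR)
```

This only touches the skill's own directory, matching the scan's observation that it writes nowhere else.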
