SearXNG-lite

v1.0.0

Multi-engine web search aggregation via local Python script. Use when: (1) searching the web for information, articles, documentation, (2) searching code rep...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description promise a local, Python-based multi-engine search aggregator. The package contains a single Python script that implements scraping and parsing logic for many public search engines, plus a config.yml to toggle engines and set a proxy; both are coherent with and expected for the stated purpose.
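A config.yml of this kind typically looks something like the sketch below. The key names and engine list here are assumptions for illustration, not the skill's actual schema; consult the bundled file before editing.

```yaml
# Hypothetical config.yml sketch — keys and engine names are assumptions,
# not the skill's real schema; check the bundled config.yml first.
engines:
  duckduckgo: true
  brave: true
  google: false    # disabled: may require a proxy to avoid blocks
proxy: null        # e.g. "socks5://127.0.0.1:9050" to route traffic via a local proxy
```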
Instruction Scope
SKILL.md instructs running the included script and editing config.yml. The runtime instructions and the script operate only on the skill directory (config.yml) and make outbound HTTP(S) requests to public search engines (and optionally use HTTPS_PROXY). The instructions do not ask the agent to read unrelated files or exfiltrate data to unknown endpoints.
Install Mechanism
No install spec is provided; this is instruction-only plus an included script. The dependencies (httpx, lxml, and optionally pyyaml/socksio) are typical and are installed via pip by the user. There are no downloads from untrusted URLs or archive-extraction steps in the manifest.
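Since the dependencies are installed manually, a quick preflight check before running the script can save a confusing traceback. A minimal sketch, using the module names listed above:

```python
import importlib.util

def missing_deps(names=("httpx", "lxml")):
    """Return the required module names that are not importable locally."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Example: tell the user what to `pip install` before running the script.
missing = missing_deps()
if missing:
    print("Install missing packages:", ", ".join(missing))
```

Optional extras (pyyaml, socksio) can be passed in the same way without making them hard requirements.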
Credentials
The skill declares no required environment variables or credentials. It optionally respects standard proxy environment variables (HTTPS_PROXY), which is justified by the need to reach certain engines. No secret tokens, keys, or unrelated service credentials are requested.
Persistence & Privilege
The skill does not request always:true, does not modify agent/system configuration, and is user-invocable only. It runs only when invoked and does not persist elevated privileges.
Assessment
This skill appears to do what it says: run a local Python script that scrapes multiple public search engines. Before installing, consider:

(1) Review the included scripts yourself (they are bundled) and run them in an isolated environment if you are concerned about network activity.
(2) You will need to pip-install httpx and lxml and give the script network access; it will make many outbound HTTP requests to public search engines.
(3) Some engines require a proxy (config.yml or HTTPS_PROXY), and scraping may trigger CAPTCHAs or rate limits.
(4) Avoid searching sensitive or private data, because queries and results travel over the network to third-party sites.
(5) If you intend to use this in an automated or always-on agent, be aware it can make arbitrary outbound requests when invoked.

If you want extra caution, run it in a container or restricted network namespace and inspect logs/output during initial runs.
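One low-effort way to approximate that isolation without a container is to invoke the script with a stripped-down environment, so only variables you explicitly allow (such as a proxy setting) reach it. A sketch, with the actual script name and CLI left as commented assumptions:

```python
import os
import subprocess

def run_isolated(cmd, extra_env=None, timeout=60):
    """Run cmd with a minimal environment: only PATH plus explicitly
    allowed variables are passed through to the child process."""
    env = {"PATH": os.environ.get("PATH", "")}
    env.update(extra_env or {})
    return subprocess.run(
        cmd, env=env, capture_output=True, text=True, timeout=timeout
    )

# Hypothetical usage — the script name and arguments are assumptions:
# run_isolated(
#     ["python3", "search.py", "example query"],
#     extra_env={"HTTPS_PROXY": os.environ.get("HTTPS_PROXY", "")},
# )
```

This does not restrict network access (use a container or network namespace for that), but it does keep API keys and other ambient environment variables away from the script during initial test runs.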

Like a lobster shell, security has layers — review code before you run it.

latest: vk973243d8xr5c4h26g089pqqcs83jktc

