Curated Search

v1.0.7

Domain-restricted full-text search over curated technical documentation

MIT-0
Security Scan

VirusTotal: Pending
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (domain-restricted curated search) line up with the included files (crawler, indexer, content extractor, search CLI) and the declared runtime requirement (node). There are no unexpected credentials, unusual binaries, or install steps in the bundle. The domain whitelist and seeds in config.yaml explain why a crawler is included.
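To make the whitelist/seeds point concrete, here is a hypothetical shape for config.yaml. The field names and values are illustrative assumptions inferred from the review (whitelist, seeds, delays, robots.txt), not the skill's actual schema:

```yaml
# Hypothetical config.yaml layout — field names are assumptions.
whitelist:                # only these domains may be crawled
  - docs.example.org
  - developer.example.com
seeds:                    # starting URLs for the crawler
  - https://docs.example.org/index.html
crawl:
  delay_ms: 1000          # politeness delay between requests
  respect_robots: true    # honor robots.txt
```

Reviewing a file of roughly this shape before running `npm run crawl` is what step 1 of the Assessment below amounts to.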
Instruction Scope
SKILL.md and the README consistently state that search operations read local index files (data/index) and make no network calls, which matches the search CLI script. The crawler (scripts/crawl.js + src/crawler.js) does perform outbound HTTP requests, but those are described as user-initiated (npm run crawl) and controlled by config.yaml (whitelist, delays, robots.txt). One minor mismatch: SKILL.md emphasizes 'only when the user explicitly calls it' for invocation, but the platform metadata allows normal autonomous invocation by agents (the platform default). This is an informational mismatch in the documentation rather than a code contradiction, but be aware that agents could be permitted to call the tool unless you restrict it.
Install Mechanism
No install spec is present (instruction-only install), and the bundle contains source JS files and scripts that run under Node. There are no downloads from external or untrusted URLs, no archive extraction steps, and no package manager installs invoked automatically by the skill. Requiring 'node' is proportionate.
Credentials
The skill requests no environment variables, no credentials, and reads only local configuration (config.yaml) and index files under its data/ path. The config contains domain whitelists and seeds (expected). No secret exfiltration indicators or unnecessary credential requirements are present in the manifest or docs.
Persistence & Privilege
The skill is not marked always:true and does not request elevated platform privileges in its metadata. The documentation describes running the crawler periodically (cron/systemd) as an operator action; that implies operational choices but not inherent persistence or privilege escalation in the skill itself. It does not modify other skills' configs in the provided materials.
Assessment
This skill is internally consistent: the search tool reads a local MiniSearch index and requires only Node, and network activity occurs only if you (or an operator) run the crawler (npm run crawl), which fetches pages from the domains listed in config.yaml. Before installing or enabling scheduled crawls:

1. Review and tune the config.yaml domains, seeds, and delays, to avoid hammering sites or crawling sensitive hosts.
2. Run the crawler as a low-privileged user and keep it isolated (the systemd/cron guidance in the docs is helpful).
3. Verify the package does not contain legacy network servers: the repo includes SECURITY_INCIDENT_2026-02-14.md documenting a previously removed server component — the current bundle claims that file was removed, but you may want to inspect the published archive yourself.
4. If you do not want agents to call the tool autonomously, adjust the OpenClaw skill invocation settings (or disable model invocation for this skill).

Overall, the bundle looks coherent and proportionate, but exercise normal operational caution when enabling networked crawling.
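The "low-privileged, isolated" advice in step 2 can be sketched as a pair of systemd units. All paths, the unit names, and the `crawler` account are hypothetical — the skill's docs may recommend different specifics:

```ini
# Hypothetical /etc/systemd/system/curated-crawl.service
[Unit]
Description=Run the curated-search crawler as a dedicated user

[Service]
Type=oneshot
User=crawler                      ; dedicated low-privilege account
WorkingDirectory=/opt/curated-search
ExecStart=/usr/bin/npm run crawl
ProtectSystem=strict              ; filesystem read-only except below
ReadWritePaths=/opt/curated-search/data
PrivateTmp=true

# Hypothetical /etc/systemd/system/curated-crawl.timer
[Unit]
Description=Schedule the crawl once a day

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

A plain cron entry under the same unprivileged account would also satisfy the advice; the systemd form adds the filesystem sandboxing options.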



License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🔍 Clawdis
Bins: node
