Curated Search
Pass. Audited by ClawScan on May 10, 2026.
Overview
Curated Search appears to be a transparent local documentation-search skill; its notable risks are limited to standard, disclosed setup and crawling steps that the user controls.
Before installing, review the npm dependencies and config.yaml whitelist. Run `npm run crawl` only when you are comfortable contacting the listed sites, and do not enable the cron/systemd examples unless you want scheduled background re-crawls.
Findings (5)
This is an artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If you run the crawler, it will contact the configured public documentation sites and create a local search index.
The skill can access external websites during crawling, but the behavior is disclosed, tied to its search-index purpose, and described as user-initiated and whitelist-scoped.
The crawler optionally makes outbound HTTP requests during index builds (typically a one-time setup step), but those requests are user-initiated (`npm run crawl`) and respect the configured domain whitelist.
Review the `domains`, `seeds`, `depth`, `delay`, and `max_documents` settings in config.yaml before running `npm run crawl`.
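For orientation, a config.yaml of the kind described might look like the sketch below. The field names come from the scan's recommendation; the values and exact schema are illustrative assumptions, not the skill's actual shipped config.

```yaml
# Hypothetical config.yaml shape; the skill's real schema may differ.
domains:                 # whitelist: crawler should contact only these hosts
  - nodejs.org
  - developer.mozilla.org
seeds:                   # starting URLs for the crawl
  - https://nodejs.org/api/
depth: 2                 # link hops followed from each seed
delay: 1000              # milliseconds between requests
max_documents: 500       # hard cap on pages fetched per crawl
```

Checking that every seed falls under a whitelisted domain, and that depth and max_documents are modest, keeps the crawl scoped to what you intend.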
Installing dependencies may fetch third-party packages needed by the crawler/search implementation.
The documented setup requires installing Node dependencies, which is expected for this skill but makes npm package provenance part of the user’s trust decision.
# 1. Install dependencies
npm install
Prefer `npm ci` with the included package-lock.json, and review dependencies if you have strict supply-chain requirements.
Running the health-check script executes local project code to verify the search index.
The static scan shows the health-check helper spawning a local Node process. That is code execution, but it appears to serve the skill's own local health check rather than any hidden execution path.
const result = spawnSync(process.execPath || 'node', [
Run optional helper scripts only after reviewing the package, and do not grant extra privileges unnecessarily.
An agent may be able to call the local search tool when it deems it useful, even if the documentation frames use as explicit user invocation.
The skill's documentation (quoted below) uses stronger wording than the registry default in the supplied metadata, where model invocation is not disabled. The practical risk is limited because search is local, but users should understand that the platform may still allow agent-initiated tool calls.
The `curated-search.search` tool is invoked **only when the user explicitly calls it**. It does not run autonomously.
If you require manual-only use, configure OpenClaw to disable autonomous model invocation for this skill, if the platform supports that control.
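Such a restriction might look like the fragment below. The key name is hypothetical; OpenClaw's actual policy schema is not documented in this review, so consult the platform documentation for the real setting.

```yaml
# Hypothetical OpenClaw policy fragment; key names are illustrative only.
skills:
  curated-search:
    allowModelInvocation: false   # require explicit user calls only
```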
If you enable the cron or systemd examples, the crawler will run periodically and update local index data.
The deployment guide documents an optional cron job for periodic crawling. This is user-directed persistence, not an automatic install-time background agent.
0 2 * * 0 cd /home/q/.openclaw/workspace/skills/curated-search && /usr/bin/npm run crawl
Only enable scheduled crawling if you want background updates, and run it under a least-privilege user with resource limits.
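If you prefer systemd over cron, the same Sunday 02:00 schedule (matching the documented `0 2 * * 0` cron line) can be expressed with a timer unit that also enforces the least-privilege and resource-limit advice. Unit names, the service user, and the quota values below are illustrative assumptions.

```ini
# Hypothetical /etc/systemd/system/curated-search-crawl.service
[Service]
Type=oneshot
User=curated-search
WorkingDirectory=/home/q/.openclaw/workspace/skills/curated-search
ExecStart=/usr/bin/npm run crawl
CPUQuota=50%
MemoryMax=512M

# Hypothetical /etc/systemd/system/curated-search-crawl.timer
[Timer]
OnCalendar=Sun *-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Running the crawl under a dedicated user with `CPUQuota` and `MemoryMax` caps bounds what a misbehaving crawl can consume.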
