Install
openclaw skills install parallel-ai-search

Use Parallel's parallel-cli to do live web search, URL extraction (clean markdown), deep research reports, bulk data enrichment (CSV/JSON), FindAll entity discovery, and ongoing monitors.

This is a single “master” skill that replaces the earlier Node-script-based version of parallel-ai-search.
It routes to the right parallel-cli capability for the task:
- Search (parallel-cli search)
- Extract (parallel-cli extract)
- Research (parallel-cli research ...)
- Enrich (parallel-cli enrich ...)
- FindAll (parallel-cli findall ...)
- Monitor (parallel-cli monitor ...)

Choose the smallest / cheapest action that solves the user's request.
Optional manual prefixes if the user invoked this skill directly:
- search: ...
- extract: ...
- research: ...
- enrich: ...
- findall: ...
- monitor: ...

If a prefix is present, honour it.
Before running any Parallel command, ensure auth works:
parallel-cli auth
If parallel-cli is missing, install it:
curl -fsSL https://parallel.ai/install.sh | bash
If you cannot use the install script, use pipx:
pipx install "parallel-web-tools[cli]"
pipx ensurepath
Then authenticate (choose one):
# Interactive OAuth (opens browser)
parallel-cli login
# Headless / SSH / CI
parallel-cli login --device
# Or environment variable
export PARALLEL_API_KEY="your_api_key"
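As a quick sanity check, here is a minimal shell sketch that installs and authenticates before doing any paid work. It only uses the commands shown above; the assumption that parallel-cli auth exits non-zero when unauthenticated is mine, not documented.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Install parallel-cli if it is not on PATH (documented install script).
if ! command -v parallel-cli >/dev/null 2>&1; then
  curl -fsSL https://parallel.ai/install.sh | bash
fi

# Verify authentication; fall back to device login for headless sessions.
# Assumption: `parallel-cli auth` exits non-zero when not authenticated.
if ! parallel-cli auth; then
  parallel-cli login --device
fi
```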
When answering, cite sources inline as [Source Title](https://...). Save large outputs to /tmp/ and summarise in-chat.

Use Search for fast, cost-effective answers with citations.
parallel-cli search "$OBJECTIVE" --mode agentic --max-results 10 --json
Add any of these only when relevant:
- --after-date YYYY-MM-DD (freshness constraint)
- --include-domains a.com b.org (restrict sources)
- --exclude-domains spam.com (block sources)
- -q "keyword query" flags (extra keyword probes)
- -o "/tmp/$SLUG.search.json" (save full JSON to a file)

From the JSON results, extract the title, url, and any publish_date / excerpt fields. Answer the user's question, and cite each claim inline.
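For example, a minimal sketch that runs a search, saves the full JSON, and pulls out title/url pairs with jq. The objective and slug are placeholders, and the exact layout of the results JSON is an assumption; only the title and url fields named above are relied on.

```bash
SLUG="acme-pricing"                                   # hypothetical slug
OBJECTIVE="What pricing tiers does Acme offer in 2024?"

# Run an agentic search and keep the full JSON for later reference.
parallel-cli search "$OBJECTIVE" --mode agentic --max-results 10 --json \
  -o "/tmp/$SLUG.search.json"

# Assumption: result objects carry "title" and "url" fields somewhere in the JSON.
jq -r '.. | objects | select(.title? and .url?) | "\(.title) - \(.url)"' \
  "/tmp/$SLUG.search.json"
```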
Use Extract when you need the actual contents of specific URLs (webpages, PDFs, JS-heavy sites).
parallel-cli extract "$URL" --json
Add when relevant:
--objective "Focus area" (e.g., pricing, API usage, constraints)--full-content (only if the user needs the whole page)--no-excerpts (if you only want full content)-o "/tmp/$SLUG.extract.json" (save full JSON to a file)Deep research is slower and may cost more than Search. Use it only when the user explicitly wants depth.
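For instance, a minimal sketch that extracts one page with a focus objective and saves the JSON; the URL and slug are placeholders, and only the flags listed above are used.

```bash
SLUG="acme-docs-quickstart"                 # hypothetical slug for the saved file
URL="https://docs.example.com/quickstart"   # placeholder URL

parallel-cli extract "$URL" --json \
  --objective "API usage and rate limits" \
  -o "/tmp/$SLUG.extract.json"
```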
Deep research is slower and may cost more than Search. Use it only when the user explicitly wants depth.

parallel-cli research run "$QUESTION" --processor pro-fast --no-wait --json
Parse run_id (and any monitoring URL) from JSON and tell the user the run started.
Choose a short slug filename (lowercase-hyphen), then:
parallel-cli research poll "$RUN_ID" -o "/tmp/$SLUG" --timeout 540
The poll writes /tmp/$SLUG.md and /tmp/$SLUG.json. If polling times out, re-run the same poll command; the run continues server-side.
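Putting the run and poll steps together, a minimal sketch; the question and slug are placeholders, and the assumption that the run id sits under a top-level "run_id" JSON key follows the parsing note above.

```bash
QUESTION="How are EU AI Act obligations phased in for general-purpose models?"
SLUG="eu-ai-act-phase-in"

# Start the run without waiting and capture the run id from the JSON response.
# Assumption: the response exposes the id as a top-level "run_id" key.
RUN_ID=$(parallel-cli research run "$QUESTION" --processor pro-fast --no-wait --json \
  | jq -r '.run_id')

# Poll until done (or timeout); output lands in /tmp/$SLUG.md and /tmp/$SLUG.json.
parallel-cli research poll "$RUN_ID" -o "/tmp/$SLUG" --timeout 540
```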
Use Enrich to add web-sourced columns to structured data.
parallel-cli enrich suggest "$INTENT" --json
Use this when the user knows the goal but not the exact output schema.
For CSV:
parallel-cli enrich run --source-type csv --source "input.csv" --target "/tmp/enriched.csv" --source-columns '[{"name":"company","description":"Company name"}]' --intent "$INTENT" --no-wait --json
For inline JSON rows:
parallel-cli enrich run --data '[{"company":"Google"},{"company":"Apple"}]' --target "/tmp/enriched.csv" --intent "$INTENT" --no-wait --json
Parse taskgroup_id from JSON.
parallel-cli enrich poll "$TASKGROUP_ID" --timeout 540 --json
After completion, point the user to the enriched output file (the --target you chose). If the poll times out, re-run it; the job continues server-side.
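A minimal end-to-end sketch for the CSV path; the intent and input file are placeholders, and the assumption that the task group id appears as a top-level "taskgroup_id" JSON key follows the parsing note above.

```bash
INTENT="Add each company's headquarters city and latest funding round"

# Kick off the enrichment without waiting and capture the task group id.
# Assumption: the JSON response exposes it as a top-level "taskgroup_id" key.
TASKGROUP_ID=$(parallel-cli enrich run \
  --source-type csv --source "input.csv" --target "/tmp/enriched.csv" \
  --source-columns '[{"name":"company","description":"Company name"}]' \
  --intent "$INTENT" --no-wait --json | jq -r '.taskgroup_id')

# Poll for completion, then hand the user /tmp/enriched.csv.
parallel-cli enrich poll "$TASKGROUP_ID" --timeout 540 --json
```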
Use FindAll when the user wants you to discover a set of entities (e.g., “AI startups in healthcare”, “roofing companies in Charlotte”, “YC devtools companies”).
parallel-cli findall run "$OBJECTIVE" --generator core --match-limit 25 --no-wait --json
Useful options:
- --dry-run --json to preview the schema before spending money
- --exclude '[{"name":"Example Corp","url":"example.com"}]' to avoid known entities
- --generator preview|base|core|pro (core is the default; pro for the hardest queries)

Parse run_id from JSON.
parallel-cli findall poll "$RUN_ID" --json
parallel-cli findall result "$RUN_ID" --json
Respond with the discovered entities and their key fields from the result JSON, citing source URLs where available.
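A sketch of the full loop; the objective is a placeholder, and the "run_id" key follows the parsing note above.

```bash
OBJECTIVE="YC-backed devtools companies founded since 2021"

# Start discovery without waiting and capture the run id.
# Assumption: the response exposes the id as a top-level "run_id" key.
RUN_ID=$(parallel-cli findall run "$OBJECTIVE" --generator core --match-limit 25 --no-wait --json \
  | jq -r '.run_id')

# Check progress, then fetch the final entity list once the run is complete.
parallel-cli findall poll "$RUN_ID" --json
parallel-cli findall result "$RUN_ID" --json
```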
Use Monitor when the user wants ongoing tracking.
Create:
parallel-cli monitor create "$OBJECTIVE" --cadence daily --json
Optional:
- --cadence hourly|daily|weekly|every_two_weeks
- --webhook https://example.com/hook (deliver events externally)
- --output-schema '<JSON schema string>' (structured events)

Manage:
parallel-cli monitor list --json
parallel-cli monitor get "$MONITOR_ID" --json
parallel-cli monitor update "$MONITOR_ID" --cadence weekly --json
parallel-cli monitor delete "$MONITOR_ID"
parallel-cli monitor events "$MONITOR_ID" --json
parallel-cli monitor simulate "$MONITOR_ID" --json
Respond with the monitor id and how to retrieve events (or confirm webhook delivery).
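For example, a minimal sketch that creates a daily monitor and later pulls its events; the objective is a placeholder, and the "monitor_id" JSON key is an assumption, not documented.

```bash
OBJECTIVE="New SEC filings that mention generative AI risk factors"

# Create a daily monitor and capture its id from the JSON response.
# Assumption: the response exposes the id as a top-level "monitor_id" key.
MONITOR_ID=$(parallel-cli monitor create "$OBJECTIVE" --cadence daily --json \
  | jq -r '.monitor_id')

# Later: list accumulated events (or simulate one to sanity-check the setup).
parallel-cli monitor events "$MONITOR_ID" --json
```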
- references/command-templates.md
- references/troubleshooting.md