Parallel Extract
Pass
Audited by ClawScan on May 7, 2026.
Overview
This is a transparent URL-extraction helper. The main things to notice are its use of an authenticated Parallel CLI and its optional sharing of saved extracts with a sub-agent.
This skill appears coherent and purpose-aligned. Before installing or using it, make sure you trust and have correctly installed the official Parallel CLI, understand that requested URLs and page content may be processed by Parallel under your authenticated account, and avoid the optional `/tmp` plus sub-agent workflow for sensitive or private pages unless you intend that sharing.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The skill will not work unless a separate CLI is installed, and that CLI's behavior is outside this skill artifact.
The skill depends on an external CLI that is not included in the artifact set. This is expected for the purpose, but users should install the CLI only from trusted Parallel documentation.
Evidence: "Requires `parallel-cli` (installed and authenticated). If `parallel-cli --version` fails... tell the user to see https://docs.parallel.ai/integrations/cli and stop."
Mitigation: Install `parallel-cli` only from the official Parallel documentation and keep it updated.
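The preflight check quoted in this finding can be sketched as a small shell helper. The function name `check_cli` is illustrative; only the `--version` probe and the docs URL come from the skill text, and the skill's actual implementation may differ.

```shell
# Illustrative preflight based on the quoted SKILL.md instruction.
# check_cli is a made-up helper name, not part of the skill.
check_cli() {
  # $1: CLI binary to verify (e.g. parallel-cli)
  if command -v "$1" >/dev/null 2>&1 && "$1" --version >/dev/null 2>&1; then
    return 0
  fi
  echo "$1 not found or not working; see https://docs.parallel.ai/integrations/cli" >&2
  return 1
}

# Per the skill: check, and stop if the CLI is unavailable.
# check_cli parallel-cli || exit 1
```

The `command -v` guard avoids a confusing "command not found" error before the `--version` probe runs.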
URL-extraction requests may use your Parallel account, quota, billing, or access permissions.
The skill uses the user's authenticated Parallel CLI session. This is purpose-aligned for API access, but it means actions run under the user's Parallel account.
Evidence: "Requires `parallel-cli` (installed and authenticated)."
Mitigation: Use an account or API credential appropriate for this purpose, and do not authenticate with broader access than needed.
Extracted page content saved to `/tmp` may be read by another agent session and could remain on disk temporarily.
The skill suggests passing extracted content through a temporary file to a spawned sub-agent. This is disclosed and task-scoped, but it broadens where extracted content is reused.
Evidence: "tool": "sessions_spawn", "task": "Read /tmp/extract-<topic>.json and summarize the key content."
Mitigation: Use this workflow only for non-sensitive extracts, choose specific filenames, and remove temporary files when no longer needed.
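The temp-file hygiene recommended in this mitigation can be sketched in shell. The `mktemp` pattern and the sample payload are illustrative assumptions, not taken from the skill.

```shell
# Illustrative temp-file hygiene for the sub-agent handoff.
# mktemp yields a specific, unpredictable name instead of a guessable one.
extract_file="$(mktemp /tmp/extract-XXXXXX)"
printf '%s\n' '{"url": "https://example.com", "content": "..."}' > "$extract_file"

# ...a sub-agent would read "$extract_file" here...

# Remove the file as soon as it is no longer needed.
rm -f "$extract_file"
```

Deleting the file immediately after the sub-agent finishes limits how long extracted content lingers where other sessions could read it.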
