Parallel Deep Research

v1.0.0

Deep multi-source research via Parallel API. Use when user explicitly asks for thorough research, comprehensive analysis, or investigation of a topic. For quick lookups or news, use parallel-search instead.

3 stars · 3.9k downloads · 28 current · 31 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for normallygaussian/parallel-deep-research.

Prompt Preview: Install & Setup
Install the skill "Parallel Deep Research" (normallygaussian/parallel-deep-research) from ClawHub.
Skill page: https://clawhub.ai/normallygaussian/parallel-deep-research
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install parallel-deep-research

ClawHub CLI


npx clawhub@latest install parallel-deep-research
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The skill's name and description (deep multi-source research via Parallel API) align with the instructions in SKILL.md. However, the registry metadata declares no required environment variables or credentials, while SKILL.md explicitly requires a PARALLEL_API_KEY. That mismatch between metadata and manifest is an inconsistency: a research skill that calls an external API should declare its primary credential in requires.env.
Instruction Scope
The runtime instructions tell the agent/user to set PARALLEL_API_KEY, install a CLI, run queries, save files under /tmp or user-specified paths, and spawn sub-sessions to read those files. Those actions are consistent with the stated purpose, but the instructions reference an environment variable and an installation step that are not declared in the skill metadata. No other unexpected file/system access is requested.
Install Mechanism
There is no formal install spec in the package metadata, but SKILL.md recommends running curl -fsSL https://parallel.ai/install.sh | bash. Piping a remote install script to shell is a high-risk pattern: it downloads and executes arbitrary code from the network. While the URL appears to be the vendor's domain, the skill should either declare an install spec or avoid instructing users/agents to run a shell install in-line.
Credentials
The only credential implied by the instructions is PARALLEL_API_KEY, which is appropriate for a third-party research API. However, the skill metadata declares no required env vars or primary credential — that omission is misleading and reduces transparency. The SKILL.md does not request unrelated credentials, so the requested secret would be proportional if properly declared.
Persistence & Privilege
The skill does not request 'always: true' and uses default agent invocation permissions. It does not attempt to modify other skills or global agent config in the instructions. Saving research outputs to files and using sessions_spawn is within scope for this kind of skill.
What to consider before installing
This skill appears to document a legitimate Parallel.ai research CLI, but the package metadata is incomplete and the README tells you to run a curl | bash installer. Before installing or giving this skill an API key:

  1. Ask the publisher to declare PARALLEL_API_KEY (primaryEnv) in the registry metadata so the requirement is explicit.
  2. Verify the installer URL (https://parallel.ai/install.sh) independently on the vendor website, and prefer official release packages or package-manager installs over piping a script to shell.
  3. Create a limited-scope API key for testing (do not reuse high-privilege keys).
  4. Run the installation in a sandbox or isolated environment first.
  5. If you need stronger assurance, request an install spec and a verifiable checksum/signature for the installer, or rely on the upstream docs (https://docs.parallel.ai) rather than the skill text.

If you don't want the agent to install or run external code autonomously, do not grant it the ability to run the install commands automatically.


latest: vk979kq337gaqy872wf9z0691n980ewxj
3.9k downloads
3 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Parallel Deep Research

Deep, multi-source research for complex topics requiring synthesis from many sources. Returns comprehensive reports with citations.

When to Use

Trigger this skill when the user asks for:

  • "deep research on...", "thorough investigation of...", "comprehensive report about..."
  • "research everything about...", "full analysis of..."
  • Complex topics requiring synthesis from 10+ sources
  • Competitive analysis, market research, due diligence
  • Questions where depth and accuracy matter more than speed

NOT for:

  • Quick lookups or simple questions (use parallel-search)
  • Current news or recent events (use parallel-search with --after-date)
  • Reading specific URLs (use parallel-extract)

Quick Start

parallel-cli research run "your research question" --processor pro-fast --json -o ./report

CLI Reference

Basic Usage

parallel-cli research run "<question>" [options]

Common Flags

| Flag | Description |
|------|-------------|
| -p, --processor <tier> | Processor tier (see table below) |
| --json | Output as JSON |
| -o, --output <path> | Save results to file (creates .json and .md) |
| -f, --input-file <path> | Read query from file (for long questions) |
| --timeout N | Max wait time in seconds (default: 3600) |
| --no-wait | Return immediately, poll later with research status |

Processor Tiers

| Processor | Time | Use Case |
|-----------|------|----------|
| lite-fast | 10-20s | Quick lookups |
| base-fast | 15-50s | Simple questions |
| core-fast | 15s-100s | Moderate research |
| pro-fast | 30s-5min | Exploratory research (default) |
| ultra-fast | 1-10min | Multi-source deep research |
| ultra2x-fast | 1-20min | Difficult deep research |
| ultra4x-fast | 1-40min | Very difficult research |
| ultra8x-fast | 1min-1hr | Most challenging research |

Non-fast variants (e.g., pro, ultra) take longer but use fresher data.

Examples

Basic research:

parallel-cli research run "What are the latest developments in quantum computing?" \
  --processor pro-fast \
  --json -o ./quantum-report

Deep competitive analysis:

parallel-cli research run "Compare Stripe, Square, and Adyen payment platforms: features, pricing, market position, and developer experience" \
  --processor ultra-fast \
  --json -o ./payments-analysis

Long research question from file:

# Create question file
cat > /tmp/research-question.txt << 'EOF'
Investigate the current state of AI regulation globally:
1. What regulations exist in the US, EU, and China?
2. What's pending or proposed?
3. How do companies like OpenAI, Google, and Anthropic respond?
4. What industry groups are lobbying for/against regulation?
EOF

parallel-cli research run -f /tmp/research-question.txt \
  --processor ultra-fast \
  --json -o ./ai-regulation-report

Non-blocking research:

# Start research without waiting
parallel-cli research run "research question" --no-wait

# Check status later
parallel-cli research status <task-id>

# Poll until complete
parallel-cli research poll <task-id> --json -o ./report
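The --no-wait flow above can also be driven by a hand-rolled polling loop. Here is a minimal sketch; poll_status is a hypothetical stand-in for running parallel-cli research status <task-id> and parsing its output, and in this self-contained version it simulates a task that reaches completed on the third check:

```shell
#!/usr/bin/env bash
# Sketch of a manual polling loop for a --no-wait run.
# poll_status is a stand-in for:
#   parallel-cli research status <task-id>
# Here it simulates a task that completes on the third check.
checks=0
status="pending"

poll_status() {
  checks=$((checks + 1))
  if [ "$checks" -ge 3 ]; then
    status="completed"
  else
    status="running"
  fi
}

# Keep polling until the task reaches a terminal state.
while [ "$status" != "completed" ] && [ "$status" != "failed" ]; do
  poll_status   # real usage: call the CLI, parse status, then sleep
done
echo "$status"  # prints "completed"
```

In a real script, add a sleep between checks and an upper bound on iterations so the loop cannot spin forever if the task stalls.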

Best-Practice Prompting

Research Question

Write 2-5 sentences describing:

  • The specific question or topic
  • Scope boundaries (time period, geography, industries)
  • What aspects matter most (pricing? features? market share?)
  • Desired output format (comparison table, timeline, pros/cons)

Good:

Compare the top 5 CRM platforms for B2B SaaS companies with 50-200 employees.
Focus on: pricing per seat, integration ecosystem, reporting capabilities.
Include recent 2024-2026 changes and customer reviews from G2/Capterra.

Poor:

Tell me about CRMs

Response Format

Returns structured JSON with:

  • task_id — unique identifier for polling
  • status — pending, running, completed, or failed
  • result — when complete:
    • summary — executive summary
    • findings[] — detailed findings with sources
    • sources[] — all referenced URLs with titles
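As an illustrative sketch only, a completed response with the fields above might look like this (the task_id value, the shape of each findings entry, and any nesting beyond the documented field names are assumptions):

```json
{
  "task_id": "trun_abc123",
  "status": "completed",
  "result": {
    "summary": "Executive summary of the findings...",
    "findings": [
      {
        "text": "Key finding with supporting detail.",
        "sources": ["https://example.com/article"]
      }
    ],
    "sources": [
      { "url": "https://example.com/article", "title": "Example Article" }
    ]
  }
}
```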

Output Handling

When presenting research results:

  • Lead with the executive summary verbatim
  • Present key findings without paraphrasing
  • Include source URLs for all facts
  • Note any conflicting information between sources
  • Preserve all facts, names, numbers, dates, quotes

Running Out of Context?

For long conversations, save results and use sessions_spawn:

parallel-cli research run "<question>" --json -o /tmp/research-<topic>

Then spawn a sub-agent:

{
  "tool": "sessions_spawn",
  "task": "Read /tmp/research-<topic>.json and present the executive summary and key findings with sources.",
  "label": "research-summary"
}

Error Handling

| Exit Code | Meaning |
|-----------|---------|
| 0 | Success |
| 1 | Unexpected error (network, parse) |
| 2 | Invalid arguments |
| 3 | API error (non-2xx) |
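These exit codes make the CLI straightforward to script around. A minimal sketch, assuming only the documented codes; describe_exit is a hypothetical helper, and in practice you would capture $? immediately after a real parallel-cli invocation:

```shell
#!/usr/bin/env bash
# Map the documented exit codes to human-readable messages.
describe_exit() {
  case "$1" in
    0) echo "success" ;;
    1) echo "unexpected error (network, parse)" ;;
    2) echo "invalid arguments" ;;
    3) echo "API error (non-2xx)" ;;
    *) echo "unknown exit code: $1" ;;
  esac
}

describe_exit 0   # prints "success"
describe_exit 3   # prints "API error (non-2xx)"
```

A wrapper could, for example, retry on exit code 1 (transient network failures) while treating codes 2 and 3 as permanent errors.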

Prerequisites

  1. Get an API key at parallel.ai
  2. Install the CLI:
curl -fsSL https://parallel.ai/install.sh | bash
  3. Set your API key:
export PARALLEL_API_KEY=your-key

