Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Research

v1.0.0

Conduct open-ended research on a topic, building a living markdown document. Supports interactive and deep research modes.

Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (high confidence)

Purpose & Capability
The skill claims to run deep async research via a 'parallel-research' CLI and to export PDFs via an 'export-pdf' script. However, the package contains only docs (OPENCLAW.md, SETUP.md, SKILL.md) and no scripts or binaries. The instructions expect files under ~/.openclaw/skills/research/scripts/, but those scripts are not present—this mismatch suggests either missing artifacts or sloppy packaging.
Instruction Scope
Runtime instructions tell the agent to create files under ~/.openclaw/workspace/research and to schedule cron jobs that deliver results back to a source channel. They also instruct how to store and expose PARALLEL_API_KEY via a local .env and by appending an export to shell profile. The doc assumes an env var exists and that cron jobs can deliver messages to external channels—these behaviors are plausible for a research skill but the instructions reference environment/config that were not declared and that enable external delivery of results, which increases the risk surface.
Install Mechanism
There is no formal install spec, but SETUP.md recommends symlinking local scripts (which are absent) and running a remote installer via curl (curl -LsSf https://astral.sh/uv/install.sh | sh). Advising an unattended remote install script is high-risk. Also recommending global symlinks (/usr/local/bin) and modifying ~/.bashrc are persistent, privileged operations. The absence of included scripts plus a remote install command is a problematic combination.
Credentials
The skill metadata declares no required env vars, yet the docs repeatedly rely on PARALLEL_API_KEY being present and instruct creating ~/.secrets/parallel_ai/.env and exporting it into the shell profile. Asking users to store an API key in plain text and append it to shell startup without declaring the key in the skill manifest is disproportionate and inconsistent. The requested secret itself (Parallel AI key) is plausible for the described deep-research capability, but the handling guidance is insecure and undocumented in the manifest.
Persistence & Privilege
The skill does not request always:true and doesn't claim extra platform privileges. However, SETUP.md instructs persistent changes (symlinks into user PATH, appending exports to ~/.bashrc, creating ~/.secrets, and scheduling cron jobs). Those are normal for CLI tooling but are materially persistent and should be performed only after verifying the scripts being linked and installed.
What to consider before installing
This skill's docs expect a 'parallel-research' CLI and 'export-pdf' script plus a PARALLEL_API_KEY, but those scripts and the env var are not declared or bundled. Before installing or following SETUP.md:

  1. Ask the author for the missing scripts (or a trusted release URL) and for a homepage/repo so you can review their code.
  2. Do not blindly run the curl | sh installer (astral.sh): audit that script or install uv/pymupdf via your distro/package manager instead.
  3. Prefer storing API keys in a secure secret store or OS keyring rather than echoing them into a plaintext file and appending 'export' lines to ~/.bashrc.
  4. If you must symlink scripts, inspect them first and avoid system-wide /usr/local installs unless you trust the source.
  5. Verify how cron payloads deliver messages and what data they include (cron jobs could leak results or identifiers to external channels).

If the author provides the missing artifacts and a trustworthy source for the uv installer, the architecture is reasonable; without that, the package is inconsistent and should be treated with caution.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97333mzv0ajavtn3n9dr8vs8d81vctv
883 downloads · 0 stars · 1 version
Updated 8h ago · v1.0.0 · MIT-0

Research Skill

Description

Conduct open-ended research on a topic, building a living markdown document. The conversation is ephemeral; the document is what matters.

Trigger

Activate when the user wants to:

  • Research a topic, idea, or question
  • Explore something before committing to building it
  • Investigate options, patterns, or approaches
  • Create a "research doc" or "investigation"
  • Run deep async research on a complex topic

Research Directory

Each research topic gets its own folder:

~/.openclaw/workspace/research/<topic-slug>/
├── prompt.md          # Original research question/prompt
├── research.md        # Main findings (Parallel output or interactive notes)
├── research.pdf       # PDF export (when generated)
└── ...                # Any other related files (data, images, etc.)
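The <topic-slug> folder name can be derived mechanically from the topic. A minimal sketch (the `slugify` helper is illustrative; the skill does not ship one):

```shell
# Turn a free-form topic into a folder-safe slug:
# lowercase, collapse non-alphanumeric runs to "-", trim edge dashes.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -E -e 's/[^a-z0-9]+/-/g' -e 's/^-+|-+$//g'
}

slugify "Deep Research: LLM Agents"   # → deep-research-llm-agents
```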

Two Research Modes

1. Interactive Research (default)

For topics you explore together in conversation. You search, synthesize, and update the doc in real-time.

2. Deep Research (async)

For complex topics that need comprehensive investigation. Uses the Parallel AI API via parallel-research CLI. Takes minutes to hours, returns detailed markdown reports.

When to use deep research:

  • Market analysis, competitive landscape
  • Technical deep-dives requiring extensive source gathering
  • Multi-faceted questions that benefit from parallel exploration
  • When user says "deep research" or wants comprehensive coverage

Interactive Research Workflow

1. Initialize Research

  1. Create the research folder at ~/.openclaw/workspace/research/<topic-slug>/

  2. Create prompt.md with the original question:

    # <Topic Title>
    
    > <The core question or curiosity>
    
    **Started:** <date>
    
  3. Create research.md with the working structure:

    # <Topic Title>
    
    **Status:** Active Research
    **Started:** <date>
    **Last Updated:** <date>
    
    ---
    
    ## Open Questions
    - <initial questions to explore>
    
    ## Findings
    <!-- Populated as we research -->
    
    ## Options / Approaches
    <!-- If comparing solutions -->
    
    ## Resources
    <!-- Links, references, sources -->
    
    ## Next Steps
    <!-- What to explore next, or "graduate to project" -->
    
  4. Confirm with user - Show the folder was created and ask what to explore first.
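Steps 1-3 above can be sketched as a single shell helper. Paths follow this document; the function name and the placeholder content are illustrative, not part of the skill:

```shell
# Scaffold a new interactive research doc: folder, prompt.md, research.md.
init_research() {
  topic="$1"; slug="$2"
  dir="$HOME/.openclaw/workspace/research/$slug"
  mkdir -p "$dir"
  today=$(date +%Y-%m-%d)
  # prompt.md: the quoted line would normally be the core question;
  # the topic is reused here for brevity.
  cat > "$dir/prompt.md" <<EOF
# $topic

> $topic

**Started:** $today
EOF
  cat > "$dir/research.md" <<EOF
# $topic

**Status:** Active Research
**Started:** $today
**Last Updated:** $today

---

## Open Questions

## Findings

## Options / Approaches

## Resources

## Next Steps
EOF
  echo "$dir"
}
```

Running `init_research "LLM Agents" llm-agents` prints the new folder path and leaves prompt.md and research.md ready for the research loop.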

2. Research Loop

For each exchange:

  1. Do the research - Web search, fetch docs, explore code
  2. Update the document - Add findings, move answered questions, add sources
  3. Show progress - Note what was added (don't repeat everything)
  4. Prompt next direction - End with a question or suggestion

Key behaviors:

  • Update existing sections over creating new ones
  • Use bullet points for findings; prose for summaries
  • Note uncertainty ("seems like", "according to X", "unverified")
  • Link to sources whenever possible

3. Synthesis Checkpoints

Every 5-10 exchanges, offer to:

  • Write a "Current Understanding" summary
  • Prune redundant findings
  • Reorganize if unwieldy
  • Check blind spots

4. Completion

When research is complete, update the status in research.md:

  • "Status: Complete" — Done, stays in place as reference
  • "Status: Ongoing" — Living doc, will be updated over time

If the research is specifically for building a project:

  • Graduate to ~/specs/<project-name>.md as a project spec
  • Or create a project directly based on findings
  • Update status to "Status: Graduated → ~/specs/..."

Most research is just research — it doesn't need to become a spec. Only graduate if you're actually building something from it.


Deep Research Workflow

1. Start Deep Research

parallel-research create "Your research question" --processor ultra --wait

Processor options:

  • lite, base, core, pro, ultra (default), ultra2x, ultra4x, ultra8x
  • Add -fast suffix for speed over depth: ultra-fast, pro-fast, etc.

Options:

  • -w, --wait — Wait for completion and show result
  • -p, --processor — Choose processor tier
  • -j, --json — Raw JSON output

2. Schedule Auto-Check (optional)

Deep research tasks take minutes to hours. You'll want to poll for results automatically rather than checking manually.

Options:

  • OpenClaw users: See OPENCLAW.md for cron-based auto-check scheduling
  • Other setups: Use any scheduler (cron, systemd timer, CI job) to periodically run parallel-research status <run_id> and parallel-research result <run_id> until complete
  • Simple approach: Just use parallel-research create "..." --wait to block until done (works for shorter tasks)
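For non-OpenClaw setups, the periodic check can be a small shell loop. This is a hedged sketch: it assumes `parallel-research status <run_id>` prints a status string containing "completed" or "failed", which nothing in this package confirms, so verify the real CLI's output first. `PR_BIN` is an illustrative override so the loop can be pointed at a stub for testing:

```shell
# Poll a deep-research run until it finishes, then print the result.
PR_BIN="${PR_BIN:-parallel-research}"

poll_research() {
  run_id="$1"; interval="${2:-300}"   # default: check every 5 minutes
  while :; do
    status=$("$PR_BIN" status "$run_id")
    case "$status" in
      *completed*) "$PR_BIN" result "$run_id"; return 0 ;;
      *failed*)    echo "run $run_id failed: $status" >&2; return 1 ;;
    esac
    sleep "$interval"
  done
}
```

The same loop can run under cron or a systemd timer instead of blocking a shell; either way, audit what the result delivery includes before wiring it to an external channel.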

3. Manual Check (if needed)

parallel-research status <run_id>
parallel-research result <run_id>

4. Save to Research Folder

Create the research folder and save results:

~/.openclaw/workspace/research/<topic-slug>/
├── prompt.md          # Original question + run metadata
├── research.md        # Full Parallel output

prompt.md should include:

# <Topic Title>

> <Original research question>

**Run ID:** <run_id>
**Processor:** <processor>
**Started:** <date>
**Completed:** <date>

research.md contains the full Parallel output, plus any follow-up notes.


PDF Export

All PDFs go in the research folder — never save to tmp/. Whether using export-pdf, the browser pdf action, or any other method, the output path must be research/<topic-slug>/.

Use the export-pdf script to convert research docs to PDF:

export-pdf ~/.openclaw/workspace/research/<topic-slug>/research.md
# Creates: ~/.openclaw/workspace/research/<topic-slug>/research.pdf
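The export-pdf script itself is not bundled with this package, but the path rule above is easy to encode. An illustrative helper (name and behavior assumed, not taken from the skill):

```shell
# Derive the PDF destination from the markdown source, so output always
# lands next to research.md in the topic folder -- never in tmp/.
pdf_dest() {
  echo "${1%.md}.pdf"
}
```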

For browser-generated PDFs (e.g. saving a webpage as PDF):

browser pdf → save to research/<topic-slug>/<descriptive-name>.pdf

Note: Tables render as stacked rows (PyMuPDF limitation). Acceptable for research docs.


Commands

  • "new research: <topic>" - Start interactive research doc
  • "deep research: <topic>" - Start async deep research
  • "show doc" / "show research" - Display current research file
  • "summarize" - Synthesis checkpoint
  • "graduate" - Move research to next phase
  • "archive" - Mark as complete reference
  • "export pdf" - Export to PDF
  • "check research" - Check status of pending deep research tasks

Document Principles

  • Atomic findings - One insight per bullet
  • Link everything - Sources, docs, repos
  • Capture context - Why did we look at this?
  • Note confidence - Use qualifiers when uncertain
  • Date important findings - Especially for fast-moving topics

Setup

See SETUP.md for first-time installation of:

  • parallel-research CLI
  • PDF export tools (pandoc, PyMuPDF)
