Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Research

Conduct open-ended research on a topic, building a living markdown document. Supports interactive and deep research modes.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 2.7k · 14 current installs · 14 all-time installs
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill claims to run deep async research via a 'parallel-research' CLI and the Parallel AI API, which is consistent with a 'research' purpose. However, the package metadata declares no required credentials or binaries, while the docs repeatedly reference a PARALLEL_API_KEY and a bundled CLI in scripts/ (to be symlinked). The repo does not include those scripts, so the claimed capabilities depend on external artifacts that are neither provided nor declared.
Instruction Scope
SKILL.md and SETUP.md instruct the agent to create files under ~/.openclaw/workspace/research and to schedule cron jobs that poll results and deliver them back to channels (e.g., Discord). That behavior is within a research tool's scope, but the instructions also tell the user to run external installers and to expose an API key via exported env vars in ~/.bashrc. The cron payload includes 'channel' and 'to' fields used to post results externally; make sure you understand where outputs will be sent.
Install Mechanism
Although the registry lists no automated install, SETUP.md instructs the user to symlink scripts from ~/.openclaw/skills/research/scripts/ and to run a remote installer: 'curl -LsSf https://astral.sh/uv/install.sh | sh'. Download-and-execute from an external script is high-risk. Additionally, the instructions reference CLI scripts that are not included in the skill bundle, creating ambiguity about the source and integrity of those binaries.
Credentials
Registry metadata claims no required environment variables, but OPENCLAW.md and SETUP.md both reference PARALLEL_API_KEY (and recommend storing it in ~/.secrets and exporting it via ~/.bashrc). Requesting and loading an API key is reasonable for calling a third-party research API, but it should be declared in metadata and the install guidance should avoid insecure patterns (e.g., writing secrets into shell RC files or unclear script locations).
Persistence & Privilege
always:false (normal). The skill suggests setting up scheduled checks (cron jobs) that will run later and deliver results back to a channel. That creates background activity and outbound posting of research results; it's expected for async research but the user should confirm where results will be delivered and that the scheduled jobs won't leak sensitive content.
Scan Findings in Context
[NO_CODE_FILES] expected: The scanner found no code to analyze because this is an instruction-only skill (only SKILL.md, SETUP.md, OPENCLAW.md are present). Absence of code does not imply safety; the runtime instructions themselves include external installers and secret handling that must be reviewed.
What to consider before installing
This skill looks like a reasonable research assistant, but some of its claims don't add up and several install steps are risky. Before installing:

  1. Ask the author for the missing 'parallel-research' and 'export-pdf' scripts (they are referenced but not included). Verify their source and checksum; do NOT symlink or run binaries from an untrusted location.
  2. Don't run remote curl | sh installers (e.g., astral.sh) without reviewing the script first. Prefer installing uv and pandoc from trusted package managers or official release pages.
  3. Be careful how you store PARALLEL_API_KEY: avoid appending export commands to ~/.bashrc if a safer secret store is available; if you must, restrict file permissions and understand who can read your shell config.
  4. Review the cron payload: confirm the 'channel' and 'to' destinations where results will be posted so you don't inadvertently share sensitive results externally.
  5. If you plan to use deep research, confirm what data (full scraped outputs, attachments) will be sent to Parallel AI and whether that's acceptable for your use case.

If the author cannot provide the missing CLI scripts or a trustworthy installation source, treat the skill as incomplete and avoid installing the recommended binaries.
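The advice above about PARALLEL_API_KEY can be followed concretely. A minimal sketch, assuming the ~/.secrets location the docs themselves mention; the SECRETS_DIR override and the parallel.env filename are hypothetical choices for illustration, not part of the skill:

```shell
# Sketch: store PARALLEL_API_KEY in a permission-restricted file under
# ~/.secrets (as the docs suggest) instead of exporting it from ~/.bashrc.
# SECRETS_DIR and the parallel.env filename are hypothetical.
secrets="${SECRETS_DIR:-$HOME/.secrets}"
mkdir -p "$secrets"
chmod 700 "$secrets"                       # directory readable only by you
umask 077                                  # new files created mode 600
printf 'export PARALLEL_API_KEY=%s\n' 'YOUR_KEY_HERE' > "$secrets/parallel.env"
# Then, only in the shell session that needs it:
#   . "$secrets/parallel.env"
```

This keeps the secret out of every interactive shell and out of version-controlled dotfiles; only sessions that explicitly source the file see the key.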

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.1.0
Download zip
latest: vk975b7xepj02wa6gn7fa7xmkj981v5py

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Research Skill

Description

Conduct open-ended research on a topic, building a living markdown document. The conversation is ephemeral; the document is what matters.

Trigger

Activate when the user wants to:

  • Research a topic, idea, or question
  • Explore something before committing to building it
  • Investigate options, patterns, or approaches
  • Create a "research doc" or "investigation"
  • Run deep async research on a complex topic

Research Directory

Each research topic gets its own folder:

~/.openclaw/workspace/research/<topic-slug>/
├── prompt.md          # Original research question/prompt
├── research.md        # Main findings (Parallel output or interactive notes)
├── research.pdf       # PDF export (when generated)
└── ...                # Any other related files (data, images, etc.)
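The skill never defines how <topic-slug> is derived from a topic. A minimal sketch of one common convention (lowercase, runs of non-alphanumerics collapsed to hyphens); `slugify` is a hypothetical helper name, not part of the skill:

```shell
# Hypothetical helper: derive a <topic-slug> from a free-form topic title.
# The skill does not specify a slug rule; this is one plausible convention.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9][^a-z0-9]*/-/g' -e 's/^-//' -e 's/-$//'
}

slugify "Rust Async Runtimes?"   # -> rust-async-runtimes
```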

Two Research Modes

1. Interactive Research (default)

For topics you explore together in conversation. You search, synthesize, and update the doc in real-time.

2. Deep Research (async)

For complex topics that need comprehensive investigation. Uses the Parallel AI API via parallel-research CLI. Takes minutes to hours, returns detailed markdown reports.

When to use deep research:

  • Market analysis, competitive landscape
  • Technical deep-dives requiring extensive source gathering
  • Multi-faceted questions that benefit from parallel exploration
  • When user says "deep research" or wants comprehensive coverage

Interactive Research Workflow

1. Initialize Research

  1. Create the research folder at ~/.openclaw/workspace/research/<topic-slug>/

  2. Create prompt.md with the original question:

    # <Topic Title>
    
    > <The core question or curiosity>
    
    **Started:** <date>
    
  3. Create research.md with the working structure:

    # <Topic Title>
    
    **Status:** Active Research
    **Started:** <date>
    **Last Updated:** <date>
    
    ---
    
    ## Open Questions
    - <initial questions to explore>
    
    ## Findings
    <!-- Populated as we research -->
    
    ## Options / Approaches
    <!-- If comparing solutions -->
    
    ## Resources
    <!-- Links, references, sources -->
    
    ## Next Steps
    <!-- What to explore next, or "graduate to project" -->
    
  4. Confirm with user - Show the folder was created and ask what to explore first.
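The initialization steps above could be scripted. A sketch, assuming the directory layout and templates from this SKILL.md; the example slug, title, question, and the RESEARCH_BASE override are placeholders for illustration:

```shell
# Sketch of the Initialize Research steps (folder + prompt.md + research.md).
# RESEARCH_BASE is a hypothetical override; the skill itself writes to
# ~/.openclaw/workspace/research.
slug="rust-async-runtimes"                      # example topic slug
title="Rust Async Runtimes"
question="Which async runtime fits a small CLI tool?"
base="${RESEARCH_BASE:-$HOME/.openclaw/workspace/research}"
dir="$base/$slug"
today=$(date +%Y-%m-%d)

mkdir -p "$dir"

cat > "$dir/prompt.md" <<EOF
# $title

> $question

**Started:** $today
EOF

cat > "$dir/research.md" <<EOF
# $title

**Status:** Active Research
**Started:** $today
**Last Updated:** $today

---

## Open Questions
- $question

## Findings

## Options / Approaches

## Resources

## Next Steps
EOF
```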

2. Research Loop

For each exchange:

  1. Do the research - Web search, fetch docs, explore code
  2. Update the document - Add findings, move answered questions, add sources
  3. Show progress - Note what was added (don't repeat everything)
  4. Prompt next direction - End with a question or suggestion

Key behaviors:

  • Update existing sections over creating new ones
  • Use bullet points for findings; prose for summaries
  • Note uncertainty ("seems like", "according to X", "unverified")
  • Link to sources whenever possible

3. Synthesis Checkpoints

Every 5-10 exchanges, offer to:

  • Write a "Current Understanding" summary
  • Prune redundant findings
  • Reorganize if unwieldy
  • Check blind spots

4. Completion

When research is complete, update the status in research.md:

  • "Status: Complete" — Done, stays in place as reference
  • "Status: Ongoing" — Living doc, will be updated over time

If the research is specifically for building a project:

  • Graduate to ~/specs/<project-name>.md as a project spec
  • Or create a project directly based on findings
  • Update status to "Status: Graduated → ~/specs/..."

Most research is just research — it doesn't need to become a spec. Only graduate if you're actually building something from it.
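Flipping the status field is a one-liner. A sketch, assuming the `**Status:**` line from the template above; `mark_complete` is a hypothetical helper name:

```shell
# Hypothetical helper: flip the Status field of a research.md in place.
# Assumes the "**Status:** ..." line from the research.md template.
mark_complete() {
  sed -i 's/^\*\*Status:\*\* .*/**Status:** Complete/' "$1"
}

# Usage: mark_complete ~/.openclaw/workspace/research/<topic-slug>/research.md
```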


Deep Research Workflow

1. Start Deep Research

parallel-research create "Your research question" --processor ultra --wait

Processor options:

  • lite, base, core, pro, ultra (default), ultra2x, ultra4x, ultra8x
  • Add -fast suffix for speed over depth: ultra-fast, pro-fast, etc.

Options:

  • -w, --wait — Wait for completion and show result
  • -p, --processor — Choose processor tier
  • -j, --json — Raw JSON output

2. Schedule Auto-Check (optional)

Deep research tasks take minutes to hours. You'll want to poll for results automatically rather than checking manually.

Options:

  • OpenClaw users: See OPENCLAW.md for cron-based auto-check scheduling
  • Other setups: Use any scheduler (cron, systemd timer, CI job) to periodically run parallel-research status <run_id> and parallel-research result <run_id> until complete
  • Simple approach: Just use parallel-research create "..." --wait to block until done (works for shorter tasks)
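For non-OpenClaw schedulers, the poll-until-complete loop can be sketched as below. The `status` and `result` subcommands are the ones shown in the Manual Check step; the exact state strings ("completed", "failed") are assumptions about the CLI's output, and the PR variable is a hypothetical override so the command can be stubbed or swapped:

```shell
# Sketch: poll a deep-research run until it finishes, then print the result.
# State strings "completed"/"failed" are assumptions about the CLI output.
PR="${PR:-parallel-research}"

poll_research() {
  run_id="$1"
  while :; do
    state=$("$PR" status "$run_id")
    case "$state" in
      *completed*) "$PR" result "$run_id"; return 0 ;;
      *failed*)    echo "run $run_id failed" >&2; return 1 ;;
    esac
    sleep 60    # poll interval; deep runs can take minutes to hours
  done
}
```

Run it from cron or a systemd timer with the run_id captured from `parallel-research create`.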

3. Manual Check (if needed)

parallel-research status <run_id>
parallel-research result <run_id>

4. Save to Research Folder

Create the research folder and save results:

~/.openclaw/workspace/research/<topic-slug>/
├── prompt.md          # Original question + run metadata
├── research.md        # Full Parallel output

prompt.md should include:

# <Topic Title>

> <Original research question>

**Run ID:** <run_id>
**Processor:** <processor>
**Started:** <date>
**Completed:** <date>

research.md contains the full Parallel output, plus any follow-up notes.


PDF Export

All PDFs go in the research folder — never save to tmp/. Whether using export-pdf, the browser pdf action, or any other method, the output path must be research/<topic-slug>/.

Use the export-pdf script to convert research docs to PDF:

export-pdf ~/.openclaw/workspace/research/<topic-slug>/research.md
# Creates: ~/.openclaw/workspace/research/<topic-slug>/research.pdf

For browser-generated PDFs (e.g. saving a webpage as PDF):

browser pdf → save to research/<topic-slug>/<descriptive-name>.pdf

Note: Tables render as stacked rows (PyMuPDF limitation). Acceptable for research docs.


Commands

  • "new research: <topic>" - Start interactive research doc
  • "deep research: <topic>" - Start async deep research
  • "show doc" / "show research" - Display current research file
  • "summarize" - Synthesis checkpoint
  • "graduate" - Move research to next phase
  • "archive" - Mark as complete reference
  • "export pdf" - Export to PDF
  • "check research" - Check status of pending deep research tasks

Document Principles

  • Atomic findings - One insight per bullet
  • Link everything - Sources, docs, repos
  • Capture context - Why did we look at this?
  • Note confidence - Use qualifiers when uncertain
  • Date important findings - Especially for fast-moving topics

Setup

See SETUP.md for first-time installation of:

  • parallel-research CLI
  • PDF export tools (pandoc, PyMuPDF)

Files

3 total
