Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Deep Research Pro 1.0.2

v1.0.0

Multi-source deep research agent. Searches the web, synthesizes findings, and delivers cited reports. No API keys required.

0 stars · 221 downloads · 5 current · 5 all-time
by Raidan Pro (@raidan-ai)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for raidan-ai/deep-research-pro-1-0-2.

Prompt Preview: Install & Setup
Install the skill "Deep Research Pro 1.0.2" (raidan-ai/deep-research-pro-1-0-2) from ClawHub.
Skill page: https://clawhub.ai/raidan-ai/deep-research-pro-1-0-2
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install deep-research-pro-1-0-2

ClawHub CLI

Package manager switcher

npx clawhub@latest install deep-research-pro-1-0-2
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
The skill claims to perform web research using DuckDuckGo with no API keys, which fits the description. However, the runtime instructions require a local ddg search script at /home/clawdbot/clawd/skills/ddg-search/scripts/ddg and use curl/python3. None of these binaries or config paths are listed in the skill's declared requirements, and the ddg script appears to be an undeclared dependency provided by another skill. Hard-coded absolute paths into another skill's directory are disproportionate and fragile.
Instruction Scope
SKILL.md instructs the agent to execute commands against absolute local paths (/home/clawdbot/...), run a local ddg script, fetch arbitrary URLs with curl and pipe them into python3 -c, create directories under ~/clawd/research, and spawn sub-agents reading local skill files. These instructions reference system paths and other skills' files not declared in the metadata and grant the agent discretion to fetch and process many external URLs — all of which broaden the runtime surface beyond what's documented.
Install Mechanism
There is no install spec (instruction-only), which reduces direct install risk. However, relying on an undeclared local script (ddg) and standard tools (curl, python3) means the skill expects existing software on the host; because the script path is an arbitrary local file, execution of that script (if present) could run anything. No external downloads are specified.
Credentials
The skill requests no credentials or environment variables, which is proportionate. That said, it omits declaring required binaries (curl, python3) and required config paths (/home/clawdbot/... and ~/clawd/...), so the metadata understates what the skill actually needs and will access.
Persistence & Privilege
The skill is not set to always:true and is user-invocable (defaults). It directs saving reports under the user's home (~/clawd/research) and instructs spawning sub-agents, which are normal for a research agent. There is no explicit request to modify other skills' configurations or to remain permanently enabled.
What to consider before installing
This skill's SKILL.md hard-codes execution of a local ddg search script (/home/clawdbot/.../ddg) and uses curl/python3 but fails to declare those dependencies. Before installing:

  1. Inspect the actual ddg script at the path referenced (if it exists) to verify what it runs — it could execute arbitrary commands.
  2. Confirm curl and python3 are the intended tools and that running curl | python3 is acceptable in your environment.
  3. Ask the author to (a) declare required binaries and config paths in metadata, (b) avoid hard-coded absolute paths or provide a packaged ddg-search dependency, or (c) switch to an explicit network API call or a bundled, auditable search implementation.
  4. Run the skill in a restricted/sandboxed environment first and monitor filesystem and network activity.

If you cannot inspect the ddg script or the environment where it will run, treat this skill as risky and avoid granting it execution privileges.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🔬 Clawdis
latest: vk974bdm9a57tzfjf7w0pjsjzn183jsy3
221 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Deep Research Pro 🔬

A powerful, self-contained deep research skill that produces thorough, cited reports from multiple web sources. No paid APIs required — uses DuckDuckGo search.

How It Works

When the user asks for research on any topic, follow this workflow:

Step 1: Understand the Goal (30 seconds)

Ask 1-2 quick clarifying questions:

  • "What's your goal — learning, making a decision, or writing something?"
  • "Any specific angle or depth you want?"

If the user says "just research it" — skip ahead with reasonable defaults.

Step 2: Plan the Research (think before searching)

Break the topic into 3-5 research sub-questions. For example:

  • Topic: "Impact of AI on healthcare"
    • What are the main AI applications in healthcare today?
    • What clinical outcomes have been measured?
    • What are the regulatory challenges?
    • What companies are leading this space?
    • What's the market size and growth trajectory?

Step 3: Execute Multi-Source Search

For EACH sub-question, run the DDG search script:

# Web search
/home/clawdbot/clawd/skills/ddg-search/scripts/ddg "<sub-question keywords>" --max 8

# News search (for current events)
/home/clawdbot/clawd/skills/ddg-search/scripts/ddg news "<topic>" --max 5

Search strategy:

  • Use 2-3 different keyword variations per sub-question
  • Mix web + news searches
  • Aim for 15-30 unique sources total
  • Prioritize: academic, official, reputable news > blogs > forums
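
The search strategy above can be sketched as a small driver that expands each sub-question into its keyword variations and builds the corresponding ddg invocations. This is a hedged sketch: the script path and `--max` flag are taken from this SKILL.md, and (as the security scan notes) the script is an undeclared dependency that may not exist on your host, so the sketch only constructs the command lines instead of executing them.

```python
import shlex

# Path hard-coded by this SKILL.md; the security scan flags it as an
# undeclared dependency, so we only build the command lines here.
DDG = "/home/clawdbot/clawd/skills/ddg-search/scripts/ddg"

def build_search_commands(sub_question, variations, max_results=8):
    """Return one ddg web-search command per keyword variation."""
    commands = []
    for query in [sub_question, *variations]:
        commands.append([DDG, query, "--max", str(max_results)])
    return commands

cmds = build_search_commands(
    "What are the main AI applications in healthcare today?",
    ["AI clinical diagnostics", "machine learning hospital workflows"],
)
for cmd in cmds:
    print(shlex.join(cmd))  # inspect each command before running it under your own policy
```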

Step 4: Deep-Read Key Sources

For the most promising URLs, fetch full content:

curl -sL "<url>" | python3 -c "
import sys, re
html = sys.stdin.read()
# Drop script/style bodies first so their code doesn't pollute the text,
# then strip remaining tags and collapse whitespace
html = re.sub(r'<script.*?</script>|<style.*?</style>', ' ', html, flags=re.S|re.I)
text = re.sub('<[^>]+>', ' ', html)
text = re.sub(r'\s+', ' ', text).strip()
print(text[:5000])
"

Read 3-5 key sources in full for depth. Don't just rely on search snippets.
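
If piping curl into python3 -c is not acceptable in your environment (the security scan calls this pattern out), the same tag stripping can be done entirely in the Python standard library. A minimal sketch, assuming the page is UTF-8 and that crude regex stripping is good enough for skim-reading:

```python
import re
from urllib.request import urlopen  # stdlib alternative to shelling out to curl

def strip_text(html, limit=5000):
    """Crude visible-text extraction, same approach as the inline snippet."""
    # Drop script/style bodies, then all remaining tags, then collapse whitespace.
    text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", html, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()[:limit]

def fetch_text(url, limit=5000):
    """Fetch a page and return its visible text (network access assumed)."""
    with urlopen(url, timeout=30) as resp:
        return strip_text(resp.read().decode("utf-8", errors="replace"), limit)
```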

Step 5: Synthesize & Write Report

Structure the report as:

# [Topic]: Deep Research Report
*Generated: [date] | Sources: [N] | Confidence: [High/Medium/Low]*

## Executive Summary
[3-5 sentence overview of key findings]

## 1. [First Major Theme]
[Findings with inline citations]
- Key point ([Source Name](url))
- Supporting data ([Source Name](url))

## 2. [Second Major Theme]
...

## 3. [Third Major Theme]
...

## Key Takeaways
- [Actionable insight 1]
- [Actionable insight 2]
- [Actionable insight 3]

## Sources
1. [Title](url) — [one-line summary]
2. ...

## Methodology
Searched [N] queries across web and news. Analyzed [M] sources.
Sub-questions investigated: [list]

Step 6: Save & Deliver

Save the full report:

mkdir -p ~/clawd/research/[slug]
# Write report to ~/clawd/research/[slug]/report.md

Then deliver:

  • Short topics: Post the full report in chat
  • Long reports: Post the executive summary + key takeaways, offer full report as file
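
Step 6 leaves [slug] unspecified. One plausible way to derive it and save the report (the slug rule here is an assumption, not the author's):

```python
import re
from pathlib import Path

def slugify(topic):
    """Hypothetical slug rule: lowercase, runs of non-alphanumerics become hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")

def save_report(topic, report_md, base="~/clawd/research"):
    """Mirror `mkdir -p ~/clawd/research/[slug]` and write report.md there."""
    out_dir = Path(base).expanduser() / slugify(topic)
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / "report.md"
    path.write_text(report_md)
    return path
```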

Quality Rules

  1. Every claim needs a source. No unsourced assertions.
  2. Cross-reference. If only one source says it, flag it as unverified.
  3. Recency matters. Prefer sources from the last 12 months.
  4. Acknowledge gaps. If you couldn't find good info on a sub-question, say so.
  5. No hallucination. If you don't know, say "insufficient data found."
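
Rule 2 (cross-referencing) can be made mechanical. A sketch, assuming claims are tracked as a mapping from claim text to its list of supporting source URLs (the data shape and example URLs are placeholders, not part of the skill):

```python
def audit_claims(claims):
    """Split claims into verified (2+ sources) and unverified (fewer),
    per quality rule 2."""
    verified = {c: s for c, s in claims.items() if len(s) >= 2}
    unverified = {c: s for c, s in claims.items() if len(s) < 2}
    return verified, unverified

verified, unverified = audit_claims({
    "Fusion gain exceeded 1.0 in 2022": ["https://example.org/a", "https://example.org/b"],
    "Commercial fusion by 2030": ["https://example.org/c"],
})
# Everything in `unverified` gets flagged as unverified in the report.
```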

Examples

"Research the current state of nuclear fusion energy"
"Deep dive into Rust vs Go for backend services in 2026"
"Research the best strategies for bootstrapping a SaaS business"
"What's happening with the US housing market right now?"

For Sub-Agent Usage

When spawning as a sub-agent, include the full research request and context:

sessions_spawn(
  task: "Run deep research on [TOPIC]. Follow the deep-research-pro SKILL.md workflow.
  Read /home/clawdbot/clawd/skills/deep-research-pro/SKILL.md first.
  Goal: [user's goal]
  Specific angles: [any specifics]
  Save report to ~/clawd/research/[slug]/report.md
  When done, wake the main session with key findings.",
  label: "research-[slug]",
  model: "opus"
)

Requirements

  • DDG search script: /home/clawdbot/clawd/skills/ddg-search/scripts/ddg
  • curl (for fetching full pages)
  • python3 (for stripping HTML in Step 4)
  • No API keys needed!
