Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

110 Deep Research Pro

v1.0.0

Multi-source deep research agent. Searches the web, synthesizes findings, and delivers cited reports. No API keys required.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for smallkeyboy/110-deep-research-pro.

Prompt preview — Install & Setup:
Install the skill "110 Deep Research Pro" (smallkeyboy/110-deep-research-pro) from ClawHub.
Skill page: https://clawhub.ai/smallkeyboy/110-deep-research-pro
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install 110-deep-research-pro

ClawHub CLI


npx clawhub@latest install 110-deep-research-pro
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill claims to perform web research without API keys, which fits the description. However, it requires a specific external script at /home/clawdbot/clawd/skills/ddg-search/scripts/ddg (not included) and references a local scripts/research CLI that is not present in the file manifest. Requiring another skill's executable via a hard-coded path is disproportionate and fragile — a legitimate research skill would include that dependency or document it clearly.
Instruction Scope
SKILL.md instructs the agent to run absolute-path binaries and to fetch arbitrary URLs via curl and a Python one-liner (which strips HTML with regex). It also tells sub-agents to read specific local paths (/home/clawdbot/...) and to save reports under ~/clawd/research. While fetching pages and saving reports is expected for research, the absolute paths into other skills, the sub-agents spawned with explicit file reads, and the content-processing one-liners executed on fetched pages all widen the operational scope and could lead to surprising behavior if those paths or scripts do not match the host environment.
Install Mechanism
There is no install spec (instruction-only), which minimizes on-disk installation risk. However, the README and package.json advertise scripts and auto-installing dependencies (uv, scripts/research) that are not present in the package manifest, indicating packaging inconsistencies that should be resolved.
Credentials
The skill declares no required environment variables, no credentials, and no config paths. That is proportionate to a web-research skill that uses public search and curl; there are no requests for unrelated secrets.
Persistence & Privilege
always:false and no attempt to modify other skills or system-wide configs. The skill saves reports into the user's home directory (~/clawd/research) which is reasonable for its purpose. Spawning sub-agents is allowed by default and not, by itself, a red flag.
What to consider before installing
Do not install blindly. Before using the skill, verify these points:

  1. Check that the referenced ddg-search script actually exists at /home/clawdbot/clawd/skills/ddg-search/scripts/ddg (or update SKILL.md to a relative or documented dependency).
  2. Confirm the repository and author: the README git URL (https://github.com/parags/deep-research-pro) and the homepage/owner metadata are inconsistent with the registry metadata; this could be a packaging error.
  3. The package advertises scripts/research and auto-install behavior, but those files are missing from the manifest; ask the author for a complete package that includes the missing scripts.
  4. Review the curl + python one-liner (it strips HTML via regex) and consider running fetches in a sandbox; fetching arbitrary URLs and processing their contents has security and privacy implications.
  5. If you will allow the agent to spawn sub-agents, ensure your agent's policy and the sessions_spawn mechanism are safe in your environment (the model name and read paths are hard-coded).

If you cannot confirm these items, run the skill in an isolated environment or request a corrected, complete release from the maintainer.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🔬 Clawdis
latest: vk971kedtd81mtq1pxa8g2hn2w584z95d
64 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0
MIT-0

Deep Research Pro 🔬

A powerful, self-contained deep research skill that produces thorough, cited reports from multiple web sources. No paid APIs required — uses DuckDuckGo search.

How It Works

When the user asks for research on any topic, follow this workflow:

Step 1: Understand the Goal (30 seconds)

Ask 1-2 quick clarifying questions:

  • "What's your goal — learning, making a decision, or writing something?"
  • "Any specific angle or depth you want?"

If the user says "just research it" — skip ahead with reasonable defaults.

Step 2: Plan the Research (think before searching)

Break the topic into 3-5 research sub-questions. For example:

  • Topic: "Impact of AI on healthcare"
    • What are the main AI applications in healthcare today?
    • What clinical outcomes have been measured?
    • What are the regulatory challenges?
    • What companies are leading this space?
    • What's the market size and growth trajectory?
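As a sketch, the plan from this step can be held as a simple mapping before any searching starts (the topic and sub-questions below are the illustrative ones from the example above):

```python
# Illustrative research plan: break the topic into 3-5 sub-questions up front.
research_plan = {
    "topic": "Impact of AI on healthcare",
    "sub_questions": [
        "What are the main AI applications in healthcare today?",
        "What clinical outcomes have been measured?",
        "What are the regulatory challenges?",
        "What companies are leading this space?",
        "What's the market size and growth trajectory?",
    ],
}

# Sanity-check the plan before spending search budget on it.
assert 3 <= len(research_plan["sub_questions"]) <= 5
```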

Step 3: Execute Multi-Source Search

For EACH sub-question, run the DDG search script:

# Web search
/home/clawdbot/clawd/skills/ddg-search/scripts/ddg "<sub-question keywords>" --max 8

# News search (for current events)
/home/clawdbot/clawd/skills/ddg-search/scripts/ddg news "<topic>" --max 5

Search strategy:

  • Use 2-3 different keyword variations per sub-question
  • Mix web + news searches
  • Aim for 15-30 unique sources total
  • Prioritize: academic, official, reputable news > blogs > forums
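The search loop above can be sketched as command-line construction — a minimal sketch, assuming the hard-coded ddg script path from this skill's requirements (verify it exists on your host before executing anything), and a naive keyword-variation heuristic of my own for illustration:

```python
import shlex

# Path as hard-coded in this skill's requirements; may not exist on your host.
DDG = "/home/clawdbot/clawd/skills/ddg-search/scripts/ddg"

def build_search_commands(sub_questions, variations_per_question=2, max_results=8):
    """Build one ddg command line per keyword variation of each sub-question."""
    commands = []
    for question in sub_questions:
        # Naive variations: the full question plus a shortened keyword form.
        variations = [question, " ".join(question.split()[:4])]
        for query in variations[:variations_per_question]:
            commands.append(f"{DDG} {shlex.quote(query)} --max {max_results}")
    return commands

cmds = build_search_commands([
    "main AI applications in healthcare today",
    "measured clinical outcomes of AI in healthcare",
])
for c in cmds:
    print(c)
```

Printing (rather than executing) the commands keeps every step visible for review, matching the manual-install spirit of the CLI section.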

Step 4: Deep-Read Key Sources

For the most promising URLs, fetch full content:

curl -sL "<url>" | python3 -c "
import sys, re
html = sys.stdin.read()
# Drop script/style blocks first, then strip remaining tags
html = re.sub(r'(?is)<(script|style).*?</\1>', ' ', html)
text = re.sub(r'<[^>]+>', ' ', html)
text = re.sub(r'\s+', ' ', text).strip()
print(text[:5000])
"

Read 3-5 key sources in full for depth. Don't just rely on search snippets.
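Regex tag-stripping is fragile on real-world pages. A more robust alternative — a sketch, not part of the skill — uses the stdlib `html.parser` so that `<script>` and `<style>` contents are skipped by the parser itself:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.chunks.append(data)

def html_to_text(html, limit=5000):
    """Extract visible text from HTML, whitespace-collapsed and truncated."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(" ".join(parser.chunks).split())
    return text[:limit]

print(html_to_text("<p>Hello <b>world</b></p><script>var x = 1;</script>"))
# → Hello world
```

This needs no third-party packages, so it preserves the skill's no-dependencies claim.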

Step 5: Synthesize & Write Report

Structure the report as:

# [Topic]: Deep Research Report
*Generated: [date] | Sources: [N] | Confidence: [High/Medium/Low]*

## Executive Summary
[3-5 sentence overview of key findings]

## 1. [First Major Theme]
[Findings with inline citations]
- Key point ([Source Name](url))
- Supporting data ([Source Name](url))

## 2. [Second Major Theme]
...

## 3. [Third Major Theme]
...

## Key Takeaways
- [Actionable insight 1]
- [Actionable insight 2]
- [Actionable insight 3]

## Sources
1. [Title](url) — [one-line summary]
2. ...

## Methodology
Searched [N] queries across web and news. Analyzed [M] sources.
Sub-questions investigated: [list]

Step 6: Save & Deliver

Save the full report:

mkdir -p ~/clawd/research/[slug]
# Write report to ~/clawd/research/[slug]/report.md
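
The save step above can be sketched in Python; the `slugify` helper and the base directory default are my assumptions (the skill only specifies the `~/clawd/research/[slug]/report.md` convention, not how the slug is derived):

```python
import re
from pathlib import Path

def slugify(topic):
    """Lowercase the topic and reduce it to alphanumerics joined by hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return slug or "untitled"

def report_path(topic, base="~/clawd/research"):
    # Base directory follows the skill's stated convention; override as needed.
    return Path(base).expanduser() / slugify(topic) / "report.md"

path = report_path("Impact of AI on healthcare")
print(path)  # e.g. /home/you/clawd/research/impact-of-ai-on-healthcare/report.md
# To save: path.parent.mkdir(parents=True, exist_ok=True); path.write_text(report_md)
```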

Then deliver:

  • Short topics: Post the full report in chat
  • Long reports: Post the executive summary + key takeaways, offer full report as file

Quality Rules

  1. Every claim needs a source. No unsourced assertions.
  2. Cross-reference. If only one source says it, flag it as unverified.
  3. Recency matters. Prefer sources from the last 12 months.
  4. Acknowledge gaps. If you couldn't find good info on a sub-question, say so.
  5. No hallucination. If you don't know, say "insufficient data found."
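
Rule 2 above can be made mechanical. A minimal sketch, assuming claims are tracked as a mapping from claim text to the list of source URLs that support it (this bookkeeping structure is my own, not part of the skill):

```python
def flag_unverified(claims):
    """Given {claim: [source urls]}, mark claims backed by fewer than two unique sources."""
    flagged = {}
    for claim, sources in claims.items():
        unique_sources = set(sources)
        flagged[claim] = "verified" if len(unique_sources) >= 2 else "unverified"
    return flagged

status = flag_unverified({
    "Fusion gain exceeded 1.0 in a 2022 experiment": [
        "https://example.org/source-a",
        "https://example.org/source-b",
    ],
    "Commercial fusion power by 2030": ["https://example.org/source-c"],
})
```

Single-source claims then get an explicit "unverified" flag in the report rather than silently passing through.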

Examples

"Research the current state of nuclear fusion energy"
"Deep dive into Rust vs Go for backend services in 2026"
"Research the best strategies for bootstrapping a SaaS business"
"What's happening with the US housing market right now?"

For Sub-Agent Usage

When spawning as a sub-agent, include the full research request and context:

sessions_spawn(
  task: "Run deep research on [TOPIC]. Follow the deep-research-pro SKILL.md workflow.
  Read /home/clawdbot/clawd/skills/deep-research-pro/SKILL.md first.
  Goal: [user's goal]
  Specific angles: [any specifics]
  Save report to ~/clawd/research/[slug]/report.md
  When done, wake the main session with key findings.",
  label: "research-[slug]",
  model: "opus"
)

Requirements

  • DDG search script: /home/clawdbot/clawd/skills/ddg-search/scripts/ddg
  • curl (for fetching full pages)
  • No API keys needed!
