Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Deep Research Pro Litiao

v1.0.0

Multi-source deep research agent. Searches the web, synthesizes findings, and delivers cited reports. Uses Tavily API (preferred) or DuckDuckGo (fallback).


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for litiao1224/deep-research-pro-litiao.

Prompt Preview: Install & Setup
Install the skill "Deep Research Pro Litiao" (litiao1224/deep-research-pro-litiao) from ClawHub.
Skill page: https://clawhub.ai/litiao1224/deep-research-pro-litiao
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: TAVILY_API_KEY
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install deep-research-pro-litiao

ClawHub CLI


npx clawhub@latest install deep-research-pro-litiao
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The README and package.json claim 'No API keys required' while SKILL.md requires TAVILY_API_KEY (preferred). That inconsistency suggests either sloppy packaging or a hidden dependency on an external API not reflected in the top-level metadata. The skill also references multiple local script paths (~/.openclaw/workspace/... and /home/clawdbot/...) that sit outside the skill bundle; the stated purpose (a self-contained research agent) does not require executing them, and the package does not include those scripts.
Instruction Scope
SKILL.md instructs the agent to run external Node scripts and a system ddg script at absolute local paths, and to curl arbitrary URLs and pipe the HTML into a Python snippet. Because no code files are included in the skill bundle, the runtime depends on external scripts and tools that may contain arbitrary logic. It also writes reports to ~/clawd/research/[slug] and spawns sub-agents with sessions_spawn; those actions are expected for research, but executing external scripts in other system locations expands the execution surface and is unexpected for an instruction-only skill.
Install Mechanism
No install spec (instruction-only), which minimizes what the skill writes to disk itself. However, the instructions expect external scripts and tools (Tavily scripts under ~/.openclaw/workspace and a ddg script under /home/clawdbot/...) that are not provided — the agent will attempt to run code located elsewhere on disk or rely on the environment, which is a risk vector.
Credentials
The skill declares TAVILY_API_KEY as a required env var in SKILL.md and metadata, but README/package.json say 'No API keys required'. Requesting an API key is plausible for a 'preferred' Tavily integration, but the conflicting documentation is a red flag. Asking for a single search API key is otherwise proportionate, but the expectation that the agent will also call local scripts (not declared) increases the sensitivity: you should not provide credentials without verifying the code that will use them.
Persistence & Privilege
always:false and normal model invocation settings. The skill instructs writing reports under the user's home directory (~/clawd/research) and spawning sub-agents; these are normal for a research agent. There is no request for persistent 'always' installation or to modify other skills, but the ability to run external local scripts and spawn sessions increases the blast radius if those external scripts are untrusted.
What to consider before installing
This skill is inconsistent and needs human review before trusting it with credentials or letting it execute on your machine. Actions to consider before installing or enabling:

  1. Verify whether you actually need Tavily; if not, avoid supplying TAVILY_API_KEY.
  2. Inspect the external scripts SKILL.md references (~/.openclaw/workspace/skills/tavily-search-litiao and /home/clawdbot/clawd/skills/ddg-search). The skill will execute code there, but those files are not bundled with the skill.
  3. Confirm which repository and author are authoritative (metadata and README disagree).
  4. If you must test, run the agent in a sandboxed environment (an isolated user account or VM) without sensitive credentials.
  5. Prefer an implementation that bundles or links to the exact code it expects, or replace the external-script calls with known-safe implementations you control.

If you cannot verify the external scripts and metadata, treat this skill as untrusted.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🔬 Clawdis
Env: TAVILY_API_KEY
Latest: vk97ftn5c5y0669t345sppew7ns832mbr
263 downloads
0 stars
1 version
Updated 23h ago
v1.0.0
MIT-0

Deep Research Pro 🔬

A powerful, self-contained deep research skill that produces thorough, cited reports from multiple web sources. It prefers the Tavily API for cleaner, AI-optimized results and falls back to DuckDuckGo when no API key is available.

How It Works

When the user asks for research on any topic, follow this workflow:

Step 1: Understand the Goal (30 seconds)

Ask 1-2 quick clarifying questions:

  • "What's your goal — learning, making a decision, or writing something?"
  • "Any specific angle or depth you want?"

If the user says "just research it" — skip ahead with reasonable defaults.

Step 2: Plan the Research (think before searching)

Break the topic into 3-5 research sub-questions. For example:

  • Topic: "Impact of AI on healthcare"
    • What are the main AI applications in healthcare today?
    • What clinical outcomes have been measured?
    • What are the regulatory challenges?
    • What companies are leading this space?
    • What's the market size and growth trajectory?

Step 3: Execute Multi-Source Search

Preferred: Use Tavily Search (if TAVILY_API_KEY is available):

# General web search
cd ~/.openclaw/workspace/skills/tavily-search-litiao
node scripts/search.mjs "<sub-question keywords>" -n 10

# News search (for current events)
node scripts/search.mjs "<topic>" --topic news --days 3

# Deep search (for complex topics)
node scripts/search.mjs "<complex query>" --deep

Fallback: DuckDuckGo (if Tavily unavailable):

# Web search
/home/clawdbot/clawd/skills/ddg-search/scripts/ddg "<sub-question keywords>" --max 8

# News search (for current events)
/home/clawdbot/clawd/skills/ddg-search/scripts/ddg news "<topic>" --max 5

Search strategy:

  • Use 2-3 different keyword variations per sub-question
  • Mix web + news searches
  • Aim for 15-30 unique sources total
  • Prioritize: academic, official, reputable news > blogs > forums
  • Tavily advantage: Returns cleaner snippets, better for AI synthesis
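The search scripts themselves are not bundled with the skill, so the aggregation step implied by the strategy above (15-30 unique sources across several queries) is left to the agent. A minimal, hypothetical sketch of merging result batches and tallying sources per domain; the `{"title", "url"}` dict shape is an assumption, not the scripts' actual output format:

```python
from urllib.parse import urlparse

def merge_results(result_batches):
    """Merge search-result batches (lists of {'title', 'url'} dicts),
    deduplicating by URL so the unique-source count is accurate."""
    seen = {}
    for batch in result_batches:
        for r in batch:
            if r["url"] not in seen:
                seen[r["url"]] = r
    return list(seen.values())

def source_domains(results):
    """Tally results per domain, useful for spotting over-reliance
    on a single site."""
    tally = {}
    for r in results:
        host = urlparse(r["url"]).netloc
        tally[host] = tally.get(host, 0) + 1
    return tally
```

Running the merged list through `source_domains` before synthesis makes the "prioritize academic/official sources" rule checkable rather than aspirational.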

Step 4: Deep-Read Key Sources

For the most promising URLs, fetch full content:

curl -sL "<url>" | python3 -c "
import sys, re
html = sys.stdin.read()
# Drop script/style blocks so their contents don't leak into the text
html = re.sub(r'(?is)<(script|style)[^>]*>.*?</\1>', ' ', html)
# Strip remaining tags, collapse whitespace
text = re.sub('<[^>]+>', ' ', html)
text = re.sub(r'\s+', ' ', text).strip()
print(text[:5000])
"

Read 3-5 key sources in full for depth. Don't just rely on search snippets.
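Regex tag-stripping is fragile on real-world HTML. As an alternative sketch using only the standard library's html.parser, which tracks tag boundaries properly instead of pattern-matching them:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside script/style tags

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def html_to_text(html, limit=5000):
    """Extract visible text from an HTML page, whitespace-collapsed."""
    p = TextExtractor()
    p.feed(html)
    return " ".join(" ".join(p.parts).split())[:limit]
```

This could replace the inline snippet in the curl pipeline with `python3 -c` or a small script file, at the cost of a few more lines.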

Step 5: Synthesize & Write Report

Structure the report as:

# [Topic]: Deep Research Report
*Generated: [date] | Sources: [N] | Confidence: [High/Medium/Low]*

## Executive Summary
[3-5 sentence overview of key findings]

## 1. [First Major Theme]
[Findings with inline citations]
- Key point ([Source Name](url))
- Supporting data ([Source Name](url))

## 2. [Second Major Theme]
...

## 3. [Third Major Theme]
...

## Key Takeaways
- [Actionable insight 1]
- [Actionable insight 2]
- [Actionable insight 3]

## Sources
1. [Title](url) — [one-line summary]
2. ...

## Methodology
Searched [N] queries across web and news. Analyzed [M] sources.
Sub-questions investigated: [list]
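The template above can also be filled programmatically. A hypothetical sketch (not part of the skill) that renders the Sources section in the numbered format the template expects:

```python
def render_sources(sources):
    """Render the '## Sources' section from (title, url, summary)
    tuples, matching the template's numbered-link format."""
    lines = ["## Sources"]
    for i, (title, url, summary) in enumerate(sources, 1):
        lines.append(f"{i}. [{title}]({url}) — {summary}")
    return "\n".join(lines)
```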

Step 6: Save & Deliver

Save the full report:

mkdir -p ~/clawd/research/[slug]
# Write report to ~/clawd/research/[slug]/report.md

Then deliver:

  • Short topics: Post the full report in chat
  • Long reports: Post the executive summary + key takeaways, offer full report as file
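The [slug] placeholder is never defined by the skill, so how the topic becomes a directory name is an assumption. A minimal, hypothetical slugify helper:

```python
import re

def slugify(topic, max_len=60):
    """Lowercase the topic, replace non-alphanumeric runs with
    hyphens, and trim to a filesystem-friendly length."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return slug[:max_len].rstrip("-")
```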

Quality Rules

  1. Every claim needs a source. No unsourced assertions.
  2. Cross-reference. If only one source says it, flag it as unverified.
  3. Recency matters. Prefer sources from the last 12 months.
  4. Acknowledge gaps. If you couldn't find good info on a sub-question, say so.
  5. No hallucination. If you don't know, say "insufficient data found."
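Rule 2 can be checked mechanically if the agent tracks which sources back each claim. A hypothetical sketch, assuming a simple claim-to-sources mapping:

```python
def flag_unverified(claims):
    """Given {claim: [source_urls]}, return the claims backed by only
    one distinct source; the quality rules say these must be flagged
    as unverified in the report."""
    return [c for c, srcs in claims.items() if len(set(srcs)) < 2]
```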

Examples

"Research the current state of nuclear fusion energy"
"Deep dive into Rust vs Go for backend services in 2026"
"Research the best strategies for bootstrapping a SaaS business"
"What's happening with the US housing market right now?"

For Sub-Agent Usage

When spawning as a sub-agent, include the full research request and context:

sessions_spawn(
  task: "Run deep research on [TOPIC]. Follow the deep-research-pro SKILL.md workflow.
  Read /home/clawdbot/clawd/skills/deep-research-pro/SKILL.md first.
  Goal: [user's goal]
  Specific angles: [any specifics]
  Save report to ~/clawd/research/[slug]/report.md
  When done, wake the main session with key findings.",
  label: "research-[slug]",
  model: "opus"
)

Requirements

Preferred:

  • Tavily API key: TAVILY_API_KEY (get from https://tavily.com)
  • Tavily scripts: ~/.openclaw/workspace/skills/tavily-search-litiao/scripts/

Fallback (no API key needed):

  • DDG search script: /home/clawdbot/clawd/skills/ddg-search/scripts/ddg

Both methods:

  • curl (for fetching full pages)
