Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Deep Research Agent

v0.1.0

Autonomous deep research agent with multi-step web search, sub-agent delegation, and structured report generation. Triggered by requests for deep research, 深...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for lingchenheiye/deep-research-engine.

Prompt Preview: Install & Setup
Install the skill "Deep Research Agent" (lingchenheiye/deep-research-engine) from ClawHub.
Skill page: https://clawhub.ai/lingchenheiye/deep-research-engine
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install deep-research-engine

ClawHub CLI


npx clawhub@latest install deep-research-engine
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (deep research using Tavily and an LLM) matches the code and SKILL.md, but the registry metadata declares no required environment variables while both the SKILL.md and backend/agent.py clearly require a TAVILY_API_KEY and an LLM API key. That mismatch is incoherent and reduces trust in the manifest.
Instruction Scope
SKILL.md and the embedded prompts instruct the agent to discover URLs via Tavily, fetch full page content (using httpx), and write files such as /research_request.md and /final_report.md. Fetching arbitrary URLs and writing absolute-root paths are broad operations outside a narrow 'search-only' scope and could expose internal resources or write to unexpected locations. The instructions also mandate using sub-agents and persistent write_file() calls, which increases the agent's filesystem footprint.
Install Mechanism
There is no formal install spec in the registry (instruction-only), but the package includes requirements.txt and lists pip dependencies in SKILL.md (deepagents, tavily-python, langchain-anthropic, markdownify). These are reasonable for the stated purpose and come from normal package registries, but the combination of no registry install spec with bundled code and requirements is an inconsistency to be aware of.
Credentials
Requesting a Tavily API key and an LLM API key is proportionate to a web-research agent. However, the registry's 'Required env vars: none' contradicts the explicit SKILL.md and code requirements (TAVILY_API_KEY, ANTHROPIC_API_KEY/GOOGLE_API_KEY/OPENAI_API_KEY). That discrepancy is concerning and should be resolved before trusting the skill.
Persistence & Privilege
The skill does not set always:true and does not claim to modify other skills. However, the runtime instructions and prompts expect write_file() usage that writes /research_request.md and /final_report.md (absolute paths), which means it will persist files to the environment. Persisted files and autonomous sub-agents increase blast radius; run in a sandboxed environment if you proceed.
What to consider before installing
This skill looks like a genuine deep-research tool but has discrepancies and risky behaviors you should address before installing or running it:

  • Manifest mismatch: The registry metadata says no environment variables are required, but the SKILL.md and code require TAVILY_API_KEY and an LLM API key. Do not supply secrets until the author or registry metadata is corrected or you have reviewed the origin.
  • File writes: The instructions and code expect to write files (e.g., /research_request.md, /final_report.md). Confirm where files will be written (root vs. current directory) and run in an isolated container or VM to avoid accidentally overwriting host files.
  • Arbitrary URL fetching: The agent fetches full page content using httpx for URLs returned by Tavily. That can potentially reach internal network endpoints if a search result points there (an SSRF-like risk). Prefer running the agent in a network-restricted environment and inspect fetched URLs if possible.
  • Dependencies and install: The skill ships code and a requirements.txt but no formal install spec in the registry. If you install the dependencies, do so in a virtualenv or container and inspect the packages (deepagents, tavily-python, langchain-anthropic, markdownify) yourself.
  • Trust and provenance: There is no homepage and the owner is an opaque ID. If you need to use this skill, ask the publisher for source provenance, or only run it in a sandbox.

If you plan to proceed: run it in an isolated environment, avoid using high-privilege or production API keys (create scoped or test keys), and review the Tavily search results and any files the agent writes before trusting the outputs.
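To reduce the SSRF-like exposure from fetching arbitrary search-result URLs, a pre-fetch filter can reject URLs that point at private or loopback addresses. This is a minimal sketch, not part of the skill itself; `is_safe_url` is a hypothetical helper, and it only guards against literal IP addresses (hostnames would additionally need DNS resolution before checking):

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs with non-HTTP schemes or private/loopback/link-local IPs."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Not an IP literal; a fuller check would resolve the hostname
        # and re-test the resulting addresses.
        return True
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)
```

Running the agent behind such a filter (or an egress proxy) limits what a poisoned search result can reach on your network.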

Like a lobster shell, security has layers — review code before you run it.

Latest version: vk9747djfae2n27dyx37pjxcgxs845pc0
88 downloads
0 stars
1 version
Updated 3w ago
v0.1.0
MIT-0

Deep Research Agent

When to Use

Trigger this skill when the user asks for:

  • 深度研究 / deep research on any topic
  • Comprehensive topic analysis with citations
  • Literature review or academic research
  • "Research [X]" where a thorough, multi-source report is needed
  • Comparison reports (products, technologies, methodologies)
  • Market research or competitive analysis

NOT for quick lookups — use web_search for simple questions.

Prerequisites

  1. Tavily API key (free): https://tavily.com/
  2. LLM API key: Anthropic, Google, or OpenAI

Set environment variables before first use:

export TAVILY_API_KEY="your_key"
export ANTHROPIC_API_KEY="your_key"  # or GOOGLE_API_KEY / OPENAI_API_KEY
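Before the first run, it can help to confirm the keys are actually visible to the process. A small sketch, assuming the variable names listed above (the `missing_keys` helper is illustrative, not part of the skill):

```python
import os

REQUIRED = ["TAVILY_API_KEY"]
# At least one of these LLM keys must be present.
LLM_KEYS = ["ANTHROPIC_API_KEY", "GOOGLE_API_KEY", "OPENAI_API_KEY"]

def missing_keys(env=os.environ) -> list:
    """Return the names of required environment variables that are not set."""
    missing = [k for k in REQUIRED if not env.get(k)]
    if not any(env.get(k) for k in LLM_KEYS):
        missing.append(" or ".join(LLM_KEYS))
    return missing
```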

Workflow

When triggered, follow this deep research process:

Phase 1: Plan 📋

  1. Analyze the research question
  2. Break it down into 2-5 focused sub-topics
  3. Create a research plan with specific tasks

Phase 2: Search 🔍

  1. For each sub-topic, use web_search tool to discover key information
  2. Use web_fetch to read important pages in full
  3. Take notes on key findings from each source
  4. If a sub-topic yields insufficient info, refine search queries

Phase 3: Synthesize 📝

  1. Consolidate findings from all sources
  2. Identify contradictions or gaps
  3. Form evidence-based conclusions
  4. Generate inline citations for all claims

Phase 4: Report 📄

Output a structured report with:

  • Executive Summary — Key findings at a glance
  • Background — Context and definitions
  • Detailed Analysis — Evidence-backed exploration
  • Comparison/Insights (if applicable)
  • Conclusion — Actionable takeaways
  • Sources — Numbered list of all references (inline [1], [2], etc.)
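The four phases above can be sketched as a single pipeline. This is a toy outline under stated assumptions: `web_search(query)` returns a list of URLs and `web_fetch(url)` returns page text, both supplied by the caller as stand-ins for the agent's tools, and synthesis is naive string joining where the real agent would use an LLM:

```python
def deep_research(question, web_search, web_fetch, max_subtopics=5):
    """Minimal plan -> search -> synthesize -> report pipeline."""
    # Phase 1: Plan — naive split of the question into sub-topics.
    subtopics = [q.strip() for q in question.split(";")][:max_subtopics]

    # Phase 2: Search — gather sources and notes per sub-topic.
    sources, notes = [], {}
    for topic in subtopics:
        for url in web_search(topic):
            if url not in sources:
                sources.append(url)
            notes.setdefault(topic, []).append(web_fetch(url))

    # Phase 3: Synthesize — consolidate notes per sub-topic.
    findings = {t: " ".join(n) for t, n in notes.items()}

    # Phase 4: Report — sections plus numbered citations matching sources.
    lines = ["# Executive Summary"]
    for topic, text in findings.items():
        lines.append("## " + topic + "\n" + text)
    lines.append("# Sources")
    lines += ["[%d] %s" % (i, u) for i, u in enumerate(sources, 1)]
    return "\n".join(lines)
```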

Alternative: Python Backend

For truly deep research (autonomous multi-hour sessions with Tavily), use the bundled Python script:

cd deep-research-agent/backend
pip install -r requirements.txt
python agent.py "Research topic here"

This spawns sub-agents for parallel research and writes /final_report.md.
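The sub-agent fan-out can be approximated with a thread pool. A sketch only: `run_subagent` is a caller-supplied stand-in for whatever the backend actually spawns, and results are returned rather than written to /final_report.md so the caller decides where (and whether) to persist them:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(subtopics, run_subagent, max_workers=4):
    """Run one sub-agent per sub-topic in parallel.

    `run_subagent(topic)` returns that sub-topic's findings; the
    output list preserves the order of `subtopics`.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_subagent, subtopics))
```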

Prompt Template (Substitute & Execute)

For quick in-session deep research (no backend needed), follow this prompt structure:

Perform deep research on: "{user_query}"

Research Guidelines:
1. Use web_search with at least 3 different query variations
2. Read at least 5 sources thoroughly via web_fetch
3. Cross-reference claims across sources
4. Cite inline with [1], [2], etc.
5. Note confidence levels for uncertain claims
6. Write a comprehensive report with sections
