Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Deerflow

v1.1.1

Deep research and async task execution via DeerFlow LangGraph engine. Submit multi-step research tasks through a lightweight API-only Docker deployment (no f...

Security Scan
Capability signals
Crypto
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Pending
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
The skill claims to be a lightweight API-only integration and includes Python helper scripts — python3 requirement is appropriate. However the runtime instructions require git and Docker/docker-compose (clone a GitHub repo and run `docker compose up`), yet the metadata only lists python3 and does not declare git/docker as required binaries. That mismatch reduces transparency and is disproportionate to what's declared.
Instruction Scope
SKILL.md instructs cloning https://github.com/bytedance/deer-flow, editing a .env to add model API keys (OPENAI_API_KEY, MINIMAX_API_KEY, etc.), and running containers. The skill's helper scripts only call local LangGraph endpoints, which is consistent. Be aware the deployed DeerFlow services may call back to OpenClaw (config has OPENCLAW_URL/notify), and the deployment will hold your model API keys in its .env — the instructions therefore implicitly direct the user to provide sensitive credentials to the deployed service.
Install Mechanism
There is no formal install spec in the skill bundle, but the documentation tells the user to git-clone a third-party repo and run its Docker compose deployment. That effectively installs and runs external code/images not provided in the skill. Running unvetted Docker images from a remote repository is a significant operational risk and should be reviewed before execution.
Credentials
The skill itself does not request environment variables or credentials (metadata lists none), which is coherent for the helper scripts. However the deployment instructions require LLM provider API keys stored in the DeerFlow .env; those secrets are necessary for the deployed service to operate but are not managed by the skill. Users should not assume the skill will safeguard those keys — they will live in the deployed stack's environment and/or images.
Persistence & Privilege
The skill does not request always:true or elevated platform privileges. It is user-invocable and can be called autonomously (normal). The skill's files do not modify other skills or agent configs. No persistence or privilege escalation is requested by the bundle.
What to consider before installing
This skill appears to do what it says (submit and poll DeerFlow LangGraph tasks), but before installing:

1. The SKILL.md requires git and Docker/docker-compose even though the metadata omits them — install and review those prerequisites first.
2. The instructions clone and run a third-party GitHub repo and its Docker images — review the repository and the images (Dockerfile, image sources, tags) for trustworthiness before running them.
3. The DeerFlow deployment requires LLM API keys stored in a .env (OPENAI_API_KEY, MINIMAX_API_KEY, etc.) — treat those as sensitive secrets and avoid exposing them to untrusted images or public networks.
4. Consider running the stack in an isolated environment (VM) and inspect network callbacks (OPENCLAW_URL/notify) if you want to control what the deployed services can reach.

If you need greater assurance, ask the publisher for signed releases or reproducible image provenance, and for the skill metadata to declare docker/git as required binaries.

Like a lobster shell, security has layers — review code before you run it.

latest · vk97ey7hc2t5tk0wcw1epk8q9p1847vpd

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

OS: Linux · macOS
Bins: python3

SKILL.md

DeerFlow Integration

Overview

DeerFlow is a LangGraph-based deep research engine that chains web search, reasoning, and synthesis into structured reports. This skill provides OpenClaw integration for submitting and monitoring research tasks.

Architecture note: This skill targets the minimal API-only deployment — no Nginx, no frontend. Only two Docker services run:

| Service | Port | Role |
| --- | --- | --- |
| deer-flow-gateway | 8001 | Business logic & channel glue |
| deer-flow-langgraph | 2024 | Core agent orchestration (the only endpoint this skill calls) |

This is the recommended deployment for resource-constrained environments (VPS, small servers). All task submission is done by calling the LangGraph API directly.

Quick Start

/deerflow <research topic>

Example: /deerflow Analyze the Chinese AI companion market

The skill returns a thread_id and run_id for status tracking.

Minimal Docker Deployment (API-Only)

1. Clone and configure

git clone https://github.com/bytedance/deer-flow.git
cd deer-flow
cp .env.example .env

Edit .env with your model API keys:

# Required: at least one LLM provider
OPENAI_API_KEY=sk-...
# Or MiniMax
MINIMAX_API_KEY=...
MINIMAX_API_BASE=https://api.minimax.com

# Optional: Tavily for web search
TAVILY_API_KEY=tvly-...

2. Start API-only services

# No nginx, no frontend — just gateway + langgraph
docker compose up -d deer-flow-gateway deer-flow-langgraph

Verify:

curl http://localhost:2024/openapi.json | head   # should return OpenAPI spec
curl http://localhost:8001/health               # should return 200
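The two checks above can also be scripted. The sketch below is an illustrative standard-library helper, assuming the API-only stack from step 2 is running on localhost; the function name is ours, not part of the skill.

```python
# Minimal health-check sketch using only the Python standard library.
import urllib.request

def service_up(url: str) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:           # connection refused, DNS failure, timeout
        return False

for url in ("http://localhost:2024/openapi.json",
            "http://localhost:8001/health"):
    print(url, "OK" if service_up(url) else "DOWN")
```

If either endpoint reports DOWN, check `docker compose ps` before submitting tasks.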

3. Submit your first task

curl -X POST http://localhost:2024/threads \
  -H "Content-Type: application/json" \
  -d '{}'
# Returns: { "thread_id": "..." }

Then submit a task:

curl -X POST http://localhost:2024/threads/{thread_id}/runs \
  -H "Content-Type: application/json" \
  -d '{
    "assistant_id": "lead_agent",
    "input": {
      "messages": [{
        "type": "human",
        "content": [{ "type": "text", "text": "Your research query here" }]
      }]
    },
    "config": {
      "recursion_limit": 200,
      "configurable": {
        "model_name": "minimax-m2.7",
        "thinking_enabled": true,
        "is_plan_mode": false,
        "subagent_enabled": false
      }
    }
  }'
# Returns: { "run_id": "..." }

Poll for completion:

curl http://localhost:2024/threads/{thread_id}/runs/{run_id}

When status = success, fetch results:

curl http://localhost:2024/threads/{thread_id}/history

Model Configuration

Set model_name in the configurable block:

| Model | Config Value | Notes |
| --- | --- | --- |
| MiniMax M2.7 | minimax-m2.7 | Default, reasoning-capable |
| MiniMax M2.5 | minimax-m2.5 | Lighter alternative |
| Kimi | kimi | Requires DeerFlow .env to have Kimi credentials |

Set thinking_enabled: true to enable extended chain-of-thought reasoning (recommended for research tasks).
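Putting the two settings together, a `config` block selecting Kimi with extended reasoning might look like the following sketch (values taken from the table above; the dict layout matches the run-submission example):

```python
# Example config block for a run request (illustrative values).
config = {
    "recursion_limit": 200,
    "configurable": {
        "model_name": "kimi",       # requires Kimi credentials in DeerFlow's .env
        "thinking_enabled": True,   # extended chain-of-thought, recommended for research
        "is_plan_mode": False,
        "subagent_enabled": False,
    },
}
```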

Skill Scripts

This skill includes two helper scripts in scripts/:

submit_task.py

cd ~/.openclaw/workspace/skills/deerflow
python3 scripts/submit_task.py "Your research topic"
# Returns thread_id and run_id

check_status.py

python3 scripts/check_status.py <thread_id> <run_id>
# Polls until completion, then prints the full report

OpenClaw Tool Injection

The skill is auto-injected into OpenClaw as the deerflow tool. OpenClaw agents call it directly when the user triggers the keyword.

Resource Comparison

| Deployment | Services | RAM Est. | Use Case |
| --- | --- | --- | --- |
| API-only (this skill) | gateway + langgraph | ~2 GB | Self-hosted agents, VPS |
| Full stack | + nginx + frontend | ~4+ GB | Team shared UI |

Troubleshooting

LangGraph returns 404

Verify the container is healthy:

docker ps | grep langgraph
curl http://localhost:2024/openapi.json

Task hangs or returns "error" status

Check LangGraph logs:

docker logs deer-flow-langgraph --tail 50

Model API errors

Ensure credentials in DeerFlow's .env are valid and the model_name in your request matches a configured provider.

File Structure

skills/deerflow/
├── SKILL.md           # This file
└── scripts/
    ├── submit_task.py  # Submit a research task
    └── check_status.py # Poll and retrieve results

Files

5 total