deerflow-install-master
Analysis
This is a coherent DeerFlow installation guide, but it would set up a long-running super-agent with shell/file tools, external credentials, and remote dependencies, so it needs careful review before use.
Findings (6)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
tools:
  - name: read_file
  - name: write_file
  - name: str_replace
  - name: bash
The DeerFlow configuration enables file read/write, string replacement, and bash tools by default. These are powerful agent capabilities, and the artifact does not clearly define path limits, approval requirements, or rollback controls.
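Under the assumption that the tool list above is the full declaration, a reviewer could make the defaults explicit and trim them before install. A minimal sketch; the inline comments are review suggestions, not DeerFlow options:

```shell
# Sketch: write the default tool list to a file for review before enabling it.
# Field names mirror the artifact's snippet; annotations are illustrative.
cat > tools-review.yaml <<'EOF'
tools:
  - name: read_file     # consider scoping reads to the project directory
  - name: write_file    # consider requiring approval for writes
  - name: str_replace
  - name: bash          # broadest capability; disable if the task allows
EOF
```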
nohup .venv/bin/langgraph dev ... --port 2024 ... > /tmp/langgraph.log 2>&1 &
The guide starts DeerFlow services in the background and later recommends nohup or systemd for service keepalive. This persistence is disclosed and purpose-aligned, but it can keep agent-facing services running after the install task ends.
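One way to bound that persistence is a PID file plus explicit teardown. In this sketch, `sleep 60` stands in for the guide's `.venv/bin/langgraph dev --port 2024` command:

```shell
# Sketch: track the background server's PID so it can be stopped when the
# install task ends. `sleep 60` is a stand-in for the langgraph dev command.
nohup sleep 60 > /tmp/langgraph.log 2>&1 &
echo $! > /tmp/langgraph.pid

# Tear down instead of leaving the service running indefinitely:
kill "$(cat /tmp/langgraph.pid)" 2>/dev/null
rm -f /tmp/langgraph.pid
```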
git clone https://github.com/bytedance/deer-flow.git ... pip install fastapi uvicorn httpx langchain langchain-openai ...
The installer uses a live GitHub clone and unpinned Python package installs. That is normal for an installation guide, but it means the installed code and dependencies may change over time.
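If reproducibility matters, the resolved versions can be pinned after the fact. A sketch; the lock-file name is illustrative:

```shell
# Sketch: snapshot what the unpinned install actually resolved, so the
# environment can be rebuilt identically later.
python -m pip freeze > deerflow-requirements.lock
# Rebuild with:  python -m pip install -r deerflow-requirements.lock
# Pin the clone by recording its commit:  git -C deer-flow rev-parse HEAD
```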
Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.
OPENROUTER_API_KEY=your-key-here
TAVILY_API_KEY=your-key-here
INFOQUEST_API_KEY=your-key-here
The guide asks users to place model/search provider API keys in a .env file. These credentials are expected for DeerFlow integrations, but the registry metadata declares no credential requirements.
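Since the keys land in a plain-text file, the .env should at minimum be owner-only readable. A minimal sketch with the guide's placeholder values:

```shell
# Sketch: create the .env with owner-only permissions. Key names mirror the
# guide; values are placeholders and should never be committed.
umask 077
cat > .env <<'EOF'
OPENROUTER_API_KEY=your-key-here
TAVILY_API_KEY=your-key-here
INFOQUEST_API_KEY=your-key-here
EOF
chmod 600 .env
```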
sudo usermod -aG docker $USER
The troubleshooting guidance suggests adding the user to the Docker group. That can be appropriate for Docker deployments, but Docker group access is a powerful local privilege.
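Before granting that membership, it is worth recording who already holds it, since the docker group is effectively root-equivalent on most hosts. A small sketch:

```shell
# Sketch: record current docker-group membership before widening it.
getent group docker > docker-group.txt || echo "no docker group on this host" > docker-group.txt
cat docker-group.txt
```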
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
base_url: https://openrouter.ai/api/v1 ... chat(message) → calls Gateway /api/chat
The setup routes chat/model activity through a local Gateway and an external model provider. This is disclosed and purpose-aligned, but the artifact does not describe authentication or data-handling boundaries for the Gateway/API flow.
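Absent documented boundaries, one quick check is whether the dev-server port is listening beyond loopback. Port 2024 comes from the guide; the Gateway's own port is not stated in the artifact and may differ:

```shell
# Sketch: list any listeners on the LangGraph dev port (2024, per the guide)
# and record the result; a non-loopback bind would expose chat traffic.
ss -ltn 2>/dev/null | grep ':2024' > port-2024.txt || echo "port 2024 not listening" > port-2024.txt
cat port-2024.txt
```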
