Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Deepagents Implementation

v1.0.1

Implements agents using Deep Agents. Use when building agents with create_deep_agent, configuring backends, defining subagents, adding middleware, or setting...

by Kevin Anderson (@anderskev)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for anderskev/deepagents-implementation.

Prompt Preview: Install & Setup
Install the skill "Deepagents Implementation" (anderskev/deepagents-implementation) from ClawHub.
Skill page: https://clawhub.ai/anderskev/deepagents-implementation
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install deepagents-implementation

ClawHub CLI


npx clawhub@latest install deepagents-implementation
Security Scan

  • VirusTotal: Suspicious (view report)
  • OpenClaw: Suspicious (high confidence)
Purpose & Capability
Name and description match the SKILL.md: it is an implementation guide for building agents with create_deep_agent, backends, subagents, middleware, and persistence. The features shown (backends, streaming, subagents, tools) are coherent with the stated purpose.
Instruction Scope
The SKILL.md instructs the agent to configure FilesystemBackend (which enables an `execute` tool for running shell commands), to read/write arbitrary absolute file paths, and to wire up persistent checkpointers and stores (e.g., PostgresSaver, PostgresStore). It also references calling external clients (e.g., tavily_client) and model initializers without defining how credentials or clients are provided. These instructions grant broad access to system files, shell execution, and external endpoints; they also rely on configuration (DB URLs, API clients) that the skill does not declare or request.
Install Mechanism
Instruction-only skill with no install spec and no code files. This minimizes install-time risk (nothing is written or downloaded by the skill itself).
Credentials
Registry metadata lists no required environment variables, but SKILL.md examples and production snippets clearly expect secrets/configuration: DATABASE_URL (PostgresSaver/PostgresStore), model provider credentials (OpenAI/Anthropic API keys or similar), and references to external clients (tavily_client) that likely require API keys. The skill asking for persistent backends and shell execution implies access to sensitive credentials and filesystem paths, yet the skill declares none—this mismatch reduces transparency and is concerning.
Persistence & Privilege
The skill does not request always:true and does not alter other skills' configs. However, it documents using persistent backends (PostgresSaver/PostgresStore) which, if configured by the user, will persist agent state across conversations. Combined with FilesystemBackend and the `execute` tool, this can create a long-lived agent with disk and DB persistence — a capability users should opt-into knowingly.
What to consider before installing
This skill appears to be a legitimate implementation guide for Deep Agents, but its runtime instructions expect sensitive configuration that the registry metadata does not disclose. Before installing or using it:

1. Do not point FilesystemBackend root_dir at sensitive system or home directories; prefer a sandbox/test directory.
2. Avoid supplying production DATABASE_URL or API keys; use throwaway or dev credentials when testing.
3. Be aware that enabling FilesystemBackend can allow the agent to run shell commands via `execute`; only enable it when you understand and trust the agent's behavior.
4. Ask the publisher for an explicit list of required environment variables and for details about any external clients (e.g., tavily_client) and where binaries/code are expected to come from.
5. If you need persistence, review and control the DB connection and its permissions carefully.

These mismatches (no declared env vars despite a clear need for keys, DB URLs, and shell access) are the reason this skill is flagged as suspicious.


latest: vk9761dkthhrq8c0zgzj8zg57wx85a9c7
218 downloads · 0 stars · 2 versions · Updated 13h ago
v1.0.1
MIT-0

Deep Agents Implementation

Core Concepts

Deep Agents provides a batteries-included agent harness built on LangGraph:

  • create_deep_agent: Factory function that creates a configured agent
  • Middleware: Injected capabilities (filesystem, todos, subagents, summarization)
  • Backends: Pluggable file storage (state, filesystem, store, composite)
  • Subagents: Isolated task execution via the task tool

The agent returned is a compiled LangGraph StateGraph, compatible with streaming, checkpointing, and LangGraph Studio.

Essential Imports

# Core
from deepagents import create_deep_agent

# Subagents
from deepagents import CompiledSubAgent

# Backends
from deepagents.backends import (
    StateBackend,       # Ephemeral (default)
    FilesystemBackend,  # Real disk
    StoreBackend,       # Persistent cross-thread
    CompositeBackend,   # Route paths to backends
)

# LangGraph (for checkpointing, store, streaming)
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.checkpoint.postgres import PostgresSaver
from langgraph.store.memory import InMemoryStore

# LangChain (for custom models, tools)
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool

Basic Usage

Minimal Agent

from deepagents import create_deep_agent

# Uses Claude Sonnet 4 by default
agent = create_deep_agent()

result = agent.invoke({"messages": [{"role": "user", "content": "Hello!"}]})

With Custom Tools

import os

from langchain_core.tools import tool
from tavily import TavilyClient  # assumes the tavily-python package is installed
from deepagents import create_deep_agent

# Assumes TAVILY_API_KEY is set in the environment
tavily_client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

@tool
def web_search(query: str) -> str:
    """Search the web for information."""
    # search() returns a dict of results; stringify to satisfy the str contract
    return str(tavily_client.search(query))

agent = create_deep_agent(
    tools=[web_search],
    system_prompt="You are a research assistant. Search the web to answer questions.",
)

result = agent.invoke({"messages": [{"role": "user", "content": "What is LangGraph?"}]})

With Custom Model

from langchain.chat_models import init_chat_model
from deepagents import create_deep_agent

# OpenAI
model = init_chat_model("openai:gpt-4o")

# Or Anthropic with custom settings
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model_name="claude-sonnet-4-5-20250929", max_tokens=8192)

agent = create_deep_agent(model=model)

With Checkpointing (Persistence)

from langgraph.checkpoint.memory import InMemorySaver
from deepagents import create_deep_agent

agent = create_deep_agent(checkpointer=InMemorySaver())

# Must provide thread_id with checkpointer
config = {"configurable": {"thread_id": "user-123"}}
result = agent.invoke({"messages": [...]}, config)

# Resume conversation
result = agent.invoke({"messages": [{"role": "user", "content": "Follow up"}]}, config)

Streaming

The agent supports all LangGraph stream modes.

Stream Updates

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Write a report"}]},
    stream_mode="updates"
):
    print(chunk)  # {"node_name": {"key": "value"}}

Stream Messages (Token-by-Token)

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Explain quantum computing"}]},
    stream_mode="messages"
):
    # Real-time token streaming
    print(chunk.content, end="", flush=True)

Async Streaming

async for chunk in agent.astream(
    {"messages": [...]},
    stream_mode="updates"
):
    print(chunk)

Multiple Stream Modes

for mode, chunk in agent.stream(
    {"messages": [...]},
    stream_mode=["updates", "messages"]
):
    if mode == "messages":
        print("Token:", chunk.content)
    else:
        print("Update:", chunk)

Backend Configuration

StateBackend (Default - Ephemeral)

Files stored in agent state, persist within thread only.

# Implicit - this is the default
agent = create_deep_agent()

# Explicit
from deepagents.backends import StateBackend
agent = create_deep_agent(backend=lambda rt: StateBackend(rt))

FilesystemBackend (Real Disk)

Read/write actual files on disk. Enables execute tool for shell commands.

from deepagents.backends import FilesystemBackend

agent = create_deep_agent(
    backend=FilesystemBackend(root_dir="/path/to/project"),
)
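Because FilesystemBackend also enables the execute tool for shell commands, it is safer to smoke-test with a disposable directory before pointing root_dir at a real project. A minimal sketch using only the standard library; the create_deep_agent wiring is shown as a comment, since it assumes deepagents is installed:

```python
import os
import tempfile

# Create a throwaway sandbox directory to use as root_dir, so the
# agent's file access (and any `execute` calls) stays confined to it.
sandbox = tempfile.mkdtemp(prefix="deepagents-sandbox-")
print("sandbox root:", sandbox)

# Hypothetical wiring (requires deepagents):
# from deepagents import create_deep_agent
# from deepagents.backends import FilesystemBackend
# agent = create_deep_agent(backend=FilesystemBackend(root_dir=sandbox))
```

Once you trust the agent's behavior in the sandbox, you can point root_dir at the real project directory.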

StoreBackend (Persistent Cross-Thread)

Uses LangGraph Store for persistence across conversations.

from langgraph.store.memory import InMemoryStore
from deepagents.backends import StoreBackend

store = InMemoryStore()

agent = create_deep_agent(
    backend=lambda rt: StoreBackend(rt),
    store=store,  # Required for StoreBackend
)

CompositeBackend (Hybrid Routing)

Route different paths to different backends.

from langgraph.store.memory import InMemoryStore
from deepagents.backends import CompositeBackend, StateBackend, StoreBackend

store = InMemoryStore()

agent = create_deep_agent(
    backend=CompositeBackend(
        default=StateBackend(),           # /workspace/* → ephemeral
        routes={
            "/memories/": StoreBackend(store=store),     # persistent
            "/preferences/": StoreBackend(store=store), # persistent
        },
    ),
    store=store,
)

# Files under /memories/ persist across all conversations
# Files under /workspace/ are ephemeral per-thread

Subagents

Using the Default General-Purpose Agent

By default, a general-purpose subagent is available with all main agent tools.

agent = create_deep_agent(tools=[web_search])

# The agent can now delegate via the `task` tool:
# task(subagent_type="general-purpose", prompt="Research topic X in depth")

Defining Custom Subagents

from deepagents import create_deep_agent

research_agent = {
    "name": "researcher",
    "description": "Conducts deep research on complex topics with web search",
    "system_prompt": """You are an expert researcher.
    Search thoroughly, cross-reference sources, and synthesize findings.""",
    "tools": [web_search, document_reader],
}

code_agent = {
    "name": "coder",
    "description": "Writes, reviews, and debugs code",
    "system_prompt": "You are an expert programmer. Write clean, tested code.",
    "tools": [code_executor, linter],
    "model": "openai:gpt-4o",  # Optional: different model per subagent
}

agent = create_deep_agent(
    subagents=[research_agent, code_agent],
    system_prompt="Delegate research to the researcher and coding to the coder.",
)

Pre-compiled LangGraph Subagents

Use existing LangGraph graphs as subagents.

from deepagents import CompiledSubAgent, create_deep_agent
from langgraph.prebuilt import create_react_agent

# Existing graph
custom_graph = create_react_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[specialized_tool],
    prompt="Custom workflow instructions",
)

agent = create_deep_agent(
    subagents=[CompiledSubAgent(
        name="custom-workflow",
        description="Runs my specialized analysis workflow",
        runnable=custom_graph,
    )]
)

Subagent with Custom Middleware

from langchain.agents.middleware import AgentMiddleware

class LoggingMiddleware(AgentMiddleware):
    def transform_response(self, response):
        print(f"Subagent response: {response}")
        return response

agent_spec = {
    "name": "logged-agent",
    "description": "Agent with extra logging",
    "system_prompt": "You are helpful.",
    "tools": [],
    "middleware": [LoggingMiddleware()],  # Added after default middleware
}

Human-in-the-Loop

Basic Interrupt Configuration

Pause execution before specific tools for human approval.

from deepagents import create_deep_agent

agent = create_deep_agent(
    tools=[send_email, delete_file, web_search],
    interrupt_on={
        "send_email": True,      # Simple interrupt
        "delete_file": True,     # Require approval before delete
        # web_search not listed - runs without approval
    },
    checkpointer=checkpointer,   # Required for interrupts
)

Interrupt with Options

agent = create_deep_agent(
    tools=[send_email],
    interrupt_on={
        "send_email": {
            "allowed_decisions": ["approve", "edit", "reject"]
        },
    },
    checkpointer=checkpointer,
)

# Invoke - will pause at send_email
config = {"configurable": {"thread_id": "user-123"}}
result = agent.invoke({"messages": [...]}, config)

# Check state
state = agent.get_state(config)
if state.next:  # Has pending interrupt
    # Resume with approval
    from langgraph.types import Command
    agent.invoke(Command(resume={"approved": True}), config)

    # Or resume with edit
    agent.invoke(Command(resume={"edited_args": {"to": "new@email.com"}}), config)

    # Or reject
    agent.invoke(Command(resume={"rejected": True}), config)

Interrupt on Subagent Tools

# Interrupts apply to subagents too
agent = create_deep_agent(
    subagents=[research_agent],
    interrupt_on={
        "web_search": True,  # Interrupt even when subagent calls it
    },
    checkpointer=checkpointer,
)

Custom Middleware

Middleware Structure

from langchain.agents.middleware.types import (
    AgentMiddleware,
    ModelRequest,
    ModelResponse,
)
from langchain_core.tools import tool

class MyMiddleware(AgentMiddleware):
    # Tools to inject
    tools = []

    # System prompt content to inject
    system_prompt = ""

    def transform_request(self, request: ModelRequest) -> ModelRequest:
        """Modify request before sending to model."""
        return request

    def transform_response(self, response: ModelResponse) -> ModelResponse:
        """Modify response after receiving from model."""
        return response

Injecting Tools via Middleware

from langchain_core.tools import tool

@tool
def get_current_time() -> str:
    """Get the current time."""
    from datetime import datetime
    return datetime.now().isoformat()

class TimeMiddleware(AgentMiddleware):
    tools = [get_current_time]
    system_prompt = "You have access to get_current_time for time-sensitive tasks."

agent = create_deep_agent(middleware=[TimeMiddleware()])

Context Injection Middleware

class UserContextMiddleware(AgentMiddleware):
    def __init__(self, user_preferences: dict):
        self.user_preferences = user_preferences

    @property
    def system_prompt(self):
        return f"User preferences: {self.user_preferences}"

agent = create_deep_agent(
    middleware=[UserContextMiddleware({"theme": "dark", "language": "en"})]
)

Response Logging Middleware

import logging

class LoggingMiddleware(AgentMiddleware):
    def transform_response(self, response: ModelResponse) -> ModelResponse:
        logging.info(f"Agent response: {response.messages[-1].content[:100]}...")
        return response

agent = create_deep_agent(middleware=[LoggingMiddleware()])

MCP Tool Integration

Connect MCP (Model Context Protocol) servers to provide additional tools.

import os

from langchain_mcp_adapters.client import MultiServerMCPClient
from deepagents import create_deep_agent

async def main():
    mcp_client = MultiServerMCPClient({
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path"],
        },
        "github": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-github"],
            "env": {"GITHUB_TOKEN": os.environ["GITHUB_TOKEN"]},
        },
    })

    mcp_tools = await mcp_client.get_tools()

    agent = create_deep_agent(tools=mcp_tools)

    async for chunk in agent.astream(
        {"messages": [{"role": "user", "content": "List my repos"}]}
    ):
        print(chunk)

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Implementation gates

Use this sequenced checklist for setups that touch real disk, human interrupts, persistence, or MCP subprocesses (skip it for minimal create_deep_agent() smoke tests).

1. FilesystemBackend or execute: root_dir is deliberately scoped (not an accidental home or filesystem root); smoke-test in a disposable directory before trusting production paths.
2. interrupt_on / resume: a checkpointer is configured; every invoke / astream that may interrupt includes config with configurable["thread_id"]; after a pause, agent.get_state(config) shows a pending interrupt before Command(resume=...), and the resume payload matches the tool's options (e.g. allowed_decisions) when set.
3. Store, PostgreSQL saver, MCP: credentials come from the environment or a secret manager, not committed source; MCP command / args / required env keys match what the deployment host actually provides (e.g. npx, tokens).
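Step 3 can be partially automated: before wiring persistent backends or MCP servers, fail fast if the deployment environment is missing expected configuration. A small sketch; the names DATABASE_URL, ANTHROPIC_API_KEY, and GITHUB_TOKEN are illustrative only, since the skill itself declares no required env vars:

```python
import os

# Illustrative names; substitute whatever your own deployment expects.
REQUIRED_ENV = ["DATABASE_URL", "ANTHROPIC_API_KEY", "GITHUB_TOKEN"]

def missing_env(keys):
    """Return the keys that are absent or empty in the environment."""
    return [k for k in keys if not os.environ.get(k)]

missing = missing_env(REQUIRED_ENV)
if missing:
    # Fail fast before creating agents with persistence or MCP subprocesses
    print("Missing configuration:", ", ".join(sorted(missing)))
```

Running this check at startup keeps credential problems visible before any agent acquires disk, DB, or subprocess access.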

Additional References

For detailed reference documentation, see:

  • Built-in Tools Reference - Complete list of tools available on every agent (filesystem, task management, subagent delegation) with path requirements
  • Common Patterns - Production-ready examples including research agents with memory, code assistants with disk access, multi-specialist teams, and production PostgreSQL setup
