Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Strands Agents SDK

Build and run Python-based AI agents using the AWS Strands SDK. Use when you need to create autonomous agents, multi-agent workflows, custom tools, or integrate with MCP servers. Supports Ollama (local), Anthropic, OpenAI, Bedrock, and other model providers. Use for agent scaffolding, tool creation, and running agent tasks programmatically.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 1.5k · 0 current installs · 0 all-time installs
Security Scan
VirusTotal: Suspicious (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill packages an SDK for building Python agents, and the included scripts, cheatsheet, and examples are consistent with that purpose. However, the registry metadata declares no required environment variables or packages, while the SKILL.md and code clearly rely on many optional dependencies and provider credentials (AWS Bedrock as the default, Anthropic, OpenAI, Mistral, strands_tools, etc.). The absence of these expected requirements from the manifest is an inconsistency worth noting.
Instruction Scope
The SKILL.md and scaffold templates instruct the agent to create tools that can read arbitrary filesystem paths and run arbitrary shell commands (see run_command, which uses subprocess.run(shell=True)). That behavior is in line with an agent SDK (agents often need tooling), but it grants powerful local capabilities. The instructions also default to Bedrock (AWS) and document usage of MCP transports and network endpoints. Neither the SKILL.md nor the manifest limits or cautions against these powerful tool defaults, giving the agent broad discretion to access local files, execute commands, and call external services.
Install Mechanism
There is no automated install spec in the registry entry (instruction-only), which reduces the risk of hidden downloads. The SKILL.md recommends installing packages via pipx/pip (strands-agents, strands-agents-tools), which is expected for this SDK. Because there is no forced download/extract step in the skill manifest, nothing in the registry will write arbitrary archives during install — but installing the recommended Python packages is required to use many features.
Credentials
The registry metadata declares no required environment variables, yet SKILL.md and templates clearly reference several provider credentials and env vars: AWS credentials (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY and optional AWS_REGION) for Bedrock/SageMaker/S3, ANTHROPIC_API_KEY, OPENAI_API_KEY, MISTRAL_API_KEY, STRANDS_MCP_TIMEOUT, and others. This mismatch means the skill's manifest understates the secret/credential access the code will need and could request at runtime.
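Because the manifest understates these requirements, it can help to check for them yourself before granting the skill any credentials. A minimal preflight sketch in plain Python — the provider-to-variable mapping below is reconstructed from this report, and the helper itself is illustrative, not part of the SDK:

```python
import os

# Provider → env vars referenced in the SKILL.md (mapping reconstructed from this report)
REQUIRED_ENV = {
    "bedrock": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "openai": ["OPENAI_API_KEY"],
    "mistral": ["MISTRAL_API_KEY"],
}

def missing_credentials(provider: str, env=None) -> list:
    """Return the env vars a given provider needs that are not currently set."""
    env = os.environ if env is None else env
    return [var for var in REQUIRED_ENV.get(provider, []) if not env.get(var)]
```

Running missing_credentials("bedrock") before invoking the default Agent() surfaces exactly which undeclared secrets the skill would reach for.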
Persistence & Privilege
The skill does not request always: true and does not declare system-wide config paths. It is user-invocable and allows autonomous model invocation (default), which is normal for skills. The provided scripts generate project files and can enable hot-reload of ./tools/ (Agent(load_tools_from_directory=True)), but they do not claim to modify other skills or global agent settings.
What to consider before installing
This SDK appears to be what it claims (a Strands agent SDK), but it has a few important mismatches you should consider before installing:

- Verify source and integrity: the manifest points at a GitHub homepage; confirm the upstream repo, release tag, and checksum for the Python packages (strands-agents, strands-agents-tools, strands-agents-builder) before pip installing.
- Credentials are required but not declared: the skill will behave as if it needs AWS credentials (Bedrock/S3), ANTHROPIC_API_KEY, OPENAI_API_KEY, and other provider keys. Do not supply broad, long-lived AWS keys; prefer least-privilege or temporary credentials and explicit provider selection (pass model= to avoid implicit Bedrock usage).
- Generated scaffolds include powerful tools by default: the example agent templates include tools to read/write arbitrary files and run shell commands (subprocess.run with shell=True). If you run generated agents, remove or sandbox these tools (or restrict their allowed paths/commands) to prevent accidental data leakage or command execution.
- Run in an isolated environment: test in a sandboxed VM/container without sensitive credentials mounted, or with limited-role AWS credentials, before using on a production machine.
- Audit generated code: review the files produced by create-agent.py and any third-party packages it installs (strands-tools, strands-agents) for unexpected network endpoints or hidden behaviors. Pay attention to MCP examples that spawn external commands or connect to arbitrary endpoints.

If you need a conservative setup: explicitly specify a local provider (Ollama) or a provider you control, remove the run_command/file_read/file_write tools from the default toolset, and only add provider credentials when necessary. If you want more assurance, request an upstream signed release or a reproducible package build before trusting it with secrets or broad system access.
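As one concrete way to sandbox the scaffolded shell tool, a hypothetical replacement could allowlist executables and avoid shell=True entirely. The function name and allowlist below are illustrative, not part of the scaffold:

```python
import shlex
import subprocess

# Illustrative allowlist — tailor to the commands your agent genuinely needs
ALLOWED_COMMANDS = {"echo", "ls", "cat"}

def run_command_restricted(command: str) -> str:
    """Run a command without shell=True, refusing anything off the allowlist."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {command!r}")
    result = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    return result.stdout
```

Passing argv as a list (not a string) means no shell interprets metacharacters, so injected `;` or `&&` sequences fail the allowlist check instead of executing.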

Like a lobster shell, security has layers — review code before you run it.

Current version: v2.0.3
Tags: agents · ai · aws · latest · python · sdk

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🧬 Clawdis
Bins: python3

SKILL.md

Strands Agents SDK

Build AI agents in Python using the Strands SDK (Apache-2.0, from AWS).

Validated against: strands-agents==1.23.0, strands-agents-tools==0.2.19

Prerequisites

# Install SDK + tools (via pipx for isolation — recommended)
pipx install strands-agents-builder  # includes strands-agents + strands-agents-tools + CLI

# Or install directly
pip install strands-agents strands-agents-tools

Core Concept: Bedrock Is the Default

Agent() with no model= argument defaults to Amazon Bedrock — specifically us.anthropic.claude-sonnet-4-20250514-v1:0 in us-west-2. This requires AWS credentials. To use a different provider, pass model= explicitly.

Default model constant: strands.models.bedrock.DEFAULT_BEDROCK_MODEL_ID

Quick Start — Local Agent (Ollama)

from strands import Agent
from strands.models.ollama import OllamaModel

# host is a required positional argument
model = OllamaModel("http://localhost:11434", model_id="qwen3:latest")
agent = Agent(model=model)
result = agent("What is the capital of France?")
print(result)

Note: Not all open-source models support tool-calling, and abliterated models often lose it during the abliteration process. Test with a stock model (qwen3, llama3.x, mistral) first.

Quick Start — Bedrock (Default Provider)

from strands import Agent

# No model specified → BedrockModel (Claude Sonnet 4, us-west-2)
# Requires AWS credentials (~/.aws/credentials or env vars)
agent = Agent()
result = agent("Explain quantum computing")

# Explicit Bedrock model:
from strands.models import BedrockModel
model = BedrockModel(model_id="us.anthropic.claude-sonnet-4-20250514-v1:0")
agent = Agent(model=model)

Quick Start — Anthropic (Direct API)

from strands import Agent
from strands.models.anthropic import AnthropicModel

# max_tokens is Required[int] — must be provided
model = AnthropicModel(model_id="claude-sonnet-4-20250514", max_tokens=4096)
agent = Agent(model=model)
result = agent("Explain quantum computing")

Requires ANTHROPIC_API_KEY environment variable.

Quick Start — OpenAI

from strands import Agent
from strands.models.openai import OpenAIModel

model = OpenAIModel(model_id="gpt-4.1")
agent = Agent(model=model)

Requires OPENAI_API_KEY environment variable.

Creating Custom Tools

Use the @tool decorator. Type hints become the schema; the docstring becomes the description:

from strands import Agent, tool

@tool
def read_file(path: str) -> str:
    """Read contents of a file at the given path.

    Args:
        path: Filesystem path to read.
    """
    with open(path) as f:
        return f.read()

@tool
def write_file(path: str, content: str) -> str:
    """Write content to a file.

    Args:
        path: Filesystem path to write.
        content: Text content to write.
    """
    with open(path, 'w') as f:
        f.write(content)
    return f"Wrote {len(content)} bytes to {path}"

agent = Agent(model=model, tools=[read_file, write_file])
agent("Read /tmp/test.txt and summarize it")
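If the agent should only touch a known directory, a path-confined variant of read_file can reject escapes before reading. The base directory and helper name below are assumptions for illustration, not part of the SDK; decorate with @tool as above to expose it to an agent:

```python
from pathlib import Path

BASE_DIR = Path("/tmp/agent-workspace")  # illustrative sandbox root

def read_file_confined(path: str) -> str:
    """Read a file only if it resolves inside BASE_DIR (ValueError otherwise)."""
    target = (BASE_DIR / path).resolve()
    target.relative_to(BASE_DIR.resolve())  # escape attempt → ValueError
    return target.read_text()
```

resolve() normalizes `..` segments and symlinks first, so a request like "../etc/passwd" fails the relative_to check instead of leaving the sandbox.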

ToolContext

Tools can access agent state via ToolContext:

from strands import tool
from strands.types.tools import ToolContext

@tool
def stateful_tool(query: str, tool_context: ToolContext) -> str:
    """A tool that accesses agent state.

    Args:
        query: Input query.
    """
    # Access shared agent state
    count = tool_context.state.get("call_count", 0) + 1
    tool_context.state["call_count"] = count
    return f"Call #{count}: {query}"

Built-in Tools (46 available)

strands-agents-tools provides pre-built tools:

from strands_tools import calculator, file_read, file_write, shell, http_request
agent = Agent(model=model, tools=[calculator, file_read, shell])

Full list: calculator, file_read, file_write, shell, http_request, editor, image_reader, python_repl, current_time, think, stop, sleep, environment, retrieve, search_video, chat_video, speak, generate_image, generate_image_stability, diagram, journal, memory, agent_core_memory, elasticsearch_memory, mongodb_memory, mem0_memory, rss, cron, batch, workflow, use_agent, use_llm, use_aws, use_computer, load_tool, handoff_to_user, slack, swarm, graph, a2a_client, mcp_client, exa, tavily, bright_data, nova_reels.

Hot reload: Agent(load_tools_from_directory=True) watches ./tools/ for changes.

MCP Integration

Connect to any Model Context Protocol server. MCPClient implements ToolProvider — pass it directly in the tools list:

from strands import Agent
from strands.tools.mcp import MCPClient
from mcp import stdio_client, StdioServerParameters

# MCPClient takes a callable that returns the transport
mcp = MCPClient(lambda: stdio_client(StdioServerParameters(
    command="uvx",
    args=["some-mcp-server@latest"]
)))

# Use as context manager — MCPClient is a ToolProvider
with mcp:
    agent = Agent(model=model, tools=[mcp])
    agent("Use the MCP tools to do something")

SSE transport:

from mcp.client.sse import sse_client
mcp = MCPClient(lambda: sse_client("http://localhost:8080/sse"))

Multi-Agent Patterns

Agents as Tools

Nest agents — inner agents become tools for the outer agent:

researcher = Agent(model=model, system_prompt="You are a research assistant.")
writer = Agent(model=model, system_prompt="You are a writer.")

orchestrator = Agent(
    model=model,
    tools=[researcher, writer],
    system_prompt="You coordinate research and writing tasks."
)
orchestrator("Research quantum computing and write a blog post")

Swarm Pattern

Self-organizing agent teams with shared context and autonomous handoff coordination:

from strands.multiagent.swarm import Swarm

# Agents need name + description for handoff identification
researcher = Agent(
    model=model,
    name="researcher",
    description="Finds and summarizes information"
)
writer = Agent(
    model=model,
    name="writer",
    description="Creates polished content"
)

swarm = Swarm(
    nodes=[researcher, writer],
    entry_point=researcher,    # optional — defaults to first agent
    max_handoffs=20,           # default
    max_iterations=20,         # default
    execution_timeout=900.0,   # 15 min default
    node_timeout=300.0         # 5 min per node default
)
result = swarm("Research AI agents, then hand off to writer for a blog post")

Swarm auto-injects a handoff_to_agent tool. Agents hand off by calling it with the target agent's name. Supports interrupt/resume, session persistence, and repetitive-handoff detection.
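Stripped of the SDK, the handoff loop described above can be sketched in plain Python. The agent callables and the (output, next_agent) convention here are illustrative stand-ins, not the Swarm internals:

```python
def run_swarm(agents: dict, entry: str, task: str, max_handoffs: int = 20):
    """Run named agents in turn; each returns (output, next_agent_name_or_None)."""
    current, result = entry, task
    for _ in range(max_handoffs):
        result, target = agents[current](result)
        if target is None:       # no handoff requested → finished
            return result
        current = target         # plain-Python stand-in for handoff_to_agent
    raise RuntimeError("max_handoffs exceeded")

# Toy agents mirroring the researcher → writer example above
agents = {
    "researcher": lambda task: (f"notes on: {task}", "writer"),
    "writer": lambda notes: (f"blog post from {notes}", None),
}
```

The max_handoffs guard is what keeps a pair of agents that keep deferring to each other from looping forever — the same role the repetitive-handoff detection plays in the real Swarm.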

Graph Pattern (DAG)

Deterministic dependency-based execution via GraphBuilder:

from strands.multiagent.graph import GraphBuilder

builder = GraphBuilder()
research_node = builder.add_node(researcher, node_id="research")
writing_node = builder.add_node(writer, node_id="writing")
builder.add_edge("research", "writing")
builder.set_entry_point("research")

# Optional: conditional edges
# builder.add_edge("research", "writing",
#     condition=lambda state: "complete" in str(state.completed_nodes))

graph = builder.build()
result = graph("Write a blog post about AI agents")

Supports cycles (feedback loops) with builder.reset_on_revisit(True), execution timeouts, and nested graphs (Graph as a node in another Graph).
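The deterministic, dependency-ordered execution GraphBuilder provides can be sketched without the SDK as a topological walk over node callables. Node names follow the research → writing example above; everything else is an illustrative stand-in:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_graph(nodes: dict, edges: dict, task: str) -> dict:
    """Run each node after its dependencies; nodes see their dependencies' outputs."""
    outputs = {}
    for name in TopologicalSorter(edges).static_order():  # deps come first
        deps = {d: outputs[d] for d in edges.get(name, ())}
        outputs[name] = nodes[name](task, deps)
    return outputs

# Toy nodes mirroring the research → writing example above
nodes = {
    "research": lambda task, deps: f"notes({task})",
    "writing": lambda task, deps: f"post({deps['research']})",
}
edges = {"writing": {"research"}}  # writing depends on research
```

Unlike a Swarm, the execution order here is fixed by the edge set, which is why the Graph pattern suits pipelines that must run the same way every time.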

A2A Protocol (Agent-to-Agent)

Expose a Strands agent as an A2A-compatible server for inter-agent communication:

from strands.multiagent.a2a import A2AServer

server = A2AServer(
    agent=my_agent,
    host="127.0.0.1",
    port=9000,
    version="0.0.1"
)
server.start()  # runs uvicorn

Connect to A2A agents with the a2a_client tool from strands-agents-tools. A2A implements Google's Agent-to-Agent protocol for standardized cross-process/cross-network agent communication.

Session Persistence

Persist conversations across agent runs:

from strands.session.file_session_manager import FileSessionManager

session = FileSessionManager(session_file_path="./sessions/my_session.json")
agent = Agent(model=model, session_manager=session)

# Also available:
from strands.session.s3_session_manager import S3SessionManager
session = S3SessionManager(bucket_name="my-bucket", session_id="session-1")

Both Swarm and Graph support session managers for persisting multi-agent state.

Bidirectional Streaming (Experimental)

Real-time voice/text conversations with persistent audio streams:

from strands.experimental.bidi.agent import BidiAgent
from strands.experimental.bidi.models.nova_sonic import NovaSonicModel

# Supports: NovaSonicModel, GeminiLiveModel, OpenAIRealtimeModel
model = NovaSonicModel(region="us-east-1")
agent = BidiAgent(model=model, tools=[my_tool])

Supports interruption detection, concurrent tool execution, and continuous back-and-forth audio. Experimental — API subject to change.

System Prompts

agent = Agent(
    model=model,
    system_prompt="You are Hex, a sharp and witty AI assistant.",
    tools=[read_file, write_file]
)

Strands also supports list[SystemContentBlock] for structured system prompts with cache control.

Observability

Native OpenTelemetry tracing:

agent = Agent(
    model=model,
    trace_attributes={"project": "my-agent", "environment": "dev"}
)

Every tool call, model invocation, handoff, and lifecycle event is instrumentable.

Bedrock-Specific Features

  • Guardrails: guardrail_id + guardrail_version in BedrockModel config — content filtering, PII detection, input/output redaction
  • Cache points: System prompt and tool definition caching for cost optimization
  • Streaming: On by default, disable with streaming=False
  • Region: Defaults to us-west-2, override via region_name param or AWS_REGION env
  • Cross-region inference: Model IDs prefixed with us. use cross-region inference profiles

Scaffolding a New Agent

python3 {baseDir}/scripts/create-agent.py my-agent --provider ollama --model qwen3:latest
python3 {baseDir}/scripts/create-agent.py my-agent --provider anthropic
python3 {baseDir}/scripts/create-agent.py my-agent --provider bedrock
python3 {baseDir}/scripts/create-agent.py my-agent --provider openai --model gpt-4.1

Creates a ready-to-run agent directory with tools, config, and entry point.

Running an Agent

python3 {baseDir}/scripts/run-agent.py path/to/agent.py "Your prompt here"
python3 {baseDir}/scripts/run-agent.py path/to/agent.py --interactive

Model Providers Reference (11 total)

Provider    Class             Init                                                Notes
Bedrock     BedrockModel      BedrockModel(model_id=...)                          Default, eagerly imported
Ollama      OllamaModel       OllamaModel("http://host:11434", model_id=...)      host is positional
Anthropic   AnthropicModel    AnthropicModel(model_id=..., max_tokens=4096)       max_tokens required
OpenAI      OpenAIModel       OpenAIModel(model_id=...)                           OPENAI_API_KEY
Gemini      GeminiModel       GeminiModel(model_id=...)                           api_key in client_args
Mistral     MistralModel      MistralModel(model_id=...)                          MISTRAL_API_KEY
LiteLLM     LiteLLMModel      LiteLLMModel(model_id=...)                          Meta-provider (Cohere, Groq, etc.)
LlamaAPI    LlamaAPIModel     LlamaAPIModel(model_id=...)                         Meta Llama API
llama.cpp   LlamaCppModel     LlamaCppModel(...)                                  Local server, OpenAI-compatible
SageMaker   SageMakerAIModel  SageMakerAIModel(...)                               Custom AWS endpoints
Writer      WriterModel       WriterModel(model_id=...)                           Writer platform

All non-Bedrock providers are lazy-loaded — dependencies imported only when referenced.

Import pattern: from strands.models.<provider> import <Class> (or from strands.models import <Class> for lazy-load).

Tips

  • Agent() without model= requires AWS credentials (Bedrock default)
  • AnthropicModel requires max_tokens — omitting it causes a runtime error
  • OllamaModel host is positional: OllamaModel("http://...", model_id="...")
  • Abliterated Ollama models often lose tool-calling support — use stock models for tool-using agents
  • Swarm agents need name= and description= for handoff routing
  • Agent(load_tools_from_directory=True) watches ./tools/ for hot-reloaded tool files
  • Use agent.tool.my_tool() to call tools directly without LLM routing
  • MCPClient is a ToolProvider — pass it directly in tools=[mcp], don't call list_tools_sync() manually when using with Agent
  • Session managers work with Agent, Swarm, and Graph
  • Pin your strands-agents version — the SDK is young and APIs evolve between releases

Files

5 total
