aws-agentcore-langgraph

v1.0.2

Deploy production LangGraph agents on AWS Bedrock AgentCore. Use for (1) multi-agent systems with orchestrator and specialist agent patterns, (2) building stateful agents with persistent cross-session memory, (3) connecting external tools via AgentCore Gateway (MCP, Lambda, APIs), (4) managing shared context across distributed agents, or (5) deploying complex agent ecosystems via CLI with production observability and scaling.

3 stars · 1.6k downloads · 0 current · 0 all-time
by Vaskin Kissoyan (@killerapp)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for killerapp/aws-agentcore-langgraph.

Prompt Preview: Install & Setup
Install the skill "aws-agentcore-langgraph" (killerapp/aws-agentcore-langgraph) from ClawHub.
Skill page: https://clawhub.ai/killerapp/aws-agentcore-langgraph
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install aws-agentcore-langgraph

ClawHub CLI

Package manager switcher

npx clawhub@latest install aws-agentcore-langgraph
Security Scan

VirusTotal: Benign
View report →

OpenClaw: Suspicious (medium confidence)

Purpose & Capability
The name and description match the packaged content (deploying LangGraph agents on AWS AgentCore). However, the packaged scripts and instructions rely on the AWS CLI, jq, and pip-installed Python packages to create and manage cloud resources, and the skill metadata does not declare the required binaries or credentials even though deploying and inspecting AgentCore resources requires AWS credentials and CLI tooling.
Instruction Scope
SKILL.md gives step-by-step install and deployment commands (pip installs, `agentcore` CLI usage, gateway deploy, memory APIs) that create and manage AWS resources and auto-inject environment variables. The instructions reference environment variables (e.g., BEDROCK_AGENTCORE_MEMORY_ID) and show examples reading them via os.getenv, yet `requires.env` is empty; the runtime instructions therefore implicitly rely on cloud credentials/config and on local tools not declared in the manifest. The instructions do not direct collection or exfiltration of unrelated local data, but they do direct the agent/operator to run commands that enumerate and modify AWS resources (list-agent-runtimes, list-memories, create gateways, etc.).
Install Mechanism
There is no formal install spec (instruction-only), which is lower risk. SKILL.md instructs pip installs for known packages (bedrock-agentcore, langgraph and related toolkits) — these are standard package installs from PyPI and not downloads from arbitrary URLs. The one ambiguous command is `uv tool install bedrock-agentcore-starter-toolkit` (unclear which 'uv' tool is referenced); that should be clarified before automatic execution.
Credentials
The skill declares no required environment variables or primary credential, yet the runtime examples and scripts clearly require AWS credentials (AWS_PROFILE or AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY), AWS_REGION, and expect the AWS CLI and jq to be available. It also references auto-injected variables (BEDROCK_AGENTCORE_MEMORY_ID, etc.) that will only exist after deployment. The mismatch between declared requirements and actual needed credentials/tools is disproportionate and meaningful.
Persistence & Privilege
`always` is false and the skill does not request permanent platform presence. The skill's files are instruction-and-script oriented and do not attempt to modify other skills or system-wide agent settings.
What to consider before installing
This skill is largely what it claims (an AWS AgentCore + LangGraph deployment guide), but proceed carefully:
- Expect to need the AWS CLI, jq, Python, and valid AWS credentials (a profile or ACCESS_KEY/SECRET) to run the examples and scripts; none of these are declared in the skill metadata. The scripts list, create, and inspect AgentCore resources, so they require IAM permissions (bedrock-agentcore-control actions, logs access). Review and limit IAM permissions before use.
- SKILL.md runs pip installs for third-party packages (bedrock-agentcore, langgraph, and checkpoint packages). If you plan to run these locally, validate package names and sources (PyPI) and consider using a virtualenv.
- The ambiguous command `uv tool install ...` should be clarified; don't run unclear commands without understanding the tool they invoke.
- Scripts call AWS APIs and read CloudWatch logs. If you run them, do so in an isolated/test AWS account or with a least-privilege role to avoid accidental resource creation or data exposure.
- If you want to allow this skill to run autonomously, be extra cautious: autonomous runs combined with cloud access increase the blast radius.

Because the manifest omits required credentials, that omission is a red flag: ask the publisher to explicitly list the required binaries and environment variables (AWS credentials, region, jq, the aws CLI) and to confirm the provenance of the referenced Python packages before installing or granting access.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97fn3zjnavcttk8t685bcvcph80mhbf
1.6k downloads · 3 stars · 1 version
Updated 1mo ago
v1.0.2
MIT-0

AWS AgentCore + LangGraph

Multi-agent systems on AWS Bedrock AgentCore with LangGraph orchestration. Source: https://github.com/aws/bedrock-agentcore-starter-toolkit

Install

pip install bedrock-agentcore bedrock-agentcore-starter-toolkit langgraph
uv tool install bedrock-agentcore-starter-toolkit  # installs agentcore CLI

Quick Start

from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition  # routing + tool execution
from bedrock_agentcore.runtime import BedrockAgentCoreApp

class State(TypedDict):
    messages: Annotated[list, add_messages]

# agent_node (an LLM-calling node) and tools (a list of tool functions)
# are assumed to be defined elsewhere in your module.
builder = StateGraph(State)
builder.add_node("agent", agent_node)
builder.add_node("tools", ToolNode(tools))  # prebuilt tool executor
builder.add_conditional_edges("agent", tools_condition)  # routes to "tools" or END
builder.add_edge(START, "agent")
builder.add_edge("tools", "agent")  # return tool results to the agent
graph = builder.compile()

app = BedrockAgentCoreApp()  # wraps the graph as an HTTP service on port 8080 (/invocations, /ping)

@app.entrypoint
def invoke(payload, context):
    result = graph.invoke({"messages": [("user", payload.get("prompt", ""))]})
    return {"result": result["messages"][-1].content}

app.run()

CLI Commands

Command | Purpose
agentcore configure -e agent.py --region us-east-1 | Setup
agentcore configure -e agent.py --region us-east-1 --name my_agent --non-interactive | Scripted setup
agentcore launch --deployment-type container | Deploy (container mode)
agentcore launch --disable-memory | Deploy without memory subsystem
agentcore dev | Hot-reload local dev server
agentcore invoke '{"prompt": "Hello"}' | Test
agentcore destroy | Cleanup

Core Patterns

Multi-Agent Orchestration

  • Orchestrator delegates to specialists (customer service, e-commerce, healthcare, financial, etc.)
  • Specialists: inline functions or separate deployed agents; all share session_id for context
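A minimal plain-Python sketch of this delegation pattern (all names here, such as `route`, `SPECIALISTS`, and the handler functions, are illustrative stand-ins, not AgentCore APIs; in a real deployment the specialists would be LangGraph nodes or separately deployed agents, and routing would be an LLM decision):

```python
# Hypothetical orchestrator: route a request to a specialist handler.
# All handlers share one session_id so they see the same context.

def handle_billing(prompt: str, session_id: str) -> str:
    return f"[billing:{session_id}] handled: {prompt}"

def handle_support(prompt: str, session_id: str) -> str:
    return f"[support:{session_id}] handled: {prompt}"

SPECIALISTS = {"billing": handle_billing, "support": handle_support}

def route(prompt: str) -> str:
    # In practice the orchestrator LLM decides; a keyword match stands in here.
    return "billing" if "invoice" in prompt.lower() else "support"

def orchestrate(prompt: str, session_id: str) -> str:
    specialist = SPECIALISTS[route(prompt)]
    return specialist(prompt, session_id)
```

The key property to preserve when swapping in real agents is that the orchestrator passes the same session_id to every specialist it delegates to.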

Memory (STM/LTM)

from bedrock_agentcore.memory import MemoryClient
memory = MemoryClient()
memory.create_event(session_id, actor_id, event_type, payload)  # Store
events = memory.list_events(session_id)  # Retrieve (returns list)
  • STM: Turn-by-turn within session | LTM: Facts/decisions across sessions/agents
  • ~10s eventual consistency after writes
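Given the ~10s eventual consistency noted above, a read issued right after a write may come back empty. A small hypothetical polling helper (the name `wait_for_events` and its parameters are not part of the AgentCore API) can wrap any read, e.g. `lambda: memory.list_events(session_id)`:

```python
import time

def wait_for_events(fetch, timeout_s: float = 15.0, interval_s: float = 1.0):
    """Poll `fetch` until it returns a non-empty result or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while True:
        events = fetch()
        if events:
            return events
        if time.monotonic() >= deadline:
            return events  # give up; still empty
        time.sleep(interval_s)
```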

Gateway Tools

# Deploy the gateway stack (run in a shell):
python -m bedrock_agentcore.gateway.deploy --stack-name my-agents --region us-east-1

# Then call a deployed tool from Python:
from bedrock_agentcore.gateway import GatewayToolClient
gateway = GatewayToolClient()
result = gateway.call("tool_name", param1=value1, param2=value2)
  • Transport: Fallback Mock (local), Local MCP servers, Production Gateway (Lambda/REST/MCP)
  • Auto-configures BEDROCK_AGENTCORE_GATEWAY_URL after deploy
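One way to hand `gateway.call` to a LangGraph node is to bind each tool name to a plain callable. This is a sketch under assumptions: `make_tool` and `StubGatewayClient` are hypothetical helpers, with the stub standing in for the real `GatewayToolClient` until the gateway is deployed:

```python
class StubGatewayClient:
    """Stand-in for bedrock_agentcore.gateway.GatewayToolClient (local testing)."""
    def call(self, tool_name, **params):
        return {"tool": tool_name, "params": params}

def make_tool(gateway, tool_name):
    """Bind a gateway tool name to a plain callable usable from a graph node."""
    def tool_fn(**params):
        return gateway.call(tool_name, **params)
    return tool_fn

# Example binding; "search_orders" is an illustrative tool name.
search_orders = make_tool(StubGatewayClient(), "search_orders")
```

Swapping `StubGatewayClient()` for the real client is then a one-line change, which keeps graph code testable without a deployed gateway.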

Decision Tree

Multiple agents coordinating? → Orchestrator + specialists pattern
Persistent cross-session memory? → AgentCore Memory (not LangGraph checkpoints)
External APIs/Lambda? → AgentCore Gateway
Single agent, simple? → Quick Start above
Complex multi-step logic? → StateGraph + tools_condition + ToolNode

Key Concepts

  • AgentCore Runtime: HTTP service on port 8080 (handles /invocations, /ping)
  • AgentCore Memory: Managed cross-session/cross-agent memory
  • LangGraph Routing: tools_condition for agent→tool routing, ToolNode for execution
  • AgentCore Gateway: Transforms APIs/Lambda into MCP tools with auth

Naming Rules

  • Start with letter, only letters/numbers/underscores, 1-48 chars: my_agent not my-agent
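The rule above maps to a simple regex check. A minimal sketch (the helper name is illustrative): first character must be a letter, followed by up to 47 letters, digits, or underscores:

```python
import re

# Start with a letter; then letters/digits/underscores; 1-48 chars total.
NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_]{0,47}$")

def is_valid_agent_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None
```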

Troubleshooting

Issue | Fix
on-demand throughput isn't supported | Use us.anthropic.claude-* inference profiles
Model use case details not submitted | Fill Anthropic form in Bedrock Console
Invalid agent name | Use underscores not hyphens
Memory empty after write | Wait ~10s (eventual consistency)
Container not reading .env | Set ENV in Dockerfile, not .env
Memory not working after deploy | Check logs for "Memory enabled/disabled"
list_events returns empty | Check actor_id/session_id match; event['payload'] is a list
Gateway "Unknown tool" | Lambda must strip ___ prefix from bedrockAgentCoreToolName
Platform mismatch warning | Normal - CodeBuild handles ARM64 cross-platform builds
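For the Gateway "Unknown tool" fix above, the Lambda handler needs to strip the triple-underscore prefix from the tool name before dispatching. A hypothetical sketch (the function name and the example tool names are illustrative; only the `___` separator comes from the table above):

```python
def resolve_tool_name(raw_name: str) -> str:
    """Strip the Gateway-added prefix, e.g. 'my_target___search_orders' -> 'search_orders'."""
    return raw_name.split("___")[-1]
```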
