Install
openclaw skills install network-ai
Local Python orchestration skill: multi-agent workflows via a shared blackboard file, permission gating, token budget scripts, and persistent project context. All bundled scripts run locally with zero network calls and zero third-party dependencies.
Scope: The bundled Python scripts (scripts/*.py) make no network calls, use only the Python standard library, and have zero third-party dependencies. Tokens are UUID-based (grant_{uuid4().hex}) and stored in data/active_grants.json. Audit logging is plain JSONL (data/audit_log.jsonl).
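The documented token format can be sketched in a few lines (illustrative only — the helper name is an assumption, not a function exported by the scripts):

```python
from uuid import uuid4

def new_grant_token() -> str:
    # Matches the documented format: "grant_" followed by 32 hex chars from uuid4
    return f"grant_{uuid4().hex}"

token = new_grant_token()
print(token)  # e.g. grant_a1b2c3... (38 characters total)
```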
Advisory tokens notice: Grant tokens issued by check_permission.py are advisory scoring outputs only — the caller-supplied --agent identity is not cryptographically verified. Downstream systems must not treat these tokens as authenticated credentials without adding a separate identity-verification step or human approval gate, especially for PAYMENTS, DATABASE, and FILE_EXPORT resources.
Data-flow notice (host platform — not this skill): This skill does NOT implement, invoke, or control sessions_send or any inter-agent messaging. All bundled Python scripts are local-only tools (budget guard, blackboard, permission scorer, context manager). If your platform has a sessions_send built-in, whether and how it is used is entirely the host platform's responsibility and is outside this skill's scope. If you need to prevent external network calls, disable or reroute delegation in your platform settings before installing this skill.
Context file integrity: The context_manager.py inject command now validates data/project-context.json for injection patterns and oversized fields before printing the context block. Review any warnings printed to stderr before passing the output to an agent system prompt.
PII / sensitive-data warning: The justification field in permission requests and the audit log (data/audit_log.jsonl) store free-text strings provided by agents. Do not include PII, secrets, or credentials in justification text. Consider restricting file permissions on data/ or running this skill in an isolated workspace.
No pip install required. All 6 scripts use Python standard library only — zero third-party packages.
Note on requirements.txt: The file exists for documentation purposes only — it lists the stdlib modules used and has no required packages. All listed deps are commented out as optional. You do not need to run pip install -r requirements.txt.
# Prerequisite: python3 (any version ≥ 3.8)
python3 --version
# That's it. Run any script directly:
python3 scripts/blackboard.py list
python3 scripts/swarm_guard.py budget-init --task-id "task_001" --budget 10000
# Optional: for cross-platform file locking on Windows production hosts
pip install filelock # only needed if you see locking issues on Windows
The data/ directory is created automatically on first run. No configuration files, environment variables, or credentials are required.
Multi-environment support (v5.4.0): All five Python scripts now read the NETWORK_AI_ENV environment variable at startup and accept a --env <name> CLI argument. When set, all data paths are routed to data/<env>/ instead of the root data/ directory. Use this to isolate dev, staging, and production state.
# Run against the dev environment
NETWORK_AI_ENV=dev python3 scripts/blackboard.py list
python3 scripts/check_permission.py --active-grants --env dev
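The path-routing rule can be sketched as follows (a guess at the internal logic, not the scripts' actual code; the precedence of --env over NETWORK_AI_ENV is an assumption):

```python
import os
from pathlib import Path
from typing import Optional

def resolve_data_dir(cli_env: Optional[str] = None) -> Path:
    """Route data paths to data/<env>/ when an environment is selected."""
    # Assumed precedence: explicit --env beats NETWORK_AI_ENV; both default to data/
    env = cli_env or os.environ.get("NETWORK_AI_ENV")
    return Path("data") / env if env else Path("data")

print(resolve_data_dir("dev"))   # data/dev
print(resolve_data_dir(None))    # data, when NETWORK_AI_ENV is unset
```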
Multi-agent coordination system for complex workflows requiring task delegation, parallel execution, and permission-controlled access to sensitive APIs.
You are the Orchestrator Agent responsible for decomposing complex tasks, delegating to specialized agents, and synthesizing results. Follow this protocol:
When you receive a complex request, decompose it into exactly 3 sub-tasks:
┌─────────────────────────────────────────────────────────────────┐
│ COMPLEX USER REQUEST │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────┼─────────────────────┐
│ │ │
▼ ▼ ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ SUB-TASK 1 │ │ SUB-TASK 2 │ │ SUB-TASK 3 │
│ data_analyst │ │ risk_assessor │ │strategy_advisor│
│ (DATA) │ │ (VERIFY) │ │ (RECOMMEND) │
└───────────────┘ └───────────────┘ └───────────────┘
│ │ │
└─────────────────────┼─────────────────────┘
▼
┌───────────────┐
│ SYNTHESIZE │
│ orchestrator │
└───────────────┘
Decomposition Template:
TASK DECOMPOSITION for: "{user_request}"
Sub-Task 1 (DATA): [data_analyst]
- Objective: Extract/process raw data
- Output: Structured JSON with metrics
Sub-Task 2 (VERIFY): [risk_assessor]
- Objective: Validate data quality & compliance
- Output: Validation report with confidence score
Sub-Task 3 (RECOMMEND): [strategy_advisor]
- Objective: Generate actionable insights
- Output: Recommendations with rationale
Run the budget interceptor before any task delegation:
# Run this before delegating to any sub-agent
python {baseDir}/scripts/swarm_guard.py intercept-handoff \
--task-id "task_001" \
--from orchestrator \
--to data_analyst \
--message "Analyze Q4 revenue data"
Decision Logic:
IF result.allowed == true:
→ Budget check passed — proceed with the delegated task
→ Note tokens_spent and remaining_budget
ELSE:
→ STOP — budget exceeded or handoff limit reached
→ Report blocked reason to user
→ Consider: reduce scope or abort task
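Assuming intercept-handoff prints a JSON object with allowed, tokens_spent, remaining_budget, and blocked_reason fields (an assumption about the output shape, not a documented schema), the decision logic above can be sketched as:

```python
from typing import Dict, Tuple

def decide(result: Dict) -> Tuple[bool, str]:
    # Proceed only when the budget guard allows the handoff
    if result.get("allowed"):
        return True, (f"Proceed. Spent {result.get('tokens_spent', '?')} tokens, "
                      f"{result.get('remaining_budget', '?')} remaining.")
    reason = result.get("blocked_reason", "budget exceeded or handoff limit reached")
    return False, f"STOP: {reason}"

# Typically you would json.loads() the script's stdout and pass it here
ok, message = decide({"allowed": True, "tokens_spent": 1200, "remaining_budget": 8800})
print(ok, message)
```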
Before returning final results to the user:
# Step 1: Check all sub-task results on blackboard
python {baseDir}/scripts/blackboard.py read "task:001:data_analyst"
python {baseDir}/scripts/blackboard.py read "task:001:risk_assessor"
python {baseDir}/scripts/blackboard.py read "task:001:strategy_advisor"
# Step 2: Validate each result
python {baseDir}/scripts/swarm_guard.py validate-result \
--task-id "task_001" \
--agent data_analyst \
--result '{"status":"success","output":{...},"confidence":0.85}'
# Step 3: Supervisor review (checks all issues)
python {baseDir}/scripts/swarm_guard.py supervisor-review --task-id "task_001"
# Step 4: Only if APPROVED, commit final state
python {baseDir}/scripts/blackboard.py write "task:001:final" \
'{"status":"SUCCESS","output":{...}}'
Verdict Handling:
| Verdict | Action |
|---|---|
| APPROVED | Commit and return results to user |
| WARNING | Review issues, fix if possible, then commit |
| BLOCKED | Do NOT return results. Report failure. |
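A minimal dispatcher for the verdict table (the action names are illustrative; only the three verdict strings come from this document):

```python
def handle_verdict(verdict: str) -> str:
    """Map a supervisor verdict to the required orchestrator action."""
    actions = {
        "APPROVED": "commit",             # commit final state, return results
        "WARNING": "review_then_commit",  # fix issues if possible, then commit
        "BLOCKED": "abort",               # do NOT return results; report failure
    }
    try:
        return actions[verdict]
    except KeyError:
        raise ValueError(f"Unknown verdict: {verdict!r}")

print(handle_verdict("APPROVED"))  # commit
```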
Every agent in the swarm operates with three memory layers, each with a different scope and lifetime:
| Layer | Name | Lifetime | Managed by |
|---|---|---|---|
| 1 | Agent context | Ephemeral — current task only | Platform (per-session) |
| 2 | Blackboard | TTL-scoped — shared across agents | scripts/blackboard.py |
| 3 | Project context | Persistent — survives all sessions | scripts/context_manager.py |
Each agent's own context window: the current task instructions, conversation history, and immediate working memory. Managed automatically by the OpenClaw/LLM platform. Nothing to configure.
A shared markdown file (swarm-blackboard.md) for real-time cross-agent coordination: task results, grant tokens, status flags, and TTL-scoped cache entries. Agents read and write via scripts/blackboard.py. Entries expire automatically.
A JSON file (data/project-context.json) that holds information every agent should know, regardless of what session or task is running:
python {baseDir}/scripts/context_manager.py init \
--name "MyProject" \
--description "Multi-agent workflow automation" \
--version "1.0.0"
python {baseDir}/scripts/context_manager.py inject
Copy the output block to the top of your agent's system prompt. Every agent that receives this block shares the same long-term project awareness.
python {baseDir}/scripts/context_manager.py update \
--section decisions \
--add '{"decision": "Use atomic blackboard commits", "rationale": "Prevent race conditions in parallel agents"}'
# Mark a milestone complete
python {baseDir}/scripts/context_manager.py update \
--section milestones --complete "Ship v2.0"
# Add a planned milestone
python {baseDir}/scripts/context_manager.py update \
--section milestones --add '{"planned": "Integrate vector memory"}'
python {baseDir}/scripts/context_manager.py update \
--section stack \
--set '{"language": "Python", "runtime": "Python 3.11", "framework": "SwarmOrchestrator"}'
python {baseDir}/scripts/context_manager.py update \
--section banned \
--add "Direct database writes from agent scripts (use permission gating)"
Always initialize a budget before any multi-agent task:
python {baseDir}/scripts/swarm_guard.py budget-init \
--task-id "task_001" \
--budget 10000 \
--description "Q4 Financial Analysis"
Always run the budget guard before delegating any task:
# 1. Check budget (this skill's Python script)
python {baseDir}/scripts/swarm_guard.py intercept-handoff \
--task-id "task_001" --from orchestrator --to data_analyst \
--message "Analyze Q4 revenue data"
# 2. If result.allowed == true, proceed with delegation via your platform's built-in tools.
# If result.allowed == false, stop — budget exceeded or handoff limit reached.
Before accessing SAP or Financial APIs, evaluate the request:
# Run the permission checker script
python {baseDir}/scripts/check_permission.py \
--agent "data_analyst" \
--resource "DATABASE" \
--justification "Need Q4 invoice data for quarterly report" \
--scope "read:invoices"
The script will output a grant token if approved, or denial reason if rejected.
Read/write coordination state:
# Write to blackboard
python {baseDir}/scripts/blackboard.py write "task:q4_analysis" '{"status": "in_progress", "agent": "data_analyst"}'
# Read from blackboard
python {baseDir}/scripts/blackboard.py read "task:q4_analysis"
# List all entries
python {baseDir}/scripts/blackboard.py list
When delegating tasks between agents, always run the budget guard first.
# Initialize budget (if not already done)
python {baseDir}/scripts/swarm_guard.py budget-init --task-id "task_001" --budget 10000
# Check current status
python {baseDir}/scripts/swarm_guard.py budget-check --task-id "task_001"
Common agent types:
| Agent | Specialty |
|---|---|
| data_analyst | Data processing, SQL, analytics |
| strategy_advisor | Business strategy, recommendations |
| risk_assessor | Risk analysis, compliance checks |
| orchestrator | Coordination, task decomposition |
# Check budget AND handoff limits before delegating
python {baseDir}/scripts/swarm_guard.py intercept-handoff \
--task-id "task_001" \
--from orchestrator \
--to data_analyst \
--message "Analyze Q4 data" \
--artifact # Include if expecting output
If ALLOWED: proceed with delegation via your platform's own tools.
If BLOCKED: stop — budget exceeded or handoff limit reached; do not delegate.
Include these fields in your delegation:
After delegation completes, read results from the blackboard:
python {baseDir}/scripts/blackboard.py read "task:001:data_analyst"
Tokens are audit scoring outputs only. Grant tokens from check_permission.py are NOT authenticated credentials and must NOT be used as real access control. They are advisory hints based on a local scoring model. Require a separate authenticated identity and explicit human approval before accessing PAYMENTS, DATABASE, or FILE_EXPORT resources.
Always score permission before accessing:
DATABASE — Internal database / data store (abstract label — no external credentials)
PAYMENTS — Financial/payment data services (abstract label — requires --confirm-high-risk)
EMAIL — Email sending capability (abstract label)
FILE_EXPORT — Exporting data to local files (abstract label — requires --confirm-high-risk)
Note: These are abstract local resource type names used by check_permission.py. No external API credentials are required or used — all evaluation runs locally.
| Factor | Weight | Criteria |
|---|---|---|
| Justification | 40% | Must explain specific task need |
| Trust Level | 30% | Agent's established trust score |
| Risk Assessment | 30% | Resource sensitivity + scope breadth |
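The weighting table above suggests a simple linear scoring model. A sketch of that idea (the threshold, the [0, 1] normalization, and the sub-score inputs are assumptions — the real scorer lives in check_permission.py):

```python
def permission_score(justification_score: float,
                     trust_score: float,
                     risk_score: float) -> float:
    """Weighted sum per the factor table: 40% justification, 30% trust, 30% risk.
    Each sub-score is assumed normalized to [0, 1]; risk_score is treated as a
    safety score, so higher means lower risk."""
    return 0.4 * justification_score + 0.3 * trust_score + 0.3 * risk_score

APPROVAL_THRESHOLD = 0.7  # assumed cutoff, not documented

score = permission_score(0.9, 0.8, 0.6)
print(round(score, 2), score >= APPROVAL_THRESHOLD)  # 0.78 True
```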
# Request permission
python {baseDir}/scripts/check_permission.py \
--agent "your_agent_id" \
--resource "PAYMENTS" \
--justification "Generating quarterly financial summary for board presentation" \
--scope "read:revenue,read:expenses"
# Output if approved:
# ✅ GRANTED
# Token: grant_a1b2c3d4e5f6
# Expires: 2026-02-04T15:30:00Z
# Restrictions: read_only, no_pii_fields, audit_required
# Output if denied:
# ❌ DENIED
# Reason: Justification is insufficient. Please provide specific task context.
| Resource | Default Restrictions |
|---|---|
| DATABASE | read_only, max_records:100 |
| PAYMENTS | read_only, no_pii_fields, audit_required |
| EMAIL | rate_limit:10_per_minute |
| FILE_EXPORT | anonymize_pii, local_only |
The blackboard (swarm-blackboard.md) is a markdown file for agent coordination:
# Swarm Blackboard
Last Updated: 2026-02-04T10:30:00Z
## Knowledge Cache
### task:q4_analysis
{"status": "completed", "result": {...}, "agent": "data_analyst"}
### cache:revenue_summary
{"q4_total": 1250000, "growth": 0.15}
# Write with TTL (expires after 1 hour)
python {baseDir}/scripts/blackboard.py write "cache:temp_data" '{"value": 123}' --ttl 3600
# Read (returns null if expired)
python {baseDir}/scripts/blackboard.py read "cache:temp_data"
# Delete
python {baseDir}/scripts/blackboard.py delete "cache:temp_data"
# Get full snapshot
python {baseDir}/scripts/blackboard.py snapshot
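The TTL semantics shown above can be sketched as follows (an illustration of the expiry rule, not blackboard.py's actual implementation, which persists to a markdown file):

```python
import time

class TTLStore:
    """Dict-backed sketch of TTL-scoped blackboard entries."""

    def __init__(self):
        self._entries = {}

    def write(self, key, value, ttl=None):
        # ttl is seconds from now; None means the entry never expires
        expires_at = time.time() + ttl if ttl is not None else None
        self._entries[key] = (value, expires_at)

    def read(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and time.time() >= expires_at:
            del self._entries[key]  # lazy cleanup on read
            return None             # expired entries read as null
        return value

bb = TTLStore()
bb.write("cache:temp_data", {"value": 123}, ttl=3600)
print(bb.read("cache:temp_data"))  # {'value': 123}
```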
For tasks requiring multiple agent perspectives:
Merge — Combine all agent outputs into a unified result. Example: ask data_analyst AND strategy_advisor to both analyze the dataset, then merge their insights into a comprehensive report.
Vote — Use when you need consensus: pick the result with the highest confidence.
First Success — Use for redundancy: take the first successful result.
Pipeline — Sequential processing: the output of one feeds into the next.
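The fan-in strategies above can be sketched in Python, the skill's script language (names follow the TypeScript module mentioned in this section; the implementations and result schema here are illustrative):

```python
from typing import Callable, Dict, List

# Assumed result shape: {"agent": ..., "output": ..., "confidence": float, "status": ...}
Result = Dict

def merge(results: List[Result]) -> Dict:
    # Combine all agent outputs into one unified result
    return {"outputs": {r["agent"]: r["output"] for r in results}}

def vote(results: List[Result]) -> Result:
    # Consensus: pick the result with the highest confidence
    return max(results, key=lambda r: r.get("confidence", 0.0))

def first_success(results: List[Result]) -> Result:
    # Redundancy: take the first result whose status is success
    return next(r for r in results if r.get("status") == "success")

def pipeline(stages: List[Callable], seed):
    # Sequential processing: output of one stage feeds into the next
    value = seed
    for stage in stages:
        value = stage(value)
    return value

results = [
    {"agent": "data_analyst", "output": {"revenue": 125000}, "confidence": 0.85, "status": "success"},
    {"agent": "risk_assessor", "output": {"risk": "low"}, "confidence": 0.7, "status": "success"},
]
print(vote(results)["agent"])  # data_analyst
```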
TypeScript engine (v4.15.0): These strategies map directly to the FanOutFanIn module (lib/fan-out.ts), which provides merge, vote, firstSuccess, and consensus fan-in strategies with concurrency control. For multi-phase workflows with approval gates, see PhasePipeline (lib/phase-pipeline.ts). For result scoring and threshold filtering, see ConfidenceFilter (lib/confidence-filter.ts). Matcher-based hooks (lib/adapter-hooks.ts) can target specific agents or tools via glob patterns. For sandboxed agent execution, see AgentRuntime (lib/agent-runtime.ts). For large-scale agent coordination, see StrategyAgent (lib/strategy-agent.ts).
# For each delegation below, first run the budget guard:
# python {baseDir}/scripts/swarm_guard.py intercept-handoff --task-id "task_001" --from orchestrator --to <agent> --message "<task>"
# If result.allowed == true, delegate via your platform's own tools.
1. Delegate to data_analyst: "Extract key metrics from Q4 data"
2. Delegate to risk_assessor: "Identify compliance risks in Q4 data"
3. Delegate to strategy_advisor: "Recommend actions based on Q4 trends"
4. Wait for all results and read them from the blackboard
5. Synthesize: Combine metrics + risks + recommendations into executive summary
Use python {baseDir}/scripts/validate_token.py TOKEN to verify grant tokens before use.
Every sensitive action MUST be logged to data/audit_log.jsonl to maintain compliance and enable forensic analysis.
Privacy note: Audit log entries contain agent-provided free-text fields (justifications, descriptions). These are stored locally in data/audit_log.jsonl and never transmitted over the network by this skill. However, do not put PII, passwords, or API keys in justification strings — they persist on disk. Consider periodic log rotation and restricting OS file permissions on the data/ directory.
The scripts automatically log these events:
permission_granted - When access is approved
permission_denied - When access is rejected
permission_revoked - When a token is manually revoked
ttl_cleanup - When expired tokens are purged
result_validated / result_rejected - Swarm Guard validations
{
"timestamp": "2026-02-04T10:30:00+00:00",
"action": "permission_granted",
"details": {
"agent_id": "data_analyst",
"resource_type": "DATABASE",
"justification": "Q4 revenue analysis",
"token": "grant_abc123...",
"restrictions": ["read_only", "max_records:100"]
}
}
# View recent entries (last 10)
tail -10 {baseDir}/data/audit_log.jsonl
# Search for specific agent
grep "data_analyst" {baseDir}/data/audit_log.jsonl
# Count actions by type
cat {baseDir}/data/audit_log.jsonl | jq -r '.action' | sort | uniq -c
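If jq is unavailable, the same per-action count can be done with the stdlib (a sketch; it assumes one JSON object per line, as JSONL requires):

```python
import json
from collections import Counter
from pathlib import Path

def count_actions(log_path: Path) -> Counter:
    """Count audit entries by their 'action' field, skipping malformed lines."""
    counts = Counter()
    for line in log_path.read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            counts[json.loads(line).get("action", "unknown")] += 1
        except json.JSONDecodeError:
            counts["<malformed>"] += 1
    return counts

# Example (path assumed from this document's layout):
# print(count_actions(Path("data/audit_log.jsonl")).most_common())
```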
If you perform a sensitive action manually, log it:
import json
from datetime import datetime, timezone
from pathlib import Path
audit_file = Path("{baseDir}/data/audit_log.jsonl")
entry = {
"timestamp": datetime.now(timezone.utc).isoformat(),
"action": "manual_data_access",
"details": {
"agent": "orchestrator",
"description": "Direct database query for debugging",
"justification": "Investigating data sync issue #1234"
}
}
with open(audit_file, "a") as f:
f.write(json.dumps(entry) + "\n")
Expired permission tokens are automatically tracked. Run periodic cleanup:
# Validate a grant token
python {baseDir}/scripts/validate_token.py grant_a1b2c3d4e5f6
# List expired tokens (without removing)
python {baseDir}/scripts/revoke_token.py --list-expired
# Remove all expired tokens
python {baseDir}/scripts/revoke_token.py --cleanup
# Output:
# 🧹 TTL Cleanup Complete
# Removed: 3 expired token(s)
# Remaining active grants: 2
Best Practice: Run --cleanup at the start of each multi-agent task to ensure a clean permission state.
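The cleanup rule can be sketched as follows (illustrative; the expires_at field name is an assumption consistent with the Expires line in the GRANTED output shown earlier, and revoke_token.py's actual grant schema may differ):

```python
from datetime import datetime, timezone
from typing import Dict, List, Tuple

def split_expired(grants: List[Dict], now: datetime) -> Tuple[List[Dict], List[Dict]]:
    """Partition grants into (active, expired) by their ISO-8601 expires_at."""
    active, expired = [], []
    for grant in grants:
        # Normalize trailing 'Z' so pre-3.11 fromisoformat accepts it
        expires = datetime.fromisoformat(grant["expires_at"].replace("Z", "+00:00"))
        (expired if expires <= now else active).append(grant)
    return active, expired

grants = [
    {"token": "grant_aaa", "expires_at": "2026-02-04T15:30:00Z"},
    {"token": "grant_bbb", "expires_at": "2020-01-01T00:00:00Z"},
]
now = datetime(2026, 1, 1, tzinfo=timezone.utc)
active, expired = split_expired(grants, now)
print(len(active), len(expired))  # 1 1
```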
Two critical issues can derail multi-agent swarms:
Problem: Agents waste tokens "talking about" work instead of doing it.
Prevention:
# Before each handoff, check your budget:
python {baseDir}/scripts/swarm_guard.py check-handoff --task-id "task_001"
# Output:
# 🟢 Task: task_001
# Handoffs: 1/3
# Remaining: 2
# Action Ratio: 100%
Rules enforced:
# Record a handoff (with tax checking):
python {baseDir}/scripts/swarm_guard.py record-handoff \
--task-id "task_001" \
--from orchestrator \
--to data_analyst \
--message "Analyze sales data, output JSON summary" \
--artifact # Include if this handoff produces output
Problem: One agent fails silently, others keep working on bad data.
Prevention - Heartbeats:
# Agents must send heartbeats while working:
python {baseDir}/scripts/swarm_guard.py heartbeat --agent data_analyst --task-id "task_001"
# Check if an agent is healthy:
python {baseDir}/scripts/swarm_guard.py health-check --agent data_analyst
# Output if healthy:
# 💚 Agent 'data_analyst' is HEALTHY
# Last seen: 15s ago
# Output if failed:
# 💔 Agent 'data_analyst' is UNHEALTHY
# Reason: STALE_HEARTBEAT
# → Do NOT use any pending results from this agent.
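The staleness rule behind health-check can be sketched as (the threshold is an assumption; swarm_guard.py's real cutoff is not documented here):

```python
import time
from typing import Optional

STALE_AFTER_SECONDS = 60  # assumed threshold

def health_status(last_heartbeat: float, now: Optional[float] = None) -> str:
    """Return HEALTHY or UNHEALTHY based on heartbeat age in seconds."""
    now = time.time() if now is None else now
    age = now - last_heartbeat
    return "HEALTHY" if age <= STALE_AFTER_SECONDS else "UNHEALTHY"

print(health_status(last_heartbeat=1000.0, now=1015.0))  # HEALTHY (seen 15s ago)
print(health_status(last_heartbeat=1000.0, now=1100.0))  # UNHEALTHY (stale)
```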
Prevention - Result Validation:
# Before using another agent's result, validate it:
python {baseDir}/scripts/swarm_guard.py validate-result \
--task-id "task_001" \
--agent data_analyst \
--result '{"status": "success", "output": {"revenue": 125000}, "confidence": 0.85}'
# Output:
# ✅ RESULT VALID
# → APPROVED - Result can be used by other agents
Required result fields: status, output, confidence
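The required-field check can be sketched as follows (swarm_guard.py's real validate-result checks may be stricter; the [0, 1] confidence range is an assumption):

```python
from typing import Dict, List

REQUIRED_FIELDS = ("status", "output", "confidence")

def validate_result(result: Dict) -> List[str]:
    """Return a list of problems; an empty list means the result is usable."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in result]
    conf = result.get("confidence")
    if conf is not None and not (0.0 <= conf <= 1.0):
        problems.append("confidence must be within [0, 1]")
    return problems

print(validate_result({"status": "success", "output": {"revenue": 125000}, "confidence": 0.85}))  # []
```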
Before finalizing any task, run supervisor review:
python {baseDir}/scripts/swarm_guard.py supervisor-review --task-id "task_001"
# Output:
# ✅ SUPERVISOR VERDICT: APPROVED
# Task: task_001
# Age: 1.5 minutes
# Handoffs: 2
# Artifacts: 2
Verdicts:
APPROVED - Task healthy, results usable
WARNING - Issues detected, review recommended
BLOCKED - Critical failures, do NOT use results
Use sessions_list (OpenClaw platform built-in) to see available sessions.
This skill is part of the larger Network-AI project. See the repository for full documentation on the permission system, blackboard schema, and trust-level calculations.