Install
`openclaw skills install agentbench`

Benchmark your OpenClaw agent's general capabilities across 40 real-world tasks spanning 7 domains: file creation, research, data analysis, multi-step workflows, memory, error handling, and tool efficiency. This is not a coding benchmark; it measures your agent setup and configuration.
When the user says any of these, follow the corresponding instructions:
- `/benchmark` — Run the full benchmark suite (all 40 tasks)
- `/benchmark --fast` — Run only easy and medium tasks (19 tasks)
- `/benchmark --suite <name>` — Run one domain only
- `/benchmark --task <id>` — Run a single task
- `/benchmark --strict` — Tag results as externally verified scoring
- `/benchmark-list` — List all tasks grouped by domain
- `/benchmark-results` — Show results from previous runs
- `/benchmark-compare` — Compare two runs side by side

Flags are combinable: `/benchmark --fast --suite research`
Read task.yaml files from the tasks/ directory in this skill:
tasks/{suite-name}/{task-name}/task.yaml
Each task.yaml contains: name, id, suite, difficulty, mode, user_message, input_files, expected_outputs, expected_metrics, scoring weights.
Filter by --suite or --task if specified. If --fast is set and --task is not, filter to only tasks where difficulty is "easy" or "medium".
Profile is "fast" if --fast was specified, otherwise "full".
List discovered tasks with count and suites.
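If it helps, a rough shell sketch of discovery and filtering follows. The `tasks/` layout comes from this skill; `SKILL_DIR`, `PROFILE`, and the grep-based YAML field read are illustrative assumptions, not part of the spec.

```bash
# Hypothetical sketch: enumerate task.yaml files and apply --fast filtering.
# SKILL_DIR points at this skill's directory; PROFILE is "fast" or "full".
for f in "$SKILL_DIR"/tasks/*/*/task.yaml; do
  # Naive field read; a real YAML parser is safer for complex files.
  difficulty=$(grep -E '^difficulty:' "$f" | awk '{print $2}')
  if [ "$PROFILE" = "fast" ] && [ "$difficulty" != "easy" ] && [ "$difficulty" != "medium" ]; then
    continue
  fi
  echo "$f"
done
```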
Generate a run ID from the current timestamp: YYYYMMDD-HHmmss
Read suite_version from skill.json in this skill directory.
Create the results directory:
agentbench-results/{run-id}/
Announce: Starting AgentBench run {run-id} | Profile: {profile} | Suite version: {suite_version} | Tasks: {count}
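One possible shape for this run setup, assuming `jq` is available and that `suite_version` is a top-level key in skill.json (variable names are illustrative and carried into later sketches):

```bash
# Hypothetical sketch: run ID, suite version, results directory, and the announcement.
RUN_ID=$(date +%Y%m%d-%H%M%S)
SUITE_VERSION=$(jq -r '.suite_version' "$SKILL_DIR/skill.json")
RESULTS_DIR="agentbench-results/$RUN_ID"
mkdir -p "$RESULTS_DIR"
echo "Starting AgentBench run $RUN_ID | Profile: $PROFILE | Suite version: $SUITE_VERSION | Tasks: $TASK_COUNT"
```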
For each task:
Set up workspace:
- Create /tmp/agentbench-task-{task-id}/ as the workspace
- Copy tasks/{suite}/{task}/inputs/ to the workspace (if inputs/ exists)
- If there is a setup.sh: run bash tasks/{suite}/{task}/setup.sh {workspace-path}
- For file-unchanged validators: compute checksums of the specified files after setup, before task execution

Announce: Running: {task.name} [{task.suite}] (difficulty: {task.difficulty})
Record start time (milliseconds): date +%s%3N
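A rough sketch of the setup steps above, assuming `sha256sum` and a GNU `date` that supports `%3N`; `TASK_ID`, `SUITE`, `TASK`, `FILE_UNCHANGED`, and the other variables are illustrative placeholders filled in from task.yaml:

```bash
# Hypothetical per-task workspace setup.
WS="/tmp/agentbench-task-$TASK_ID"
TASK_DIR="$SKILL_DIR/tasks/$SUITE/$TASK"
mkdir -p "$WS"
[ -d "$TASK_DIR/inputs" ] && cp -r "$TASK_DIR/inputs/." "$WS/"
[ -f "$TASK_DIR/setup.sh" ] && bash "$TASK_DIR/setup.sh" "$WS"
# Pre-execution checksums for any file-unchanged validators; the file list
# comes from the task's validators (FILE_UNCHANGED is illustrative).
for f in "${FILE_UNCHANGED[@]}"; do
  sha256sum "$WS/$f"
done > "$WS/.agentbench-pre-checksums"
START_MS=$(date +%s%3N)
echo "Running: $TASK_NAME [$SUITE] (difficulty: $DIFFICULTY)"
```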
Execute the task yourself directly:
- Take the task's user_message and execute it as if a real user sent you the request
- Write an execution-trace.md to the workspace recording the steps you took
Record end time and compute duration
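For example (same GNU `date` assumption as above):

```bash
END_MS=$(date +%s%3N)
TOTAL_TIME_MS=$((END_MS - START_MS))   # becomes total_time_ms below
```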
Collect metrics:
- total_time_ms: end - start
- tool_calls_total: count how many tool calls you made during this task
- errors: count any tool call failures
- planning_ratio: estimate the fraction of time spent reading/thinking vs producing output (approximate is fine)

Layer 0 — Automated Structural Checks (compute directly):
After task execution, check the workspace. For each entry in expected_outputs:
- file-exists: Check if the file exists. 30 points if found, 0 if not.
- content-contains: Read the file and check each required section keyword (case-insensitive). Points proportional to matches found. Pool: 40 points.
- word-count-range: Count words. In range = 30 points. Within 2x the range = 15 points. Outside = 0.
- git-log-contains: Check the git log for expected strings. 30 points if all found, proportional otherwise.
- directory-structure: Check that all paths exist. 30 points if all present, proportional for partial.
- command-output-contains: Run the command and check its output contains all strings. 30 points if it matches, 0 if not.
- file-unchanged: Compare the checksum against the pre-execution checksum. 30 points if unchanged, 0 if modified.
- link-consistency: Scan files for link syntax consistency. 30 points if consistent, 15 if mostly consistent (>70% one style), 0 if mixed.

A shell sketch of a few of these checks follows the Layer 1 note below.

Layer 1 — Metrics Analysis (compute directly): If the task has expected_metrics, score the collected metrics against them.
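A minimal sketch of a few of the Layer 0 checks above, reusing the workspace variables from the earlier sketches; `EXPECTED_FILE`, `MIN_WORDS`, and `MAX_WORDS` stand in for values read from task.yaml, and the "within 2x" arithmetic is one simplified interpretation:

```bash
# Hypothetical Layer 0 checks: file-exists, word-count-range, file-unchanged.
L0_FILE_EXISTS=0
[ -f "$WS/$EXPECTED_FILE" ] && L0_FILE_EXISTS=30

L0_WORDS=0
if [ -f "$WS/$EXPECTED_FILE" ]; then
  WORDS=$(wc -w < "$WS/$EXPECTED_FILE")
  if [ "$WORDS" -ge "$MIN_WORDS" ] && [ "$WORDS" -le "$MAX_WORDS" ]; then
    L0_WORDS=30
  elif [ "$WORDS" -ge $((MIN_WORDS / 2)) ] && [ "$WORDS" -le $((MAX_WORDS * 2)) ]; then
    L0_WORDS=15
  fi
fi

L0_UNCHANGED=30
if [ -s "$WS/.agentbench-pre-checksums" ]; then
  sha256sum --check --status "$WS/.agentbench-pre-checksums" || L0_UNCHANGED=0
fi
```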
Layer 2 — Behavioral Analysis (self-evaluate honestly, 0-100): Score based on HOW you executed:
Instruction Adherence (30 points):
Tool Appropriateness (25 points) — rule-based first:
- `exec cat` instead of `read` to read files
- `exec echo`/`printf` instead of `write` to create files
- `exec sed`/`awk` instead of `edit` for file edits

Approach Quality (25 points) — check read-before-write:
Error Recovery (20 points):
Layer 3 — Output Quality (self-evaluate honestly, 0-100): Score the deliverable:
- Completeness (25): All requirements met? Gaps?
- Accuracy (25): Content correct? Calculations right?
- Formatting (25): Well-structured? Correct file format?
- Polish (25): Would a user be satisfied?
Compute composite score:
score = (L0 × 0.20) + (L1 × 0.35) + (L2 × 0.20) + (L3 × 0.25)
Use weights from task.yaml if specified, otherwise these defaults.
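For example, hypothetical layer scores of L0 = 80, L1 = 70, L2 = 90, L3 = 75 give (80 × 0.20) + (70 × 0.35) + (90 × 0.20) + (75 × 0.25) = 77.25, i.e. a composite of about 77.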
Save task result to agentbench-results/{run-id}/{task-id}/:
- scores.json: All layer scores, composite, breakdown, notes
- metrics.json: Timing, tool calls, errors, planning ratio

Display: {task.name}: {composite}/100 (L0:{l0} L1:{l1} L2:{l2} L3:{l3})
After all tasks:
Generate three files in agentbench-results/{run-id}/:
results.json — Machine-readable with this structure:
{
  "run_id": "20260222-143022",
  "timestamp": "2026-02-22T14:30:22Z",
  "platform": "openclaw",
  "mode": "sandboxed",
  "profile": "full",
  "suite_version": "1.0.0",
  "scoring_method": "self-scored",
  "overall_score": 74,
  "duration_ms": 754000,
  "task_count": 40,
  "metrics": {
    "total_tool_calls": 187,
    "total_errors": 3,
    "avg_planning_ratio": 0.28,
    "est_tokens": 245000
  },
  "domain_scores": {},
  "tasks": []
}
If --strict was used, set scoring_method to "externally-verified".
Integrity signature: After building results.json (without signature field), compute:
SIG=$(echo -n "$CONTENT" | openssl dgst -sha256 -hmac "agentbench-v1-{run_id}-{suite_version}-integrity" | awk '{print $2}')
Add as "signature" field to results.json.
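Putting that together, one possible way to compute and attach the signature, assuming `jq` is available and results.json has been written without the signature field yet (paths reuse the illustrative variables from earlier sketches):

```bash
RESULTS="$RESULTS_DIR/results.json"
CONTENT=$(cat "$RESULTS")
KEY="agentbench-v1-${RUN_ID}-${SUITE_VERSION}-integrity"
SIG=$(echo -n "$CONTENT" | openssl dgst -sha256 -hmac "$KEY" | awk '{print $2}')
# Append the signature field and write the file back in place.
jq --arg sig "$SIG" '. + {signature: $sig}' "$RESULTS" > "$RESULTS.tmp" && mv "$RESULTS.tmp" "$RESULTS"
```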
report.md — Markdown summary: Overall Score, Metrics, Domain Breakdown, Task Details, Top Failures, Recommendations.
report.html — Self-contained HTML dashboard (inline CSS/JS, no external dependencies).
Run teardown.sh if present. Remove temp workspace directories unless --keep-workspace was specified.
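For instance, with the illustrative variables from earlier (the location of teardown.sh is assumed to mirror setup.sh):

```bash
# Hypothetical cleanup step.
[ -f "$TASK_DIR/teardown.sh" ] && bash "$TASK_DIR/teardown.sh" "$WS"
[ "$KEEP_WORKSPACE" = "true" ] || rm -rf /tmp/agentbench-task-*
```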
`/benchmark-list`: Read all task.yaml files, group by suite, and display as:
## file-creation (9 tasks)
- project-scaffold [easy]
- project-proposal [medium]
...
`/benchmark-results`: List all directories in agentbench-results/, and show the run ID, date, overall score, profile, and task count for each.
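A possible way to summarize prior runs from their results.json files (`jq` assumed; field names follow the results.json structure above):

```bash
for r in agentbench-results/*/results.json; do
  jq -r '"\(.run_id)  \(.timestamp)  score=\(.overall_score)  profile=\(.profile)  tasks=\(.task_count)"' "$r"
done
```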
`/benchmark-compare`: Show two runs side by side (overall scores, domain scores, and per-task deltas). Warn if the profiles differ.