Install
openclaw skills install clawditor

Audit an OpenClaw agent workspace and generate standardized evaluation reports, scores, and patches. Use when asked to review memory quality, retrieval efficiency, productive output, reliability, or alignment by scanning memory/logs/configs/git/artifacts and writing eval/exec_summary.md, eval/scorecard.md, and eval/latest_report.json (with deltas if prior eval/history exists).
Act as an OpenClaw Workspace Auditor and Agent Evaluation Harness. Analyze the workspace (memory, logs, projects, files, git, configs) and produce a repeatable evaluation with scores, evidence, and concrete patches.
Compute 5 category scores (0–100) plus an overall weighted score:
Overall = 0.30 × Memory + 0.15 × Retrieval + 0.30 × Productive + 0.15 × Quality + 0.10 × Focus.
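Spelled out, the overall score is just a weighted sum of the five category scores. A minimal sketch in Python (category names lowercased, example values invented for illustration):

```python
# Weights from the formula above; they sum to 1.0.
WEIGHTS = {
    "memory": 0.30,
    "retrieval": 0.15,
    "productive": 0.30,
    "quality": 0.15,
    "focus": 0.10,
}

def overall_score(categories: dict[str, float]) -> float:
    """Weighted sum of the five category scores (each 0-100)."""
    return round(sum(WEIGHTS[name] * categories[name] for name in WEIGHTS), 1)

# Example with made-up scores:
print(overall_score({"memory": 72, "retrieval": 80, "productive": 65,
                     "quality": 70, "focus": 90}))  # -> 72.6
```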
Write all outputs under eval/:
exec_summary.md
scorecard.md
latest_report.json
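The authoritative output shapes live in references/report_schema.md; purely as an illustration, a draft eval/latest_report.json might be written like this (field names here are assumptions, not the real schema):

```python
import json
from datetime import date
from pathlib import Path

# Hypothetical minimal report shape -- the real schema is defined in
# references/report_schema.md, so treat these field names as placeholders.
report = {
    "date": date.today().isoformat(),
    "scores": {"memory": 72, "retrieval": 80, "productive": 65,
               "quality": 70, "focus": 90, "overall": 72.6},
    "findings": [],  # evidence-backed observations
    "patches": [],   # concrete proposed fixes
}

eval_dir = Path("eval")
eval_dir.mkdir(exist_ok=True)
(eval_dir / "latest_report.json").write_text(json.dumps(report, indent=2))
```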
Create or propose:
memory/INDEX.md
memory/YYYY-MM-DD.md (append-only daily)
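As a sketch of the append-only convention (the note and index formats below are assumptions, not a prescribed layout), a daily entry could be added like this:

```python
from datetime import date
from pathlib import Path

def append_daily_note(entry: str, memory_dir: str = "memory") -> Path:
    """Append an entry to today's memory/YYYY-MM-DD.md without rewriting old content."""
    memory = Path(memory_dir)
    memory.mkdir(exist_ok=True)
    note = memory / f"{date.today().isoformat()}.md"
    with note.open("a", encoding="utf-8") as f:
        f.write(entry.rstrip() + "\n")

    # Keep a simple list of daily notes in memory/INDEX.md (format is assumed).
    index = memory / "INDEX.md"
    lines = index.read_text(encoding="utf-8").splitlines() if index.exists() else ["# Memory index", ""]
    link = f"- [{note.name}]({note.name})"
    if link not in lines:
        index.write_text("\n".join(lines + [link]) + "\n", encoding="utf-8")
    return note
```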
Use these helpers to keep audits consistent and cheap to run:
scripts/run_audit.py: run all helper scripts and write draft eval/ outputs.
scripts/workspace_inventory.py: tree, file counts, sizes, largest files.
scripts/memory_dupes.py: near-duplicate paragraph detection for memory/*.md (see the sketch after the reference list).
scripts/log_scan.py: scan logs for errors, timeouts, retries.
scripts/git_stats.py: git head, diff stats, commit cadence.
scripts/validate_report.py: validate eval/latest_report.json shape.
Reference templates:
references/report_schema.md: output templates and JSON schema.
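To make the duplicate check concrete, here is a minimal sketch of the kind of comparison scripts/memory_dupes.py performs, assuming a simple pairwise difflib ratio over paragraphs (the actual helper may use a different method or threshold):

```python
import difflib
from itertools import combinations
from pathlib import Path

def near_duplicate_paragraphs(memory_dir: str = "memory", threshold: float = 0.9):
    """Yield pairs of near-duplicate paragraphs across memory/*.md files."""
    paragraphs = []
    for path in Path(memory_dir).glob("*.md"):
        for para in path.read_text(encoding="utf-8").split("\n\n"):
            para = para.strip()
            if len(para) > 40:  # skip trivially short blocks
                paragraphs.append((path.name, para))

    # O(n^2) pairwise comparison -- fine for a small memory/ directory.
    for (file_a, para_a), (file_b, para_b) in combinations(paragraphs, 2):
        ratio = difflib.SequenceMatcher(None, para_a, para_b).ratio()
        if ratio >= threshold:
            yield file_a, file_b, ratio, para_a[:80]

if __name__ == "__main__":
    for file_a, file_b, ratio, snippet in near_duplicate_paragraphs():
        print(f"{ratio:.2f}  {file_a} <-> {file_b}  {snippet!r}")
```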