## Install

```
openclaw skills install specclaw
```

Spec-driven development framework for OpenClaw: propose features, generate specs, spawn coding agents, validate implementations.

SpecClaw brings structured, spec-driven development to OpenClaw agents. It manages the full lifecycle: propose → plan → build → verify → archive.
## Directory layout

When initialized (`.specclaw/` exists in project root):

```
.specclaw/
├── config.yaml                # Project configuration
├── STATUS.md                  # Project dashboard (auto-generated)
├── patterns.md                # Recurring pattern registry (cross-change)
└── changes/
    ├── <change-name>/
    │   ├── proposal.md        # Problem + solution + scope
    │   ├── spec.md            # Requirements + acceptance criteria
    │   ├── design.md          # Technical approach + file map
    │   ├── tasks.md           # Ordered tasks with status markers
    │   ├── status.md          # Progress tracking
    │   ├── errors.md          # Build error journal (auto-generated on failures)
    │   ├── learnings.md       # Build learnings (spec gaps, patterns, insights)
    │   └── verify-report.md   # Verification results (auto-generated)
    └── archive/               # Completed changes
```
## Commands

The user triggers commands conversationally. Recognize these patterns:
### specclaw init

Trigger: "specclaw init", "initialize specclaw", "set up spec-driven development"
- Create the `.specclaw/` directory structure
- Create `config.yaml` from template (see `templates/config.yaml`)
- Generate `STATUS.md`
- Add `.specclaw/` tracking to git

### specclaw propose "<idea>"

Trigger: "specclaw propose", "propose a change", "new feature proposal"
- Create `.specclaw/changes/<slugified-name>/proposal.md` from template
- Update `STATUS.md`
- GitHub sync (if `github.sync` is true): Run `bash skill/scripts/gh-sync.sh create .specclaw <change>` to create a GitHub Issue for the proposal. (`gh-sync.sh create` requires `proposal.md` — validation is enforced by `validate-change.sh`.)

### specclaw plan <change>

Trigger: "specclaw plan", "plan the feature", "generate spec for"
1. Run `bash skill/scripts/validate-change.sh .specclaw <change> plan`. If it fails, report missing prerequisites and stop.
2. Generate:
   - `spec.md` — functional requirements, acceptance criteria, edge cases
   - `design.md` — technical approach, architecture, file changes map
   - `tasks.md` — ordered implementation tasks with dependencies
3. GitHub sync (if enabled): Run `bash skill/scripts/gh-sync.sh update .specclaw <change>` to add the task checklist to the GitHub Issue.

### specclaw build <change>

Trigger: "specclaw build", "implement the feature", "start building"
This is where OpenClaw shines. Follow this execution flow exactly:
Step 1: Run `bash skill/scripts/validate-change.sh .specclaw <change> build`. If it fails, report missing prerequisites and stop.
Step 2: Run the setup script to parse config, create a git branch, and get build configuration:

```
bash skill/scripts/build.sh setup .specclaw <change_name>
```
This returns JSON config including parallel_tasks, models.coding, git.strategy, and notifications.channel. Capture this output — you'll need parallel_tasks and model values throughout the build.
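For instance, the fields could be captured with `jq`. This is a sketch under the assumption that the setup JSON matches the shape described above; the literal below is a stand-in for the script's real output.

```shell
# Hypothetical sample of the JSON returned by `build.sh setup` (shape assumed
# from the fields named above; in a real build this comes from the script:
#   config=$(bash skill/scripts/build.sh setup .specclaw "$change")
config='{"parallel_tasks":3,"models":{"coding":"codex"},"git":{"strategy":"worktree-per-change"},"notifications":{"channel":"telegram"},"worktree_path":".specclaw/worktrees/dark-mode"}'

# Extract the values the build loop needs throughout.
parallel_tasks=$(printf '%s' "$config" | jq -r '.parallel_tasks')
coding_model=$(printf '%s' "$config" | jq -r '.models.coding')
worktree=$(printf '%s' "$config" | jq -r '.worktree_path // empty')

echo "parallel=$parallel_tasks model=$coding_model worktree=$worktree"
```

Capturing the JSON once into a variable avoids re-running the setup script (which creates branches/worktrees) just to re-read a field.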
Worktree strategy: When git.strategy is "worktree-per-change", setup creates an isolated worktree at .specclaw/worktrees/<change>/. The worktree_path from the config JSON should be used as the cwd parameter when spawning coding agents via sessions_spawn, ensuring each change's agents work in complete isolation.
Parallel changes: With worktree-per-change strategy, multiple changes can be built simultaneously since each has its own worktree. No branch switching required.
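To illustrate what the worktree strategy amounts to in plain git (the setup script does all of this for you; the repo, branch, and paths below are illustrative):

```shell
# Throwaway repo to demonstrate the mechanism behind worktree-per-change.
git init -q demo && cd demo
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init

# One branch plus one isolated worktree per change: agents are spawned with
# cwd set to the worktree path, so multiple changes can build in parallel
# without any branch switching in the main checkout.
git branch specclaw/dark-mode
git worktree add -q ../demo-worktrees/dark-mode specclaw/dark-mode
git worktree list
```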
Send a build started notification:

```
🦞 **Build Started**
**Change:** <change_name>
**Branch:** specclaw/<change_name>
**Tasks:** <total_count> across <wave_count> waves
```
Get all actionable tasks:

```
bash skill/scripts/parse-tasks.sh --status pending .specclaw/changes/<change>/tasks.md
```
This outputs a JSON array:

```
[{"id": "T1", "title": "...", "wave": 1, "depends": [], "files": [...], "estimate": "small"}, ...]
```
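The totals for the build-started notification can be derived from this array. A sketch with `jq`, using an inline sample in the documented shape (a real run would pipe `parse-tasks.sh` output instead):

```shell
# Inline stand-in for parse-tasks.sh output (same shape as documented above).
tasks='[{"id":"T1","wave":1,"depends":[]},{"id":"T2","wave":1,"depends":[]},{"id":"T3","wave":2,"depends":["T1"]}]'

total=$(printf '%s' "$tasks" | jq 'length')                 # task count
wave_count=$(printf '%s' "$tasks" | jq '[.[].wave] | max')  # highest wave number
wave1_ids=$(printf '%s' "$tasks" | jq -r '[.[] | select(.wave == 1) | .id] | join(",")')

echo "$total tasks across $wave_count waves (wave 1: $wave1_ids)"
```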
For retries (re-running build on a change with prior failures):
```
bash skill/scripts/parse-tasks.sh --status failed .specclaw/changes/<change>/tasks.md
```
Reset failed tasks to pending before re-executing:
```
bash skill/scripts/update-task-status.sh .specclaw/changes/<change>/tasks.md <TASK_ID> pending
```
Then re-parse with --status pending and continue from the appropriate wave.
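A sketch of the reset loop, with an inline stand-in for the failed-task JSON (the comments show where the real script calls would go):

```shell
# Stand-in for:
#   bash skill/scripts/parse-tasks.sh --status failed .specclaw/changes/<change>/tasks.md
failed='[{"id":"T3","wave":2},{"id":"T4","wave":3}]'

reset=""
for id in $(printf '%s' "$failed" | jq -r '.[].id'); do
  # Real run: bash skill/scripts/update-task-status.sh .specclaw/changes/<change>/tasks.md "$id" pending
  echo "reset $id -> pending"
  reset="$reset$id "
done
```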
Step 3: Execute tasks wave-by-wave. For each wave number (1, 2, 3...):
a. Filter tasks for this wave:

```
bash skill/scripts/parse-tasks.sh --wave N --status pending .specclaw/changes/<change>/tasks.md
```
If no tasks returned for this wave, the build is complete — skip to Step 4.
Skip blocked tasks: If a task's dependency failed in a prior wave, skip it and mark it failed:

```
bash skill/scripts/update-task-status.sh .specclaw/changes/<change>/tasks.md <TASK_ID> failed
```
b. For each task in the wave (up to parallel_tasks from config):
Mark in-progress:

```
bash skill/scripts/update-task-status.sh .specclaw/changes/<change>/tasks.md <TASK_ID> in_progress
```
Build the context payload:

```
bash skill/scripts/build-context.sh .specclaw <change> <TASK_ID>
```
This outputs a complete context string containing: spec sections, design sections, task details, relevant source file contents, and constraints. Use this output directly as the agent's task.
Spawn a coding agent:

```
sessions_spawn(
  task: <output from build-context.sh>,
  label: "specclaw-<change>-<task_id>",
  mode: "run",
  model: <models.coding from config>
)
```
c. Yield and wait:
After spawning all tasks in the wave batch, call `sessions_yield` to wait for agent completions. Results auto-announce back to you.
d. Process completed agents:
For each agent that succeeded:
Mark complete:

```
bash skill/scripts/update-task-status.sh .specclaw/changes/<change>/tasks.md <TASK_ID> complete
```
If this task previously failed (was `[!]` before): Run `bash skill/scripts/log-error.sh .specclaw <change> --resolve <task_id>`
Git commit the changes:

```
bash skill/scripts/build.sh commit .specclaw <change> <TASK_ID> "<task_title>" <files...>
```
Send a task complete notification:

```
✅ **Task Complete:** <TASK_ID> — <task_title>
**Change:** <change_name> | **Wave:** <N>/<total_waves>
```
e. Process failed agents:
For each agent that failed:
Mark failed:

```
bash skill/scripts/update-task-status.sh .specclaw/changes/<change>/tasks.md <TASK_ID> failed
```
Log the error: Run `bash skill/scripts/log-error.sh .specclaw <change> <task_id> <wave> <agent_label> "<failure summary>"` — pipe agent error output if available
Log the error in status.md with the failure reason
Send a task failed notification:

```
❌ **Task Failed:** <TASK_ID> — <task_title>
**Change:** <change_name> | **Wave:** <N>/<total_waves>
**Error:** <brief failure reason>
```
Mark all dependent tasks in later waves as skipped/failed — they cannot proceed
GitHub sync (if enabled): Run `bash skill/scripts/gh-sync.sh comment .specclaw <change> "❌ Task <task_id> failed: <summary>"` to log the error on the issue.
f. GitHub sync (if enabled): Run `bash skill/scripts/gh-sync.sh update .specclaw <change>` to update task checkboxes.
g. Repeat for the next wave number until no pending tasks remain.
Step 4: Run the finalize script to execute tests and merge the branch:

```
bash skill/scripts/build.sh finalize .specclaw <change_name>
```
This runs the configured test_command (if any) and merges the branch per git.strategy.
Step 5: If `automation.post_build_review` is true in config, run an automated review before updating the dashboard:
a. Scope deviation check:
Compare files actually changed against files declared in tasks:
```
# Get files changed since pre-build commit (branch point)
git diff --name-only main...HEAD
```
Cross-reference with files listed in each task in tasks.md. Flag any files changed but not declared in any task's Files: field.
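One way to mechanize the cross-reference is `comm(1)` on two sorted lists. A sketch — the file lists below are illustrative stand-ins; in a real run they come from `git diff` and from the `Files:` fields in `tasks.md`:

```shell
# Stand-in for: git diff --name-only main...HEAD | sort
printf '%s\n' src/contexts/ThemeContext.tsx src/styles/variables.css src/utils/color.ts \
  | sort > changed.txt

# Stand-in for the union of Files: fields across all tasks in tasks.md
printf '%s\n' src/contexts/ThemeContext.tsx src/styles/variables.css \
  | sort > declared.txt

# comm -23 keeps lines unique to the first file: changed but never declared.
undeclared=$(comm -23 changed.txt declared.txt)
echo "undeclared: $undeclared"
```

Each undeclared path is a scope deviation to flag in the review.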
b. Review prompt:
Evaluate the build and auto-log findings (~150 words max):

```
🦞 Post-Build Review — <change-name>
Results: X/Y tasks passed, Z failed

Evaluate:
1. Were any spec requirements ambiguous or incomplete?
2. Did the design need adjustment during implementation?
3. Were any files modified outside declared task scope?
4. Did any agents struggle with context or instructions?
5. Any reusable patterns discovered?
```
For each finding, log with:

```
bash skill/scripts/log-learning.sh .specclaw <change> <category> <priority> "<detail>" "<action>"
```
c. Auto-log scope deviations:
For any files changed outside declared task scope, automatically log as `design_gap`:

```
bash skill/scripts/log-learning.sh .specclaw <change> design_gap medium "File <path> modified but not declared in any task" "Review task file declarations for completeness"
```
d. Pattern scan: Run bash skill/scripts/detect-patterns.sh .specclaw scan <change> to check for recurring patterns across changes.
e. If any patterns have recurrence >= 3, alert the user: "⚠️ Pattern PAT-XXX has N occurrences — consider promoting its prevention rule to agent context."
Step 6: Regenerate the project status dashboard:

```
bash skill/scripts/update-status.sh .specclaw
```
Step 7: Send the build summary via the message tool to the configured notification channel:

```
🦞 **Build Complete**
**Change:** <change_name>
**Status:** <succeeded|partial|failed>
**Tasks:** <completed>/<total> complete, <failed> failed, <skipped> skipped
**Branch:** specclaw/<change_name> → merged to <target_branch>
**Duration:** <elapsed time>
```
If any tasks failed, include a remediation section:

```
⚠️ **Failed Tasks:**
- <TASK_ID>: <brief error> — re-run with `specclaw build <change>` to retry
```
Retry behavior — when `specclaw build` is called on a change that has failed tasks:

- Find them with `parse-tasks.sh --status failed`
- Reset each with `update-task-status.sh ... pending`
- Rebuild every retried task's payload with `build-context.sh`. No stale context from prior tasks. This is critical for quality.
- Respect the `parallel_tasks` limit.

### specclaw learn <change> "<insight>"

Trigger: "specclaw learn", "log a learning", "what did we learn", "capture insight"
Capture build learnings — spec gaps, design misses, and patterns discovered during implementation.
Log a learning:

```
bash skill/scripts/log-learning.sh .specclaw <change> <category> <priority> "<detail>" ["<action>"]
```
Categories: spec_gap | design_gap | pattern | best_practice | agent_issue
Priorities: low | medium | high
List learnings for a change:

```
bash skill/scripts/log-learning.sh .specclaw <change> --list
```
Promote a learning (mark for elevation to agent prompts/SKILL.md):

```
bash skill/scripts/log-learning.sh .specclaw <change> --promote <id>
```
When to log: whenever a spec gap, design miss, reusable pattern, best practice, or agent issue surfaces during implementation.
Learnings are stored in .specclaw/changes/<change>/learnings.md and feed into the pattern detection system for cross-change analysis.
specclaw patternsTrigger: "specclaw patterns", "check patterns", "recurring issues", "what keeps happening"
Track recurring patterns across changes — errors and learnings that repeat become prevention rules.
Scan a change for patterns:

```
bash skill/scripts/detect-patterns.sh .specclaw scan <change>
```
Reads errors.md and learnings.md, matches against existing patterns, creates new or increments existing.
List all patterns:

```
bash skill/scripts/detect-patterns.sh .specclaw list [--min-recurrence N]
```
Promote a pattern (mark for elevation to agent prompts):

```
bash skill/scripts/detect-patterns.sh .specclaw promote <pat-id>
```
Auto-promotion: Patterns with 3+ occurrences are flagged ⚠️ — their prevention rules should be added to agent context templates or SKILL.md build instructions.
Pattern registry lives at .specclaw/patterns.md (global, not per-change).
### specclaw verify <change>

Trigger: "specclaw verify", "validate implementation", "check against spec"
Validate that the implementation satisfies the spec's acceptance criteria.
Run `bash skill/scripts/validate-change.sh .specclaw <change> verify`. If it fails (tasks not all complete), report and stop.
Run `bash skill/scripts/verify.sh collect .specclaw <change>` to gather the verification evidence.
Run `bash skill/scripts/verify-context.sh .specclaw <change>` to construct the verification agent's context payload from the evidence + Verify Agent prompt template.
Spawn a verification agent:

```
sessions_spawn(
  task: <verify context payload>,
  model: <config.yaml models.review>,  # default: anthropic/claude-sonnet-4-5
  mode: "run",
  label: "specclaw-verify-<change>"
)
```
Wait for completion via sessions_yield.
Save the agent's output as .specclaw/changes/<change>/verify-report.md.
Run `bash skill/scripts/verify.sh update-status .specclaw <change> <verdict>` where verdict is PASS, FAIL, or PARTIAL (extracted from the report).
Update status.md and run bash skill/scripts/update-status.sh .specclaw to refresh the dashboard.
If `github.sync` is true, post the verification summary as a comment:

```
bash skill/scripts/gh-sync.sh comment .specclaw <change> "<verdict summary>"
```
Send verification results via configured notification channel.
When automation.auto_verify: true in config.yaml, the build flow automatically triggers verification after a successful build (all tasks complete).
If the verdict is FAIL or PARTIAL, surface the findings from verify-report.md so the user can remediate.
### specclaw status

Trigger: "specclaw status", "project status", "what's the progress"
For a specific change: `bash skill/scripts/validate-change.sh .specclaw <change> status`
For the whole project: read `.specclaw/changes/` and present `STATUS.md`

### specclaw archive <change>

Trigger: "specclaw archive", "mark as done", "archive the change"
1. Run `bash skill/scripts/validate-change.sh .specclaw <change> archive`. If it fails, report and stop.
2. Move the change to `.specclaw/changes/archive/YYYY-MM-DD-<change-name>/`
3. Regenerate `STATUS.md`
4. GitHub sync (if enabled): Run `bash skill/scripts/gh-sync.sh close .specclaw <change>` to close the issue.

### specclaw auto

Trigger: "specclaw auto", "autonomous mode", "auto-build"
- Read `STATUS.md` for the next actionable item
- Respect `config.yaml` limits (`max_tasks_per_run`)

## tasks.md format

```markdown
## Tasks

### Wave 1 (no dependencies)
- [ ] `T1` — Create theme context provider
  - Files: `src/contexts/ThemeContext.tsx`
  - Estimate: small
- [ ] `T2` — Add CSS custom properties
  - Files: `src/styles/variables.css`
  - Estimate: small

### Wave 2 (depends on Wave 1)
- [ ] `T3` — Create toggle component
  - Files: `src/components/ThemeToggle.tsx`
  - Depends: T1
  - Estimate: small

### Wave 3 (depends on Wave 2)
- [ ] `T4` — Integration tests
  - Files: `tests/theme.test.ts`
  - Depends: T1, T2, T3
  - Estimate: medium
```

Status markers:

- `[ ]` — pending
- `[~]` — in progress
- `[x]` — complete
- `[!]` — failed (needs remediation)

## Context construction

Context construction is handled by the `build-context.sh` script:
```
bash skill/scripts/build-context.sh .specclaw <change> <TASK_ID>
```
The script automatically assembles a complete context payload containing:
- `spec.md` (requirements, acceptance criteria)
- `design.md` (architecture, approach)
- Task details from `tasks.md`
- Source files listed in the task's `Files:` field

The output is a single string ready to pass directly as the `task` parameter to `sessions_spawn`. Do not manually construct context — always use the script to ensure consistency and freshness.
## Configuration

See `templates/config.yaml` for the full config schema.

Key settings:

- `models.planning` — model for proposals, specs, design (default: opus)
- `models.coding` — model for implementation (default: codex)
- `models.review` — model for verification (default: sonnet)
- `git.strategy` — "branch-per-change", "direct", or "worktree-per-change"
- `notifications.channel` — where to send updates
- `automation.max_tasks_per_run` — limit for auto mode

## GitHub sync

When `github.sync: true` in `config.yaml`, SpecClaw creates a GitHub Issue per change and tracks progress as a task checklist. Requires the `gh` CLI (authenticated) or a `GITHUB_TOKEN` environment variable.
Run `bash skill/scripts/gh-sync.sh setup` to verify auth and create labels.
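Putting the settings together, a `config.yaml` might look roughly like this. This is a hypothetical sketch assembled from the keys named above — `templates/config.yaml` is authoritative, and the channel and task-limit values are illustrative:

```yaml
models:
  planning: opus        # proposals, specs, design
  coding: codex         # implementation agents
  review: sonnet        # verification agent
git:
  strategy: worktree-per-change   # or branch-per-change / direct
notifications:
  channel: telegram     # illustrative value
automation:
  max_tasks_per_run: 5  # illustrative value
  post_build_review: true
  auto_verify: true
github:
  sync: true
```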