Output-Driven Dev

v1.0.0

Guides defining success criteria and verification before coding, so deliverables are proven complete through measurable, reproducible evidence.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for amdf01-debug/sw-output-driven-dev.

Prompt Preview: Install & Setup
Install the skill "Output-Driven Dev" (amdf01-debug/sw-output-driven-dev) from ClawHub.
Skill page: https://clawhub.ai/amdf01-debug/sw-output-driven-dev
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install sw-output-driven-dev

ClawHub CLI

Package manager switcher

npx clawhub@latest install sw-output-driven-dev
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description ("define success criteria and verification") align with the SKILL.md. There are no requested env vars, binaries, or installs unrelated to its stated goal.
Instruction Scope
Runtime instructions are limited to defining outputs, writing verification steps, and documenting evidence. The example verification items mention running commands, opening URLs, and checking files, which are appropriate illustrative actions for a verification template. The SKILL.md does not instruct reading unrelated system files, collecting secrets, or sending data to external endpoints outside normal verification activities.
Install Mechanism
No install spec and no code files — nothing is written to disk or downloaded. This is the lowest-risk model for a skill of this type.
Credentials
The skill declares no environment variables, credentials, or config paths. That is proportionate to an instruction-only checklist that only guides how to define and run verification.
Persistence & Privilege
The always flag is false, and the skill is user-invocable with normal autonomous invocation allowed. That is expected for a helper skill; it does not request permanent presence or modify other skills or system-wide settings.
Assessment
This skill is a benign, instruction-only template that helps agents and humans define success criteria and verification steps. It doesn't request credentials or install anything. Practical considerations before installing:

  1. Verification steps may instruct the agent to run commands, access URLs, or read files; ensure the agent's runtime permissions and network access are limited so verification actions cannot reach sensitive data or exfiltrate information.
  2. Review any concrete verification commands the agent intends to run before allowing autonomous invocation.
  3. If you want to restrict automation, keep the skill user-invocable only, or apply agent-level policies that require approval for actions touching the network, filesystem, or external services.

Like a lobster shell, security has layers — review code before you run it.

Tags: agents, development, latest, planning
138 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Output-Driven Development Skill

Trigger

Define success criteria and verification BEFORE coding. Agents prove their work.

Trigger phrases: "define success criteria", "output-driven", "verify before done", "prove it works", "acceptance criteria"

Process

  1. Define output: What exactly should the result look like?
  2. Write verification: How will we prove it works?
  3. Build: Implement the solution
  4. Verify: Run verification, show evidence
  5. Ship: Only after verification passes
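As a minimal sketch, step 4 (Verify) can be captured in a small script that runs the check defined before coding and compares actual output to the expected output. The command and expected value below are hypothetical placeholders, not part of the skill itself:

```shell
#!/usr/bin/env sh
# Sketch of the Verify step: compare the deliverable's actual output
# against the expected output fixed in the Success Criteria up front.
set -eu

expected="42"              # defined in the Success Criteria before building
actual="$(expr 40 + 2)"    # stand-in for the real deliverable's command

if [ "$actual" = "$expected" ]; then
  echo "PASS: got $actual"
else
  echo "FAIL: expected $expected, got $actual" >&2
  exit 1
fi
```

The script exits non-zero on failure, which matches step 5: shipping only happens after the verification run passes.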

Template

# Task: [Description]

## Success Criteria
- [ ] [Specific, measurable criterion 1]
- [ ] [Specific, measurable criterion 2]
- [ ] [Specific, measurable criterion 3]

## Verification Plan
For each criterion, how to verify:
1. [Run command X, expect output Y]
2. [Open URL, see element Z]
3. [Check file, contains content W]

## Build Log
[What was implemented and how]

## Verification Results
- Criterion 1: ✅ PASS — [evidence]
- Criterion 2: ✅ PASS — [evidence]
- Criterion 3: ❌ FAIL — [what went wrong, fix plan]
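As a sketch, the three verification-plan placeholders in the template could become concrete, reproducible shell checks; every command, path, and pattern below is a made-up stand-in:

```shell
#!/usr/bin/env sh
# Hypothetical checks, one per verification-plan item.
set -eu

# 1. Run command X, expect output Y
[ "$(printf 'Y')" = "Y" ] && echo "criterion 1: PASS"

# 2. Open URL, see element Z (fetch the page and search for the element;
#    the URL is a placeholder, so this line stays commented out)
# curl -fsS https://example.invalid/page | grep -q 'element Z' && echo "criterion 2: PASS"

# 3. Check file, contains content W
printf 'content W\n' > /tmp/check-w.txt
grep -q 'content W' /tmp/check-w.txt && echo "criterion 3: PASS"
```

Running such a script and pasting its output into the Verification Results section gives the reproducible evidence the template asks for.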

Rules

  • Never claim "done" without showing verification evidence
  • "Should work" is not verification — run it and show the output
  • If you can't define success criteria, you don't understand the task
  • Verification should be reproducible by anyone
  • Failed verification → fix → re-verify (don't skip)
  • Screenshots, logs, test output > "I checked and it works"
