Xiaobai Workflow Enforcer

v1.0.0

Xiaobai Workflow Enforcer - Mandatory workflows for AI Agents. Design before code. Test before implement. Verify before claim. Inspired by Superpowers (161K stars).

by Erwin@aptratcn

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for aptratcn/xiaobai-workflow-enforcer.

Prompt preview (Install & Setup):
Install the skill "Xiaobai Workflow Enforcer" (aptratcn/xiaobai-workflow-enforcer) from ClawHub.
Skill page: https://clawhub.ai/aptratcn/xiaobai-workflow-enforcer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install xiaobai-workflow-enforcer

ClawHub CLI


npx clawhub@latest install xiaobai-workflow-enforcer
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
Name and description (enforcing development workflows) match the SKILL.md content. The skill only instructs process steps (design, plan, TDD, verification) and does not request unrelated binaries, credentials, or external services.
Instruction Scope
Instructions are prescriptive about agent behavior (ask questions, produce design, write tests, run pytest, save checkpoints). This is coherent for a workflow enforcer, but it grants the agent discretion to create files and run local commands; the skill does not specify storage paths or safety constraints, so runtime policy/permissions determine actual impact.
Install Mechanism
No install spec and no code files—this is instruction-only, so nothing is written to disk by the skill itself and no external packages are pulled in by the skill.
Credentials
The skill declares no required environment variables, credentials, or config paths. The runtime suggestions (running pytest, python -c, saving files) are consistent with a development workflow and do not demand unrelated secrets or service access.
Persistence & Privilege
`always` is false and the skill is user-invocable; it does not request persistent system-wide privileges or modify other skills' configs. Autonomous invocation is allowed by default, but it is not combined with other risky requests.
Assessment
This skill only contains instructions (no code, no installs, no credentials) and is coherent with its goal of enforcing development workflows. Before installing, consider that the agent will be instructed to create files, run tests/commands (pytest, python), and present outputs — make sure the agent runtime has appropriate file-system and command-execution permissions you are comfortable granting. Also review designs and checkpoint file locations produced by the agent before allowing automated execution, and limit autonomous invocation if you prefer manual approvals for actions that run code or modify your workspace.

Like a lobster shell, security has layers — review code before you run it.

Tags: enforcement, latest, reliability, tdd, workflow
89 downloads
0 stars
1 version
Updated 6d ago
v1.0.0
MIT-0

Xiaobai Workflow Enforcer 🔒

Mandatory workflows for AI Agents. Not suggestions, not "when appropriate" — mandatory.

Inspired by Superpowers (161K stars) which proved that enforced workflows transform chaotic AI outputs into reliable engineering.

Core Philosophy

| Superpowers Principle | Xiaobai Implementation |
|---|---|
| Test-Driven Development | EVR + TDD skill |
| Systematic over ad-hoc | Workflow Checkpoint |
| Complexity reduction | Simplicity Check |
| Evidence over claims | Verification Gate |

Mandatory Workflows

Workflow 1: Pre-Action Design Gate 🔒

Trigger: Before any multi-step task or code creation

Mandatory Steps:

  1. STOP. Don't write code yet.
  2. Ask clarifying questions (minimum 3)
  3. Present design/spec in chunks
  4. Get user sign-off on design
  5. Save design document
❌ Wrong:
User: Build me a scraper
Agent: [Writes code]

✅ Right:
User: Build me a scraper
Agent: Before I code, let me understand:
       1. What site are we scraping?
       2. What data do you need?
       3. How often should it run?
       4. Any rate limits to consider?
       [After answers, presents design]
       Does this design match what you need?

Workflow 2: Implementation Planning 🔒

Trigger: After design approval, before implementation

Mandatory Steps:

  1. Break into 2-5 minute tasks
  2. Each task has: file path, exact code, verification step
  3. Present plan for approval
  4. Save plan to checkpoint file
Plan Format:

## Task 1: Create scraper module (3 min)
- File: src/scraper.py
- Code: [exact code or pseudocode]
- Verify: `python -c "import scraper"`

## Task 2: Add rate limiting (2 min)
- File: src/scraper.py
- Code: [exact changes]
- Verify: Run with test request, check delay

...
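The plan format above can also be kept as structured data, which makes each task's verification step explicit. A minimal sketch, assuming hypothetical field names (`title`, `minutes`, `file`, `verify`) that are not prescribed by the skill:

```python
from dataclasses import dataclass

@dataclass
class Task:
    title: str
    minutes: int   # each task should fit in a 2-5 minute window
    file: str
    verify: str    # the command or check that proves the task is done

def render_plan(tasks):
    """Render a task list in the plan format shown above."""
    lines = []
    for i, t in enumerate(tasks, 1):
        lines.append(f"## Task {i}: {t.title} ({t.minutes} min)")
        lines.append(f"- File: {t.file}")
        lines.append(f"- Verify: `{t.verify}`")
    return "\n".join(lines)

plan = [
    Task("Create scraper module", 3, "src/scraper.py", 'python -c "import scraper"'),
    Task("Add rate limiting", 2, "src/scraper.py", "Run with test request, check delay"),
]
print(render_plan(plan))
```

Saving this rendered plan to the checkpoint file gives the Execution Gate a concrete artifact to read tasks from.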

Workflow 3: Test-First Gate 🔒

Trigger: Before implementing any function

Mandatory Steps:

  1. Write test first
  2. Run test, confirm it FAILS (RED)
  3. Write minimal code to pass
  4. Run test, confirm it PASSES (GREEN)
  5. Refactor if needed
  6. Commit only after GREEN
❌ Wrong:
[Writes function]
[Tests it manually]
"It works"

✅ Right:
1. Write test_function()
2. Run: pytest test_module.py
3. See: FAILED (expected)
4. Write function()
5. Run: pytest test_module.py
6. See: PASSED
7. Commit
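The RED-to-GREEN cycle above can be sketched in a few lines. `slugify` is a hypothetical example function, not part of this skill:

```python
# Step 1: write the test first. Running it before slugify exists
# fails with a NameError -- that is the expected RED phase.
def test_slugify():
    assert slugify("  Hello World ") == "hello-world"

# Step 4: write the minimal code that makes the test pass (GREEN).
def slugify(text):
    return text.strip().lower().replace(" ", "-")

test_slugify()  # Steps 5-6: re-run; it passes, so it is safe to commit.
```

In practice the test lives in its own `test_module.py` and is run with `pytest`, as shown above.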

Workflow 4: Execution Gate 🔒

Trigger: During task execution

Mandatory Steps:

  1. Read task from plan
  2. Execute exactly as planned
  3. Verify (run command, check output)
  4. Update checkpoint
  5. Only then move to next task
Checkpoint Update:
- Task 1: DONE (verified: scraper.py imports successfully)
- Task 2: IN_PROGRESS
- Tasks 3-5: PENDING
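A checkpoint update like the one above could be persisted with a few lines of Python. This is a sketch under assumptions: the skill does not fix a file name or schema, so `checkpoint.json` and the status keys here are illustrative:

```python
import json
from pathlib import Path

CHECKPOINT = Path("checkpoint.json")  # assumed location; the skill leaves this open

def update_checkpoint(task_id, status, evidence=None):
    """Record a task's status (and verification evidence) in the checkpoint file."""
    state = json.loads(CHECKPOINT.read_text()) if CHECKPOINT.exists() else {}
    state[task_id] = {"status": status, "evidence": evidence}
    CHECKPOINT.write_text(json.dumps(state, indent=2))
    return state

update_checkpoint("task-1", "DONE", "scraper.py imports successfully")
state = update_checkpoint("task-2", "IN_PROGRESS")
print(state["task-1"]["status"])
```

Because the file is rewritten after every task, an interrupted session can resume from the last verified step.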

Workflow 5: Verification Gate 🔒

Trigger: Before claiming "done" or "complete"

Mandatory Steps:

  1. Run verification command
  2. Show output to user
  3. Confirm evidence matches claim
  4. Only then say "done"
❌ Wrong:
"Scraper is done!"

✅ Right:
"Scraper implementation complete.

Verification:
- Module imports: ✅
- Test suite passes: ✅ (5/5)
- Sample scrape works: ✅

Evidence:
[Output from test run]

Would you like me to proceed with deployment?"

Workflow Enforcement Protocol

Before Any Action

1. Is this a multi-step task?
   → Yes → Trigger Workflow 1 (Design Gate)

2. Is there a plan?
   → No → Trigger Workflow 2 (Planning)

3. Does this involve code?
   → Yes → Trigger Workflow 3 (Test-First)

4. Is task in progress?
   → Yes → Trigger Workflow 4 (Execution Gate)

5. About to say "done"?
   → Yes → Trigger Workflow 5 (Verification Gate)
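The decision tree above can be sketched as a single dispatch function. The state keys (`multi_step`, `plan`, and so on) are assumed names for illustration, not part of the skill:

```python
def next_gate(state):
    """Return the workflow to trigger for the current agent state."""
    if state.get("multi_step") and not state.get("design_approved"):
        return "Workflow 1: Design Gate"
    if not state.get("plan"):
        return "Workflow 2: Planning"
    if state.get("writing_code") and not state.get("test_written"):
        return "Workflow 3: Test-First"
    if state.get("task_in_progress"):
        return "Workflow 4: Execution Gate"
    if state.get("claiming_done") and not state.get("verified"):
        return "Workflow 5: Verification Gate"
    return "Proceed"

print(next_gate({"multi_step": True}))
```

Checking the gates in this fixed order is what makes the workflows enforced rather than optional.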

Blockers That Must Stop Progress

| Condition | Action |
|---|---|
| No design doc | Don't code; ask questions first |
| No plan | Don't execute; create plan first |
| No test | Don't write the function; write the test first |
| Test failing | Don't continue; fix the code |
| No verification | Don't say "done"; verify first |

Integration with Other Xiaobai Skills

  • EVR Framework — Verification gate implementation
  • Workflow Checkpoint — Plan and progress tracking
  • Skill Quality Eval — Measure workflow compliance
  • Self-Improve — Learn from workflow violations

Anti-Patterns (What This Skill Prevents)

| Anti-Pattern | Why It's Bad | Workflow Fix |
|---|---|---|
| Jumping to code | Solves the wrong problem | Design Gate |
| No plan | Chaotic execution | Planning Gate |
| Write-then-test | Tests that pass trivially | Test-First Gate |
| Skipping verification | Silent failures | Verification Gate |
| Claiming done prematurely | User finds out later | Execution Gate |

Quick Reference Card

Before Coding:    DESIGN → APPROVE → PLAN → APPROVE
While Coding:     TEST(RED) → CODE → TEST(GREEN) → REFACTOR
After Coding:     VERIFY → EVIDENCE → REPORT
Always:           CHECKPOINT after each step

License

MIT
