## Install

Run `openclaw skills install tdd-ecc`.

Test-driven development with a red-green-refactor loop and a de-sloppify pattern. Use when the user wants to build features or fix bugs using TDD, or mentions "red-green-refactor".

Core principle: tests should verify behavior through public interfaces, not implementation details. Code can change entirely; tests shouldn't.
Good tests are integration-style: they exercise real code paths through public APIs. They describe what the system does, not how it does it. A good test reads like a specification - "user can checkout with valid cart" tells you exactly what capability exists. These tests survive refactors because they don't care about internal structure.
Bad tests are coupled to implementation. They mock internal collaborators, test private methods, or verify through external means (like querying a database directly instead of using the interface). The warning sign: your test breaks when you refactor, but behavior hasn't changed. If you rename an internal function and tests fail, those tests were testing implementation, not behavior.
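The contrast can be sketched with a toy example (all names here — `createCart`, `add`, `total` — are hypothetical, not from any real codebase):

```javascript
// Hypothetical cart module: state is private, behavior is exposed
// only through the public API.
function createCart() {
  const items = [];
  return {
    add(name, price) { items.push({ name, price }); },
    total() { return items.reduce((sum, item) => sum + item.price, 0); },
  };
}

// GOOD: reads like a specification ("user can total a cart") and only
// exercises the public interface. Rename or restructure the internals
// and this test still passes.
const cart = createCart();
cart.add("book", 10);
cart.add("pen", 2);
console.assert(cart.total() === 12, "user can total a cart with items");

// BAD (for contrast): asserting on the private `items` array, or mocking
// an internal helper, would break on any refactor even though the
// observable behavior never changed.
```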
See tests.md for examples and mocking.md for mocking guidelines.
DO NOT write all tests first, then all implementation. This is "horizontal slicing" - treating RED as "write all tests" and GREEN as "write all code." It produces crap tests, because every test is written before you've learned anything from implementing.
Correct approach: Vertical slices via tracer bullets. One test → one implementation → repeat. Each test responds to what you learned from the previous cycle. Because you just wrote the code, you know exactly what behavior matters and how to verify it.
WRONG (horizontal):
RED: test1, test2, test3, test4, test5
GREEN: impl1, impl2, impl3, impl4, impl5
RIGHT (vertical):
RED→GREEN: test1→impl1
RED→GREEN: test2→impl2
RED→GREEN: test3→impl3
...
Before writing any code:
Ask: "What should the public interface look like? Which behaviors are most important to test?"
You can't test everything. Confirm with the user exactly which behaviors matter most. Focus testing effort on critical paths and complex logic, not every possible edge case.
Write ONE test that confirms ONE thing about the system:

- RED: Write a test for the first behavior → test fails
- GREEN: Write minimal code to pass → test passes

This is your tracer bullet - it proves the path works end-to-end.
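A minimal sketch of one tracer-bullet cycle, using a hypothetical `slugify` function:

```javascript
// RED: the test for the first behavior. Run this before slugify exists
// (or against a stub) and the assertion fails -- that failure is the point.
function testSlugify() {
  console.assert(slugify("Hello World") === "hello-world", "titles become slugs");
}

// GREEN: the minimal implementation that makes the test pass --
// no options, no edge cases we don't have a test for yet.
function slugify(title) {
  return title.toLowerCase().replace(/\s+/g, "-");
}

testSlugify(); // the tracer bullet now proves the path end-to-end
```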
For each remaining behavior:

- RED: Write the next test → fails
- GREEN: Minimal code to pass → passes
Rules:

- After all tests pass, look for refactor candidates.
- Never refactor while RED. Get to GREEN first.
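A toy illustration of why refactoring only while GREEN is safe: the behavior test below doesn't care which implementation sits behind `formatName` (a hypothetical example):

```javascript
// Behavior under test: full-name formatting.
let formatName = (first, last) => first + " " + last;

// The behavior test only sees inputs and outputs.
const check = () =>
  console.assert(formatName("Ada", "Lovelace") === "Ada Lovelace", "formats full name");

check(); // GREEN before the refactor

// Refactor while GREEN: replace the internals entirely...
formatName = (first, last) => [first, last].join(" ");

check(); // ...still GREEN, because the test verified behavior, not structure
```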
---
## The De-Sloppify Pattern
**An add-on pattern for TDD workflows.** Add a dedicated cleanup/refactor step after each implementation phase.
### The Problem
When you implement with TDD, LLMs take "write tests" too literally:
- Tests that verify TypeScript's type system works (testing `typeof x === 'string'`)
- Overly defensive runtime checks for things the type system already guarantees
- Tests for framework behavior rather than business logic
- Excessive error handling that obscures the actual code
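A sketch of what this slop looks like in practice (names like `applyDiscount` are illustrative only):

```javascript
// SLOP: a "test" that verifies the JavaScript language, not the product.
const price = 9.99;
console.assert(typeof price === "number"); // typeof always works; delete this

// SLOP: a defensive check for a state the caller contract (or, in
// TypeScript, the type system) already rules out.
function applyDiscount(total, pct) {
  if (typeof total !== "number") throw new TypeError("unreachable"); // delete this
  return total * (1 - pct);
}

// KEEP: a business-logic assertion -- the behavior users actually rely on.
console.assert(applyDiscount(200, 0.5) === 100, "half-price discount applies");
```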
### Why Not Negative Instructions?
Adding "don't test type systems" or "don't add unnecessary checks" to the implementer prompt has downstream effects:
- The model becomes hesitant about ALL testing
- It skips legitimate edge case tests
- Quality degrades unpredictably
### The Solution: Separate Pass
Instead of constraining the implementer, let it be thorough. Then add a focused cleanup agent:
```bash
# Step 1: Implement (let it be thorough)
claude -p "Implement the feature with full TDD. Be thorough with tests."

# Step 2: De-sloppify (separate context, focused cleanup)
claude -p "Review all changes in the working tree. Remove:
- Tests that verify language/framework behavior rather than business logic
- Redundant type checks that the type system already enforces
- Over-defensive error handling for impossible states
- Console.log statements
- Commented-out code
Keep all business logic tests. Run the test suite after cleanup to ensure nothing breaks."
```

The same pattern extends to a loop over features:

```bash
for feature in "${features[@]}"; do
  # Implement
  claude -p "Implement $feature with TDD."

  # De-sloppify
  claude -p "Cleanup pass: review changes, remove test/code slop, run tests."

  # Verify
  claude -p "Run build + lint + tests. Fix any failures."

  # Commit
  claude -p "Commit with message: feat: add $feature"
done
```
Rather than adding negative instructions, which have downstream quality effects, add a separate de-sloppify pass. Two focused agents outperform one constrained agent.
## Cleanup Pass Checklist
### Tests to Remove
- [ ] Tests verifying language features (TypeScript types, JS prototypes)
- [ ] Tests verifying framework behavior (React rendering, Next.js routing)
- [ ] Tests for impossible states (already prevented by type system)
- [ ] Duplicate test coverage (same scenario tested multiple ways)
### Code to Remove
- [ ] Redundant type guards after TypeScript checks
- [ ] Unnecessary runtime validations
- [ ] Console.log statements
- [ ] Commented-out code
- [ ] Dead code (unused functions/imports)
### What to Keep
- [ ] Business logic tests
- [ ] Integration tests
- [ ] Edge case handling for real scenarios
- [ ] Security validations
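As a sketch, here is a hypothetical before/after of such a cleanup (all names are illustrative):

```javascript
// BEFORE: the business logic is buried in slop.
function totalAfterTax(subtotal, rate) {
  console.log("totalAfterTax called"); // debug noise -> remove
  if (typeof subtotal !== "number") {  // redundant guard -> remove
    throw new TypeError("unreachable");
  }
  // const old = subtotal + subtotal * rate; // commented-out code -> remove
  return subtotal * (1 + rate);
}

// AFTER: only the business logic remains; behavior is identical.
function totalAfterTaxClean(subtotal, rate) {
  return subtotal * (1 + rate);
}

// The business-logic test survives the cleanup unchanged.
console.assert(totalAfterTax(100, 0.25) === totalAfterTaxClean(100, 0.25));
```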