Test Driven Development

v2.0.0

Test-driven development with red-green-refactor loop and de-sloppify pattern. Use when user wants to build features or fix bugs using TDD, mentions "red-gree...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for huamu668/tdd-ecc.

Prompt Preview: Install & Setup
Install the skill "Test Driven Development" (huamu668/tdd-ecc) from ClawHub.
Skill page: https://clawhub.ai/huamu668/tdd-ecc
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install tdd-ecc

ClawHub CLI


npx clawhub@latest install tdd-ecc
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name, description, and included files are consistent: this is an instruction-only Test-Driven Development guide (red-green-refactor, de-sloppify, refactoring, mocking, tests). There are no unrelated required binaries, env vars, or installs.
Instruction Scope
SKILL.md contains explicit bash examples that invoke an external LLM CLI (e.g., `claude -p "Review all changes in the working tree..."`) and a loop that sends repo changes to that LLM. If an automated agent follows those instructions, it could transmit the repository/working tree (potentially including secrets) to an external service. The guidance also instructs running builds/tests and committing; those steps are reasonable for TDD, but combined with the external-LLM examples they create a plausible exfiltration vector. The bundled mocking.md also contains illustrative examples that reference environment-based secrets (process.env.STRIPE_KEY); those are examples, not declared requirements, but they highlight where secrets could be referenced.
Install Mechanism
No install spec or code files beyond documentation. Instruction-only skills have low installation risk because nothing is downloaded or written by an installer.
Credentials
The skill declares no required environment variables or credentials (none needed for the documented guidance). Some example snippets reference environment variables (e.g., STRIPE_KEY) as illustrative testability guidance, but the skill does not require access to them. Still, following examples naively could cause an agent to read or use such secrets.
Persistence & Privilege
`always` is false, and the skill is user-invocable with normal autonomous invocation allowed. The skill does not request persistent system presence or modify other skills/config; there are no privilege-escalation indicators in the metadata.
What to consider before installing
This skill is primarily a TDD playbook and appears internally consistent, but pay attention to the example workflow that calls an external LLM CLI (claude) with prompts to "review all changes in the working tree." If an agent executes those steps automatically, it could send your source code, tests, or secrets to an external service. Before installing or using this skill:

1. Confirm whether your agent will actually execute the example CLI calls; if so, disable or rewrite them to avoid sending full repo contents (send minimal diffs or sanitized snippets instead).
2. Remove or adapt the claude examples to use an approved internal tool or an audited endpoint, and avoid sending secrets or entire working trees.
3. Review the prompts the skill suggests and redact sensitive data before any outbound call.
4. If you need automated cleanup/review, prefer local tools or a self-hosted LLM with strict data controls.

If you want, I can suggest safer replacements for the external-LLM examples (e.g., run a local linter/test-runner and post only diffs to a review service).

Like a lobster shell, security has layers — review code before you run it.

Tags: latest · quality · tdd · testing · workflow
378 downloads
0 stars
1 version
Updated 1mo ago
v2.0.0
MIT-0

Test-Driven Development

Philosophy

Core principle: Tests should verify behavior through public interfaces, not implementation details. Code can change entirely; tests shouldn't.

Good tests are integration-style: they exercise real code paths through public APIs. They describe what the system does, not how it does it. A good test reads like a specification - "user can checkout with valid cart" tells you exactly what capability exists. These tests survive refactors because they don't care about internal structure.

Bad tests are coupled to implementation. They mock internal collaborators, test private methods, or verify through external means (like querying a database directly instead of using the interface). The warning sign: your test breaks when you refactor, but behavior hasn't changed. If you rename an internal function and tests fail, those tests were testing implementation, not behavior.
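A minimal sketch of the contrast, using a hypothetical Cart and Jest-style syntax:

```typescript
interface Item { sku: string; price: number }

class Cart {
  items: Item[] = []; // implementation detail
  add(item: Item) { this.items.push(item); }
  total() { return this.items.reduce((sum, i) => sum + i.price, 0); }
}

// GOOD: verifies observable behavior through the public interface.
// Survives any internal refactor that preserves behavior.
test("user can total the items in a cart", () => {
  const cart = new Cart();
  cart.add({ sku: "a", price: 5 });
  cart.add({ sku: "b", price: 7 });
  expect(cart.total()).toBe(12);
});

// BAD: reaches into internal state. Rename `items` or switch to a Map
// and this test fails even though behavior is unchanged.
test("cart stores items in an internal array", () => {
  const cart = new Cart();
  cart.add({ sku: "a", price: 5 });
  expect(cart.items).toHaveLength(1);
});
```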

See tests.md for examples and mocking.md for mocking guidelines.

Anti-Pattern: Horizontal Slices

DO NOT write all tests first, then all implementation. This is "horizontal slicing" - treating RED as "write all tests" and GREEN as "write all code."

This produces crap tests:

  • Tests written in bulk test imagined behavior, not actual behavior
  • You end up testing the shape of things (data structures, function signatures) rather than user-facing behavior
  • Tests become insensitive to real changes - they pass when behavior breaks, fail when behavior is fine
  • You outrun your headlights, committing to test structure before understanding the implementation

Correct approach: Vertical slices via tracer bullets. One test → one implementation → repeat. Each test responds to what you learned from the previous cycle. Because you just wrote the code, you know exactly what behavior matters and how to verify it.

WRONG (horizontal):
  RED:   test1, test2, test3, test4, test5
  GREEN: impl1, impl2, impl3, impl4, impl5

RIGHT (vertical):
  RED→GREEN: test1→impl1
  RED→GREEN: test2→impl2
  RED→GREEN: test3→impl3
  ...

Workflow

1. Planning

Before writing any code:

  • Confirm with user what interface changes are needed
  • Confirm with user which behaviors to test (prioritize)
  • Identify opportunities for deep modules (small interface, deep implementation)
  • Design interfaces for testability (see the sketch at the end of this section)
  • List the behaviors to test (not implementation steps)
  • Get user approval on the plan

Ask: "What should the public interface look like? Which behaviors are most important to test?"

You can't test everything. Confirm with the user exactly which behaviors matter most. Focus testing effort on critical paths and complex logic, not every possible edge case.
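
For example, "design interfaces for testability" usually means passing dependencies in behind a small interface rather than reaching for globals. A minimal sketch, with hypothetical names and Jest-style syntax:

```typescript
interface PaymentGateway {
  charge(amountCents: number): Promise<{ ok: boolean }>;
}

// Deep module: one small public method; the payment dependency is
// injected, so tests can supply a trivial fake.
function createCheckout(gateway: PaymentGateway) {
  return {
    async checkout(totalCents: number) {
      if (totalCents <= 0) throw new Error("nothing to charge");
      return gateway.charge(totalCents);
    },
  };
}

test("user can checkout with a valid cart", async () => {
  const fakeGateway: PaymentGateway = { charge: async () => ({ ok: true }) };
  const checkout = createCheckout(fakeGateway);
  await expect(checkout.checkout(1200)).resolves.toEqual({ ok: true });
});
```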

2. Tracer Bullet

Write ONE test that confirms ONE thing about the system:

RED:   Write test for first behavior → test fails
GREEN: Write minimal code to pass → test passes

This is your tracer bullet: it proves the path works end-to-end.
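
A minimal sketch of one tracer-bullet cycle, using a hypothetical slugify() helper and Jest-style syntax:

```typescript
// RED: the first test, written before slugify() exists anywhere.
test("slugify lowercases and hyphenates words", () => {
  expect(slugify("Hello World")).toBe("hello-world");
});

// GREEN: the minimal implementation that makes it pass.
// No punctuation or unicode handling yet; no test demands it.
function slugify(input: string): string {
  return input.toLowerCase().split(/\s+/).join("-");
}
```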

3. Incremental Loop

For each remaining behavior:

RED:   Write next test → fails
GREEN: Minimal code to pass → passes

Rules:

  • One test at a time
  • Only enough code to pass current test
  • Don't anticipate future tests
  • Keep tests focused on observable behavior
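
Continuing the hypothetical slugify() sketch, a second cycle adds one behavior and only enough code to pass both tests (the body shown is the function's new state after the change):

```typescript
// RED: the next behavior, chosen based on what the last cycle revealed.
test("slugify strips punctuation", () => {
  expect(slugify("Hello, World!")).toBe("hello-world");
});

// GREEN: only enough change to pass the tests so far.
function slugify(input: string): string {
  return input
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, "") // drop punctuation; still no unicode handling
    .split(/\s+/)
    .join("-");
}
```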

4. Refactor

After all tests pass, look for refactor candidates:

  • Extract duplication
  • Deepen modules (move complexity behind simple interfaces)
  • Apply SOLID principles where natural
  • Consider what new code reveals about existing code
  • Run tests after each refactor step

Never refactor while RED. Get to GREEN first.
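
A sketch of a behavior-preserving refactor on the same hypothetical slugify(): extract a named helper, rerun the tests, and stay GREEN throughout.

```typescript
// Observable behavior is unchanged, so the existing tests keep
// passing and act as the safety net for the restructuring.
function stripNonSlugChars(input: string): string {
  return input.toLowerCase().replace(/[^a-z0-9\s-]/g, "");
}

function slugify(input: string): string {
  return stripNonSlugChars(input).split(/\s+/).join("-");
}
```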

Checklist Per Cycle

  • One failing test written for one behavior (RED)
  • Minimal code written to make it pass (GREEN)
  • Test exercises the public interface, not internals
  • No code written in anticipation of future tests
  • Refactoring done only while GREEN, with tests rerun after each step

---

## The De-Sloppify Pattern

**An add-on pattern for TDD workflows.** Add a dedicated cleanup/refactor step after each implementation phase.

### The Problem

When you implement with TDD, LLMs take "write tests" too literally, producing slop like the sketch after this list:
- Tests that verify TypeScript's type system works (testing `typeof x === 'string'`)
- Overly defensive runtime checks for things the type system already guarantees
- Tests for framework behavior rather than business logic
- Excessive error handling that obscures the actual code
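
Concretely, the slop tends to look like this (hypothetical code, Jest-style syntax):

```typescript
function getTotal(prices: number[]): number {
  return prices.reduce((sum, p) => sum + p, 0);
}

// SLOP: re-verifies what the TypeScript compiler already guarantees.
test("getTotal returns a number", () => {
  expect(typeof getTotal([1, 2])).toBe("number");
});

// SLOP: defends against a state the signature makes impossible.
function getTotalDefensive(prices: number[]): number {
  if (!Array.isArray(prices)) {
    throw new Error("prices must be an array");
  }
  return prices.reduce((sum, p) => sum + p, 0);
}

// KEEP: an actual business-logic test.
test("getTotal sums item prices", () => {
  expect(getTotal([5, 7])).toBe(12);
});
```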

### Why Not Negative Instructions?

Adding "don't test type systems" or "don't add unnecessary checks" to the implementer prompt has downstream effects:
- The model becomes hesitant about ALL testing
- It skips legitimate edge case tests
- Quality degrades unpredictably

### The Solution: Separate Pass

Instead of constraining the implementer, let it be thorough. Then add a focused cleanup agent:

```bash
# Step 1: Implement (let it be thorough)
claude -p "Implement the feature with full TDD. Be thorough with tests."

# Step 2: De-sloppify (separate context, focused cleanup)
claude -p "Review all changes in the working tree. Remove:
- Tests that verify language/framework behavior rather than business logic
- Redundant type checks that the type system already enforces
- Over-defensive error handling for impossible states
- Console.log statements
- Commented-out code

Keep all business logic tests. Run the test suite after cleanup to ensure nothing breaks."
```

### In a Loop Context

```bash
for feature in "${features[@]}"; do
  # Implement
  claude -p "Implement $feature with TDD."

  # De-sloppify
  claude -p "Cleanup pass: review changes, remove test/code slop, run tests."

  # Verify
  claude -p "Run build + lint + tests. Fix any failures."

  # Commit
  claude -p "Commit with message: feat: add $feature"
done
```

### Key Insight

Rather than adding negative instructions, which have downstream quality effects, add a separate de-sloppify pass. Two focused agents outperform one constrained agent.

### De-Sloppify Checklist

## Cleanup Pass Checklist

### Tests to Remove
- [ ] Tests verifying language features (TypeScript types, JS prototypes)
- [ ] Tests verifying framework behavior (React rendering, Next.js routing)
- [ ] Tests for impossible states (already prevented by type system)
- [ ] Duplicate test coverage (same scenario tested multiple ways)

### Code to Remove
- [ ] Redundant type guards after TypeScript checks
- [ ] Unnecessary runtime validations
- [ ] Console.log statements
- [ ] Commented-out code
- [ ] Dead code (unused functions/imports)

### What to Keep
- [ ] Business logic tests
- [ ] Integration tests
- [ ] Edge case handling for real scenarios
- [ ] Security validations

