Testing Patterns

Unit, integration, and E2E testing patterns with framework-specific guidance. Use when asked to "write tests", "add test coverage", "testing strategy", "test this function", "create test suite", "fix flaky tests", or "improve test quality".

Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the provided assets (SKILL.md, README, templates). The content is purely guidance for writing tests and does not request unrelated binaries, credentials, or access.
Instruction Scope
Runtime instructions are documentation and templates for testing patterns (unit/integration/E2E). They do not instruct the agent to read arbitrary system files, access secrets, call external endpoints, or perform data exfiltration.
Install Mechanism
There is no formal install spec in the skill bundle (instruction-only). README suggests an `npx add` command pointing at a GitHub tree and manual copy commands from local paths (e.g., ~/.ai-skills). These are documentation hints rather than automated installers — verify any URL or copy commands before running them.
Credentials
The skill declares no required environment variables, credentials, or config paths. The instructions do not reference secrets or unrelated environment state.
Persistence & Privilege
Skill is user-invocable and not always-enabled. It does not request persistent privileges, modify other skills, or require system-wide config changes.
Assessment
This skill is documentation-only and appears coherent with its purpose (testing guidance and templates). Before installing or running any commands from the README, verify the source repository and the exact URLs (the README's `npx add` points to a GitHub tree and the manual steps reference local paths). If you plan to copy files into your home or project, inspect the files locally first. Because the skill's source is listed as unknown, prefer obtaining it from a trusted repo or maintainer and check license/attribution before use.


Current version: v0.1.0

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Testing Patterns

Write tests that catch bugs, not tests that pass — confidence through coverage, speed through isolation.


Testing Pyramid

| Level | Ratio | Speed | Cost | Confidence | Scope |
|---|---|---|---|---|---|
| Unit | ~70% | ms | Low | Low (isolated) | Single function/class |
| Integration | ~20% | seconds | Medium | Medium | Module boundaries, APIs, DB |
| E2E | ~10% | minutes | High | High (realistic) | Full user workflows |

Rule: If your E2E tests outnumber your unit tests, your pyramid is inverted — rebalance toward unit tests.


Unit Testing Patterns

Core Patterns

| Pattern | When to Use | Structure |
|---|---|---|
| Arrange-Act-Assert | Default for all unit tests | Setup, Execute, Verify |
| Given-When-Then | BDD-style, behavior-focused | Precondition, Action, Outcome |
| Parameterized | Same logic, multiple inputs | Data-driven test cases |
| Snapshot | UI components, serialized output | Compare against saved baseline |
| Property-Based | Mathematical invariants | Generate random inputs, assert properties |

Arrange-Act-Assert (AAA)

The default structure for every unit test. Clear separation of setup, execution, and verification makes tests readable and maintainable.

```javascript
// Clean AAA structure
test('calculates order total with tax', () => {
  // Arrange
  const items = [{ price: 10, qty: 2 }, { price: 5, qty: 1 }];
  const taxRate = 0.08;

  // Act
  const total = calculateTotal(items, taxRate);

  // Assert
  expect(total).toBe(27.0);
});
```

Test Doubles

Use the right type of test double for the situation. Each serves a different purpose.

| Double | Purpose | When to Use | Example |
|---|---|---|---|
| Stub | Returns canned data | Control indirect input | `jest.fn().mockReturnValue(42)` |
| Mock | Verifies interactions | Assert something was called | `expect(mock).toHaveBeenCalledWith('arg')` |
| Spy | Wraps real implementation | Observe without replacing | `jest.spyOn(service, 'save')` |
| Fake | Working simplified impl | Need realistic behavior | In-memory database, fake HTTP server |

```typescript
// Stub — control indirect input
const getUser = jest.fn().mockResolvedValue({ id: 1, name: 'Alice' });

// Spy — observe without replacing
const spy = jest.spyOn(logger, 'warn');
processInvalidInput(data);
expect(spy).toHaveBeenCalledWith('Invalid input received');

// Fake — lightweight substitute
class FakeUserRepo implements UserRepository {
  private users = new Map<string, User>();
  async save(user: User) { this.users.set(user.id, user); }
  async findById(id: string) { return this.users.get(id) ?? null; }
}
```

Parameterized Tests

Use parameterized tests when the same logic needs verification with multiple inputs. This eliminates copy-paste tests while providing comprehensive coverage.

```javascript
// Vitest/Jest
test.each([
  ['hello', 'HELLO'],
  ['world', 'WORLD'],
  ['', ''],
  ['123abc', '123ABC'],
])('toUpperCase(%s) returns %s', (input, expected) => {
  expect(input.toUpperCase()).toBe(expected);
});
```

```python
# pytest
@pytest.mark.parametrize("input,expected", [
    ("hello", "HELLO"),
    ("world", "WORLD"),
    ("", ""),
])
def test_to_upper(input, expected):
    assert input.upper() == expected
```

```go
// Go — table-driven tests (idiomatic)
func TestAdd(t *testing.T) {
    tests := []struct {
        name     string
        a, b     int
        expected int
    }{
        {"positive", 2, 3, 5},
        {"zero", 0, 0, 0},
        {"negative", -1, -2, -3},
    }
    for _, tc := range tests {
        t.Run(tc.name, func(t *testing.T) {
            if got := Add(tc.a, tc.b); got != tc.expected {
                t.Errorf("Add(%d,%d) = %d, want %d", tc.a, tc.b, got, tc.expected)
            }
        })
    }
}
```
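The patterns table above also lists property-based testing, the one pattern not illustrated in this section. Here is a minimal hand-rolled sketch of the idea using a toy `slugify` function (both names are hypothetical; real suites would use a library such as Hypothesis or fast-check to generate and shrink inputs):

```python
import random

def slugify(text: str) -> str:
    """Toy function under test: trim, lowercase, spaces to hyphens."""
    return text.strip().lower().replace(" ", "-")

def random_text(rng: random.Random) -> str:
    """Generate short random strings that include spaces and uppercase letters."""
    alphabet = "abcXYZ  "
    return "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 12)))

def test_slugify_properties() -> None:
    rng = random.Random(42)  # fixed seed keeps the property test deterministic
    for _ in range(200):
        text = random_text(rng)
        slug = slugify(text)
        assert " " not in slug        # property: no spaces survive
        assert slug == slug.lower()   # property: output is lowercase
        assert slugify(slug) == slug  # property: idempotent
```

Instead of asserting one specific output, each iteration asserts invariants that must hold for any input, which catches whole classes of bugs a handful of hand-picked examples would miss.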

Integration Testing Patterns

Database Testing Strategies

| Strategy | Approach | Trade-off |
|---|---|---|
| Transaction rollback | Wrap each test in a transaction, roll back after | Fast, but hides commit bugs |
| Fixtures/seeds | Load known data before suite | Predictable, but brittle if schema changes |
| Factory functions | Generate data programmatically | Flexible, but more setup code |
| Testcontainers | Spin up real DB in Docker | Realistic, but slower startup |

```typescript
// Transaction rollback pattern (Prisma)
// Caveat: raw BEGIN/ROLLBACK only works reliably when the test process uses a
// single pooled connection (e.g. connection_limit=1), so these statements and
// the queries under test share the same connection.
beforeEach(async () => {
  await prisma.$executeRaw`BEGIN`;
});
afterEach(async () => {
  await prisma.$executeRaw`ROLLBACK`;
});

test('creates user in database', async () => {
  const user = await createUser({ name: 'Alice', email: 'a@b.com' });
  const found = await prisma.user.findUnique({ where: { id: user.id } });
  expect(found?.name).toBe('Alice');
});
```
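The factory-function strategy from the table can be sketched as follows. The `make_user` helper and the user shape are hypothetical; the point is that every call yields valid, unique data and each test overrides only the fields it asserts on:

```python
import itertools

_seq = itertools.count(1)

def make_user(**overrides) -> dict:
    """Return a valid user dict; tests override only the fields they care about."""
    n = next(_seq)
    user = {
        "id": f"user-{n}",
        "name": f"User {n}",
        "email": f"user{n}@example.com",  # unique per call, avoids collisions
        "active": True,
    }
    user.update(overrides)
    return user

# A test spells out only what matters to it:
inactive_user = make_user(active=False)
```

Because defaults live in one place, a schema change means updating the factory once instead of editing every fixture file.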

API Testing

```typescript
// Supertest (Node.js)
import request from 'supertest';
import { app } from '../src/app';

describe('POST /api/users', () => {
  it('creates a user and returns 201', async () => {
    const res = await request(app)
      .post('/api/users')
      .send({ name: 'Alice', email: 'alice@test.com' })
      .expect(201);

    expect(res.body).toMatchObject({
      id: expect.any(String),
      name: 'Alice',
    });
  });

  it('returns 400 for invalid email', async () => {
    await request(app)
      .post('/api/users')
      .send({ name: 'Alice', email: 'not-an-email' })
      .expect(400);
  });
});
```

Mocking Best Practices

Mock Boundaries, Not Implementations

The fundamental rule: mock at system boundaries (external APIs, databases, file systems) and never mock internal domain logic.

```javascript
// BAD — mocking internal implementation
jest.mock('./utils/formatDate');  // Breaks on refactor

// GOOD — mocking external boundary
jest.mock('./services/paymentGateway');  // Third-party API is the boundary
```

When to Mock vs Not Mock

| Mock | Don't Mock |
|---|---|
| HTTP APIs, external services | Pure functions |
| Database (in unit tests) | Your own domain logic |
| File system, network | Data transformations |
| Time/Date (`Date.now`) | Simple calculations |
| Environment variables | Internal class methods |
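The Time/Date row deserves a concrete sketch: rather than patching a global clock, inject it as a dependency. `is_expired` is a hypothetical example function, shown here in Python for brevity:

```python
from datetime import datetime, timezone
from typing import Callable

def is_expired(
    expires_at: datetime,
    now: Callable[[], datetime] = lambda: datetime.now(timezone.utc),
) -> bool:
    """The clock is a parameter, so tests never need to patch the datetime module."""
    return now() >= expires_at

# In a test, pass a frozen clock instead of monkeypatching:
frozen = lambda: datetime(2024, 1, 1, tzinfo=timezone.utc)
assert is_expired(datetime(2023, 12, 31, tzinfo=timezone.utc), now=frozen)
assert not is_expired(datetime(2024, 6, 1, tzinfo=timezone.utc), now=frozen)
```

This is the dependency-injection pattern from the next section applied in miniature: the boundary (the system clock) is swappable, the logic stays pure.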

Dependency Injection for Testability

Structure code so dependencies can be swapped in tests. This is the single most impactful pattern for testable code.

```typescript
// Injectable dependencies — easy to test
class OrderService {
  constructor(
    private paymentGateway: PaymentGateway,
    private inventory: InventoryService,
    private notifier: NotificationService,
  ) {}

  async placeOrder(order: Order): Promise<OrderResult> {
    const stock = await this.inventory.check(order.items);
    if (!stock.available) return { status: 'out_of_stock' };

    const payment = await this.paymentGateway.charge(order.total);
    if (!payment.success) return { status: 'payment_failed' };

    await this.notifier.send(order.userId, 'Order confirmed');
    return { status: 'confirmed', id: payment.transactionId };
  }
}

// In tests — inject fakes
const service = new OrderService(
  new FakePaymentGateway(),
  new FakeInventory({ available: true }),
  new FakeNotifier(),
);
```

Framework Quick Reference

| Framework | Language | Type | Test Runner | Assertion |
|---|---|---|---|---|
| Jest | JS/TS | Unit/Integration | Built-in | `expect()` |
| Vitest | JS/TS | Unit/Integration | Vite-native | `expect()` (Jest-compatible) |
| Playwright | JS/TS/Python | E2E | Built-in | `expect()` / locators |
| Cypress | JS/TS | E2E | Built-in | `cy.should()` |
| pytest | Python | Unit/Integration | Built-in | `assert` |
| Go testing | Go | Unit/Integration | `go test` | `t.Error()` / testify |
| Rust | Rust | Unit/Integration | `cargo test` | `assert!()` / `assert_eq!()` |
| JUnit 5 | Java/Kotlin | Unit/Integration | Built-in | `assertEquals()` |
| RSpec | Ruby | Unit/Integration | Built-in | `expect().to` |
| PHPUnit | PHP | Unit/Integration | Built-in | `$this->assert*()` |
| xUnit | C# | Unit/Integration | Built-in | `Assert.Equal()` |

Test Quality Checklist

| Quality | Rule | Why |
|---|---|---|
| Deterministic | Same input produces same result, every time | Flaky tests erode trust |
| Isolated | No shared mutable state between tests | Order-dependent tests break in CI |
| Fast | Unit: < 10ms, Integration: < 1s, E2E: < 30s | Slow tests don't get run |
| Readable | Test name describes the scenario and expectation | Tests are documentation |
| Maintainable | Change one behavior, change one test | Brittle tests slow development |
| Focused | One logical assertion per test | Failures pinpoint the problem |

Naming convention: `test_[unit]_[scenario]_[expected result]` or `should [do X] when [condition Y]`
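The Deterministic row usually comes down to pinning every source of nondeterminism: inject the clock, avoid shared state, and seed random generators. A minimal sketch of the last technique, using a hypothetical `sample_discount_codes` function:

```python
import random

def sample_discount_codes(codes: list, k: int, seed: int) -> list:
    """Accepting an explicit seed makes randomized logic reproducible in tests."""
    rng = random.Random(seed)  # local RNG instance, no global state touched
    return rng.sample(codes, k)

# Same seed, same result, on every run and every machine:
a = sample_discount_codes(["A", "B", "C", "D"], 2, seed=7)
b = sample_discount_codes(["A", "B", "C", "D"], 2, seed=7)
assert a == b
```

Using a local `random.Random` instance rather than the module-level functions also keeps tests isolated: one test's seeding cannot leak into another's.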


Coverage Strategy

When to Aim for What

| Target | When | Rationale |
|---|---|---|
| 80%+ line coverage | Business logic, utilities, core domain | High ROI — catches most regressions |
| 90%+ branch coverage | Payment processing, auth, security-critical | Edge cases matter here |
| 100% coverage | Almost never — diminishing returns | Getter/setter tests add noise, not confidence |
| Mutation testing | Critical paths after coverage is high | Verifies tests actually catch bugs |
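Mutation testing tools (mutmut for Python, Stryker for JS/TS) automate the loop of changing the code and checking that some test fails. The idea in miniature, with a hand-written mutant of a toy `clamp` function (all names here are illustrative):

```python
def clamp(x: int, lo: int, hi: int) -> int:
    """Restrict x to the inclusive range [lo, hi]."""
    return max(lo, min(x, hi))

def clamp_mutant(x: int, lo: int, hi: int) -> int:
    # Mutant: the upper-bound check was deleted; a good suite must kill this
    return max(lo, x)

def suite(fn) -> bool:
    """Run the test suite against an implementation; True if all assertions pass."""
    try:
        assert fn(5, 0, 10) == 5
        assert fn(-1, 0, 10) == 0
        assert fn(99, 0, 10) == 10  # this case is what kills the mutant
        return True
    except AssertionError:
        return False

assert suite(clamp)             # real implementation passes
assert not suite(clamp_mutant)  # mutant is killed, so the suite has teeth
```

A suite with 100% line coverage but no upper-bound assertion would let this mutant survive, which is exactly the gap mutation testing exposes.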

What NOT to Test

| Skip | Reason |
|---|---|
| Generated code (Prisma client, protobuf) | Maintained by tooling |
| Third-party library internals | Not your responsibility |
| Simple getters/setters | No logic to verify |
| Configuration files | Test the behavior they configure instead |
| `console.log` / print statements | Side effects with no business value |

Test Organization

```text
src/
├── services/
│   ├── order.service.ts
│   └── order.service.test.ts      # Co-located unit tests
├── api/
│   └── routes/
│       └── orders.ts
tests/
├── integration/
│   ├── api/
│   │   └── orders.test.ts         # API integration tests
│   └── db/
│       └── order.repo.test.ts     # DB integration tests
├── e2e/
│   ├── pages/                     # Page objects
│   │   └── checkout.page.ts
│   └── specs/
│       └── checkout.spec.ts       # E2E specs
└── helpers/
    ├── factories.ts               # Test data factories
    └── setup.ts                   # Global test setup
```

Rule: Co-locate unit tests with source. Separate integration and E2E tests into dedicated directories.


Anti-Patterns

| Anti-Pattern | Problem | Fix |
|---|---|---|
| Testing implementation | Tests break on refactor, not on bugs | Test behavior and outputs, not internals |
| Flaky tests | Non-deterministic failures erode CI trust | Remove time/order/network dependencies |
| Test pollution | Shared mutable state leaks between tests | Reset state in `beforeEach` / `setUp` |
| Sleeping in tests | `sleep(2000)` is slow and unreliable | Use explicit waits, polling, or events |
| Giant arrange | 50 lines of setup obscure intent | Extract factories/builders/fixtures |
| Assert-free tests | Test runs but verifies nothing | Every test must assert or expect |
| Overmocking | Mocking everything tests nothing real | Only mock external boundaries |
| Copy-paste tests | Duplicated tests diverge and rot | Use parameterized tests or helpers |
| Testing the framework | Verifying library code works | Test your logic, trust dependencies |
| Ignoring test failures | `skip`, `xit`, `@Disabled` accumulate | Fix or delete — never hoard skipped tests |
| Tight coupling to DB | Tests fail when schema changes | Use repository pattern + fakes for unit tests |
| One giant test | Single test covers 10 scenarios | Split into focused, named tests |
| No test for bug fix | Regression reappears later | Every bug fix gets a regression test |
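The fix for sleep-based tests is a wait that polls and retries. A minimal sketch of such a helper (the `wait_for` name is hypothetical; Playwright's auto-retrying assertions and Cypress's built-in retries apply the same idea):

```python
import time

def wait_for(predicate, timeout: float = 5.0, interval: float = 0.05) -> None:
    """Poll until predicate() is truthy; raise a clear error on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Instead of time.sleep(2) and hoping the job finished:
jobs = {"done": False}
jobs["done"] = True  # in a real test, a background worker would flip this
wait_for(lambda: jobs["done"], timeout=1.0)
```

A polling wait returns the moment the condition holds, so the happy path costs milliseconds, and a genuine failure produces a descriptive timeout instead of a mystery assertion later.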

NEVER Do

  1. NEVER test implementation details instead of behavior — tests must verify what the code does, not how it does it
  2. NEVER use sleep() in tests — use explicit waits, polling, events, or assertions that auto-retry
  3. NEVER share mutable state between tests — each test sets up and tears down its own state
  4. NEVER write assert-free tests — a test that asserts nothing proves nothing
  5. NEVER mock internal domain logic — only mock at system boundaries (network, DB, filesystem, clock)
  6. NEVER skip tests without a linked issue and a plan to re-enable — skipped tests rot into permanent gaps
  7. NEVER leave a test suite in a failing state — fix it or remove it with justification before moving on
  8. NEVER chase 100% coverage as a goal — coverage percentage is a tool, not a target; strong assertions on critical paths beat weak assertions everywhere

Summary

| Do | Don't |
|---|---|
| Test behavior, not implementation | Mock everything in sight |
| Write the test before fixing a bug | Skip tests to ship faster |
| Keep tests fast and deterministic | Use `sleep()` or shared state |
| Use factories for test data | Copy-paste setup across tests |
| Mock at system boundaries | Mock internal functions |
| Name tests descriptively | Name tests `test1`, `test2` |
| Run tests in CI on every push | Only run tests locally |
| Delete or fix skipped tests | Let `@skip` accumulate forever |
| Use parameterized tests for variants | Duplicate test code |
| Inject dependencies for testability | Hard-code dependencies |

Remember: Tests are a safety net — a fast, trustworthy suite lets you refactor fearlessly and ship with confidence.
