QA & Test Engineering Command Center

Comprehensive QA system for planning strategy, writing tests, analyzing coverage, automating pipelines, performance and security testing, defect triage, and...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
Name and description (comprehensive QA/test engineering) match the SKILL.md content: strategy, unit/integration/E2E patterns, performance and security testing, CI/CD gating, and defect triage. The README and instructions are consistent with a QA advisory/automation role.
Instruction Scope
SKILL.md gives runtime guidance that includes starting test servers, running test databases/migrations, recording/replaying external API responses, and configuring CI gates. These actions are reasonable for a QA skill, but they imply the agent may need access to the codebase, test infrastructure, or test credentials to be useful — the instructions themselves do not ask for those credentials, but they assume the agent will be granted them by the environment.
Install Mechanism
No install spec and no code files are included; this is instruction-only so nothing is downloaded or written by the skill itself.
Credentials
The skill declares no required environment variables, binaries, or config paths. The SKILL.md mentions mocking environment variables and interacting with services for testing, which is appropriate contextually and does not demand extraneous credentials from the user.
Persistence & Privilege
The always flag is false, and the skill does not request persistent system-level presence or attempt to modify other skills. Autonomous invocation is allowed (platform default), which is not unusual for this type of skill.
Assessment
This skill is internally consistent and doesn't ask for secrets or installs. Before enabling it, consider: 1) only grant the agent the minimum access needed (avoid giving production DB or CI tokens; use test/staging credentials), 2) run it in a sandboxed environment or with limited network access if it will execute tests or start services, 3) review any agent actions that would modify infrastructure (migrations, seeding) and require explicit approval, and 4) be cautious with external links or paid context packs in the README — they are marketing and not required for the skill to function. If you want higher assurance, ask the publisher for provenance (who maintains it) or request a version that includes automated checks and explicit runtime permissions.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
Tags: automation, ci-cd, devops, e2e, integration, latest, performance, qa, security, testing, unit-test

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

QA & Test Engineering Command Center

Complete quality assurance system — from test strategy to automation frameworks, coverage analysis, and release readiness. Works for any stack, any team size.

When to Use

  • Planning test strategy for a new feature or project
  • Writing unit, integration, or E2E tests
  • Reviewing test quality and coverage gaps
  • Setting up test automation and CI/CD quality gates
  • Performance testing and load analysis
  • Security testing checklist
  • Bug triage and defect management
  • Release readiness assessment

Phase 1: Test Strategy

Strategy Brief

Before writing any tests, define the strategy:

# test-strategy.yaml
project: "[name]"
scope: "[feature/module/full product]"
risk_level: high | medium | low
stack:
  language: "[TypeScript/Python/Java/Go]"
  framework: "[React/Express/Django/Spring]"
  test_runner: "[Jest/Vitest/pytest/JUnit/Go test]"
  e2e_tool: "[Playwright/Cypress/Selenium]"

# What are we testing?
test_scope:
  - area: "[e.g., Auth module]"
    risk: high
    test_types: [unit, integration, e2e]
    priority: 1
  - area: "[e.g., Settings page]"
    risk: low
    test_types: [unit]
    priority: 3

# What's NOT in scope (and why)
exclusions:
  - "[e.g., Third-party widget — covered by vendor]"

# Quality targets
targets:
  line_coverage: 80
  branch_coverage: 70
  critical_path_coverage: 100
  max_flaky_rate: 2%
  max_test_duration_unit: 10ms
  max_test_duration_integration: 500ms
  max_test_duration_e2e: 30s

Risk-Based Test Allocation

Not everything needs the same testing depth. Use the risk matrix:

| Risk Level | Unit Tests | Integration | E2E | Manual/Exploratory |
|---|---|---|---|---|
| Critical (payments, auth, data loss) | 95%+ coverage | Full API coverage | Happy + error paths | Exploratory session |
| High (core features, user-facing) | 85%+ coverage | Key integrations | Happy path | Spot check |
| Medium (secondary features) | 70%+ coverage | Critical paths only | Smoke only | On release |
| Low (admin, internal tools) | 50%+ coverage | None | None | None |

Test Pyramid

Follow the pyramid — not the ice cream cone:

         /  E2E  \          ← Few (5-10%) — slow, expensive, brittle
        / Integr. \         ← Some (15-25%) — API contracts, DB queries
       /   Unit    \        ← Many (65-80%) — fast, isolated, cheap

Anti-pattern: Ice cream cone (mostly E2E, few unit tests) = slow CI, flaky builds, expensive maintenance.

Decision rule: Can this be tested at a lower level? → Test it there.


Phase 2: Unit Testing

Anatomy of a Good Unit Test

Every unit test follows AAA (Arrange-Act-Assert):

1. ARRANGE — Set up test data, mocks, state
2. ACT     — Call the function/method under test
3. ASSERT  — Verify the output matches expectations

Unit Test Checklist (per function)

For each function/method, verify:

  • Happy path — expected input → expected output
  • Edge cases — empty input, null/undefined, zero, max values
  • Boundary values — off-by-one, min-1, max+1
  • Error handling — invalid input → correct error thrown
  • Return types — correct type, shape, structure
  • Side effects — does it modify state it shouldn't?
  • Idempotency — calling twice gives same result?

What to Mock (and What NOT to Mock)

Mock these:

  • External APIs (HTTP calls, third-party services)
  • Database queries (in unit tests only)
  • File system operations
  • Date/time (use fake timers)
  • Random number generators
  • Environment variables

DO NOT mock these:

  • The function under test itself
  • Pure utility functions (test them directly)
  • Data transformations
  • Simple value objects

Mock rule of thumb: If removing the mock would make the test hit the network, file system, or database → mock it. Otherwise → don't.

Test Naming Convention

Use the pattern: [unit] [scenario] [expected result]

Examples:

  • calculateTotal returns 0 for empty cart
  • validateEmail throws for missing @ symbol
  • parseDate handles ISO 8601 with timezone offset

Coverage Analysis

Metrics that matter:

| Metric | Target | Why |
|---|---|---|
| Line coverage | 80%+ | Basic completeness |
| Branch coverage | 70%+ | Catches missed if/else paths |
| Function coverage | 90%+ | Ensures all functions are tested |
| Critical path coverage | 100% | Business-critical code fully verified |

Coverage traps to avoid:

  • 100% line coverage ≠ good tests (assertions matter more than lines hit)
  • Coverage on generated code inflates numbers
  • Trivial getters/setters pad coverage without value
  • Coverage should INCREASE over time, never decrease

Phase 3: Integration Testing

What Integration Tests Cover

Integration tests verify that components work TOGETHER:

  • API endpoint → middleware → handler → database → response
  • Service A calls Service B and handles the response
  • Message queue producer → consumer → side effect
  • Auth flow: login → token → authenticated request

Integration Test Patterns

Pattern 1: API Contract Testing

1. Start test server (or use supertest/httptest)
2. Send HTTP request with specific payload
3. Assert: status code, response body shape, headers
4. Assert: database state changed correctly
5. Assert: side effects triggered (emails, events)

Pattern 2: Database Integration

1. Start test database (SQLite in-memory or test container)
2. Run migrations
3. Seed test data
4. Execute query/operation
5. Assert: data matches expectations
6. Teardown (truncate or rollback transaction)

Pattern 3: External Service

1. Record real API response (VCR/nock/wiremock)
2. Replay recorded response in tests
3. Assert: your code handles the response correctly
4. Also test: timeout, 500 error, malformed response

Integration Test Checklist

  • Happy path — full flow works end-to-end
  • Auth — unauthenticated returns 401, wrong role returns 403
  • Validation — bad payload returns 400 with error details
  • Not found — missing resource returns 404
  • Conflict — duplicate create returns 409
  • Rate limiting — excessive requests return 429
  • Database constraints — unique violations, foreign keys
  • Concurrency — two simultaneous writes don't corrupt data
  • Timeout handling — external service timeout → graceful fallback

Phase 4: End-to-End (E2E) Testing

E2E Strategy

E2E tests verify complete user journeys. They're expensive — be strategic:

Test these E2E:

  • User registration → email verification → first login
  • Purchase flow → payment → confirmation
  • Critical business workflows (the ones that make money)
  • Cross-browser/device smoke tests

DON'T test these E2E:

  • Individual form validations (unit test)
  • API error handling (integration test)
  • Edge cases (lower-level tests)
  • Visual styling (visual regression tools)

E2E Test Template

test_name: "[User journey name]"
preconditions:
  - "[User is logged in]"
  - "[Product exists in catalog]"
steps:
  - action: "Navigate to /products"
    verify: "Product list is visible"
  - action: "Click 'Add to Cart' on Product A"
    verify: "Cart badge shows 1"
  - action: "Click 'Checkout'"
    verify: "Checkout form displayed"
  - action: "Fill payment details and submit"
    verify: "Order confirmation page with order ID"
postconditions:
  - "Order exists in database with status 'paid'"
  - "Confirmation email sent"
max_duration: 30s

Flaky Test Management

Flaky tests are the #1 CI killer. Handle them:

Flaky Test Triage:

  1. Identify — Track test pass rates over 10+ runs
  2. Classify — Why is it flaky?
    • Timing/race condition → Add explicit waits, not sleep()
    • Test data dependency → Isolate test data per run
    • External service → Mock it or use test container
    • Browser rendering → Use visibility checks, not delays
  3. Quarantine — Move to @flaky suite, run separately
  4. Fix or delete — Flaky test unfixed for 2 weeks → delete it

Flaky rate target: < 2% of total test runs


Phase 5: Performance Testing

Performance Test Types

| Type | Purpose | When |
|---|---|---|
| Load test | Normal traffic handling | Before every release |
| Stress test | Find breaking point | Quarterly or before scaling |
| Spike test | Sudden traffic burst | Before marketing campaigns |
| Soak test | Memory leaks over time | Monthly or after major changes |
| Capacity test | Max users/throughput | Planning infrastructure |

Performance Test Plan

test_name: "[API/Page] Load Test"
target: "[URL or endpoint]"
baseline:
  p50_response: "[current p50 ms]"
  p95_response: "[current p95 ms]"
  p99_response: "[current p99 ms]"
  error_rate: "[current %]"

scenarios:
  - name: "Normal load"
    vus: 50          # virtual users
    duration: 5m
    ramp_up: 30s
    thresholds:
      p95_response: "< 500ms"
      error_rate: "< 1%"

  - name: "Peak load"
    vus: 200
    duration: 10m
    ramp_up: 1m
    thresholds:
      p95_response: "< 2000ms"
      error_rate: "< 5%"

  - name: "Stress test"
    vus: 500
    duration: 5m
    ramp_up: 2m
    # Find the breaking point — no thresholds, observe

Performance Metrics Dashboard

Track these per endpoint:

| Metric | Green | Yellow | Red |
|---|---|---|---|
| p50 response | < 200ms | 200-500ms | > 500ms |
| p95 response | < 500ms | 500ms-2s | > 2s |
| p99 response | < 1s | 1-5s | > 5s |
| Error rate | < 0.1% | 0.1-1% | > 1% |
| Throughput | > baseline | 80-100% baseline | < 80% |
| CPU usage | < 60% | 60-80% | > 80% |
| Memory usage | < 70% | 70-85% | > 85% |
| DB query time | < 50ms avg | 50-200ms | > 200ms |

Common Performance Fixes

| Symptom | Likely Cause | Fix |
|---|---|---|
| Slow API response | N+1 queries | Batch/join queries |
| Memory climbing | Object retention | Profile heap, fix leaks |
| Timeout spikes | Connection pool exhaustion | Increase pool, add queuing |
| Slow page load | Large bundle | Code split, lazy load |
| DB bottleneck | Missing index | Add index on WHERE/JOIN columns |
| High CPU | Synchronous compute | Move to worker/queue |

Phase 6: Security Testing

Security Test Checklist

Run through these for every feature/release:

Authentication & Authorization:

  • Passwords hashed with bcrypt/argon2 (not MD5/SHA1)
  • Session tokens are random, sufficient length (128+ bits)
  • JWT tokens have short expiry (15 min access, 7 day refresh)
  • Failed login rate limiting (5 attempts → lockout)
  • Password reset tokens expire (1 hour max)
  • Role-based access enforced server-side (not just UI)
  • Can't access other users' data by changing IDs in URL

Input Validation:

  • SQL injection — parameterized queries everywhere
  • XSS — output encoding, CSP headers
  • CSRF — tokens on state-changing requests
  • Path traversal — validate file paths, no ../
  • Command injection — never pass user input to shell
  • File upload — validate type, size, scan for malware
  • JSON/XML parsing — depth limits, entity expansion disabled

Data Protection:

  • HTTPS everywhere (HSTS header)
  • Sensitive data encrypted at rest
  • PII not logged (mask in log output)
  • API keys not in client-side code
  • CORS configured correctly (not *)
  • Security headers set (X-Frame-Options, X-Content-Type-Options)

Infrastructure:

  • Dependencies scanned for CVEs (npm audit / pip audit)
  • Docker images scanned (Trivy/Snyk)
  • Secrets not in code/env files (use vault)
  • Error messages don't leak internals
  • Admin endpoints behind VPN/IP allowlist

OWASP Top 10 Quick Reference

| # | Vulnerability | Test For |
|---|---|---|
| A01 | Broken Access Control | Access other users' resources, bypass role checks |
| A02 | Cryptographic Failures | Weak hashing, plaintext secrets, expired certs |
| A03 | Injection | SQL, XSS, command, LDAP injection |
| A04 | Insecure Design | Business logic flaws, missing rate limits |
| A05 | Security Misconfiguration | Default creds, verbose errors, open ports |
| A06 | Vulnerable Components | Outdated deps with known CVEs |
| A07 | Authentication Failures | Brute force, weak passwords, session fixation |
| A08 | Data Integrity Failures | Unsigned updates, CI/CD pipeline injection |
| A09 | Logging Failures | Missing audit logs, no alerting on breaches |
| A10 | SSRF | Internal network access via user-controlled URLs |

Phase 7: Bug Triage & Defect Management

Bug Report Template

bug_id: "[auto or manual]"
title: "[Short description of the bug]"
severity: P0-critical | P1-high | P2-medium | P3-low
reporter: "[name]"
date: "[YYYY-MM-DD]"

environment:
  os: "[OS + version]"
  browser: "[Browser + version]"
  app_version: "[version/commit]"
  
steps_to_reproduce:
  1. "[Step 1]"
  2. "[Step 2]"
  3. "[Step 3]"

expected_result: "[What should happen]"
actual_result: "[What actually happens]"
frequency: "always | intermittent | once"
screenshots: "[links]"
logs: "[relevant log output]"

Severity Classification

| Level | Definition | SLA | Example |
|---|---|---|---|
| P0 Critical | System down, data loss, security breach | Fix in 4 hours | Payment processing broken |
| P1 High | Major feature broken, no workaround | Fix in 24 hours | Users can't login |
| P2 Medium | Feature broken with workaround | Fix this sprint | Search returns wrong results sometimes |
| P3 Low | Minor issue, cosmetic | Fix when convenient | Button alignment off by 2px |

Bug Triage Process (Weekly)

1. Review all new bugs (unassigned)
2. For each bug:
   a. Reproduce — can you trigger it?
   b. Classify severity (P0-P3)
   c. Estimate fix effort (S/M/L)
   d. Assign to owner + sprint
   e. Link to related bugs/stories
3. Review P0/P1 bugs from last week — are they fixed?
4. Close bugs that can't be reproduced (after 2 attempts)
5. Update metrics dashboard

Bug Metrics Dashboard

Track weekly:

| Metric | Formula | Target |
|---|---|---|
| Bug escape rate | Bugs found in prod / total bugs | < 10% |
| Mean time to fix (P0) | Avg hours from report to deploy | < 8 hours |
| Mean time to fix (P1) | Avg hours from report to deploy | < 48 hours |
| Bug reopen rate | Reopened bugs / closed bugs | < 5% |
| Test escape analysis | Bugs that SHOULD have been caught | Track & reduce |
| Open bug count | Total open by severity | Trending down |

Phase 8: Release Readiness

Release Checklist

Before shipping to production:

Code Quality:

  • All unit tests passing
  • All integration tests passing
  • E2E smoke suite passing
  • No new lint warnings/errors
  • Code reviewed and approved
  • No known P0/P1 bugs open for this release

Coverage & Quality Gates:

  • Line coverage ≥ target (80%)
  • Branch coverage ≥ target (70%)
  • No coverage decrease from last release
  • Mutation testing score ≥ 60% (if applicable)

Performance:

  • Load test passed (within thresholds)
  • No performance regressions vs baseline
  • Bundle size within budget

Security:

  • Dependency audit clean (no critical/high CVEs)
  • Security checklist completed
  • Secrets rotated if needed

Operational Readiness:

  • Monitoring/alerts configured for new features
  • Rollback plan documented
  • Feature flags in place for risky changes
  • Database migration tested and reversible
  • Runbook updated

Release Readiness Score

Score 0-100 across 5 dimensions:

| Dimension | Weight | Scoring |
|---|---|---|
| Test coverage | 25% | 100 if targets met, -10 per gap area |
| Bug status | 25% | 100 if 0 P0/P1, -20 per open P0, -10 per P1 |
| Performance | 20% | 100 if all green, -15 per yellow, -30 per red |
| Security | 20% | 100 if clean, -25 per critical, -15 per high |
| Operational | 10% | 100 if checklist complete, -20 per missing item |

Ship threshold: ≥ 80 overall, no dimension below 60


Phase 9: CI/CD Quality Gates

Pipeline Quality Gates

Configure these gates in your CI pipeline:

# Quality gate configuration
gates:
  - name: "Lint"
    stage: pre-commit
    command: "npm run lint"
    blocking: true
    
  - name: "Unit Tests"
    stage: commit
    command: "npm test -- --coverage"
    blocking: true
    thresholds:
      pass_rate: 100%
      coverage_line: 80%
      coverage_branch: 70%
      
  - name: "Integration Tests"
    stage: merge
    command: "npm run test:integration"
    blocking: true
    thresholds:
      pass_rate: 100%
      
  - name: "Security Scan"
    stage: merge
    command: "npm audit --audit-level=high"
    blocking: true
    
  - name: "E2E Smoke"
    stage: staging
    command: "npm run test:e2e:smoke"
    blocking: true
    thresholds:
      pass_rate: 100%
      
  - name: "Performance"
    stage: staging
    command: "npm run test:perf"
    blocking: false  # Alert only
    thresholds:
      p95_regression: 20%

Test Automation Maturity Model

Rate your team 1-5:

| Level | Description | Characteristics |
|---|---|---|
| 1 — Manual | All testing is manual | No automation, long release cycles |
| 2 — Reactive | Some unit tests, no CI | Tests written after bugs, not before |
| 3 — Structured | Test pyramid, CI pipeline | Unit + integration, automated on push |
| 4 — Proactive | Full automation, quality gates | E2E + perf + security in pipeline, TDD |
| 5 — Optimized | Self-healing, predictive | Flaky auto-quarantine, AI-assisted testing, continuous deployment |

Phase 10: Test Maintenance

Weekly Test Health Review

review_date: "[YYYY-MM-DD]"

metrics:
  total_tests: 0
  pass_rate_7d: "0%"
  flaky_tests: 0
  flaky_rate: "0%"
  avg_suite_duration: "0s"
  coverage_line: "0%"
  coverage_branch: "0%"
  
actions:
  quarantined: []     # Tests moved to flaky suite
  deleted: []         # Tests removed (obsolete/unfixable)
  fixed: []           # Flaky tests fixed this week
  added: []           # New tests added
  
trends:
  coverage_delta: "+0%"     # vs last week
  flaky_delta: "+0"         # vs last week
  duration_delta: "+0s"     # vs last week
  
notes: ""

Test Maintenance Rules

  1. No commented-out tests — delete or fix, never comment
  2. No skipped tests > 2 weeks — fix or remove
  3. No test duplication — each behavior tested once at the right level
  4. Test names must be readable — someone new should understand what broke
  5. Shared test utilities — common setup in fixtures/factories, not copy-pasted
  6. Test data isolation — each test creates its own data, cleans up after
  7. No magic numbers — use named constants in assertions
  8. Assertion messages — custom messages on complex assertions

Common Test Anti-Patterns

| Anti-Pattern | Problem | Fix |
|---|---|---|
| Sleeping tests | sleep(2000) instead of waiting | Use explicit waits/polling |
| Test interdependence | Test B relies on Test A's state | Isolate — each test sets up its own state |
| Assertionless tests | Test runs code but doesn't assert | Add meaningful assertions |
| Brittle selectors | CSS selectors that break on redesign | Use data-testid or aria roles |
| God test | One test verifying 20 things | Split into focused tests |
| Mock overload | Everything mocked, nothing real tested | Only mock external boundaries |
| Hardcoded data | Tests break when seed data changes | Use factories/builders |
| Ignoring test output | "It passed, ship it" | Review WHY it passed — is the assertion meaningful? |

Quick Reference: Natural Language Commands

Tell the agent:

  • "Create test strategy for [feature]" → Generates strategy brief
  • "Write unit tests for [function/file]" → AAA-structured tests with edge cases
  • "Review test coverage for [module]" → Gap analysis + recommendations
  • "Write integration tests for [API endpoint]" → Full HTTP test suite
  • "Plan E2E tests for [user journey]" → E2E test template
  • "Run security checklist for [feature]" → OWASP-based security review
  • "Triage these bugs: [list]" → Severity classification + assignment
  • "Release readiness check" → Full readiness score + blockers
  • "Performance test plan for [endpoint]" → Load/stress test configuration
  • "Fix flaky test [name]" → Root cause analysis + fix strategy

Files

2 total