Pensive Test Review

v1.0.0

Evaluate test suites for coverage gaps, quality issues, and TDD/BDD compliance

Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (test-suite review, TDD/BDD, coverage) align with the content: framework detection, coverage analysis, and scenario-quality and remediation modules. The required config paths reference shared Night Market/imbue hooks used for evidence capture, which the SKILL.md explicitly calls out.
Instruction Scope
SKILL.md tells the agent to scan the repository, run test/coverage tools (pytest, cargo, jest, go test), run git/find commands, and reference other Night Market/imbue integrations for logging evidence. These actions are appropriate for test review, but several suggested commands (e.g., `cargo install`, installing coverage tools, running tests) will install or execute code on the host if the agent runs them — so runtime privilege should be granted consciously.
Install Mechanism
Instruction-only skill with no install spec and no bundled code. There is no automatic download/extract or third-party package installed by the skill itself; any installs would be performed only if the agent executes the provided commands.
Credentials
No environment variables or external credentials are required. The only declared dependencies are config paths (night-market.pensive:shared and night-market.imbue:proof-of-work), which match the SKILL.md references to imbue-based evidence capture and shared Night Market config.
Persistence & Privilege
The `always` flag is false, and the skill does not request persistent or global privileges. It references other Night Market/imbue hooks but does not modify other skills' configs or request system-wide changes in its instructions.
Assessment
This skill is coherent for repository test reviews: it scans files, runs test and coverage tools, and uses 'imbue:proof-of-work' to log evidence. Before running it, decide whether you want the agent to execute commands that may install tooling or run tests on your host (do so in a CI/sandbox if unsure). Ensure any external 'imbue' or Night Market proof-of-work endpoint/config is trusted before allowing evidence upload. If you prefer to avoid system changes, run the SKILL.md commands yourself or in an isolated environment and provide outputs to the agent instead.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧪 Clawdis
Config: night-market.pensive:shared, night-market.imbue:proof-of-work
latest: vk97ccqq2ckzas23vp541022r1h84w4nx
50 downloads
0 stars
1 version
Updated 5d ago
v1.0.0
MIT-0

Night Market Skill — ported from claude-night-market/pensive. For the full experience with agents, hooks, and commands, install the Claude Code plugin.

Test Review Workflow

Evaluate and improve test suites with TDD/BDD rigor.

Quick Start

/test-review

Verification: Run pytest -v to verify tests pass.

When To Use

  • Reviewing test suite quality
  • Analyzing coverage gaps
  • Before major releases
  • After test failures
  • Planning test improvements

When NOT To Use

  • Writing new tests - use parseltongue:python-testing
  • Updating existing tests - use sanctum:test-updates

Required TodoWrite Items

  1. test-review:languages-detected
  2. test-review:coverage-inventoried
  3. test-review:scenario-quality
  4. test-review:gap-remediation
  5. test-review:evidence-logged

Progressive Loading

Load modules as needed based on review depth:

  • Basic review: Core workflow (this file)
  • Framework detection: Load modules/framework-detection.md
  • Coverage analysis: Load modules/coverage-analysis.md
  • Quality assessment: Load modules/scenario-quality.md
  • Remediation planning: Load modules/remediation-planning.md

Workflow

Step 1: Detect Languages (test-review:languages-detected)

Identify testing frameworks and version constraints. → See: modules/framework-detection.md

Quick check:

find . -maxdepth 2 -name "Cargo.toml" -o -name "pyproject.toml" -o -name "package.json" -o -name "go.mod"

Verification: Confirm the output lists the manifest files you expect (e.g. pyproject.toml for a Python project).
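
As an illustration, the quick check above can be expressed as a small script. The manifest-to-framework mapping below is an assumption made for this sketch, not something defined by modules/framework-detection.md:

```python
# Sketch: map manifest files (as found by the `find` quick check) to the
# test framework most likely in use. The mapping table is an assumption.
from pathlib import Path

MANIFEST_FRAMEWORKS = {
    "Cargo.toml": "cargo test (Rust)",
    "pyproject.toml": "pytest (Python)",
    "package.json": "jest/vitest (JavaScript/TypeScript)",
    "go.mod": "go test (Go)",
}

def detect_frameworks(root: str = ".", max_depth: int = 2) -> dict[str, list[str]]:
    """Return {framework: [manifest paths]} for manifests within max_depth."""
    found: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        # Mirror `find -maxdepth 2`: skip anything nested deeper.
        if len(path.relative_to(root).parts) > max_depth:
            continue
        framework = MANIFEST_FRAMEWORKS.get(path.name)
        if framework:
            found.setdefault(framework, []).append(str(path))
    return found
```

A repository can match several entries (e.g. a Rust project with a JavaScript docs site), so the result is a dict rather than a single framework.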

Step 2: Inventory Coverage (test-review:coverage-inventoried)

Run coverage tools and identify gaps. → See: modules/coverage-analysis.md

Quick check:

git diff --name-only | rg 'tests|spec|feature'

Verification: Run pytest -v to verify tests pass.
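
Once a coverage run has produced a machine-readable report, gaps can be listed programmatically. A minimal sketch, assuming a Cobertura-style coverage.xml (e.g. from `pytest --cov --cov-report=xml`); the 80% threshold is an assumption, not a skill default:

```python
# Sketch: list files below a line-rate threshold from a Cobertura-style
# coverage.xml report. Path and threshold are assumptions for illustration.
import xml.etree.ElementTree as ET

def coverage_gaps(xml_path: str, threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return (filename, line-rate) pairs under the threshold, worst first."""
    root = ET.parse(xml_path).getroot()
    gaps = [
        (cls.get("filename"), float(cls.get("line-rate", "0")))
        for cls in root.iter("class")
        if float(cls.get("line-rate", "0")) < threshold
    ]
    return sorted(gaps, key=lambda pair: pair[1])
```

The sorted output feeds directly into the gap list of the Coverage Analysis section of the report.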

Step 3: Assess Scenario Quality (test-review:scenario-quality)

Evaluate test quality using BDD patterns and assertion checks. → See: modules/scenario-quality.md

Focus on:

  • Given/When/Then clarity
  • Assertion specificity
  • Anti-patterns (dead waits, mocking internals, repeated boilerplate)
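
A sketch of what the review looks for, using a hypothetical `apply_discount` function (not part of this skill). Both tests follow Given/When/Then and assert specifics rather than vague conditions:

```python
# Hypothetical unit under test, for illustration only.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError(f"percent out of range: {percent}")
    return round(price * (1 - percent / 100), 2)

def test_discount_reduces_price():
    # Given a base price and a valid discount
    price, percent = 100.0, 15.0
    # When the discount is applied
    result = apply_discount(price, percent)
    # Then assert the exact figure, not merely "result < price"
    assert result == 85.0

def test_invalid_discount_reports_value():
    # Given an out-of-range discount
    # When applied, Then the error message carries the offending value
    try:
        apply_discount(100.0, 150.0)
    except ValueError as exc:
        assert "150" in str(exc)
    else:
        raise AssertionError("expected ValueError")
```

With pytest available, the second test is tighter as `pytest.raises(ValueError, match="150")`; the try/except form is shown here only to stay dependency-free.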

Step 4: Plan Remediation (test-review:gap-remediation)

Create concrete improvement plan with owners and dates. → See: modules/remediation-planning.md

Step 5: Log Evidence (test-review:evidence-logged)

Record executed commands, outputs, and recommendations. → See: imbue:proof-of-work

Test Quality Checklist (Condensed)

  • Clear test structure (Arrange-Act-Assert)
  • Critical paths covered (auth, validation, errors)
  • Specific assertions with context
  • No flaky tests (dead waits, order dependencies)
  • Reusable fixtures/factories

Output Format

## Summary
[Brief assessment]

## Framework Detection
- Languages: [list] | Frameworks: [list] | Versions: [constraints]

## Coverage Analysis
- Overall: X% | Critical: X% | Gaps: [list]

## Quality Issues
[Q1] [Issue] - Location - Fix

## Remediation Plan
1. [Action] - Owner - Date

## Recommendation
Approve / Approve with actions / Block


Integration Notes

  • Use imbue:proof-of-work for reproducible evidence capture
  • Reference imbue:diff-analysis for risk assessment
  • Format output using imbue:structured-output patterns

Exit Criteria

  • Frameworks detected and documented
  • Coverage analyzed and gaps identified
  • Scenario quality assessed
  • Remediation plan created with owners and dates
  • Evidence logged with citations

Troubleshooting

Common Issues

Tests not discovered: Ensure test files match the pattern test_*.py or *_test.py. Run pytest --collect-only to verify.

Import errors: Check that the module under test is on PYTHONPATH, or install the package with pip install -e .

Async tests failing: Install pytest-asyncio and decorate async test functions with @pytest.mark.asyncio.
