Pensive Bug Review

v1.0.0

Systematic bug hunting by detecting languages, planning reproduction, documenting defects, preparing minimal fixes, and verifying with evidence-based workflows.

Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (systematic bug hunting, repro, fixes, verification) match the SKILL.md content. The commands and behaviors (language detection, running linters/tests, drafting patches) are expected for a bug-review skill.
Instruction Scope
Instructions direct the agent to scan repository files, reference exact file:line locations, run local tooling (cargo, pytest, eslint, golangci-lint, etc.), and produce patches and test artifacts. This is expected for code review, but it means the agent will read arbitrary project files and may reference environment variables in examples. The SKILL.md metadata also references other Night Market/imbue config modules (e.g., night-market.imbue:proof-of-work) even though the registry lists no required config; this mismatch is noteworthy but not obviously malicious.
Install Mechanism
Instruction-only skill with no install spec and no code files; nothing will be downloaded or written by an installer. Lowest install risk.
Credentials
The registry declares no required environment variables or credentials, which aligns with an instruction-only review skill. However, examples and idioms in the modules reference environment variables (e.g., CONFIG_PATH) and the SKILL.md metadata names other config modules; those are examples/optional and not requested, so verify before granting the agent access to any sensitive env or system paths.
Persistence & Privilege
Skill is not always-included and can be invoked by user; no elevated persistence or cross-skill configuration changes are requested in the instructions.
Assessment
This is a coherent, instruction-only bug-review skill that will ask the agent to read repository files and run local linters/tests. Before using it:

  1. Run it in a safe/dev workspace, not on production hosts or directories containing secrets.
  2. Ensure the required tooling (cargo, pytest, npm, golangci-lint, eslint, etc.) is installed and that you trust running tests/linters in your environment.
  3. Be aware the SKILL.md references other Night Market/imbue modules in metadata (informational) even though the registry lists no required config; check whether you need those integrations.
  4. To limit data exposure, avoid letting the agent upload logs or artifacts externally, and review any patches/tests it proposes before applying them.

Like a lobster shell, security has layers — review code before you run it.

latest: vk970ctrh52srxqpnn3f9n83xvh84wfe8
46 downloads · 0 stars · 1 version · Updated 5d ago
v1.0.0 · MIT-0

Night Market Skill — ported from claude-night-market/pensive. For the full experience with agents, hooks, and commands, install the Claude Code plugin.

Bug Review Workflow

Systematic bug identification and fixing with language-specific expertise.

Quick Start

/bug-review

Verification: Run the command with the --help flag to confirm it is available.

When To Use

  • Reviewing code for potential bugs
  • After receiving bug reports
  • Before major releases
  • During security audits
  • Investigating production issues

When NOT To Use

  • Test coverage audit - use test-review instead

Required TodoWrite Items

  1. bug-review:language-detected
  2. bug-review:repro-plan
  3. bug-review:defects-documented
  4. bug-review:fixes-prepared
  5. bug-review:verification-plan

Progressive Loading

Load additional context as needed:

  • Language Detection: @include modules/language-detection.md - Manifest heuristics, expertise framing, version constraints
  • Defect Documentation: @include modules/defect-documentation.md - Severity classification, root cause analysis, static analyzers
  • Fix Preparation: @include modules/fix-preparation.md - Minimal patches, idiomatic patterns, test coverage

Workflow

Step 1: Detect Languages (bug-review:language-detected)

Identify dominant languages using manifest files (Cargo.toml → Rust, package.json → Node, etc.).

State expertise persona appropriate for the language ecosystem.

Note version constraints (MSRV, Python versions, Node engines).
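The manifest heuristic above can be sketched as a small lookup. This is an illustrative sketch, not the skill's actual detection logic; the manifest-to-language map is an assumption and would be extended for real projects.

```python
from pathlib import Path

# Assumed manifest-to-language map; extend with other ecosystems as needed.
MANIFESTS = {
    "Cargo.toml": "Rust",
    "package.json": "Node",
    "pyproject.toml": "Python",
    "go.mod": "Go",
}

def detect_languages(repo_root: str) -> set[str]:
    """Return the languages whose manifest files exist at the repo root."""
    root = Path(repo_root)
    return {lang for name, lang in MANIFESTS.items() if (root / name).exists()}
```

A real implementation would also weigh file counts and nested manifests; this only covers the top-level heuristic.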

Progressive: Load modules/language-detection.md for detailed manifest heuristics.

Step 2: Plan Reproduction (bug-review:repro-plan)

Identify reproduction methods:

  • Unit/integration test suites
  • Fuzzing tools
  • Manual reproduction commands

Document exact commands:

cargo test -p core
pytest tests/test_api.py
npm test -- pkg

Verification: Run pytest -v tests/test_api.py and confirm the suite passes.

Capture blockers and propose mocks when dependencies unavailable.
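Proposing a mock when a dependency is unavailable can look like the sketch below. Both `get_greeting` and `fetch_user` are hypothetical names used only for illustration.

```python
from unittest import mock

def get_greeting(fetch_user):
    # Hypothetical code under review: depends on an external lookup.
    return f"hello, {fetch_user(42)['name']}"

def test_greeting_with_mocked_dependency():
    # The real service is unavailable, so substitute a Mock that
    # returns a canned record and verify it was called as expected.
    fake = mock.Mock(return_value={"name": "ada"})
    assert get_greeting(fake) == "hello, ada"
    fake.assert_called_once_with(42)
```

Documenting the mock alongside the blocker keeps the reproduction plan runnable even before the dependency is restored.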

Step 3: Document Defects (bug-review:defects-documented)

Review code line-by-line, logging each bug with:

  • File:line reference: Precise location
  • Severity: Critical, High, Medium, Low
  • Root cause: Logic error, API misuse, concurrency, resource leak
  • Impact: What breaks and how
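The fields above map naturally onto a small record type. This is a sketch of one possible shape, not a structure the skill itself defines.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    """One documented bug, mirroring the fields listed above."""
    location: str   # file:line reference, e.g. "src/parser.rs:142"
    severity: str   # "Critical" | "High" | "Medium" | "Low"
    root_cause: str # e.g. "Logic error", "Resource leak"
    impact: str     # what breaks and how
    fix: str = ""   # filled in during Step 4

def format_defect(tag: str, d: Defect) -> str:
    """Render a defect in the report format used later in this document."""
    return (f"### [{tag}] {d.location} - {d.root_cause}\n"
            f"- Severity: {d.severity}\n"
            f"- Impact: {d.impact}")
```

Keeping findings in a uniform record makes the later Output Format section trivial to generate.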

Run static analyzers (cargo clippy, ruff check, golangci-lint, eslint).

Use imbue:proof-of-work for reproducible capture.

Progressive: Load modules/defect-documentation.md for classification details and analyzer commands.

Step 4: Prepare Fixes (bug-review:fixes-prepared)

Draft minimal, idiomatic patches using language best practices:

  • Guard clauses (Rust: pattern matching, Python: early returns)
  • Resource cleanup (Go: defer, Python: context managers)
  • Error propagation (Rust: ?, Go: wrapped errors)
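Two of these patterns in Python form, as a minimal sketch with hypothetical functions:

```python
def average(values: list[float]) -> float:
    # Guard clause: reject the invalid input up front instead of nesting.
    if not values:
        raise ValueError("average() of empty list")
    return sum(values) / len(values)

def first_line(path: str) -> str:
    # Context manager: the file handle is closed on every exit path,
    # including exceptions, preventing a resource leak.
    with open(path) as fh:
        return fh.readline().rstrip("\n")
```

The same intent carries across ecosystems: Rust expresses the guard via pattern matching, Go expresses the cleanup via defer.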

Create tests following Red → Green pattern:

  1. Write failing test
  2. Apply minimal fix
  3. Verify test passes
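The Red → Green loop in miniature, using a hypothetical off-by-one defect: the test below would fail against a buggy implementation, and the slice-based fix is the smallest change that makes it pass.

```python
# Red: a test that reproduces the (hypothetical) defect and fails
# against the buggy implementation.
def test_head_keeps_order_and_length():
    assert head([1, 2, 3, 4], n=2) == [1, 2]
    assert head([1, 2], n=5) == [1, 2]

# Green: the minimal fix -- a slice instead of an off-by-one loop.
def head(items: list, n: int) -> list:
    """Return the first n items of `items`."""
    return items[:n]
```

The point is ordering, not the specific bug: commit the failing test first so the fix is demonstrably the thing that turned it green.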

Progressive: Load modules/fix-preparation.md for language-specific patterns and test strategies.

Step 5: Verification Plan (bug-review:verification-plan)

Execute reproduction steps with fixes applied.

Capture evidence:

  • Test output logs
  • Benchmark comparisons
  • Coverage reports
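Evidence capture can be as simple as running the verification command and persisting its output. A minimal sketch, assuming plain subprocess execution and a caller-supplied log path:

```python
import subprocess
from pathlib import Path

def capture_evidence(cmd: list[str], log_path: str) -> int:
    """Run a verification command and persist its output as evidence.

    Returns the command's exit code so callers can fail the plan
    when verification does not pass.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    Path(log_path).write_text(
        f"$ {' '.join(cmd)}\n"
        f"exit={result.returncode}\n"
        f"{result.stdout}{result.stderr}"
    )
    return result.returncode

# Example: capture_evidence(["pytest", "-v"], "evidence/test-run.log")
```

Recording the exact command and exit code alongside the output makes the evidence reproducible by anyone reviewing the report.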

Document remaining risks using imbue:diff-analysis/modules/risk-assessment-framework.

Assign owners and deadlines for follow-up items.

Defect Classification (Condensed)

Severity: Critical (crash/data loss) → High (broken features) → Medium (degraded UX) → Low (edge cases)

Root Causes: Logic errors | API misuse | Concurrency issues | Resource leaks | Validation gaps
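The severity ladder implies an ordering for triage. A sketch with an assumed numeric ranking so defects can be sorted worst-first:

```python
# Assumed rank values; lower number = more severe, matching the ladder above.
SEVERITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def triage(defects: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Sort (location, severity) pairs by severity, most severe first."""
    return sorted(defects, key=lambda d: SEVERITY_RANK[d[1]])
```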

Output Format

## Summary
[Brief scope description]

## Defects Found
### [D1] file.rs:142 - Title
- Severity: High
- Root Cause: Logic error
- Impact: Data corruption possible
- Fix: [description]

## Proposed Fixes
### Fix for D1
[code diff with explanation]

## Test Updates
[new/updated tests with Red → Green verification]

## Evidence
- Commands executed
- Logs and outputs
- External references

Verification: Run pytest -v and confirm all tests pass.

Best Practices

  1. Evidence-based: Every finding has file:line reference
  2. Reproducible: Clear steps to reproduce each bug
  3. Minimal fixes: Smallest change that fixes the issue
  4. Test coverage: Every fix has corresponding test
  5. Risk awareness: Document remaining risks with severity scoring

Exit Criteria

  • All defects documented with precise references
  • Fixes prepared with test coverage verified
  • Verification plan includes commands and expected outputs
  • Remaining risks assessed and owners assigned

Troubleshooting

Common Issues

Command not found: ensure all dependencies are installed and on PATH.

Permission errors: check file permissions and run with appropriate privileges.

Unexpected behavior: enable verbose logging with the --verbose flag.
