Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Nm Pensive Unified Review

v1.0.0

Orchestrate multiple review types into a single multi-domain review with integrated reporting

Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
The name and description (unified review orchestration) align with the declared needs: shared templates, proof-of-work (evidence capture), and structured-output configs in the night-market namespace. No unrelated binaries or credentials are required.
Instruction Scope
Runtime instructions legitimately require reading repository files, analyzing git diffs, running language detection, invoking other review skills, and assembling evidence. The SKILL.md also suggests running tests (pytest) and running repository scripts (example: python3 scripts/deferred_capture.py). Executing repo code is coherent for an active review workflow but carries execution risk — users should verify or sandbox any scripts the skill runs.
Install Mechanism
Instruction-only skill with no install spec and no code files to write to disk; this is the lowest-risk installation model.
Credentials
No environment variables or external credentials are requested. The three required config paths are internal night-market configuration keys used for templates, evidence capture, and structured output and are proportional to the skill's purpose.
Persistence & Privilege
always is false (no forced inclusion). The skill expects normal autonomous invocation behavior but does not request persistent system-level privileges or changes to other skills' configurations.
Assessment
This skill appears coherent for orchestrating multi-domain code reviews, but it will read your repository and may run tests or repository scripts. Before installing or invoking it: (1) run it in a sandbox or CI worker rather than on a sensitive local machine; (2) inspect any repo scripts it calls (e.g., scripts/deferred_capture.py) before allowing execution; (3) ensure your repo does not contain secrets that could be captured in evidence output; (4) if you want to limit blast radius, disable autonomous invocation or only invoke the skill manually; and (5) verify the night-market config keys it expects exist and are appropriate for your environment.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🦞 Clawdis
Config: night-market.pensive:shared, night-market.imbue:proof-of-work, night-market.imbue:structured-output
latest: vk9724n9hac2z0nyqqp8aq5r73984wn58
39 downloads
0 stars
1 version
Updated 4d ago
v1.0.0
MIT-0

Night Market Skill — ported from claude-night-market/pensive. For the full experience with agents, hooks, and commands, install the Claude Code plugin.

Unified Review Orchestration

Intelligently selects and executes appropriate review skills based on codebase analysis and context.

Quick Start

# Auto-detect and run appropriate reviews
/full-review

# Focus on specific areas
/full-review api          # API surface review
/full-review architecture # Architecture review
/full-review bugs         # Bug hunting
/full-review tests        # Test suite review
/full-review all          # Run all applicable skills

Verification: Run pytest -v to verify tests pass.

When To Use

  • Starting a full code review
  • Reviewing changes across multiple domains
  • Need intelligent selection of review skills
  • Want integrated reporting from multiple review types
  • Before merging major feature branches

When NOT To Use

  • Specific review type known - use that skill directly (e.g. bug-review)
  • Test-only focus - use test-review
  • Architecture-only focus - use architecture-review

Review Skill Selection Matrix

  Codebase Pattern                      Review Skills                         Triggers
  Rust files (*.rs, Cargo.toml)         rust-review, bug-review, api-review   Rust project detected
  API changes (openapi.yaml, routes/)   api-review, architecture-review       Public API surfaces
  Test files (test_*.py, *_test.go)     test-review, bug-review               Test infrastructure
  Makefile/build system                 makefile-review, architecture-review  Build complexity
  Mathematical algorithms               math-review, bug-review               Numerical computation
  Architecture docs/ADRs                architecture-review, api-review       System design
  General code quality                  bug-review, test-review               Default review

Workflow

1. Analyze Repository Context

  • Detect primary languages from extensions and manifests
  • Analyze git status and diffs for change scope
  • Identify project structure (monorepo, microservices, library)
  • Detect build systems, testing frameworks, documentation
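
The extension-and-manifest scan described above can be sketched as follows. This is a minimal illustration, not the skill's actual code: the `detect_languages` helper, the manifest list, and the weighting are all assumptions.

```python
from collections import Counter
from pathlib import Path

# Manifest files that identify a project's ecosystem (illustrative list).
MANIFESTS = {
    "Cargo.toml": "rust",
    "pyproject.toml": "python",
    "go.mod": "go",
    "package.json": "javascript",
}

EXTENSIONS = {".rs": "rust", ".py": "python", ".go": "go", ".ts": "typescript"}

def detect_languages(repo_root: str) -> Counter:
    """Count language signals from file extensions and manifest files."""
    signals = Counter()
    for path in Path(repo_root).rglob("*"):
        if not path.is_file():
            continue
        if path.name in MANIFESTS:
            # A manifest is stronger evidence than a stray source file.
            signals[MANIFESTS[path.name]] += 5
        elif path.suffix in EXTENSIONS:
            signals[EXTENSIONS[path.suffix]] += 1
    return signals
```

The highest-weighted entry in the returned counter would then drive skill selection in step 2.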

2. Select Review Skills

# Detection logic
if has_rust_files():
    schedule_skill("rust-review")
if has_api_changes():
    schedule_skill("api-review")
if has_test_files():
    schedule_skill("test-review")
if has_makefiles():
    schedule_skill("makefile-review")
if has_math_code():
    schedule_skill("math-review")
if has_architecture_changes():
    schedule_skill("architecture-review")
# Default
schedule_skill("bug-review")

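The predicate helpers in the detection logic above are not defined in this document. A minimal, assumed implementation of two of them might look like this (the repo-root convention is an assumption):

```python
from pathlib import Path

# Assumed: the current working directory is the repository root.
REPO_ROOT = Path(".")

def has_rust_files() -> bool:
    """True if the repo contains Rust sources or a Cargo manifest."""
    return any(REPO_ROOT.rglob("*.rs")) or (REPO_ROOT / "Cargo.toml").exists()

def has_test_files() -> bool:
    """True if the repo contains pytest- or Go-style test files."""
    return any(REPO_ROOT.rglob("test_*.py")) or any(REPO_ROOT.rglob("*_test.go"))
```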

3. Execute Reviews

Dispatch selected skills concurrently via the Agent tool. Use this mapping to resolve skill names to agent types:

  Skill Name           Agent Type                     Notes
  bug-review           pensive:code-reviewer          Covers bugs, API, tests
  api-review           pensive:code-reviewer          Same agent, API focus
  test-review          pensive:code-reviewer          Same agent, test focus
  architecture-review  pensive:architecture-reviewer  ADR compliance
  rust-review          pensive:rust-auditor           Rust-specific
  code-refinement      pensive:code-refiner           Duplication, quality
  math-review          general-purpose                Prompt: invoke Skill(pensive:math-review)
  makefile-review      general-purpose                Prompt: invoke Skill(pensive:makefile-review)
  shell-review         general-purpose                Prompt: invoke Skill(pensive:shell-review)

Rules:

  • Never use skill names as agent types (e.g., pensive:math-review is NOT an agent)
  • When pensive:code-reviewer covers multiple domains, dispatch once with combined scope
  • For skills without dedicated agents, use general-purpose and instruct it to invoke the Skill tool
  • Maintain consistent evidence logging across all agents
  • Track progress via TodoWrite
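
Resolving scheduled skills to agent dispatches under these rules, including collapsing the skills that share pensive:code-reviewer into a single combined-scope dispatch, could be sketched as (the `plan_dispatches` helper is invented for illustration; the mapping itself is the table above):

```python
# Skill-name → agent-type mapping, taken from the table above.
SKILL_TO_AGENT = {
    "bug-review": "pensive:code-reviewer",
    "api-review": "pensive:code-reviewer",
    "test-review": "pensive:code-reviewer",
    "architecture-review": "pensive:architecture-reviewer",
    "rust-review": "pensive:rust-auditor",
    "code-refinement": "pensive:code-refiner",
    # Skills without dedicated agents go through general-purpose.
    "math-review": "general-purpose",
    "makefile-review": "general-purpose",
    "shell-review": "general-purpose",
}

def plan_dispatches(skills: list[str]) -> dict[str, list[str]]:
    """Group scheduled skills by agent type so a shared agent runs once."""
    plan: dict[str, list[str]] = {}
    for skill in skills:
        agent = SKILL_TO_AGENT[skill]
        plan.setdefault(agent, []).append(skill)
    return plan
```

Each key in the returned plan becomes one Agent-tool dispatch, with its list of skills as the combined scope.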

4. Integrate Findings

  • Consolidate findings across domains
  • Identify cross-domain patterns
  • Prioritize by impact and effort
  • Generate unified action plan

Deferred capture for backlog findings: Findings that are triaged to the backlog (out-of-scope for the current review or deferred by the team) should be preserved so they are not lost between review cycles. For each finding assigned to the backlog, run:

python3 scripts/deferred_capture.py \
  --title "<finding title>" \
  --source review \
  --context "Review dimension: <dimension>. <finding description>"

The <dimension> value should match the review skill that surfaced the finding (e.g. bug-review, api-review, architecture-review). This runs automatically after the action plan is finalized, without prompting the user.
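
If the orchestrator drives this from Python rather than a shell, the invocation above could be wrapped like this. The flags are exactly those shown above; the wrapper and builder function names are invented for illustration.

```python
import subprocess

def build_capture_command(title: str, dimension: str, description: str) -> list[str]:
    """Build the deferred_capture.py invocation for one backlog finding."""
    return [
        "python3", "scripts/deferred_capture.py",
        "--title", title,
        "--source", "review",
        "--context", f"Review dimension: {dimension}. {description}",
    ]

def capture_deferred(title: str, dimension: str, description: str) -> None:
    """Run the capture script, failing loudly on a nonzero exit."""
    subprocess.run(build_capture_command(title, dimension, description), check=True)
```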

Review Modes

Auto-Detect (default)

Automatically selects skills based on codebase analysis.

Focused Mode

Run specific review domains:

  • /full-review api → api-review only
  • /full-review architecture → architecture-review only
  • /full-review bugs → bug-review only
  • /full-review tests → test-review only

Full Review Mode

Run all applicable review skills:

  • /full-review all → Execute all detected skills

Quality Gates

Each review must:

  1. Establish proper context
  2. Execute all selected skills successfully
  3. Document findings with evidence
  4. Prioritize recommendations by impact
  5. Create action plan with owners

Deliverables

Executive Summary

  • Overall codebase health assessment
  • Critical issues requiring immediate attention
  • Review frequency recommendations

Domain-Specific Reports

  • API surface analysis and consistency
  • Architecture alignment with ADRs
  • Test coverage gaps and improvements
  • Bug analysis and security findings
  • Performance and maintainability recommendations

Integrated Action Plan

  • Prioritized remediation tasks
  • Cross-domain dependencies
  • Assigned owners and target dates
  • Follow-up review schedule

Modular Architecture

All review skills use a hub-and-spoke architecture with progressive loading:

  • pensive:shared: Common workflow, output templates, quality checklists
  • Each skill has modules/: Domain-specific details loaded on demand
  • Cross-plugin deps: imbue:proof-of-work, imbue:diff-analysis/modules/risk-assessment-framework

This reduces token usage by 50-70% for focused reviews while maintaining full capabilities.
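
Progressive loading of this kind can be illustrated as lazy, cached reads of a skill's modules/ directory. The file layout and `load_module` helper are assumptions based on the description above, not the skill's actual mechanism.

```python
from pathlib import Path

def load_module(skill_dir: str, module: str, cache: dict[str, str]) -> str:
    """Read a skill's module file only on first use, caching the result."""
    if module not in cache:
        path = Path(skill_dir) / "modules" / f"{module}.md"
        cache[module] = path.read_text()
    return cache[module]
```

A focused review that never touches a domain never pays the token cost of that domain's module.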

Exit Criteria

  • All selected review skills executed
  • Findings consolidated and prioritized
  • Action plan created with ownership
  • Evidence logged per structured output format

Supporting Modules

Troubleshooting

Common Issues

If the auto-detection fails to identify the correct review skills, explicitly specify the mode (e.g., /full-review rust instead of just /full-review). If integration fails, check that TodoWrite logs are accessible and that evidence files were correctly written by the individual skills.
