Nm Imbue Review Core

v1.0.0

Reusable scaffolding for review workflows with context establishment, evidence capture, and structured output

Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description promise a reusable review scaffold, and the SKILL.md contains steps that directly implement it (establish context, inventory scope, capture evidence, structure deliverables). The commands and artifacts referenced (git, ls, rg, cargo, make, web.run citations) are what a reviewer would reasonably use.
Instruction Scope
Instructions explicitly direct the agent to run repository and environment-inspection commands and to log command outputs and snippets. This is expected for a review scaffold, but it means the agent will read repository contents and command outputs (which could include secrets or sensitive config) and may reference web.run for external citations. The skill does not instruct reading unrelated system paths, but the evidence-capture step is broad by design.
Install Mechanism
No install spec and no code files — the skill is instruction-only, so nothing is written to disk and there is minimal install risk.
Credentials
The skill declares no required environment variables or credentials and requests only standard repository inspection commands. However, because it tells the agent to capture and log command outputs and to run tools like cargo/rg, reviewers should be aware those commands can surface secrets or sensitive configuration if present in the repo; the SKILL.md does not request unrelated cloud credentials or secret tokens.
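
The caution above can be made concrete: before enabling evidence capture on a repository, a rough pre-scan shows what the skill's logging could surface. This is an illustrative sketch, the pattern list is made up and far from exhaustive, and the .env file is fabricated purely to show what a hit looks like; a dedicated secret scanner (gitleaks, trufflehog, etc.) is the right choice in practice.

```shell
set -eu
# Stand-in directory for the repo under review; the .env file is fabricated
# solely so the scan has something to find.
dir=$(mktemp -d)
printf 'AWS_SECRET_ACCESS_KEY=abc123\n' > "$dir/.env"

# Crude pre-scan; the patterns are illustrative, not exhaustive.
hits=$(grep -rniE 'secret|api[_-]?key|token|password' "$dir" || true)
echo "$hits"
```

If this reports hits, either exclude those paths from the review scope or add a redaction step before evidence capture logs anything.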
Persistence & Privilege
The skill's always flag is false, and there is no install-time or persistent configuration requested. The skill does not request system-level privileges or attempt to modify other skills or global agent settings.
Assessment
This skill is coherent with its stated purpose: it guides the agent through repository inspection, evidence capture, and report structuring. Before enabling it, confirm what tools your agent is allowed to run and whether it may send captured outputs to external endpoints (e.g., via web.run or other network-capable tools). Because the workflow explicitly logs command outputs, avoid running it on repositories that contain secrets you don't want exposed, or ensure your agent/tooling redacts secrets and restricts outbound network access. If you need tighter control, consider editing the SKILL.md to explicitly exclude sensitive paths/commands or to add redaction steps.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🦞 Clawdis
latest: vk975axbf1rkkba5j63z3svwy1584q075
61 downloads · 0 stars · 1 version · Updated 1w ago
v1.0.0
MIT-0

Night Market Skill — ported from claude-night-market/imbue. For the full experience with agents, hooks, and commands, install the Claude Code plugin.

Core Review Workflow

Table of Contents

  1. When to Use
  2. Activation Patterns
  3. Required TodoWrite Items
  4. Step 1 – Establish Context
  5. Step 2 – Inventory Scope
  6. Step 3 – Capture Evidence
  7. Step 4 – Structure Deliverables
  8. Step 5 – Contingency Plan
  9. Troubleshooting

When To Use

  • Use this skill at the beginning of any detailed review workflow (e.g., for architecture, math, or an API).
  • It provides a consistent structure for capturing context, logging evidence, and formatting the final report, which makes the findings of different reviews comparable.

When NOT To Use

  • Diff-focused analysis: use the diff-analysis skill instead.

Activation Patterns

Trigger Keywords: review, audit, analysis, assessment, evaluation, inspection

Contextual Cues:

  • "review this code/design/architecture"
  • "conduct an audit of"
  • "analyze this for issues"
  • "evaluate the quality of"
  • "perform an assessment"

Auto-Load When: Any review-specific workflow is detected or when analysis methodologies are requested.
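
The trigger logic above can be sketched as a simple keyword match. The matching strategy here is an assumption; a real loader may use richer heuristics than a regex, and the sample request is invented.

```shell
set -eu
# Hypothetical auto-load check: does the request mention any trigger keyword?
request="please review this code for issues"
loaded=no
if printf '%s\n' "$request" | grep -qiE 'review|audit|analysis|assessment|evaluation|inspection'; then
  loaded=yes
fi
echo "auto-load review-core: $loaded"
```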

Required TodoWrite Items

  1. review-core:context-established
  2. review-core:scope-inventoried
  3. review-core:evidence-captured
  4. review-core:deliverables-structured
  5. review-core:contingencies-documented

Step 1 – Establish Context (review-core:context-established)

  • Confirm pwd, repo, branch, and upstream base (e.g., git status -sb, git rev-parse --abbrev-ref HEAD).
  • Note comparison target (merge base, release tag) so later diffs reference a concrete range.
  • Summarize the feature/bug/initiative under review plus stakeholders and deadlines.
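
The bullets above boil down to a handful of commands. The sketch below builds a scratch repository so it is self-contained; in a real review you would run the same inspection commands inside the repository under review, and the base would be a merge base or release tag rather than HEAD.

```shell
set -eu
# Scratch repo so the sketch runs anywhere; substitute the real repo in practice.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.email=r@example.com -c user.name=reviewer \
    commit -q --allow-empty -m "baseline"

# Step 1: pin down where you are before reading anything else.
where=$(pwd)
branch=$(git rev-parse --abbrev-ref HEAD)
base=$(git rev-parse HEAD)     # in practice: a merge base or release tag
git status -sb                 # branch plus upstream drift at a glance
echo "reviewing $where on branch=$branch against base=$base"
```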

Step 2 – Inventory Scope (review-core:scope-inventoried)

  • List relevant artifacts for this review: source files, configs, docs, specs, generated assets (OpenAPI, Makefiles, ADRs, notebooks, etc.).
  • Record how you enumerated them (commands like rg --files -g '*.mk', ls docs, cargo metadata).
  • Capture assumptions or constraints inherited from the plan/issue so the domain-specific analysis can cite them.
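
A minimal sketch of the inventory step, using a throwaway tree in place of a real repository (the file names are invented). The point is to record the artifact list and the exact command that produced it together, so the inventory is reproducible.

```shell
set -eu
# Throwaway tree standing in for the repository under review.
scope=$(mktemp -d)
mkdir -p "$scope/docs" "$scope/src"
touch "$scope/docs/adr-001.md" "$scope/src/lib.rs" "$scope/Makefile"

# Log the enumeration command alongside its output. (rg --files, ls docs,
# or cargo metadata work equally well when those tools are available.)
cmd="find . -type f | sort"
inventory=$(cd "$scope" && find . -type f | sort)
printf 'enumerated with: %s\n%s\n' "$cmd" "$inventory"
count=$(printf '%s\n' "$inventory" | grep -c .)
```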

Step 3 – Capture Evidence (review-core:evidence-captured)

  • Log every command/output that informs the review (e.g., git diff --stat, make -pn, cargo doc, web.run citations). Keep snippets or line numbers for later reference.
  • Track open questions or variances found during preflight; if they block progress, record owners/timelines now.
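
One way to keep this logging honest is a small capture helper that records each command line next to its output. The helper name and log format here are assumptions, not part of the skill; the idea is simply that findings in the report can later cite a specific logged probe.

```shell
set -eu
log=$(mktemp)
capture() {
  printf '$ %s\n' "$*" >>"$log"     # record the command itself
  "$@" >>"$log" 2>&1 || true        # a failing probe is still evidence
  printf '\n' >>"$log"
}
capture pwd
capture ls /no/such/path            # failures get logged too, with their errors
cat "$log"
```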

Step 4 – Structure Deliverables (review-core:deliverables-structured)

  • Prepare the reporting skeleton shared by all reviews:
    • Summary (baseline, scope, recommendation)
    • Ordered findings (severity, file:line, principle violated, remediation)
    • Follow-up tasks (owner + due date)
    • Evidence appendix (commands, URLs, notebooks)
  • Validate that the domain-specific checklist will populate each section before concluding.
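
The skeleton above can be emitted as a starting file. Section names follow step 4; the bracketed values and the sample finding line are placeholders, not prescribed formats.

```shell
set -eu
report=$(mktemp)
cat >"$report" <<'EOF'
# Review Report

## Summary
- Baseline: <commit or tag>
- Scope: <files / areas reviewed>
- Recommendation: <approve / approve-with-nits / block>

## Findings (ordered by severity)
1. [severity] <file:line> - <principle violated> - <remediation>

## Follow-up Tasks
- [ ] <task> (owner: <name>, due: <date>)

## Evidence Appendix
- <commands run, URLs, notebooks>
EOF
cat "$report"
```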

Step 5 – Contingency Plan (review-core:contingencies-documented)

  • If a required tool or skill is unavailable (e.g., web.run), document the alternative steps that will be taken and any limitations this introduces. This helps reviewers understand any gaps in coverage.
  • Note any outstanding approvals or data needed to complete the review.

Exit Criteria

  • All TodoWrite items complete with concrete notes (commands run, files listed, evidence paths).
  • Domain-specific review can now assume consistent context/evidence/deliverable scaffolding and focus on specialized analysis.

Troubleshooting

Common Issues

Command not found: ensure all required tools (git, rg, cargo, make, etc.) are installed and on PATH.

Permission errors: check file permissions and rerun with appropriate privileges.

Unexpected behavior: enable verbose logging (e.g., a --verbose flag) where the tool supports it.
