Imbue Proof Of Work

v1.0.0

Enforce validation and evidence before declaring work complete. Use for acceptance criteria and done gates.

Security Scan
Capability signals
Requires OAuth token
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (enforce validation and evidence) match the contents: SKILL.md and its modules consistently define evidence capture, validation protocols, acceptance criteria, and test-driven workflows. The requested operations (checking processes, files, and env vars; running smoke tests) are reasonable for this purpose.
Instruction Scope
Instructions tell the agent to run local verification commands (ps, cat, which, curl, git, etc.), check config files and environment variables, and capture outputs as evidence. That scope aligns with proof-of-work, but note that the skill explicitly directs the agent to read local files and environment state and to capture command outputs, which can expose secrets or sensitive configuration if present.
Install Mechanism
No install spec and no code files — instruction-only skill. This is low-risk from an installation perspective because nothing is downloaded or written by the skill itself.
Credentials
The skill declares no required env vars or credentials (appropriate), yet the guidance references reading environment variables and checking files (e.g., ENABLE_LSP_TOOL, PATH, .mcp.json). This is coherent for validation purposes, but the SKILL.md does access environment and context that it does not declare as required. It is not a credential exfiltration request; still, expect the agent to capture environment and config outputs when running the skill.
Persistence & Privilege
The skill sets always:false and requests no system config paths or other skills' config paths. Modules recommend preserving proof:* evidence items as an audit trail; the skill does not request elevated persistence or modify other skills. Autonomous invocation is enabled (the platform default), which is not exceptional here.
Assessment
This skill is an instruction-only checklist/framework for producing and capturing verification evidence; it does not install code or ask for secrets. However, when it is used, the agent will be instructed to run local commands and read files and environment variables, and the captured evidence may include sensitive data (process lists, config files, env vars, logs, or tokens present in files). Before installing or running:

  • Run the skill in a safe or sandboxed environment if you are concerned about leaking secrets.
  • Review and sanitize any files or environment variables you do not want captured.
  • Note that the modules say proof:* evidence should be retained as an audit trail, so logs may persist.
  • If you do not want the agent to run autonomously, restrict invocation or require user confirmation before verification steps execute.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🦞 Clawdis
latest: vk97bf1y2mb20tcpvdm5vqwkrps84pnjn
63 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

Night Market Skill — ported from claude-night-market/imbue. For the full experience with agents, hooks, and commands, install the Claude Code plugin.

Claims without evidence fail the people who depend on your work. Proof-of-work is humility in practice: "it looks correct" is not "I verified it works."

Proof of Work

Overview

The "Proof of Work" methodology prevents premature completion claims by requiring technical verification before stating that a task is finished. For example, instead of assuming an LSP configuration functions after a restart, we verify that the server starts and that tools respond to queries. This approach confirms the solution works before the user attempts validation.

Before claiming completion, provide reproducible evidence of the solution's performance and address edge cases. All claims must be backed by actual command output captured in the current environment.
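
As a minimal sketch of the difference between asserting and proving, assuming a Unix-like environment (the process name, log path, and prove helper are hypothetical placeholders):

```python
# Sketch: back a claim with captured command output instead of assumption.
import subprocess
from datetime import datetime, timezone

def prove(claim: str, command: list[str]) -> bool:
    """Run a verification command and append its real output to an evidence log."""
    result = subprocess.run(command, capture_output=True, text=True)
    stamp = datetime.now(timezone.utc).isoformat()
    with open("evidence.log", "a") as log:
        log.write(f"[{stamp}] CLAIM: {claim}\n$ {' '.join(command)}\n")
        log.write(f"exit={result.returncode}\n{result.stdout}{result.stderr}\n")
    return result.returncode == 0

verified = prove("LSP server process is running", ["pgrep", "-f", "lsp-server"])
print("verified" if verified else "NOT verified: do not claim completion")
```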

The Iron Law

NO IMPLEMENTATION WITHOUT A FAILING TEST FIRST
NO COMPLETION CLAIM WITHOUT EVIDENCE FIRST
NO CODE WITHOUT UNDERSTANDING FIRST

The Iron Law prevents testing from becoming a perfunctory exercise. If an implementation is planned before tests are written, the RED phase fails to drive the design. Understand the technical rationale for an approach and its limitations before declaring it done. Before writing code, document evidence of the failure being addressed and confirm that tests are driving the implementation.

Verification and TDD Workflow

Verify the fundamentals of the implementation and the reasons for choosing it over alternatives. Identify where a solution might fail rather than stating it should always work. The TDD cycle follows these mandatory steps:

  1. RED: Write a failing test before implementation.
  2. GREEN: Create a minimal implementation that passes the test.
  3. REFACTOR: Improve the code without changing its behavior.
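
A minimal sketch of one full cycle, assuming pytest as the test runner; the myproject.text module and slugify function are hypothetical names used only for illustration:

```python
# RED: written first; fails because slugify does not exist yet.
def test_slugify_replaces_spaces_with_hyphens():
    from myproject.text import slugify  # hypothetical module; import fails until GREEN
    assert slugify("Proof of Work") == "proof-of-work"

# GREEN: the minimal implementation in myproject/text.py that passes the test.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# REFACTOR: restructure freely; re-running the same test proves behavior is unchanged.
```

Running the test before slugify exists produces the required RED failure; only then is the implementation written.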

Iron Law Self-Check

| Self-Check Question | If Answer Is Wrong | Action |
|---|---|---|
| Do I have documented evidence of failure/need? | No | STOP - document failure first |
| Am I testing pre-conceived implementation? | Yes | STOP - let test DRIVE design |
| Am I feeling design uncertainty? | No | STOP - uncertainty is GOOD |
| Did test drive implementation? | No | STOP - doing it backwards |

Iron Law Progress Tracking

  • proof:iron-law-red: Failing test written before implementation.
  • proof:iron-law-green: Minimal implementation passes test.
  • proof:iron-law-refactor: Code improved without behavior change.
  • proof:iron-law-coverage: Coverage gates passed (line, branch, and mutation).

Confirm that work passes all line, branch, and mutation coverage gates. For detailed enforcement patterns, see iron-law-enforcement.md.
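
One way to make these gates machine-checkable, sketched here with a hypothetical evidence string:

```python
# Sketch: treat the proof:* items as explicit gates and refuse a completion
# claim until every gate has evidence attached.
IRON_LAW_GATES = [
    "proof:iron-law-red",
    "proof:iron-law-green",
    "proof:iron-law-refactor",
    "proof:iron-law-coverage",
]

evidence = {
    "proof:iron-law-red": "pytest output captured before implementation: 1 failed",
}

missing = [gate for gate in IRON_LAW_GATES if gate not in evidence]
if missing:
    raise SystemExit(f"Cannot claim completion; missing evidence for: {missing}")
```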

Usage Standards

Apply this skill before stating that work is "done," "finished," or "ready." Use it before recommending solutions or stating that a configuration "should work." Stop if you find yourself assuming a configuration is correct without testing it or recommending a fix without first reproducing the problem. Red flags include thinking "this looks correct" without actual verification. If you cannot explain each line of a configuration or why a specific practice applies to the current context, the necessary validation steps have been skipped.

Validation Protocol

Step 1: Reproduce the Problem (proof:problem-reproduced)

Before proposing a solution, verify the current state. Use tools like ps, echo, and cat to check running processes, environment variables, and configuration files. Document the failure with command output and error logs.
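
A sketch of such a state snapshot, using the ENABLE_LSP_TOOL variable and .mcp.json file mentioned earlier on this page as placeholders for your own environment:

```python
# Sketch: snapshot the current state before proposing a fix.
import os
from pathlib import Path

print("ENABLE_LSP_TOOL =", os.environ.get("ENABLE_LSP_TOOL", "<unset>"))

config = Path(".mcp.json")
if config.exists():
    print(config.read_text())  # record what the config actually contains
else:
    print(".mcp.json is missing; that absence is itself documented evidence")
```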

Step 2: Test the Solution (proof:solution-tested)

Before claiming a solution works, execute it in the current environment. Capture the actual output and confirm that it matches expected behavior. Do not rely on assumed output.
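
A sketch of comparing actual output against expected behavior; the command and expected string are stand-ins:

```python
# Sketch: run the proposed solution and compare its real output with the
# expected behavior, rather than assuming success.
import subprocess

result = subprocess.run(
    ["python", "-c", "print('server ready')"], capture_output=True, text=True
)
actual = result.stdout.strip()
expected = "server ready"

assert actual == expected, f"solution NOT verified: got {actual!r}"
print(f"verified: output matched {expected!r}")
```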

Step 3: Check for Known Issues (proof:edge-cases-checked)

Research known bugs and limitations related to the approach. Check GitHub issues, version compatibility, and official documentation to identify potential blockers or common pitfalls.
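
One hedged way to encode the results of that research, with a hypothetical known-bad version set:

```python
# Sketch: turn known-issue research into a version-compatibility check.
# Populate the known-bad set from the issues and changelogs you actually found.
from importlib.metadata import PackageNotFoundError, version

KNOWN_BAD_VERSIONS = {"2.1.0", "2.1.1"}  # hypothetical versions with documented bugs

try:
    installed = version("pip")  # substitute the dependency under review
except PackageNotFoundError:
    installed = None

if installed in KNOWN_BAD_VERSIONS:
    print(f"BLOCKER: version {installed} has a documented issue")
else:
    print(f"no known issues recorded for version {installed}")
```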

Step 4: Capture Evidence (proof:evidence-captured)

Use imbue:proof-of-work to document the commands executed, their output, timestamps, and the conclusions drawn from each step.
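
A sketch of what one evidence entry might contain; the field names and values are illustrative, not a defined imbue:proof-of-work schema:

```python
# Sketch of a single evidence entry: command, output, timestamp, conclusion.
import json
from datetime import datetime, timezone

entry = {
    "id": "proof:evidence-captured",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "command": "pgrep -f lsp-server",          # the command actually executed
    "output": "<paste captured stdout here>",  # real output only, never assumed
    "conclusion": "process is running; restart hypothesis confirmed",
}
print(json.dumps(entry, indent=2))
```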

Step 5: Prove Completion (proof:completion-proven)

Define acceptance criteria and validate each item. If a blocker is identified, document the diagnosis with evidence and provide workaround options instead of claiming completion.
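
A sketch pairing acceptance criteria with the proof items that validate them; all names and the proven set are placeholders:

```python
# Sketch: each acceptance criterion points at the proof item that validates it.
acceptance = {
    "Server starts without errors": "proof:solution-tested",
    "Tools respond to queries": "proof:evidence-captured",
}
proven = {"proof:solution-tested"}  # evidence actually on file

for criterion, proof in acceptance.items():
    status = "PROVEN" if proof in proven else "BLOCKED: document diagnosis and workaround"
    print(f"{criterion}: {status}")
```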

Integration

With Improvement Workflows

Use proof-of-work to validate improvement opportunities identified by /update-plugins or /fix-workflow. Document the baseline metrics (step count, failure rate, duration), test the proposed changes, and capture the improved metrics to demonstrate quantitative impact.
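
A sketch of the before/after comparison; the metric values are placeholders for measurements you actually capture:

```python
# Sketch: diff baseline metrics against post-change metrics to show impact.
baseline = {"step_count": 12, "failure_rate": 0.25, "duration_s": 340}
improved = {"step_count": 8, "failure_rate": 0.05, "duration_s": 190}

for metric, before in baseline.items():
    after = improved[metric]
    print(f"{metric}: {before} -> {after} ({after - before:+.3g})")
```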

Validation Checklist (Before Claiming "Done")

Verify that the problem was reproduced with evidence and the solution was tested in the actual environment. Research known issues and consider edge cases. Capture evidence in a reproducible format and confirm that all acceptance criteria are met. The completion statement must detail the specific tests run and their results, citing evidence for each claim.

Red Flag Self-Check

Before sending a completion message, confirm that you have run the recommended commands and captured their output. Verify that you have researched known issues and that the validation steps are reproducible by the user. Ensure you are proving rather than assuming.

Supporting Modules

  • iron-law-enforcement.md: detailed enforcement patterns for the Iron Law (referenced above).

Exit Criteria

Complete all progress tracking items. Create an evidence log with reproducible proofs. Define and validate acceptance criteria, and document any identified blockers.
