Pensive Math Review

v1.0.0

Verify math-heavy code for algorithm correctness, numerical stability, and standards alignment

Security Scan
Capability signals
Crypto: Can make purchases
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
OpenClaw
Benign
medium confidence
Purpose & Capability
Name/description match the contents: the skill is an instruction-only math/algorithm review that walks through context-sync, requirements mapping, symbolic derivation checks, numerical-stability checks, and test execution. No unrelated credentials or unrelated binaries are requested. The two declared config paths (night-market.pensive:shared, night-market.imbue:proof-of-work) plausibly belong to the Night Market ecosystem; their presence is not obviously inconsistent with a plugin that integrates with that system.
Instruction Scope
Instructions explicitly tell the agent to run repository commands (git status/diff), execute tests (pytest tests/math/ --benchmark) and execute notebooks (jupyter nbconvert --execute derivation.ipynb). That is coherent for a code-review / verification skill, but executing tests or notebooks in an unchecked environment can run arbitrary code from the repository (including network access or side effects). The SKILL.md does not include any sandboxing or explicit restrictions, nor does it list required numerical packages (SymPy, NumPy, Jupyter) even though examples reference them.
Install Mechanism
No install spec or code files are present (instruction-only), so nothing is downloaded or written by the skill itself. This is the lowest-risk install model and matches the declared manifest.
Credentials
The skill declares no required environment variables and does not request broad cloud credentials. It does require two Night Market config paths (night-market.pensive:shared, night-market.imbue:proof-of-work). The manifest does not show what these configs contain; they could be harmless settings or could hold tokens/credentials. Because the skill executes repo tests/notebooks, it could indirectly access secrets present in the repo or environment when run — consider limiting its access or inspecting those config entries before installing.
Persistence & Privilege
always:false and user-invocable:true (default) — no forced inclusion. The skill does not declare any behavior that modifies other skills or system-wide agent settings. Autonomous invocation is permitted by platform default but is not combined with other high-risk privileges in this skill.
Assessment
This skill appears to do what it says: guide a human/agent through math-heavy code review and run tests/notebooks to gather evidence. Before installing or running it:

  1. Inspect the two Night Market config entries referenced (night-market.pensive:shared and night-market.imbue:proof-of-work) to ensure they don't contain secrets or tokens you don't want exposed.
  2. Do not execute the skill against untrusted repositories without sandboxing: pytest and executing Jupyter notebooks can run arbitrary code, access the network, or exfiltrate data.
  3. Ensure the execution environment has the needed numerical tools (SymPy, NumPy, SciPy, Jupyter) pinned and isolated (e.g., container/VM).
  4. If you plan to allow autonomous invocation by an agent, restrict its runtime privileges (network, filesystem scope) or require manual approval for executing tests/notebooks.

If you can confirm the config entries are safe and you will run the skill in an isolated environment, the skill is coherent with its stated purpose.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🦞 Clawdis
Config: night-market.pensive:shared, night-market.imbue:proof-of-work
latest: vk97ck43z5t76baf8826rsnr2j984xn0d
47 downloads
0 stars
1 version
Updated 5d ago
v1.0.0
MIT-0

Night Market Skill — ported from claude-night-market/pensive. For the full experience with agents, hooks, and commands, install the Claude Code plugin.

Mathematical Algorithm Review

Intensive analysis ensuring numerical stability and alignment with standards.

Quick Start

/math-review

Verification: Run the command with --help flag to verify availability.

When To Use

  • Changes to mathematical models or algorithms
  • Statistical routines or probabilistic logic
  • Numerical integration or optimization
  • Scientific computing code
  • ML/AI model implementations
  • Safety-critical calculations

When NOT To Use

  • General algorithm review - use architecture-review
  • Performance optimization - use parseltongue:python-performance

Required TodoWrite Items

  1. math-review:context-synced
  2. math-review:requirements-mapped
  3. math-review:derivations-verified
  4. math-review:stability-assessed
  5. math-review:evidence-logged

Core Workflow

1. Context Sync

pwd && git status -sb && git diff --stat origin/main..HEAD

Verification: Run git status to confirm working tree state. Enumerate math-heavy files (source, tests, docs, notebooks). Classify risk: safety-critical, financial, ML fairness.

2. Requirements Mapping

Translate requirements → mathematical invariants. Document pre/post conditions, conservation laws, bounds. Load: modules/requirements-mapping.md
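One way to see what "requirements → invariants" looks like in practice: the sketch below encodes a conservation law as pre/postcondition assertions. The `normalize` function and its contract are hypothetical examples, not part of this skill or its modules.

```python
import math

def normalize(weights):
    """Hypothetical example: turn non-negative weights into a probability vector.

    Precondition: all weights >= 0 and at least one weight > 0.
    Postcondition (conservation law): outputs sum to 1 within float tolerance.
    """
    assert all(w >= 0 for w in weights), "precondition: non-negative weights"
    total = sum(weights)
    assert total > 0, "precondition: at least one positive weight"
    result = [w / total for w in weights]
    # Postcondition: normalized weights conserve total probability mass.
    assert math.isclose(sum(result), 1.0, rel_tol=1e-12)
    return result

print(normalize([1, 2, 3]))  # fractions summing to 1
```

Documenting invariants this way makes them directly testable in step 5.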

3. Derivation Verification

Re-derive formulas using CAS. Challenge approximations. Cite authoritative standards (NASA-STD-7009, ASME VVUQ). Load: modules/derivation-verification.md
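The module file is not shown here, but a CAS re-derivation can be as simple as the SymPy sketch below: symbolically differentiate the source expression and check that the difference from the implemented closed form simplifies to zero. The formula itself is a made-up illustration.

```python
import sympy as sp

x = sp.symbols('x')

# Claimed closed form in the code under review (hypothetical example):
# d/dx [x * exp(-x)] = (1 - x) * exp(-x)
implemented = (1 - x) * sp.exp(-x)

# Re-derive independently with the CAS.
derived = sp.diff(x * sp.exp(-x), x)

# If the difference simplifies to 0, the formulas agree symbolically.
assert sp.simplify(implemented - derived) == 0
print("derivation verified")
```

A symbolic match rules out algebra slips; approximations (truncated series, dropped terms) still need the explicit challenge the step calls for.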

4. Stability Assessment

Evaluate conditioning, precision, scaling, randomness. Compare complexity. Quantify uncertainty. Load: modules/numerical-stability.md
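As a minimal conditioning check, the sketch below uses NumPy on a Hilbert matrix, a standard ill-conditioned example; the matrix choice is illustrative, not something this skill prescribes.

```python
import numpy as np

# Hilbert matrices are notoriously ill-conditioned, so solving A x = b
# with one amplifies input error by roughly cond(A).
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
cond = np.linalg.cond(A)
print(f"condition number: {cond:.2e}")

# Rule of thumb: with float64 (~16 significant digits), expect to lose
# about log10(cond) digits of accuracy in the solution.
digits_lost = np.log10(cond)
print(f"~{digits_lost:.0f} digits of accuracy at risk")
```

A review would flag any solve whose condition number eats most of the working precision and recommend regularization, higher precision, or a reformulation.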

5. Proof of Work

pytest tests/math/ --benchmark
jupyter nbconvert --execute derivation.ipynb

Verification: Run pytest -v tests/math/ to verify. Log deviations, recommend: Approve / Approve with actions / Block. Load: modules/testing-strategies.md
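To make the "tests cover invariants" evidence concrete, here is a pytest-style sketch of the kind of test that could live under tests/math/. The `variance` helper, Welford's algorithm, and the sample data are illustrative assumptions, not part of the skill.

```python
import math

def variance(xs):
    # Welford's online algorithm: avoids the catastrophic cancellation of
    # the naive E[x^2] - E[x]^2 formula when values share a large offset.
    mean, m2 = 0.0, 0.0
    for k, x in enumerate(xs, start=1):
        delta = x - mean
        mean += delta / k
        m2 += delta * (x - mean)
    return m2 / len(xs)

def test_variance_invariants():
    # Large shared offset stresses numerical stability.
    data = [1e9 + v for v in (4.0, 7.0, 13.0, 16.0)]
    v = variance(data)
    assert v >= 0.0                # invariant: variance is non-negative
    assert math.isclose(v, 22.5)   # matches hand-computed population variance

test_variance_invariants()
print("invariant test passed")
```

Logging which invariants each test exercises is what turns a passing run into reviewable proof of work.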

Progressive Loading

Default (200 tokens): Core workflow, checklists
+Requirements (+300 tokens): Invariants, pre/post conditions, coverage analysis
+Derivation (+350 tokens): CAS verification, standards, citations
+Stability (+400 tokens): Numerical properties, precision, complexity
+Testing (+350 tokens): Edge cases, benchmarks, reproducibility

Total with all modules: ~1600 tokens

Essential Checklist

Correctness: Formulas match spec | Edge cases handled | Units consistent | Domain enforced
Stability: Condition number OK | Precision sufficient | No cancellation | Overflow prevented
Verification: Derivations documented | References cited | Tests cover invariants | Benchmarks reproducible
Documentation: Assumptions stated | Limitations documented | Error bounds specified | References linked
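The "No cancellation" item is the subtlest on the list. As a hedged illustration (the function and coefficients are made up for this example), the standard rearrangement of the quadratic formula shows what a reviewer is looking for:

```python
import math

def quadratic_roots_stable(a, b, c):
    # The textbook formula (-b ± sqrt(b^2 - 4ac)) / 2a subtracts nearly
    # equal numbers when b*b >> 4*a*c, losing most digits of the small root.
    disc = math.sqrt(b * b - 4 * a * c)
    # Compute the larger-magnitude root first (an addition, no cancellation),
    # then recover the other root from the product of roots c/a.
    q = -0.5 * (b + math.copysign(disc, b))
    return q / a, c / q

r1, r2 = quadratic_roots_stable(1.0, -1e8, 1.0)
print(r1, r2)  # roots near 1e8 and 1e-8; the naive formula computes the
               # small root with large relative error
```

Spotting a naive formula and suggesting this kind of rearrangement is exactly the fix this checklist item asks for.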

Output Format

## Summary
[Brief findings]

## Context
Files | Risk classification | Standards

## Requirements Analysis
| Invariant | Verified | Evidence |

## Derivation Review
[Status and conflicts]

## Stability Analysis
Condition number | Precision | Risks

## Issues
[M1] [Title]: Location | Issue | Fix

## Recommendation
Approve / Approve with actions / Block


Exit Criteria

  • Context synced, requirements mapped, derivations verified, stability assessed, evidence logged with citations

Troubleshooting

Common Issues

Command not found: Ensure all dependencies are installed and in PATH

Permission errors: Check file permissions and run with appropriate privileges

Unexpected behavior: Enable verbose logging with the --verbose flag
