Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Nm Conserve Cpu Gpu Performance

v1.0.0

Establish CPU/GPU baselines before resource-intensive operations. Use for regression detection

Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name and description match the instructions: steps to capture utilization, profiling, throttling, and logging. However, the registry metadata declares a required config path (night-market.token-conservation) which is not referenced or justified by the instructions. A CPU/GPU baseline helper normally wouldn't need an external token/config; this is an inconsistency that should be explained by the author (integration with Night Market may be the reason).
Instruction Scope
SKILL.md only instructs local monitoring and profiling commands (uptime, ps, nvidia-smi, perf, nsys, etc.); its scope is limited to establishing baselines and managing resource usage. It does not instruct reading unrelated files, exfiltrating data, or calling external endpoints.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest-risk installation surface. Nothing is downloaded or written to disk by an installer.
Credentials
No env vars or binaries are required, but the single required config path (night-market.token-conservation) is potentially sensitive. The SKILL.md does not show any use of that config, so requesting it appears disproportionate to the stated purpose unless the skill integrates with a Night Market service. This should be justified: what is stored at that path, and why is it needed?
Persistence & Privilege
The 'always' flag is false and the skill is user-invocable; it does not request permanent presence or system-wide changes. The SKILL.md mentions auto-loading alongside token-conservation in its usage guidance, but that is a usage suggestion, not an enforced persistent privilege.
What to consider before installing
The skill's instructions for benchmarking and throttling CPU/GPU usage look reasonable and limited to local monitoring. Before installing, ask the publisher to explain why the skill requires the 'night-market.token-conservation' config path: what data or token is stored there, and how does the skill use it? If that config contains sensitive tokens, review the Night Market plugin code (or avoid installing) to ensure the skill won't read or transmit secrets. Also verify that the host has the profilers/commands it references (nvidia-smi, perf, nsys) and that you are comfortable granting the skill autonomous invocation in case it gets invoked by other Night Market automation.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🦞 Clawdis
Config: night-market.token-conservation
latest: vk97avm6mvrbnyxbsg6rmvh02h584nrza
64 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

Night Market Skill — ported from claude-night-market/conserve. For the full experience with agents, hooks, and commands, install the Claude Code plugin.

CPU/GPU Performance Discipline

When To Use

  • At the beginning of every session (auto-load alongside token-conservation).
  • Whenever you plan to build, train, or test anything that could pin CPU cores or GPUs for more than a minute.
  • Before retrying a failing command that previously consumed significant resources.

When NOT To Use

  • Simple operations with no resource impact
  • Quick single-file operations

Required TodoWrite Items

  1. cpu-gpu-performance:baseline
  2. cpu-gpu-performance:scope
  3. cpu-gpu-performance:instrument
  4. cpu-gpu-performance:throttle
  5. cpu-gpu-performance:log

Step 1: Establish Current Baseline

  • Capture current utilization:

    • uptime
    • ps -eo pcpu,cmd --sort=-pcpu | head (sort by CPU so the busiest processes actually appear)
    • nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv

    Note which hosts/GPUs are already busy.

  • Record any CI/cluster budgets (time quotas, GPU hours) before launching work.

  • Set a per-task CPU minute / GPU minute budget that respects those limits.
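The capture commands above can be combined into one snapshot script. A minimal sketch; the baseline.txt filename is an illustrative choice, and the script degrades gracefully on hosts without nvidia-smi:

```shell
#!/bin/sh
# Capture a CPU/GPU utilization baseline before launching heavy work.
OUT=baseline.txt

{
  echo "=== load average ==="
  uptime

  echo "=== top CPU consumers ==="
  ps -eo pcpu,cmd --sort=-pcpu | head -n 10

  echo "=== GPU utilization ==="
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv
  else
    echo "nvidia-smi not found; skipping GPU baseline"
  fi
} > "$OUT"

echo "baseline written to $OUT"
```

Keep the snapshot with your session notes so post-run numbers have something to be compared against.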

Step 2: Narrow the Scope

  • Avoid running "whole world" jobs after a small fix. Prefer diff-based or tag-based selective testing:
    • pytest -k
    • Bazel target patterns
    • cargo test <module>
  • Batch low-level fixes so you can validate multiple changes with a single targeted command.
  • For GPU jobs, favor unit-scale smoke inputs or lower epoch counts before scheduling the full training/eval sweep.
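Diff-based selection can be sketched as a small mapping from changed files to a single pytest -k expression. The sample file list below stands in for real `git diff --name-only` output, and the orders/refunds module names are hypothetical:

```shell
#!/bin/sh
# Diff-based test selection sketch: map changed source files to one
# narrow pytest selector instead of rerunning the whole suite.
# CHANGED stands in for: git diff --name-only HEAD~1 -- '*.py'
CHANGED="src/orders.py src/refunds.py"

SELECTED=""
for f in $CHANGED; do
  mod=$(basename "$f" .py)        # src/orders.py -> orders
  SELECTED="$SELECTED $mod"
done

# Join module names into a single -k expression: "orders or refunds"
EXPR=$(echo "$SELECTED" | sed 's/^ //; s/ / or /g')
echo "pytest -k \"$EXPR\""
```

The same mapping idea transfers to Bazel target patterns or `cargo test <module>`; only the selector syntax changes.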

Step 3: Instrument Before You Optimize

  • Pick the right profiler/monitor:
    • CPU work:
      • perf
      • Intel VTune
      • cargo flamegraph
      • language-specific profilers
    • GPU work:
      • nvidia-smi dmon
      • nsys
      • nvprof
      • DLProf
      • framework timeline tracers
  • Capture kernel/ops timelines, memory footprints, and data pipeline latency so you have evidence when throttling or parallelizing.
  • Record hot paths + I/O bottlenecks in notes so future reruns can jump straight to the culprit.
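A hedged instrumentation sketch, assuming perf and nvidia-smi may or may not be present on the host; WORKLOAD is a stand-in for the real build or training command, and the output filenames are illustrative:

```shell
#!/bin/sh
# Instrumentation sketch: cheap counters first, heavier traces (nsys, VTune)
# only once the counters point at a hot path.
WORKLOAD="sleep 1"

if command -v perf >/dev/null 2>&1; then
  # perf stat is low-overhead: cycle/instruction counts expose CPU-bound phases.
  perf stat -e cycles,instructions $WORKLOAD 2> perf_stat.txt || true
else
  # No perf available: record wall-clock time as a minimal substitute.
  start=$(date +%s)
  $WORKLOAD
  echo "wall seconds: $(( $(date +%s) - start ))" > perf_stat.txt
fi

if command -v nvidia-smi >/dev/null 2>&1; then
  # Per-second samples of GPU utilization/memory; compare idle vs. under load.
  nvidia-smi dmon -c 3 > gpu_dmon.txt || true
else
  echo "nvidia-smi not found; skipping GPU sampling"
fi
```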

Step 4: Throttle and Sequence Work

  • Use nice, ionice, or Kubernetes/Slurm quotas to prevent starvation of shared nodes.
  • Chain heavy tasks with guardrails:
    • Rerun only the failed test/module
    • Then (optionally) escalate to the next-wider shard
    • Reserve the full suite for the final gate
  • Stagger GPU kernels (smaller batch sizes or gradient accumulation) when memory pressure risks eviction; prefer checkpoint/restore over restarts.
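The nice/ionice advice above can be sketched as a wrapper; HEAVY_JOB is a stand-in command, and since ionice is Linux-specific the sketch falls back to nice alone where it is absent:

```shell
#!/bin/sh
# Throttling sketch: run a heavy job at low CPU and I/O priority so it
# cannot starve interactive work on a shared node.
HEAVY_JOB="sleep 1"

if command -v ionice >/dev/null 2>&1; then
  # nice 19 = lowest CPU priority; ionice -c 3 = idle I/O class (Linux).
  nice -n 19 ionice -c 3 $HEAVY_JOB || nice -n 19 $HEAVY_JOB
else
  nice -n 19 $HEAVY_JOB
fi
STATUS=$?
echo "heavy job exit status: $STATUS"
```

On Kubernetes or Slurm, resource requests/limits and partition QoS settings serve the same purpose as nice/ionice do on a single host.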

Step 5: Log Decisions and Next Steps

Conclude by documenting the commands that were run and their resource cost (duration, CPU%, GPU%), confirming whether they remained within the per-task budget. If a full suite or long training run was necessary, justify why selective or staged approaches were not feasible. Capture any follow-up tasks, such as adding a new test marker or profiling documentation, to simplify future sessions.
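The logging step can be made mechanical with a small append-only decision log; the 300-second budget, log filename, and field layout below are illustrative assumptions:

```shell
#!/bin/sh
# Decision-log sketch: append each heavy command's resource cost to a
# session log so budget compliance is auditable later.
LOG=perf_decisions.log
CMD="sleep 1"

start=$(date +%s)
$CMD
status=$?
dur=$(( $(date +%s) - start ))

# command | duration (s) | exit status | within the assumed 300 s budget?
printf '%s | %ss | exit=%s | within_budget=%s\n' \
  "$CMD" "$dur" "$status" \
  "$([ "$dur" -le 300 ] && echo yes || echo no)" >> "$LOG"

tail -n 1 "$LOG"
```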

Output Expectations

  • Brief summary covering:
    • baseline metrics
    • scope chosen
    • instrumentation captured
    • throttling tactics
    • follow-up items
  • Concrete example(s) of what ran (e.g.):
    • "reran pytest tests/test_orders.py -k test_refund instead of pytest -m slow"
    • "profiled nvidia-smi dmon output to prove GPU idle time before scaling"

Troubleshooting

Common Issues

  • Command not found: ensure all dependencies are installed and in PATH.
  • Permission errors: check file permissions and run with appropriate privileges.
  • Unexpected behavior: enable verbose logging with the --verbose flag.
