Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Nm Abstract Skills Eval

v1.8.3

Evaluate and improve Claude skill quality through auditing

Security Scan
VirusTotal: Pending
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description (skills evaluation and improvement) align with the contents: the SKILL.md and modules are focused on auditing, scoring, and optimizing skills. The declared required config paths (night-market.modular-skills, night-market.performance-optimization) are consistent with a Night Market integration.
Instruction Scope
SKILL.md repeatedly instructs running CLI commands and Python scripts (make, skill_analyzer.py, token_estimator.py, scripts/*, integration-tester, etc.) and shows example code that reads skill files and runs subprocesses. However, this skill bundle contains only markdown modules (no scripts or executables). That mismatch means the runtime instructions expect tools/resources that are not delivered here; running such examples would execute subprocesses and could run arbitrary binaries if present on the host.
Install Mechanism
No install spec (instruction-only) which reduces installation risk. However the content references a scripts/ directory and CLI tools that are not present in the provided manifest — users should confirm those scripts exist in the upstream repo before executing any commands.
Credentials
The skill declares no environment variables, no credentials, and does not request system-wide secrets. The only external requirements are specific configuration paths for Night Market; ensure those config keys do not contain unrelated secrets before granting the skill access to them.
Persistence & Privilege
The manifest sets always: false and uses default invocation behavior. The skill does not request permanent/privileged presence and does not appear to modify other skills' configurations in its docs. If you allow autonomous invocation, standard platform cautions apply, but no additional privilege is requested here.
What to consider before installing
This is a documentation-first auditing skill (no code files included) that describes many scripts and command-line tools but does not bundle them. Before installing or running it:

  1. Verify you have the referenced claude-night-market repository or the Claude Code plugin that provides the scripts.
  2. Inspect any external scripts (scripts/, integration-tester, compliance-checker, etc.) before running; the examples use subprocess execution, which can run arbitrary binaries.
  3. Check the Night Market config paths (night-market.modular-skills, night-market.performance-optimization) to ensure they don't expose unrelated secrets.
  4. If you enable autonomous agent invocation, be cautious: an agent following these docs could execute local tools or run commands if those tools are present.

If you need certainty, ask the maintainer for the associated scripts or a full package that includes the executable tooling before proceeding.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🦞 Clawdis
Config: night-market.modular-skills, night-market.performance-optimization
latest: vk972wrb3195eb0hy76xatx2st184kafv
87 downloads
0 stars
3 versions
Updated 1w ago
v1.8.3
MIT-0

Night Market Skill — ported from claude-night-market/abstract. For the full experience with agents, hooks, and commands, install the Claude Code plugin.

Skills Evaluation and Improvement

Table of Contents

  1. Overview
  2. Quick Start
  3. Evaluation Workflow
  4. Evaluation and Optimization
  5. Resources

Overview

This framework audits Claude skills against quality standards to improve performance and reduce token consumption. Automated tools analyze skill structure, measure context usage, and identify specific technical improvements. Run verification commands after each audit to confirm fixes work correctly.

The skills-auditor provides structural analysis, the improvement-suggester ranks fixes by impact, and the compliance-checker verifies standards compliance. The tool-performance-analyzer and token-usage-tracker monitor runtime efficiency.

Quick Start

Basic Audit

Run a full audit of all skills or target a specific file to identify structural issues.

# Audit all skills
make audit-all

# Audit specific skill
make audit-skill TARGET=path/to/skill/SKILL.md
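As noted in the security scan, the audit scripts themselves are not bundled here. A minimal sketch of the kind of structural checks an audit might perform, assuming the common SKILL.md convention of YAML frontmatter with name and description fields (the real skills-auditor's checks may differ):

```python
import re

def audit_skill(skill_md: str) -> list[str]:
    """Return a list of structural warnings for a SKILL.md document.

    Illustrative only: the actual skills-auditor is not included in this
    bundle, so these checks are assumptions about what it might flag.
    """
    warnings = []
    # YAML frontmatter delimited by '---' lines (assumed convention)
    match = re.match(r"^---\n(.*?)\n---\n", skill_md, re.DOTALL)
    if not match:
        warnings.append("missing YAML frontmatter")
    else:
        frontmatter = match.group(1)
        for field in ("name", "description"):
            if not re.search(rf"^{field}\s*:", frontmatter, re.MULTILINE):
                warnings.append(f"frontmatter missing '{field}'")
    # Flag references to the deprecated shared-module layout
    if "skills/shared/modules/" in skill_md:
        warnings.append("references deprecated skills/shared/modules/ layout")
    return warnings
```

A real audit would walk the repository and apply such checks to every SKILL.md it finds.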

Analysis and Optimization

Use skill_analyzer.py for complexity checks and token_estimator.py to verify the context budget.

make analyze-skill TARGET=path/to/skill/SKILL.md
make estimate-tokens TARGET=path/to/skill/SKILL.md
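Since token_estimator.py is not shipped with this bundle, here is a rough sketch of the idea using the common heuristic of roughly four characters per token. Both the heuristic and the 5000-token budget are assumptions for illustration; the real tool presumably uses an actual tokenizer and the project's own limit:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Ballpark token estimate via a characters-per-token heuristic.

    A real estimator would run a proper tokenizer; treat this as a
    rough upper-level sanity check only.
    """
    return round(len(text) / chars_per_token)

def within_budget(text: str, budget: int = 5000) -> bool:
    # 5000 is a hypothetical context budget, not the project's actual limit.
    return estimate_tokens(text) <= budget
```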

Improvements

Generate a prioritized plan and verify standards compliance using improvement_suggester.py and compliance_checker.py.

make improve-skill TARGET=path/to/skill/SKILL.md
make check-compliance TARGET=path/to/skill/SKILL.md

Evaluation Workflow

Start with make audit-all to inventory skills and identify high-priority targets. For each skill requiring attention, run analysis with analyze-skill to map complexity. Generate an improvement plan, apply fixes, and run check-compliance to verify the skill meets project standards. Finalize by checking the token budget for efficiency.
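The per-skill steps above could be driven by a small planner. This sketch only builds the make command lines rather than executing them, since the bundle ships no Makefile and you should confirm the upstream repo provides these targets before running anything:

```python
# Documented make targets, in workflow order (from the Quick Start above).
WORKFLOW = ["analyze-skill", "improve-skill", "check-compliance", "estimate-tokens"]

def plan_commands(skill_path: str) -> list[list[str]]:
    """Build, but do not execute, the per-skill command sequence.

    Executing these would spawn subprocesses; per the security notes,
    inspect the referenced scripts before actually running them.
    """
    return [["make", target, f"TARGET={skill_path}"] for target in WORKFLOW]
```

Passing each command list to subprocess.run would execute the workflow, once the referenced tooling has been vetted.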

Evaluation and Optimization

Quality assessments use the skills-auditor and improvement-suggester to generate detailed reports. Performance analysis focuses on token efficiency through the token-usage-tracker and tool performance via tool-performance-analyzer. For standards compliance, the compliance-checker automates common fixes for structural issues.

Scoring and Prioritization

We evaluate skills across five dimensions: structure compliance, content quality, token efficiency, activation reliability, and tool integration. Scores above 90 represent production-ready skills, while scores below 50 indicate critical issues requiring immediate attention.

Improvements are prioritized by impact. Critical issues include security vulnerabilities or broken functionality. High-priority items cover structural flaws that hinder discoverability. Medium and low priorities focus on best practices and minor optimizations.
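The five dimensions and the 90/50 thresholds come from the description above; equal weighting across dimensions is an assumption, as the real evaluator may weight them differently. A sketch of the scoring and classification:

```python
# The five evaluation dimensions named in the docs.
DIMENSIONS = (
    "structure_compliance",
    "content_quality",
    "token_efficiency",
    "activation_reliability",
    "tool_integration",
)

def overall_score(scores: dict[str, float]) -> float:
    """Unweighted mean over the five dimensions (weighting is assumed)."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

def classify(score: float) -> str:
    # Thresholds from the docs: above 90 is production-ready,
    # below 50 indicates critical issues.
    if score > 90:
        return "production-ready"
    if score < 50:
        return "critical"
    return "needs-improvement"
```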

Structural Patterns

Deprecated: skills/shared/modules/ directories. Shared modules must be relocated into the consuming skill's own modules/ directory. The evaluator flags any remaining skills/shared/ as a structural warning.

Current: Each skill owns its modules at skills/<skill-name>/modules/. Cross-skill references use relative paths (e.g., ../skill-authoring/modules/anti-rationalization.md).
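A migration check for the deprecated layout can be sketched as a directory scan; this is an illustration, not the bundled evaluator:

```python
from pathlib import Path

def find_deprecated_shared(root: str) -> list[str]:
    """List module files still living under skills/shared/.

    The evaluator flags any remaining skills/shared/ content as a
    structural warning; each file found here should be moved into the
    consuming skill's own modules/ directory.
    """
    shared = Path(root) / "skills" / "shared"
    if not shared.is_dir():
        return []
    return sorted(str(p.relative_to(root)) for p in shared.rglob("*.md"))
```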

Resources

Shared Modules: Cross-Skill Patterns

Skill-Specific Modules

  • Trigger Isolation Analysis: See modules/trigger-isolation-analysis.md
  • Skill Authoring Best Practices: See modules/skill-authoring-best-practices.md
  • Authoring Checklist: See modules/authoring-checklist.md
  • Evaluation Workflows: See modules/evaluation-workflows.md
  • Quality Metrics: See modules/quality-metrics.md
  • Advanced Tool Use Analysis: See modules/advanced-tool-use-analysis.md
  • Evaluation Framework: See modules/evaluation-framework.md
  • Integration Patterns: See modules/integration.md
  • Troubleshooting: See modules/troubleshooting.md
  • Pressure Testing: See modules/pressure-testing.md
  • Integration Testing: See modules/integration-testing.md
  • Multi-Metric Evaluation: See modules/multi-metric-evaluation-methodology.md
  • Performance Benchmarking: See modules/performance-benchmarking.md

Tools and Automation

  • Tools: Executable analysis utilities in scripts/ directory.
  • Automation: Setup and validation scripts in scripts/automation/.
