Nm Abstract Hooks Eval

v1.8.3

Evaluate hook security, performance, and SDK compliance. Use for audits.

Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name, description, and included modules are consistent: the skill provides guidance and rubrics for auditing hooks, references SDK types and evaluation criteria, and only declares a related config path (night-market.hook-scope-guide). There are no unrelated env vars, binaries, or install steps requested.
Instruction Scope
SKILL.md is detailed and stays within the audit/evaluation domain (checking hooks.json, validating matchers, benchmarking, scanning for secrets). It instructs running helper commands (e.g., /hooks-eval, /analyze-hook, /validate-plugin) but supplies no binaries, so it assumes those tools or the Claude Code plugin already exist in the agent environment. The instructions also direct verifying that shell commands referenced in hooks.json are executable, which legitimately requires inspecting plugin files and possibly running commands locally. This is within scope, but it means the agent needs filesystem and command-execution capability to follow the guidance fully.
Install Mechanism
No install spec is provided (instruction-only). This is the lowest-risk install model: nothing is downloaded or written to disk by the skill itself.
Credentials
The skill requests no credentials or environment variables; the single required config path (night-market.hook-scope-guide) is relevant to hook placement guidance. The included evaluation rules explicitly search for hardcoded secrets as part of audits, which is appropriate for an auditing tool and not an unexplained request for secrets.
Persistence & Privilege
always:false and default invocation settings are used. The skill does not request persistent system presence or claim to modify other skills' configurations. Nothing indicates elevated or unusual privileges.
Assessment
This skill is an instruction-only audit framework and appears internally consistent. Before installing or using it:

  • Confirm your agent environment actually provides the referenced tooling (e.g., /hooks-eval), or install the Claude Code plugin; SKILL.md assumes those commands exist.
  • When following checks that validate or run shell commands from hooks.json, review those commands first. The evaluator may instruct you to test executables referenced by a plugin, which requires filesystem and command-execution access.
  • The skill scans plugin files for hardcoded secrets and other vulnerabilities; allow it to access only plugin folders you trust.
  • No credentials are requested, but still inspect any plugin code you evaluate for secrets before running automated checks.

If you need higher assurance, request a sample run or a minimal-scope dry run on non-production data to confirm behavior.


Runtime requirements

Runtime: 🦞 Clawdis
Config: night-market.hook-scope-guide
Latest: vk97874bp93pkj8v7f6xw3b0n4s84kqyf
130 downloads · 0 stars · 3 versions
Updated 1w ago
v1.8.3 · MIT-0

Night Market Skill — ported from claude-night-market/abstract. For the full experience with agents, hooks, and commands, install the Claude Code plugin.

Hooks Evaluation Framework

Overview

This skill provides a detailed framework for evaluating, auditing, and implementing Claude Code hooks across all scopes (plugin, project, global) and both JSON-based and programmatic (Python SDK) hooks.

Key Capabilities

  • Security Analysis: Vulnerability scanning, dangerous pattern detection, injection prevention
  • Performance Analysis: Execution time benchmarking, resource usage, optimization
  • Compliance Checking: Structure validation, documentation requirements, best practices
  • SDK Integration: Python SDK hook types, callbacks, matchers, and patterns
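The security-analysis capability centers on pattern scanning. A minimal sketch of what dangerous-pattern detection could look like (the patterns and function name here are illustrative, not the skill's actual rule set, which lives in its modules):

```python
import re

# Illustrative patterns only; the skill's real rules are defined in its modules.
DANGEROUS_PATTERNS = {
    "shell_injection": re.compile(r"os\.system\(|subprocess\.\w+\([^)]*shell=True"),
    "hardcoded_secret": re.compile(r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]"),
    "unsafe_eval": re.compile(r"\beval\(|\bexec\("),
}

def scan_hook_source(source: str) -> list[str]:
    """Return the names of dangerous patterns found in a hook's source code."""
    return [name for name, pattern in DANGEROUS_PATTERNS.items() if pattern.search(source)]
```

A real evaluator would report line numbers and severities; this sketch only shows the matching step.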

Core Components

Component               Purpose
Hook Types Reference    Complete SDK hook event types and signatures
Evaluation Criteria     Scoring system and quality gates
Security Patterns       Common vulnerabilities and mitigations
Performance Benchmarks  Thresholds and optimization guidance

Quick Reference

Hook Event Types

HookEvent = Literal[
    "PreToolUse",       # Before tool execution
    "PostToolUse",      # After tool execution
    "UserPromptSubmit", # When user submits prompt
    "Stop",             # When stopping execution
    "SubagentStop",     # When a subagent stops
    "TeammateIdle",     # When teammate agent becomes idle (2.1.33+)
    "TaskCompleted",    # When a task finishes execution (2.1.33+)
    "PreCompact"        # Before message compaction
]

Note: The Python SDK does not support SessionStart, SessionEnd, or Notification hooks due to setup limitations. However, plugins can define SessionStart hooks via hooks.json using shell commands (e.g., leyline's detect-git-platform.sh).

Plugin-Level hooks.json

Plugins can declare hooks via "hooks": "./hooks/hooks.json" in plugin.json. The evaluator validates:

  • Referenced hooks.json exists and is valid JSON
  • Shell commands referenced in hooks exist and are executable
  • Hook matchers use valid event types

Hook Callback Signature

async def my_hook(
    input_data: dict[str, Any],    # Hook-specific input
    tool_use_id: str | None,       # Tool ID (for tool hooks)
    context: HookContext           # Additional context
) -> dict[str, Any]:               # Return decision/messages
    ...

Return Values

return {
    "hookSpecificOutput": {
        "hookEventName": "PreToolUse",       # Match hook type
        "permissionDecision": "deny",        # Optional: block action
        "permissionDecisionReason": "...",   # Reason for denial
        "additionalContext": "...",          # Optional: context added
    }
}

Quality Scoring (100 points)

Category         Points  Focus
Security         30      Vulnerabilities, injection, validation
Performance      25      Execution time, memory, I/O
Compliance       20      Structure, documentation, error handling
Reliability      15      Timeouts, idempotency, degradation
Maintainability  10      Code structure, modularity
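One way to read the rubric: score each category as a fraction of its point budget, then sum. The helper below is an illustrative sketch, not the skill's actual scorer:

```python
# Category budgets from the rubric above (totals 100 points).
CATEGORY_POINTS = {
    "security": 30,
    "performance": 25,
    "compliance": 20,
    "reliability": 15,
    "maintainability": 10,
}

def overall_score(fractions: dict[str, float]) -> float:
    """Combine per-category fractions (0.0 to 1.0) into a 0-100 score."""
    return sum(CATEGORY_POINTS[cat] * frac for cat, frac in fractions.items())
```

Weighting security and performance at 55 of 100 points reflects the skill's audit focus; categories omitted from the input simply contribute zero.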

Detailed Resources

  • SDK Hook Types: See modules/sdk-hook-types.md for complete Python SDK type definitions, patterns, and examples
  • Evaluation Criteria: See modules/evaluation-criteria.md for detailed scoring rubric and quality gates
  • Security Patterns: See modules/sdk-hook-types.md for vulnerability detection and mitigation
  • Performance Guide: See modules/evaluation-criteria.md for benchmarking and optimization

Basic Evaluation Workflow

# 1. Run detailed evaluation
/hooks-eval --detailed

# 2. Focus on security issues
/hooks-eval --security-only --format sarif

# 3. Benchmark performance
/hooks-eval --performance-baseline

# 4. Check compliance
/hooks-eval --compliance-report

Verification: run each command with --help to confirm it is available in your environment.

Integration with Other Tools

# Complete plugin evaluation pipeline
/hooks-eval --detailed           # Evaluate all hooks
/analyze-hook hooks/specific.py  # Deep-dive on one hook
/validate-plugin .               # Validate overall structure

Related Skills

  • abstract:hook-scope-guide - Decide where to place hooks (plugin/project/global)
  • abstract:hook-authoring - Write hook rules and patterns
  • abstract:validate-plugin - Validate complete plugin structure

Troubleshooting

Common Issues

Hook not firing: Verify the hook's matcher pattern matches the event, and check hook logs for errors.

Syntax errors: Validate JSON/Python syntax before deployment.

Permission denied: Check hook file permissions and ownership.
