Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Nm Sanctum Test Updates

v1.0.0

Update, generate, and validate tests using git-workspace-review for change context


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for athola/nm-sanctum-test-updates.

Prompt preview (Install & Setup):
Install the skill "Nm Sanctum Test Updates" (athola/nm-sanctum-test-updates) from ClawHub.
Skill page: https://clawhub.ai/athola/nm-sanctum-test-updates
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Config paths to check: night-market.test-driven-development, night-market.git-workspace-review, night-market.file-analysis
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install nm-sanctum-test-updates

ClawHub CLI


npx clawhub@latest install nm-sanctum-test-updates
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)

Purpose & Capability
Name/description (test updates, generation, validation) align with the declared config requirements (night-market.test-driven-development, night-market.git-workspace-review, night-market.file-analysis) and with the modules' content (TDD/BDD test generation, discovery, validation). No unrelated credentials or binaries are requested.
Instruction Scope
SKILL.md instructs running python scripts under plugins/sanctum/scripts (test_analyzer.py, test_generator.py, quality_checker.py) and using Skill(sanctum:git-workspace-review). However, this bundle contains only markdown modules and no scripts or executable code; the instructions therefore reference external artifacts or another plugin (Claude Code). The skill also directs the agent to scan local source (src/, tests/) and run pytest, which will execute user code — expected for a test tool but something to be aware of. Because the operational scripts are not present, it's unclear what code would run when invoked.
Install Mechanism
Instruction-only skill with no install spec or download steps (lower risk). Nothing in the manifest installs third-party binaries or fetches remote archives.
Credentials
No environment variables, credentials, or external endpoints are requested. The declared required config paths are configuration keys for Night Market test workflows and are proportionate to the stated purpose. No secret exfiltration indicators in the files.
Persistence & Privilege
always:false and default autonomous invocation settings are used. The skill does not request permanent agent-wide privileges or modify other skills' configurations in the provided materials.
What to consider before installing
This skill appears to be a coherent TDD/BDD test-generation and validation guide, but the runtime instructions expect Python scripts and an external 'Claude Code' integration that are not included in this package. Before installing or invoking:

  1. Verify the referenced scripts (plugins/sanctum/scripts/*.py) actually exist in your environment or repository, and inspect their code.
  2. Be aware that invoking the skill will run test tooling (pytest) against your codebase, which executes your tests and any test-time code; run in an isolated environment or CI runner if you're unsure.
  3. Confirm you have the complementary Night Market/Claude Code plugin or equivalent hooks the skill expects.
  4. If you do not control or cannot review the missing scripts, avoid granting the skill autonomous invocation or running it on sensitive repositories.

If you want higher confidence, request the missing script files or a full release artifact so you can review the runtime code that will be executed.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🦞 Clawdis
Config: night-market.test-driven-development, night-market.git-workspace-review, night-market.file-analysis
Latest: vk977dnabbr2xck36qz82k20vjh857yrt
113 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0
License: MIT-0

Night Market Skill — ported from claude-night-market/sanctum. For the full experience with agents, hooks, and commands, install the Claude Code plugin.

Test Updates and Maintenance

Overview

A comprehensive test management system that applies TDD/BDD principles to maintain, generate, and enhance tests across codebases. This skill practices what it preaches: it uses TDD principles for its own development and serves as a living example of best practices.

Core Philosophy

  • RED-GREEN-REFACTOR: Strict adherence to TDD cycle
  • Behavior-First: BDD patterns that describe what code should do
  • Meta Dogfooding: The skill's own tests demonstrate the principles it teaches
  • Quality Gates: Comprehensive validation before tests are considered complete

What It Is

A modular test management system that:

  • Discovers what needs testing or updating
  • Generates tests following TDD principles
  • Enhances existing tests with BDD patterns
  • Validates test quality through multiple lenses

Quick Start

Quick Checklist for First Time Use

  • Ensure pytest is installed (pip install pytest); the preflight sketch after this list automates these checks
  • Have your source code in src/ or similar directory
  • Create a tests/ directory if it doesn't exist
  • Run Skill(sanctum:git-workspace-review) first to understand changes
  • Start with Skill(test-updates) --target <specific-module> for focused updates
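If you want to confirm the prerequisites programmatically, here is a minimal preflight sketch; the src/ and tests/ directory names are assumptions taken from the checklist above:

import importlib.util
from pathlib import Path

# Confirm pytest is importable (install with `pip install pytest` if not).
assert importlib.util.find_spec("pytest") is not None, "pytest is not installed"

# Confirm the layout the checklist expects: source under src/, tests under tests/.
assert Path("src").is_dir(), "no src/ directory found"
Path("tests").mkdir(exist_ok=True)  # create tests/ if it doesn't exist
print("preflight OK")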

Comprehensive Test Update

# Run full test update workflow
Skill(test-updates)

Verification: Run pytest -v to verify tests pass.

Targeted Test Updates

# Update tests for specific paths
Skill(test-updates) --target src/sanctum/agents
Skill(test-updates) --target tests/test_commit_messages.py

Verification: Run pytest -v to verify tests pass.

TDD for New Features

# Apply TDD to new code
Skill(test-updates) --tdd-only --target new_feature.py

Verification: Run pytest -v to verify tests pass.

Using the Scripts Directly

Human-Readable Output:

# Analyze test coverage gaps
python plugins/sanctum/scripts/test_analyzer.py --scan src/

# Generate test scaffolding
python plugins/sanctum/scripts/test_generator.py \
    --source src/my_module.py --style pytest_bdd

# Check test quality
python plugins/sanctum/scripts/quality_checker.py \
    --validate tests/test_my_module.py

Verification: Run pytest -v to verify tests pass.

Programmatic Output (for Claude Code):

# Get JSON output for programmatic parsing - test_analyzer
python plugins/sanctum/scripts/test_analyzer.py \
    --scan src/ --output-json

# Returns:
# {
#   "success": true,
#   "data": {
#     "source_files": ["src/module.py", ...],
#     "test_files": ["tests/test_module.py", ...],
#     "uncovered_files": ["module_without_tests", ...],
#     "coverage_gaps": [{"file": "...", "reason": "..."}]
#   }
# }

# Get JSON output - test_generator
python plugins/sanctum/scripts/test_generator.py \
    --source src/my_module.py --output-json

# Returns:
# {
#   "success": true,
#   "data": {
#     "test_file": "path/to/test_my_module.py",
#     "source_file": "src/my_module.py",
#     "style": "pytest_bdd",
#     "fixtures_included": true,
#     "edge_cases_included": true,
#     "error_cases_included": true
#   }
# }

# Get JSON output - quality_checker
python plugins/sanctum/scripts/quality_checker.py \
    --validate tests/test_my_module.py --output-json

# Returns:
# {
#   "success": true,
#   "data": {
#     "static_analysis": {...},
#     "dynamic_validation": {...},
#     "metrics": {...},
#     "quality_score": 85,
#     "quality_level": "QualityLevel.GOOD",
#     "recommendations": [...]
#   }
# }

Verification: Run pytest -v to verify tests pass.
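To consume these results from a wrapper or agent, a minimal sketch is shown below. It assumes the analyzer script exists at the documented path and prints the JSON shown above on stdout; verify both before relying on it:

import json
import subprocess

# Hypothetical wrapper around the analyzer: run it with --output-json and
# act on the report.
result = subprocess.run(
    ["python", "plugins/sanctum/scripts/test_analyzer.py",
     "--scan", "src/", "--output-json"],
    capture_output=True,
    text=True,
    check=True,
)
report = json.loads(result.stdout)

if report.get("success"):
    for gap in report["data"].get("coverage_gaps", []):
        print(f"missing coverage: {gap['file']} ({gap['reason']})")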

When To Use It

Use this skill when you need to:

  • Update tests after code changes
  • Generate tests for new features
  • Improve existing test quality
  • Ensure comprehensive test coverage

Perfect for:

  • Pre-commit test validation
  • CI/CD pipeline integration
  • Refactoring with test safety
  • Onboarding new developers

When NOT To Use

  • Auditing test suites: use pensive:test-review instead
  • Writing production code: focus on implementation first

Workflow Integration

Phase 1: Discovery

  1. Scan codebase for test gaps
  2. Analyze recent changes
  3. Identify broken or outdated tests

See modules/test-discovery.md for detection patterns.
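At its core, discovery pairs source files with tests by naming convention. A hypothetical sketch of that idea follows; the real test_analyzer.py may use richer rules:

from pathlib import Path

# Hypothetical gap finder: a source file src/foo.py counts as covered if
# tests/test_foo.py exists.
def find_test_gaps(src_dir: str = "src", tests_dir: str = "tests") -> list[Path]:
    test_names = {p.name for p in Path(tests_dir).glob("test_*.py")}
    return [
        src for src in Path(src_dir).rglob("*.py")
        if f"test_{src.name}" not in test_names
    ]

for gap in find_test_gaps():
    print(f"no test found for {gap}")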

Phase 2: Strategy

  1. Choose appropriate BDD style (see modules/bdd-patterns.md)
  2. Plan test structure
  3. Define quality criteria

Phase 3: Implementation

  1. Write failing tests (RED) - see modules/tdd-workflow.md
  2. Implement minimal passing code (GREEN)
  3. Refactor for clarity (REFACTOR)

See modules/test-generation.md for generation templates.
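As a concrete pass through the cycle (my_module and slugify are hypothetical names for illustration):

# RED: this test fails first because slugify() does not exist yet.
def test_slugify_replaces_spaces_with_hyphens():
    """
    GIVEN a title containing spaces
    WHEN it is slugified
    THEN spaces become hyphens and the result is lowercase
    """
    from my_module import slugify
    assert slugify("Hello World") == "hello-world"


# GREEN: the minimal implementation in my_module.py that makes it pass.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# REFACTOR: with the test green, rename and restructure safely.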

Phase 4: Validation

  1. Static analysis and linting
  2. Dynamic test execution
  3. Coverage and quality metrics

See modules/quality-validation.md for validation criteria.

Quality Assurance

The skill applies multiple quality checks:

  • Static: Linting, type checking, pattern validation
  • Dynamic: Test execution in sandboxed environments
  • Metrics: Coverage, mutation score, complexity analysis
  • Review: Structured checklists for peer validation

Examples

BDD-Style Test Generation

See modules/bdd-patterns.md for additional patterns.

class TestGitWorkflow:
    """BDD-style tests for Git workflow operations."""

    def test_commit_workflow_with_staged_changes(self):
        """
        GIVEN a Git repository with staged changes
        WHEN the user runs the commit workflow
        THEN it should create a commit with proper message format
        AND all tests should pass
        """
        # Test implementation following TDD principles
        pass

Verification: Run pytest -v to verify tests pass.
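A filled-in version of the scaffold above might look like this; make_repo_with_staged_changes and run_commit_workflow are illustrative helpers, not part of this skill:

def test_commit_workflow_with_staged_changes(tmp_path):
    """
    GIVEN a Git repository with staged changes
    WHEN the user runs the commit workflow
    THEN it should create a commit with proper message format
    """
    repo = make_repo_with_staged_changes(tmp_path)    # GIVEN (arrange)
    result = run_commit_workflow(repo)                # WHEN (act)
    assert result.commit_message.startswith("feat:")  # THEN (assert)

Note how the docstring's GIVEN/WHEN/THEN clauses map one-to-one onto the arrange/act/assert sections of the body.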

Test Enhancement

  • Add edge cases and error scenarios
  • Include performance benchmarks
  • Add mutation testing for robustness

See modules/test-enhancement.md for enhancement strategies.
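Edge and error cases are often added via parametrization. A minimal sketch, reusing the hypothetical slugify function from the TDD example earlier:

import pytest

from my_module import slugify  # hypothetical function under test


# Edge cases: empty input, whitespace-only, already-normalized input.
@pytest.mark.parametrize("raw, expected", [
    ("", ""),
    ("   ", "---"),
    ("Already-Slugged", "already-slugged"),
])
def test_slugify_edge_cases(raw, expected):
    assert slugify(raw) == expected


def test_slugify_rejects_non_strings():
    # Error scenario: non-string input should raise, not silently coerce.
    with pytest.raises(AttributeError):
        slugify(None)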

Integration with Existing Skills

  1. git-workspace-review: Get context of changes
  2. file-analysis: Understand code structure
  3. test-driven-development: Apply strict TDD discipline
  4. skills-eval: Validate quality and compliance

Success Metrics

  • Test coverage > 85%
  • All tests follow BDD patterns
  • Zero broken tests in CI
  • Mutation score > 80%
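The coverage gate can be enforced mechanically. A minimal sketch, assuming the pytest-cov plugin is installed (mutation score needs a separate tool such as mutmut):

import sys

import pytest

# Fail the run (non-zero exit) if line coverage of src/ drops below 85%,
# matching the metric above. Requires the pytest-cov plugin.
sys.exit(pytest.main(["--cov=src", "--cov-fail-under=85", "tests/"]))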

Troubleshooting FAQ

Common Issues

Q: Tests are failing after generation
A: This is expected! The skill follows TDD principles: generated tests are designed to fail first. Follow the RED-GREEN-REFACTOR cycle:

  1. Run the test and confirm it fails for the right reason
  2. Implement minimal code to make it pass
  3. Refactor for clarity

Q: Quality score is low despite having tests
A: Check for these common issues:

  • Missing BDD patterns (Given/When/Then)
  • Vague assertions like assert result is not None
  • Tests without documentation
  • Long, complex tests (>50 lines)
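For example, compare a vague assertion with a behavior-focused one; parse_config is a hypothetical function used only for illustration:

def test_parse_config_vague(tmp_path):
    result = parse_config(tmp_path / "app.toml")
    assert result is not None  # vague: passes for almost any return value


def test_parse_config_reads_port(tmp_path):
    """
    GIVEN a config file declaring port = 8080
    WHEN it is parsed
    THEN the parsed port is the integer 8080
    """
    (tmp_path / "app.toml").write_text("port = 8080\n")
    result = parse_config(tmp_path / "app.toml")
    assert result.port == 8080  # precise: asserts the observable behavior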

Q: Generated tests don't match my code structure
A: The scripts analyze AST patterns and may need guidance:

  • Use --style flag to match your preferred BDD style
  • Check that source files have proper function/class definitions
  • Review the generated scaffolding and customize as needed

Q: Mutation testing takes too long
A: Mutation testing is resource-intensive:

  • Use --quick-mutation flag for subset testing
  • Focus on critical modules first
  • Run overnight for a comprehensive analysis

Q: Can't find tests for my file
A: The analyzer uses naming conventions:

  • Source: my_module.py → Test: test_my_module.py
  • Check that test files follow pytest naming patterns
  • Ensure the test directory structure is standard (see the mapping sketch below)
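The convention can be checked in a couple of lines. This is a sketch of the naming mapping only; the analyzer's actual lookup logic may differ:

from pathlib import Path

# Map a source module to its conventional pytest counterpart:
# src/pkg/my_module.py -> tests/test_my_module.py
def expected_test_file(source: Path, tests_dir: Path = Path("tests")) -> Path:
    return tests_dir / f"test_{source.name}"

print(expected_test_file(Path("src/sanctum/my_module.py")))  # tests/test_my_module.py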

Performance Tips

  • Large codebases: Use --target to focus on specific directories
  • CI integration: Run validation in parallel with other checks
  • Memory usage: Process files in batches for very large projects

Getting Help

  1. Check script outputs for detailed error messages
  2. Use --verbose flag for more information
  3. Review the validation report for specific recommendations
  4. Start with small modules to understand patterns before scaling
