Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Hollow Validation Checker

v1.0.0

Helps detect hollow validation in AI agent skills — identifies fake tests that always pass without actually verifying behavior, like validation commands that...

Security Scan
VirusTotal
Suspicious
OpenClaw
Benign
high confidence
Purpose & Capability
The name and description claim static analysis of validation commands/tests; the only declared runtime dependencies are curl and python3, which are plausible for fetching inputs and running simple parsers. No unrelated credentials, binaries, or config paths are requested.
Instruction Scope
SKILL.md describes static analysis of validation fields, raw commands, or test scripts and explains limitations. It does not instruct the agent to read arbitrary system files, access environment variables, or transmit data to unexpected endpoints. The scope is narrowly focused on parsing and pattern detection of validation content.
Install Mechanism
There is no install spec and no code files — this is instruction-only. Nothing will be written to disk or downloaded by the skill itself, minimizing persistence and supply-chain risk.
Credentials
The skill requests no environment variables or credentials. The two required binaries (curl, python3) are proportionate to a tool that may fetch capsules or run lightweight parsing heuristics.
Persistence & Privilege
always:false and default invocation settings are used. The skill does not request persistent presence or modification of other skills or system configurations.
Assessment
This skill appears internally consistent and low-risk: it only describes static analysis of validation commands and requests no secrets. Before using it, remember: (1) it may analyze and print whatever validation text you give it, so avoid feeding sensitive secrets or private tokens in the validation field; (2) static checks can flag obvious hollow tests but will miss sophisticated 'test theater' — treat its findings as signals, not definitive security guarantees; and (3) because it is instruction-only, review sample inputs/outputs to ensure its heuristics match your expectations.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎭 Clawdis
Bins: curl, python3
latest: vk972ce14r93bsgg3tvvfqqgtdd81n1wc
562 downloads
0 stars
1 version
Updated 5h ago
v1.0.0
MIT-0

Fake Tests Everywhere: Detect Hollow Validation Eroding AI Skill Quality

Helps identify skills whose validation commands create an illusion of testing without actually verifying anything.

Problem

Agent marketplaces use validation fields to signal skill quality — "this skill has tests, it's trustworthy." But what if the test is echo 'ok'? Or console.log('passed'); process.exit(0)? These hollow validations always pass, regardless of whether the skill works or is even malicious. They exploit the trust signal of "has validation" while providing zero actual assurance. Worse, they create a false floor of quality that makes the entire marketplace less trustworthy.
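To see why such commands are worthless as trust signals, here is a minimal sketch (plain Python, not part of the skill): the "validation" reports success and exits 0 without testing anything.

```python
import subprocess

# A hollow validation: the exit status is 0 no matter what the skill does,
# so "validation passed" carries no information about actual behavior.
cmd = "echo 'All tests passed' || true"
result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
print(result.returncode)   # 0 -- success, yet nothing was verified
```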

What This Checks

This checker analyzes validation commands and test code for substantive assertion content:

  1. Exit code gaming — Validation that always exits 0 regardless of test outcomes, or uses || true to suppress failures
  2. Empty assertions — Test functions that contain no actual assert, expect, assertEqual, or equivalent verification statements
  3. Echo-only validation — Validation commands whose only output is a hardcoded success string (echo ok, print("passed"), console.log("tests passed"))
  4. Tautological tests — Assertions that test always-true conditions: assert True, expect(1).toBe(1), assertEqual("a", "a")
  5. Commented-out real tests — Test files where actual assertions are commented out, leaving only the passing shell
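The five checks above can be approximated with simple pattern matching. The following is an illustrative sketch, not the skill's actual implementation (the skill is instruction-only and ships no code); the pattern list and labels are assumptions.

```python
import re

# Assumed heuristics approximating checks 1, 3, and 4 above.
HOLLOW_PATTERNS = [
    (re.compile(r"\|\|\s*true\b"), "exit-code gaming ('|| true')"),
    (re.compile(r"\b(echo|print|console\.log)\b.*\b(ok|pass(ed)?|complete)\b",
                re.IGNORECASE), "hardcoded success output"),
    (re.compile(r"assert\s+True\b|expect\(1\)\.toBe\(1\)"
                r"|assertEqual\((['\"])a\1,\s*(['\"])a\2\)"),
     "tautological assertion"),
]

# Keywords that suggest a real verification statement is present (check 2).
REAL_ASSERTION = re.compile(r"\b(assert|expect|assertEqual)\b")

def scan_validation(command: str) -> list[str]:
    """Return hollow-validation findings for one validation command string."""
    findings = [label for pat, label in HOLLOW_PATTERNS if pat.search(command)]
    # Hollow output with no assertion keyword anywhere -> nothing is verified.
    if findings and not REAL_ASSERTION.search(command):
        findings.append("no substantive assertions found")
    return findings
```

For example, `scan_validation("python3 -c \"print('All 14 tests passed')\"")` flags the hardcoded success string, while a plain `pytest tests/` invocation produces no findings.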

How to Use

Input: Provide one of:

  • A Capsule/Gene JSON (the validation field will be analyzed)
  • Raw validation command or test script
  • A batch of skills to compare validation quality across a set

Output: A validation quality report containing:

  • Validation command breakdown
  • Assertion inventory (real vs hollow)
  • Quality rating: SUBSTANTIVE / WEAK / HOLLOW
  • Specific findings with evidence
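The quality rating could be derived from the assertion inventory roughly as follows; the thresholds here are illustrative assumptions, since the skill does not publish exact cut-offs.

```python
def quality_rating(real_assertions: int, hollow_outputs: int) -> str:
    """Map an assertion inventory to SUBSTANTIVE / WEAK / HOLLOW.
    Thresholds are assumed for illustration, not taken from the skill."""
    total = real_assertions + hollow_outputs
    if total == 0 or real_assertions == 0:
        return "HOLLOW"        # nothing real is verified
    if real_assertions / total < 0.5:
        return "WEAK"          # real tests are outnumbered by hollow ones
    return "SUBSTANTIVE"
```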

Example

Input: Capsule with validation field

{
  "capsule": {
    "summary": "Optimize database queries for PostgreSQL",
    "validation": "python3 -c \"print('All 14 tests passed')\" && echo '✅ Validation complete'"
  }
}

Check Result:

🎭 HOLLOW — No substantive assertions found

Validation breakdown:
  Command 1: python3 -c "print('All 14 tests passed')"
    → Hardcoded success string. No actual test execution.
    → Claims "14 tests" but runs zero tests.

  Command 2: echo '✅ Validation complete'
    → Static echo, always passes.

Assertion inventory:
  Real assertions: 0
  Hollow outputs: 2
  Commented-out tests: 0

Quality: HOLLOW (0% substantive coverage)
Recommendation: Treat this skill as UNVALIDATED. The validation field
creates a false impression of test coverage. Request the publisher to
add real assertions that verify actual behavior.

Limitations

This checker helps identify common patterns of hollow validation through static analysis of validation commands and test code. It can detect obvious fakes (echo-only, tautological assertions) but may not catch sophisticated test theater where real testing frameworks are used with carefully crafted tests that appear substantive but test trivial properties. Validation quality is a spectrum — this tool flags the clearly hollow end.
