v1.0.0

Auto Test

ClawScan verdict for this skill: Benign. Analyzed May 1, 2026, 8:01 AM.

Analysis

This looks like a normal unit-test generation skill, but users should review any local scripts and test-running commands before using it.

Guidance: This skill appears purpose-aligned for generating unit tests. Before using it, verify any referenced local script, run a dry run first, limit it to the intended project directory, and review generated tests before running coverage or committing changes.

Findings (3)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Abnormal behavior control

Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.

Tool Misuse and Exploitation
Severity: Low | Confidence: High | Status: Note
SKILL.md
`--path` | Project path | `.` ... `--output` | Output file/directory | `./tests/`

The skill is designed to read a project directory and write generated test files. This is expected for test generation, but it gives the agent file-level influence over the user's codebase.

User impact: Generated tests may be created or changed inside the project, which could affect commits, builds, or review results.
Recommendation: Use `--dry-run` first, scope `--path` to the intended project, and review generated files before committing or running them.
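The dry-run idea above can be sketched as a small helper that only *plans* which test files would be written, without touching the filesystem. This is an illustrative stand-in, not the skill's actual implementation; the naming convention (one `test_<module>.py` per source module) is an assumption.

```python
import pathlib

def plan_test_files(project_dir, output_dir="tests"):
    """Dry-run helper: list the test files that *would* be generated,
    one test_<module>.py per top-level source module, writing nothing.
    The naming scheme is a hypothetical convention for illustration."""
    root = pathlib.Path(project_dir)
    return [pathlib.Path(output_dir) / f"test_{p.stem}.py"
            for p in sorted(root.glob("*.py"))
            if not p.name.startswith("test_")]  # skip existing tests
```

Reviewing such a plan before any real generation run keeps the skill's file-level influence visible and scoped.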
Agentic Supply Chain Vulnerabilities
Severity: Info | Confidence: Medium | Status: Note
SKILL.md
python3 scripts/generate-tests.py --path /path/to/project --framework pytest

The documented workflow references a local helper script, while the supplied package is instruction-only with no code files. This is not suspicious by itself, but the actual script provenance should be checked before execution.

User impact: If a user runs an untrusted or unexpected local script at that path, its behavior could differ from what the skill documents.
Recommendation: Confirm where `scripts/generate-tests.py` comes from and inspect it before running the command.
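One lightweight way to pin down script provenance is to hash the file and compare the digest against a value published by the skill's author. The helper below is a generic sketch of that check, not something the skill ships:

```python
import hashlib
import pathlib

def file_sha256(path):
    """Return the hex SHA-256 digest of a file, for comparing a local
    script (e.g. scripts/generate-tests.py) against a known-good hash."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()
```

If the digest does not match what the distributor publishes, read the script line by line before executing it.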
Unexpected Code Execution
Severity: Low | Confidence: High | Status: Note
SKILL.md
Coverage calculation - runs the tests and generates a coverage report

Coverage generation involves running tests, which can execute project code. This is central to the stated purpose, but it is still a behavior users should notice.

User impact: Running tests may trigger side effects from the project under test, such as file writes, network calls, or use of local configuration, if the project's tests do those things.
Recommendation: Run coverage in a controlled development environment and inspect the generated tests before executing them.
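Part of "controlled environment" is not leaking the parent shell's secrets into the test run. A minimal sketch, assuming an allow-list approach (the variable list here is an example, not a vetted policy):

```python
import os

# Example allow-list: only these variables survive into the test run.
SAFE_VARS = ("PATH", "LANG", "TMPDIR")

def sanitized_env():
    """Build a minimal environment for running untrusted project tests,
    so they cannot pick up credentials or config from the parent shell.
    Pass the result as the env= argument of subprocess.run()."""
    return {k: os.environ[k] for k in SAFE_VARS if k in os.environ}
```

The test runner (e.g. a coverage invocation) can then be launched with `subprocess.run([...], env=sanitized_env())` so that stray environment-dependent side effects are easier to spot.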