OpenClaw TDD
v1.0.0
Test-Driven Development assistant. Generates test cases from code or specifications, runs tests, tracks coverage, and guides the red-green-refactor cycle. Su...
⭐ 0 · 46 · 0 current · 0 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign (high confidence)

Purpose & Capability
The name/description (TDD assistant: generate tests, run tests, track coverage, guide cycle) matches the included scripts (generator.py, runner.py, coverage.py, main.py) and the SKILL.md examples. No unrelated binaries, env vars, or external services are requested.
Instruction Scope
SKILL.md instructs running the bundled scripts (python3 scripts/main.py ...), which in turn run pytest/unittest and create or print test files and coverage reports. This is within purpose, but note two consequences: the runner executes the project's test code (via pytest/unittest), so arbitrary project code will run whenever you run tests, and the generator writes test files into the repo, so files may be created or overwritten. That is expected behavior for a TDD tool, but it is a security consideration when used on untrusted code.
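To make the execution concern concrete, here is a minimal sketch of what a runner like the bundled runner.py might do. The command and flags are assumptions for illustration, not the actual bundled code.

```python
import subprocess
import sys


def build_pytest_cmd(project_root: str) -> list[str]:
    # Hypothetical command a runner script might assemble; the real
    # scripts/runner.py invocation is not shown in this scan.
    return [sys.executable, "-m", "pytest", project_root, "-q"]


def run_tests(project_root: str) -> int:
    # Spawning pytest executes the project's own test code -- the
    # security consideration flagged above. Only point this at
    # repositories you trust, or sandbox the run.
    result = subprocess.run(
        build_pytest_cmd(project_root),
        capture_output=True,
        text=True,
    )
    return result.returncode
```

Anything importable from the target repo's tests runs with your privileges here, which is why the scan treats test execution itself as the main risk surface.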
Install Mechanism
No install spec; this is an instruction + script bundle, so nothing is downloaded or installed by the skill itself. The scripts call system-installed Python and testing tools (pytest/unittest/coverage), which is appropriate for this tool.
Credentials
The skill declares no environment variables, credentials, or config paths. The code uses /tmp for intermediate JSON reports and reads/writes files in the provided project root — behavior consistent with coverage/test tooling and proportional to the stated purpose.
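The /tmp intermediate-report pattern described above can be sketched as follows. The filename prefix is hypothetical (the bundled scripts' actual paths are unknown); using mkstemp for a unique path also sidesteps the concurrent-run clashes a fixed /tmp filename would cause.

```python
import json
import os
import tempfile


def write_report(data: dict) -> str:
    # Write an intermediate JSON report to the system temp directory,
    # as the scan describes the bundled scripts doing. mkstemp returns
    # a unique file, so parallel runs cannot overwrite each other.
    fd, path = tempfile.mkstemp(prefix="tdd_report_", suffix=".json")
    with os.fdopen(fd, "w") as f:
        json.dump(data, f)
    return path


def read_report(path: str) -> dict:
    # Read a report back for coverage/test summarization.
    with open(path) as f:
        return json.load(f)
```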
Persistence & Privilege
always is false and the skill does not attempt to persist configuration outside the project (it writes tests and htmlcov under project paths, and uses /tmp for ephemeral reports). It does not modify other skills or global agent settings.
Assessment
This skill appears to do what it claims. Important safety notes before installing or using it: (1) Running tests executes the project's code; do not run on untrusted repositories without sandboxing (use containers or VMs). (2) The generator can write or overwrite test files in your repo; review generated tests before committing. (3) The scripts call pytest/unittest via subprocess and write/read JSON reports in /tmp, so concurrent runs may conflict. Recommended practice: run in an isolated environment (virtualenv or container), inspect generated files, and limit execution privileges and network access when testing untrusted code.

Like a lobster shell, security has layers: review code before you run it.
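One of the recommended practices above (limiting what executed test code can reach) can be approximated in-process by running pytest with a stripped-down environment. This is a sketch under assumptions, a light measure only; containers or VMs remain necessary for truly untrusted code.

```python
import os
import subprocess
import sys


def isolated_env(project_root: str) -> dict:
    # Minimal environment for the test subprocess: keep PATH so tools
    # resolve, point HOME inside the project so test code cannot read
    # dotfiles or credentials from your real home directory. Does NOT
    # block filesystem or network access -- use a container for that.
    return {"PATH": os.environ.get("PATH", ""), "HOME": project_root}


def run_tests_isolated(project_root: str) -> subprocess.CompletedProcess:
    # Run pytest inside the project with the reduced environment and a
    # timeout so a hung or malicious test suite cannot run forever.
    return subprocess.run(
        [sys.executable, "-m", "pytest", "-q"],
        cwd=project_root,
        env=isolated_env(project_root),
        capture_output=True,
        text=True,
        timeout=300,
    )
```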
latest: vk976vwhht8q5nehmyqc2nmqqts841sjt
