Skill v1.0.0

ClawScan security

Skill Generator · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Mar 8, 2026, 8:12 AM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The package mostly matches its stated purpose (creating and packaging AI Skills), but several practices and instructions either diverge from its registry metadata or increase risk: a one-line remote installer, instructions to copy files into agent config directories, a local HTTP server that can expose workspace files, and vague publishing guidance that implies credentials, none of which are declared.
Guidance
This package appears to be a full-featured 'skill generator' toolkit (templates, scripts, eval agents). That said, take these precautions before installing or running anything:

1. Do not run the one-line curl|bash or PowerShell | iex installer as-is. Download and audit install.sh/install.ps1 first; piping remote scripts straight to a shell is risky.
2. Inspect install.sh and the bundled scripts (package_skill.py, simulate_skill.py, generate_review.py, ci_eval.py, and any 'publish' or 'upload' code) to see whether they make network calls, call external services, or prompt for or stash credentials.
3. Be cautious about copying files into agent config directories (e.g., ~/.gemini, ~/.claude). That makes the skill persistent and grants it effective reach into your agent environment.
4. The eval viewer starts a local HTTP server that embeds workspace files. Run it only in a controlled environment, ensure it is bound to localhost and not exposed to untrusted networks, and inspect generate_review.py behavior before use.
5. Expect publishing/export features to require API keys or marketplace credentials. Provide those only when you understand where they are used and are confident the scripts won't exfiltrate them.
6. Prefer running the toolkit inside an isolated environment (VM or container), or only after manual code review.

If you want higher confidence, share install.sh and any publishing scripts for review, or run a static grep for 'curl', 'requests', 'git push', 'ssh', 'os.environ', and remote domains to see where network traffic may be sent.
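The static grep suggested above can be sketched as follows. The directory and the file it contains are hypothetical stand-ins, created locally so the scan has something to match; in practice you would point the grep at your unpacked copy of the toolkit.

```shell
# Hedged sketch of the suggested static scan. The toolkit directory and the
# stand-in script below are hypothetical; they exist only for illustration.
mkdir -p skill-toolkit
printf 'import os\ntoken = os.environ.get("MARKETPLACE_TOKEN")\n' \
  > skill-toolkit/package_skill.py

# Flag network calls, remote pushes, SSH use, environment/credential reads,
# and hard-coded remote domains across every file in the tree.
grep -rnE "curl|requests|git push|ssh|os\.environ|https?://" skill-toolkit
```

Any hit is a starting point for manual review, not proof of malice; a packaging script legitimately needs some of these calls, and the question is where the traffic goes and which secrets travel with it.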

Review Dimensions

Purpose & Capability
Note: The name/description (generate AI Skills) match the repository contents: pipelines, templates, agents, tests, and packaging scripts are all present. However, the README/SKILL.md claim features such as 'Package & Publish' and multi-platform exports that require external accounts/credentials (Anthropic, marketplaces, platform-specific skill dirs), while the registry metadata lists no required environment variables or primary credential: a functional mismatch. The included files (scripts/, phases/, resources/) are appropriate for a skill-creation toolkit, so the purpose is plausible, but the publishing/integration steps will need user credentials that are not declared.
Instruction Scope
Concern: SKILL.md describes an 8-phase pipeline and instructs the agent to read resources and use scripts; that scope is expected. Concerning items in the instructions and repo: (1) an explicit recommendation to copy the package into platform config directories (e.g., ~/.gemini/antigravity/skills, .claude/commands), which writes into user agent configs and grants persistence; (2) an explicit guideline, 'Black Box Scripts: AI uses --help to self-learn, DO NOT read source code', which discourages code inspection and is a weak safety stance; (3) several tools (e.g., the eval viewer and its HTTP server) read workspace directories and serve embedded outputs, which could expose sensitive files if run without care. SKILL.md does not explicitly instruct reading unrelated system files, but some agents (analyzer/grader) expect to read transcripts, skill files, and outputs under the workspace; this is normal for an eval system but worth caution.
Install Mechanism
Concern: The registry lists no install spec, yet the README advertises a 'one-click' install that pipes a raw.githubusercontent.com script into bash (and a PowerShell iex command on Windows). While raw.githubusercontent.com is a common host, piping remote shell scripts into the shell is high-risk. There is also guidance to copy the repo into various platform config directories, writing files into user agent folders. The package itself contains many scripts that will be written to disk and can be executed; the absence of an official, reviewed release plus the suggested curl|bash install are risk factors.
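The safer alternative to curl|bash is to fetch the installer to disk, record and inspect it, and only then decide whether to run it. In this sketch a locally created file stands in for the downloaded script, and the commented-out URL is a placeholder, so the steps can be shown end to end:

```shell
# Hedged sketch: never pipe the remote installer straight into a shell.
# A locally created file stands in for the downloaded install.sh here.
printf '#!/bin/sh\necho "installing skill generator"\n' > install.sh

# In practice, fetch without executing (placeholder URL):
#   curl -fsSL https://raw.githubusercontent.com/OWNER/REPO/main/install.sh -o install.sh

sha256sum install.sh                  # record exactly which bytes were reviewed
grep -nE "curl|wget|sudo|rm -rf" install.sh || echo "no obviously risky calls"
# sh install.sh                       # run only after the manual review
```

Recording the checksum matters because a raw-URL script can change between the moment you review it and the moment someone else runs the same one-liner.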
Credentials
Concern: The registry metadata declares no required environment variables or primary credential, yet the README and SKILL.md describe integration with external AI platforms (Anthropic/Claude, Antigravity, others) and 'Package & Publish' flows to marketplaces, actions that typically require API keys or tokens. This mismatch between advertised network/publishing features and the absence of declared credentials means users may be prompted later for sensitive credentials, or scripts may attempt network interactions without explicitly declared requirements. The included tooling also reads workspace outputs and could inadvertently leak secrets if those outputs contain credentials.
Persistence & Privilege
Note: always:false and user-invocable:true (normal). The README's installation instructions encourage copying the skill into global agent skill/config directories (a persistent installation), and the toolkit includes a local HTTP server (the eval viewer) that serves workspace files and auto-saves feedback.json, which increases the chance of exposing local files. There is no evidence the skill modifies other skills' configs, but it does instruct writing into agent-specific config paths, which is a meaningful persistence/privilege decision worth reviewing before use.
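If the viewer's server must run at all, confirming a loopback-only binding is straightforward. The sketch below uses Python's stdlib http.server as a stand-in for the actual eval viewer (whose flags and behavior are unknown); the directory, port, and feedback.json contents are hypothetical.

```shell
# Hedged sketch: serve a directory on the loopback interface only, using
# Python's stdlib server as a stand-in for the toolkit's eval viewer.
mkdir -p eval-output
echo '{"score": 1}' > eval-output/feedback.json

python3 -m http.server 8642 --bind 127.0.0.1 --directory eval-output &
SERVER_PID=$!
sleep 1

# Reachable from localhost; invisible to other hosts on the network.
curl -s http://127.0.0.1:8642/feedback.json -o fetched.json
kill "$SERVER_PID"
```

A server started without an explicit bind address may listen on all interfaces, so verifying the binding (for example with `ss -tlnp`) is worth the extra step before pointing the viewer at a workspace containing sensitive files.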