Skill v0.1.0

ClawScan security

Monet Works — Content QA Remediation · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Mar 22, 2026, 12:36 PM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill appears to implement a sensible content QA fixer, but its documentation and runtime instructions do not match the shipped files, and it implies use of LLM/networked services without declaring the required credentials. These inconsistencies warrant caution before installing or running it on real content.
Guidance
This package implements a plausible content QA fixer, but before you run it on real drafts:

1) Confirm which executable to run: the repo ships scripts/remediate.sh and scripts/auto-remediate.py, while SKILL.md's 'content-qa' name is inconsistent with both.
2) Inspect auto-remediate.py for network/HTTP calls or explicit calls to an LLM SDK (openai/anthropic); if it calls an external API, decide where API keys will come from and avoid running it with privileged credentials.
3) Because the docs reference 'references/' but the templates live in data/, ensure the script will find the correct template files in your environment, or update the paths.
4) If you expect the skill to call another internal skill (ogilvy-humanizer), get clarity on that integration and the permissions it requires.
5) Run the script on non-sensitive test content in an isolated environment first, and review the change-report JSON before trusting automated modifications.

If the author can confirm (a) whether the script makes network/LLM calls and (b) the exact environment variables needed, that would raise confidence and may resolve the current concerns.
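Step 2 above (checking for network/LLM calls before running anything) can be sketched as a static import scan. This is a minimal sketch: the suspect-module list and the inline sample are assumptions for illustration, not the actual contents of auto-remediate.py.

```python
import ast

# Modules whose presence suggests network or LLM SDK usage (assumed list;
# extend with whatever your environment considers sensitive).
SUSPECT_MODULES = {"openai", "anthropic", "requests", "httpx", "urllib", "socket"}

def find_suspect_imports(source: str) -> set[str]:
    """Return top-level module names imported by `source` that appear in
    SUSPECT_MODULES. A hit means the script *may* make network/LLM calls;
    it still warrants manual review, since imports alone prove nothing."""
    hits = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            root = name.split(".")[0]
            if root in SUSPECT_MODULES:
                hits.add(root)
    return hits

# Inline stand-in for scripts/auto-remediate.py (hypothetical content):
sample = "import json\nimport openai\nfrom urllib.request import urlopen\n"
print(sorted(find_suspect_imports(sample)))  # → ['openai', 'urllib']
```

A static scan like this is cheap to run before executing unknown code, though dynamic imports (`importlib`, `__import__`) would evade it, so it complements rather than replaces a manual read-through.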

Review Dimensions

Purpose & Capability
concern · The declared purpose (banned phrases, disclaimers, CTAs, length trimming) matches the included data and scripts: the data/ JSON files and auto-remediate.py implement those features. However, SKILL.md repeatedly references a 'content-qa' CLI and configuration paths under 'references/' that are not present in the package (the repo uses scripts/auto-remediate.py, scripts/remediate.sh, and data/). This mismatch between documentation and actual files could cause surprises and indicates sloppy packaging and documentation.
Instruction Scope
concern · The runtime instructions describe piping content through a 'content-qa' CLI and mention integration with an external 'ogilvy-humanizer' skill and 'AI model' summarization. The included scripts operate on local files and templates and appear to implement most fixes locally, but the README and SKILL.md state that an LLM (openai/anthropic) is required for some substitutions, and the scripts in the manifest do not declare how API keys should be provided or whether network calls are made. SKILL.md also references config paths ('references/') that don't exist in the package, increasing the risk of runtime errors or unexpected behavior.
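One way to resolve the 'references/' vs data/ path mismatch before running anything is to check which documented directory actually holds template files. The directory names below come from the review itself; the assumption that templates are JSON files is hypothetical.

```python
from pathlib import Path

def locate_templates(root: Path,
                     candidates: tuple[str, ...] = ("references", "data")) -> dict[str, list[str]]:
    """For each documented template directory under `root`, list the JSON
    files it actually contains (empty list if the directory is missing)."""
    report: dict[str, list[str]] = {}
    for name in candidates:
        d = root / name
        report[name] = sorted(p.name for p in d.glob("*.json")) if d.is_dir() else []
    return report
```

Run against the unpacked skill directory, this shows at a glance whether the paths in SKILL.md can resolve or need to be rewritten to point at data/.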
Install Mechanism
ok · There is no install spec; this is effectively an instruction-plus-script bundle. No remote downloads or installers are present in the manifest, which lowers supply-chain risk. The code is local and executable via the provided shell wrapper.
Credentials
concern · The README and SKILL.md state the tool needs an LLM library (openai or anthropic) for phrase substitution and summary generation, which in practice requires API credentials (e.g., OPENAI_API_KEY or ANTHROPIC_API_KEY). The skill declares no required environment variables or primary credential. This is an omission: a tool that can call an external LLM should declare its credential requirements. The integration with another skill ('ogilvy-humanizer') is likewise mentioned but not specified (no declared interface or auth), leaving it unclear what permissions or context an agent would need.
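To follow the isolation advice from the guidance above, a first run can strip ambient API keys so that any undeclared LLM call fails loudly instead of silently succeeding. The script path and the credential variable names below are assumptions taken from this review, not verified against the package.

```python
import os
import subprocess
import sys

# Drop ambient LLM credentials so a hidden API call cannot silently
# succeed (variable names assumed; extend for your environment).
SECRET_VARS = {"OPENAI_API_KEY", "ANTHROPIC_API_KEY"}
clean_env = {k: v for k, v in os.environ.items() if k not in SECRET_VARS}

# Invoke the shipped script against throwaway test content only.
result = subprocess.run(
    [sys.executable, "scripts/auto-remediate.py", "test-draft.md"],
    env=clean_env, capture_output=True, text=True,
)
# A non-zero exit or an auth error here suggests an undeclared
# network/LLM dependency worth raising with the author.
print(result.returncode, result.stderr[:200])
```

This is a weaker guarantee than a network-isolated sandbox (the script could still read keys from a config file), so treat it as a first check, not a substitute for the isolated-environment run the guidance recommends.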
Persistence & Privilege
ok · The skill does not request always: true, does not require system-wide configuration changes, and is a user-invocable script. It writes only to output paths supplied by the caller (stdout/stderr or user-specified files). There is no evidence it modifies other skills or system agent settings.