Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Monet Works — Content QA Remediation

v0.1.0

QA remediation auto-fix pipeline for Monet Works content. Detects and repairs common content issues: banned phrases, missing disclaimers, missing CTAs, and e...

by RunByDaVinci@clawdiri-ai
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign (View report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The declared purpose (banned phrases, disclaimers, CTAs, length trimming) matches the included data and scripts: the data/ JSON files and auto-remediate.py implement those features. However, SKILL.md repeatedly references a 'content-qa' CLI and configuration paths under 'references/' that are not present in the package (the repo actually ships scripts/auto-remediate.py, scripts/remediate.sh, and data/). This mismatch between the documentation and the actual files could cause runtime surprises and indicates sloppy packaging and documentation.
Instruction Scope
The runtime instructions describe piping content through a 'content-qa' CLI and mention integration with an external 'ogilvy-humanizer' skill and 'AI model' summarization. The included scripts operate on local files and templates and appear to implement most fixes locally, but the README and SKILL.md state that an LLM (openai/anthropic) is required for some substitutions, and the scripts in the manifest do not declare how API keys should be provided or whether network calls are made. SKILL.md also references config paths ('references/') that do not exist in the package, increasing the risk of runtime errors or unexpected behavior.
Install Mechanism
There is no install spec and this is effectively an instruction+script bundle. No remote downloads or installers are present in the manifest, which lowers supply-chain risk. The code is local and executable via the provided shell wrapper.
Credentials
The README and SKILL.md state the tool needs an LLM library (openai or anthropic) for phrase substitution and summary generation, which in practice requires API credentials (e.g., OPENAI_API_KEY or ANTHROPIC_API_KEY). Yet the skill declares no required environment variables or primary credential. This is an omission: a tool that can call an external LLM should declare its credential requirements. The integration with another skill ('ogilvy-humanizer') is likewise mentioned but not specified (no declared interface or auth), leaving unclear what permissions or context an agent would need.
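If inspection shows the script really does call an LLM SDK, credentials are usually supplied per-invocation via environment variables. A minimal sketch, assuming the conventional OPENAI_API_KEY name — the skill declares no variable names, so this is an assumption to verify against the actual code:

```shell
# Assumption: the script reads OPENAI_API_KEY; the skill manifest does
# not declare this, so confirm against auto-remediate.py before relying on it.
export OPENAI_API_KEY="sk-example-not-a-real-key"

# Fail fast if the variable is unset, rather than letting the script
# error mid-run with a less obvious message.
: "${OPENAI_API_KEY:?set OPENAI_API_KEY before invoking the skill}"
echo "credential configured"
```

Scoping the key to one shell session (or one command) keeps it out of your global environment and away from other tools.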
Persistence & Privilege
The skill does not request always: true, does not require system-wide configuration changes, and is a user-invocable script. It writes only to output paths supplied by the caller (stdout/stderr or user-specified files). There is no evidence it modifies other skills or system agent settings.
What to consider before installing
This package implements a plausible content QA fixer, but before you run it on real drafts:

1. Confirm which executable to run: the repo ships scripts/remediate.sh and scripts/auto-remediate.py, while SKILL.md's 'content-qa' name matches neither.
2. Inspect auto-remediate.py for any network/HTTP calls or explicit calls to an LLM SDK (openai/anthropic); if it calls an external API, decide where API keys will come from and avoid running it with privileged credentials.
3. Because the docs reference 'references/' but the templates live in data/, ensure the script will find the correct template files in your environment, or update the paths.
4. If you expect the skill to call another internal skill (ogilvy-humanizer), get clarity on that integration and its required permissions.
5. Run the script on non-sensitive test content in an isolated environment first, and review the change-report JSON before trusting automated modifications.

If the author can confirm (a) whether the script makes network/LLM calls and (b) the exact environment variables needed, that would raise confidence and may resolve the current concerns.
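The network-call inspection above can start with a quick static check. A sketch, assuming the script path from the report; the library names are the usual suspects for HTTP and LLM access in Python, not a verified list:

```shell
# Hypothetical pre-install audit: flag imports of HTTP or LLM client
# libraries in the bundled script. The path scripts/auto-remediate.py
# comes from the scan report; adjust to where the package unpacks.
grep -nE 'import (requests|httpx|urllib|openai|anthropic)|from (openai|anthropic)' \
    scripts/auto-remediate.py \
    && echo "review: possible network/LLM calls found" \
    || echo "no obvious network imports detected"
```

A clean grep is not proof of safety (code can build imports dynamically), but a hit is a clear signal to read the surrounding code before running it.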

Like a lobster shell, security has layers — review code before you run it.

latest: vk97f8t3qqe7q072h55jk2sqdzx83cyc6

