Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Ai-Deodorizer
v2.0.0 · Remove signs of AI-generated writing from text, making it sound more natural and human-written. Based on 25 AI writing pattern detectors plus two-round rewriting.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
Medium confidence
Purpose & Capability
The name/description claim (remove AI traces) aligns with the included code and prompts, but the registry metadata claims no required env vars or credentials, while both README.md and scripts/humanize.py require a MINIMAX_API_KEY and call an external MiniMax API. Omitting that required credential from the metadata is an inconsistency that can mislead users about the access the skill needs.
Instruction Scope
SKILL.md and humanize.py are narrowly focused on reading text (CLI argument, file path, or stdin) and submitting two prompt rounds to an LLM. The instructions allow file input paths, and the code reads arbitrary files the user points to, which is expected for a text tool. However, at runtime the skill sends the full user text to a remote API (MiniMax), and SKILL.md does not explicitly warn users that their content will be transmitted to an external service.
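The three input modes described above can be sketched as follows. This is a hypothetical reconstruction for review purposes; `read_text` is not a function from humanize.py, and the script's actual argument handling may differ.

```python
import os
import sys

def read_text(arg=None):
    """Sketch of the input modes the scan describes: CLI string,
    file path, or stdin. An assumption, not copied from humanize.py."""
    if arg is None:
        return sys.stdin.read()          # piped input
    if os.path.isfile(arg):
        with open(arg, encoding="utf-8") as f:
            return f.read()              # arbitrary file the user points to
    return arg                           # literal text passed on the CLI
```

Note that the file branch reads whatever path the user supplies — expected behavior for a text tool, but every byte it reads is then submitted to the remote API.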
Install Mechanism
This is an instruction/code-only skill with no install spec in the registry; dependencies are minimal (requests). There is no suspicious remote download or install behavior. The README suggests pip-installing scripts/requirements.txt, which is normal.
Credentials
The code expects MINIMAX_API_KEY (and optionally API_BASE and MODEL), a secret that lets the skill call an external LLM, but the registry metadata declares no required environment variables or primary credential. Requiring a user API key to call a third-party LLM is reasonable for functionality, but the missing declaration is a proportionality and visibility problem. Also, user text is transmitted to https://api.minimaxi.chat by default; users should confirm they trust that endpoint before providing sensitive content.
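A minimal sketch of the credential flow this section describes: the env var names (MINIMAX_API_KEY, API_BASE, MODEL) and default endpoint come from the scan report, but `build_request`, the payload shape, and the model default are assumptions for illustration, not code taken from humanize.py.

```python
import os

# Default endpoint per the scan report; override via API_BASE.
DEFAULT_BASE = "https://api.minimaxi.chat"

def build_request(text, round_prompt):
    """Hypothetical sketch: assemble one rewrite-round request.
    Payload shape is an assumption (OpenAI-style chat messages)."""
    api_key = os.environ.get("MINIMAX_API_KEY")
    if not api_key:
        # The registry metadata omits this requirement; the code does not.
        raise RuntimeError("MINIMAX_API_KEY is required")
    base = os.environ.get("API_BASE", DEFAULT_BASE)
    model = os.environ.get("MODEL", "default-model")  # placeholder name
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": round_prompt},
            {"role": "user", "content": text},  # full user text leaves the machine
        ],
    }
    headers = {"Authorization": f"Bearer {api_key}"}
    return base, headers, payload
```

The point of the sketch is the last comment: whatever the exact payload shape, the entire user text is placed in the request body and sent to whichever host API_BASE names.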
Persistence & Privilege
The skill does not request 'always: true' or system-wide config modification. It runs on demand and does not claim persistent/privileged system presence beyond normal skill files.
What to consider before installing
This skill appears to do what it says (two-round LLM rewrites to remove 'AI' style), but consider two issues before installing:
1) The code requires MINIMAX_API_KEY and will POST your full text to the configured API_BASE. The registry metadata does not declare this required credential. Confirm that you are willing to provide an API key and to send your content to the MiniMax endpoint (or change API_BASE to a provider you trust). Do not paste sensitive or confidential text unless you trust the endpoint and its privacy policy.
2) Review humanize.py yourself (it is short and readable) to verify it doesn't call any other endpoints. To test safely, run it locally with non-sensitive sample text, or point API_BASE at a local or mock LLM endpoint and observe its behavior. Consider creating a dedicated, limited-scope API key for this skill rather than reusing high-privilege keys.
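The mock-endpoint test suggested in item 2 can be set up with the standard library alone. This is a generic sketch — the canned reply assumes an OpenAI-style `choices` response shape, which may not match what humanize.py actually parses; adjust after reading the script.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockLLMHandler(BaseHTTPRequestHandler):
    """Accepts any POST and returns a canned chat-completion-style reply,
    printing whatever payload the skill tried to send."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("Received payload:", body.decode("utf-8", errors="replace"))
        reply = json.dumps({
            "choices": [{"message": {"role": "assistant",
                                     "content": "[mock rewrite]"}}]
        }).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        pass  # silence default per-request logging

def start_mock(port=8089):
    """Start the mock endpoint in a background thread; returns the server."""
    server = HTTPServer(("127.0.0.1", port), MockLLMHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

With this running, set `API_BASE=http://127.0.0.1:8089` before invoking the skill: every outbound request is printed locally instead of reaching a remote host, so you can see exactly what would have been transmitted.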
If you need higher assurance, ask the maintainer to update the registry metadata to declare MINIMAX_API_KEY as a required env var and to document the data-transmission/privacy implications; that change would resolve the primary inconsistency and make the skill easier to evaluate.
latest: vk97eg6vxf29beh9chzx1cj5gd183w9r8
