Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using it.
Humanizer
v0.1.0
Humanize AI-generated text by detecting and removing patterns typical of LLM output. Rewrites text to sound natural, specific, and human. Uses 28 pattern det...
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence

Purpose & Capability
The name and description (humanize AI-generated text) align with the included code: analyzers, humanizer logic, a CLI, an API server, and an MCP server. However, the registry metadata lists this as instruction-only while the bundle contains many runnable code files (api-server, mcp-server). That mismatch (no install spec despite runnable servers) is noteworthy: the codebase expects local installation and execution that the metadata doesn't declare.
Instruction Scope
SKILL.md and the related docs do exactly what the skill claims (pattern detection and rewriting), but they also include explicit 'Always-On Mode' guidance telling users to add rules directly to system prompts or custom instructions (e.g., 'NEVER use these words'). Those lines amount to system-prompt override instructions that change an LLM's global behavior beyond per-invocation use. The pre-scan also flagged 'system-prompt-override' in SKILL.md. This expands the skill's scope from an on-demand tool to a mechanism that can persistently change model behavior if an operator follows those steps.
Install Mechanism
No install spec is declared in the registry (instruction-only), but the package includes a README, package.json, and multiple runnable components (api-server, mcp-server) that assume 'npm install' and 'node' execution. This is not high-risk by itself, but it is an inconsistency the user should be aware of: install and run steps are manual, and the code will create network-accessible servers if you run them.
Credentials
The skill declares no required environment variables, no credentials, and no config paths. The code reviewed does not demand secrets. That is proportionate to the stated functionality (text analysis/humanization).
Persistence & Privilege
always:false (good), but the documentation explicitly recommends adding the tool's rules to system prompts or custom instructions ('Always-On Mode'), and the repo provides code (MCP/API servers) that can be integrated into other LLM clients. If you follow the docs' 'Always-On' advice or wire the MCP/API servers into your LLM environment, the skill effectively gains persistent influence over model outputs. This combination (documentation instructing system-prompt modification plus runnable integration servers) elevates the risk profile.
Scan Findings in Context
[system-prompt-override] unexpected: Detected in SKILL.md and related docs (docs/INTEGRATIONS.md, openai-gpt/instructions.md). The content includes explicit instructions and copyable text intended to be pasted into system prompts/custom instructions (e.g., 'NEVER use these words', 'Writing Rules (Always Active)'). While these are meant to enforce writing style, they are not required to run the humanizer as an on-demand tool and would override model behavior globally if applied.
What to consider before installing
Before installing or enabling this skill:
- Treat the code as executable: although the registry lists this as instruction-only, the package contains runnable servers (api-server, mcp-server). Only run the code after reviewing package.json and all src files locally.
- Do NOT blindly paste the 'Always-On' or 'NEVER use these words' sections into any agent/system prompts or your global custom-instructions. Those lines are a form of system-prompt override — they persistently change model behavior and could have unintended side effects across conversations.
- If you want to use the tool, prefer on-demand invocation (run the CLI or call the local API only when needed) rather than applying global system-prompt modifications.
- Review networking/exposure: api-server binds to a port and sets Access-Control-Allow-Origin: '*' (CORS open). If you deploy it publicly, verify authentication and CORS restrictions to avoid exposing text to unintended callers.
- Audit dependencies and omitted files: inspect package.json and run npm audit/scan, and skim all src/*.js for outbound network calls, telemetry, or unexpected file I/O before running.
- Run in isolation first: execute the tool in a sandbox or isolated environment and run the provided tests (npm test) to confirm behavior matches description.
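The CORS point above is concrete enough to sketch. A minimal hardening approach, assuming you deploy the bundled api-server anyway, is an origin allowlist instead of the reported wildcard. The names here (corsHeaders, ALLOWED_ORIGINS) are illustrative and not taken from the skill's source:

```javascript
// Illustrative sketch, not the skill's actual code: replace a blanket
// 'Access-Control-Allow-Origin: *' with an explicit origin allowlist.
const ALLOWED_ORIGINS = new Set(['http://localhost:3000']); // hypothetical trusted origin

function corsHeaders(requestOrigin) {
  // Echo the origin back only when it is explicitly allowlisted;
  // otherwise return no CORS headers, so browsers block cross-origin reads.
  if (ALLOWED_ORIGINS.has(requestOrigin)) {
    return {
      'Access-Control-Allow-Origin': requestOrigin,
      'Vary': 'Origin', // caches must key responses on the Origin header
    };
  }
  return {};
}

console.log(corsHeaders('http://localhost:3000'));
console.log(corsHeaders('https://attacker.example')); // unlisted origin: empty object
```

In a Node request handler you would merge the returned headers into the response for the allowlisted case only; pairing this with authentication is still advisable if the server is reachable beyond localhost.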
If you want a higher-confidence verdict, provide the package.json at the repo root and the full contents of src/analyzer.js, src/humanizer.js, and any other src files that were truncated; that lets a reviewer check for network calls, hidden endpoints, or credential use. If you do need persistent model behavior, implement it at the integration layer under controlled conditions rather than pasting the skill's 'Always-On' text into your system prompt.
