Skill v5.0.0
ClawScan security
AI Humanizer CN - Chinese AI text humanization · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Mar 18, 2026, 3:47 PM
- Verdict: suspicious
- Confidence: medium
- Model: gpt-5-mini
- Summary: The skill's documentation asks for multiple external AI API keys and model selection, but the included source files (as provided) implement purely local text transformations and show no clear use of those credentials; this mismatch is unexplained and warrants caution.
- Guidance: There is a clear mismatch: the docs ask you to provide OpenAI/Anthropic/Alibaba API keys and to pick remote models, but the included Python files (as shown) perform purely local text transformations without any obvious API calls. Before installing or supplying credentials:
  1. Inspect the entire package (including the truncated file parts, setup.py, and any dynamic imports) for network calls (requests, urllib, sockets) or hard-coded endpoints.
  2. Check the published PyPI/GitHub package and verify the author and release history; malicious or typosquatted packages sometimes mimic names.
  3. Run the package in an isolated/sandbox environment and observe outbound network activity before providing real API keys; use dummy keys if you need to test.
  4. If you only need local processing, prefer not to set API keys; if remote model calls are required, require explicit, documented code paths that use those keys.
  5. If uncertain, ask the maintainer why three provider keys are needed, or choose a humanizer implementation whose declared capabilities match its code.

  Useful strings and patterns to search for in the repository: requests.get/post, openai/anthropic client imports, and environment-variable reads.
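The pattern search in step 1 above can be sketched as a short script. This is a minimal sketch assuming a local checkout of the package; the function name `scan_tree` and the `SUSPICIOUS_PATTERNS` list are illustrative choices, not part of any official tooling, and the patterns are a starting point rather than an exhaustive set:

```python
import re
from pathlib import Path

# Illustrative patterns suggesting network access or credential reads.
SUSPICIOUS_PATTERNS = [
    r"requests\.(get|post|put|delete)",
    r"urllib\.request",
    r"\bsocket\.",
    r"import\s+openai",
    r"import\s+anthropic",
    r"os\.environ",
    r"https?://",
]

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Walk a source tree and report (file, line_no, matched_text) hits."""
    combined = re.compile("|".join(f"(?:{p})" for p in SUSPICIOUS_PATTERNS))
    hits = []
    for path in Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for line_no, line in enumerate(lines, start=1):
            m = combined.search(line)
            if m:
                hits.append((str(path), line_no, m.group(0)))
    return hits
```

An empty result from such a scan does not prove the package is safe (code can build URLs dynamically or import via `importlib`), but any hit pinpoints exactly where to read the code before supplying real credentials.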
Review Dimensions
- Purpose & Capability
- concern: SKILL.md and metadata advertise multi-provider model support (OpenAI/Anthropic/Alibaba) and list API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, DASHSCOPE_API_KEY), but the visible Python source implements deterministic/local text transformations (replacements, style templates, numpy usage) with no network calls or API client code. Either the docs overstate remote-model functionality or networking code is hidden/truncated; the requested credentials are not justified by the provided code.
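The claim that the visible source contains no API client code can be checked mechanically by listing each module's imports. A minimal sketch using the standard-library `ast` module; the helper name `imported_modules` is hypothetical:

```python
import ast
from pathlib import Path

def imported_modules(root: str) -> set[str]:
    """Collect the top-level names of every module imported by .py files
    under root, e.g. {'numpy', 'openai'}."""
    names: set[str] = set()
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(errors="ignore"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                names.add(node.module.split(".")[0])
    return names
```

If the result contains only local-processing libraries (numpy, yaml, etc.) and no provider clients (openai, anthropic, dashscope) or HTTP libraries, that supports the "local-only" reading of the code, though dynamic imports via `importlib` would still evade this check.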
- Instruction Scope
- note: Runtime instructions direct the user to set API keys, select a model via HUMANIZER_MODEL, use a local config (~/.ai-humanizer/config.json), and run batch file processing. These are reasonable for a model-calling tool but inconsistent with the present code, which appears to run locally. No instructions explicitly send data to third-party endpoints, yet the docs claim "local processing" while also listing API keys; the scope is ambiguous (local-only vs remote-model proxy).
- Install Mechanism
- ok: No install spec is embedded in the skill bundle (instruction-only). Installation instructions reference pip/git clone and a PyPI package, which is normal. Declared dependencies (numpy, requests, pyyaml) are typical for a Python project; nothing in the install path is an immediate red flag from the provided files.
- Credentials
- concern: SKILL.md metadata requires three provider API keys, but the repository files shown do not reference environment variables or API use. Requesting multiple credentials is disproportionate for the local text-processing code present. Registry metadata also lists no required env vars while SKILL.md does; this contradiction increases risk. The requests dependency could enable network access if code uses it elsewhere (truncated sections).
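The mismatch between declared credentials and code references can be surfaced with a simple cross-check. A minimal sketch: `DECLARED_KEYS` reflects the three env-var names SKILL.md lists per this review, and the helper name `unreferenced_keys` is hypothetical:

```python
from pathlib import Path

# Env-var names SKILL.md declares (per this review).
DECLARED_KEYS = {"OPENAI_API_KEY", "ANTHROPIC_API_KEY", "DASHSCOPE_API_KEY"}

def unreferenced_keys(root: str, declared: set[str] = DECLARED_KEYS) -> set[str]:
    """Return declared env-var names that never appear in any .py file
    under root. A non-empty result means docs request credentials the
    visible code never reads."""
    source = "\n".join(
        p.read_text(errors="ignore") for p in Path(root).rglob("*.py")
    )
    return {key for key in declared if key not in source}
```

For this skill, the review implies all three keys would come back unreferenced, which is exactly the disproportion flagged above: credentials requested with no visible code path that consumes them.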
- Persistence & Privilege
- ok: The skill is not marked 'always: true' and has no OS restrictions. There is no evidence it modifies other skills or system-wide configs. Autonomous invocation (model invocation enabled) is the platform default and on its own is not a concern here.
