Auto Coding
Status: Warn. Audited by ClawScan on May 10, 2026.
Overview
The coding workflow is mostly coherent, but the package contains hardcoded LLM API keys and reuses local Nanobot credentials without matching registry declarations.
Review before installing. Do not run the bundled model-test scripts unless the hardcoded API keys have been removed and rotated. Confirm which LLM provider and Nanobot credentials will be used, run generated code only in a sandbox or disposable workspace, and avoid sending private code or secrets to external providers.
Findings (6)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Running bundled test or benchmark scripts could use or expose someone else's LLM account credentials, create billing or abuse risk, and indicates unsafe credential handling.
The package includes real-looking third-party LLM API tokens, and the same file uses these values as Bearer tokens for Bailian/MiniMax API calls.
API_KEYS = {"bailian": "sk-sp-...", "minimax": "sk-api-..."}
Remove all embedded API keys, rotate or revoke the exposed tokens, and load credentials only from user-approved environment variables or config paths that are declared in metadata.
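A minimal sketch of the recommended pattern: resolve keys from environment variables at call time instead of shipping them in source. The variable names `BAILIAN_API_KEY` and `MINIMAX_API_KEY` and the function `load_api_key` are hypothetical; declare whichever names the skill actually uses in its metadata.

```python
import os

# Hypothetical mapping from provider name to environment variable.
ENV_VARS = {"bailian": "BAILIAN_API_KEY", "minimax": "MINIMAX_API_KEY"}

def load_api_key(provider: str) -> str:
    # Fail loudly rather than falling back to an embedded credential.
    env_var = ENV_VARS[provider]
    key = os.environ.get(env_var)
    if key is None:
        raise RuntimeError(f"Set {env_var}; this skill ships no embedded credentials.")
    return key
```

With this shape, a static scan of the package finds no secret literals, and the user controls which account pays for requests.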
The skill may spend quota on the user's existing Nanobot LLM account and send coding prompts through that configured provider without the registry clearly signaling this credential use.
The skill performs local auth/profile access to read LLM API credentials, yet the registry metadata declares no primary credential and no required config path.
Reuses nanobot's LLM configuration (read from `~/.nanobot/config.json`)
Declare the config path and credential requirement, show which provider/model will be used, and require clear user approval before using stored LLM credentials.
If generated code is unsafe, tests or execution could affect local files, network resources, or the workspace.
Generating runnable code and running tests is central to this skill, but it means user- or model-generated code may execute locally.
4. **Implement code** - generate runnable code; 5. **Test and verify** - run tests to confirm the code works
Use a sandbox or disposable workspace, review generated code before execution, and avoid granting broad filesystem or network access for untrusted tasks.
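One lightweight version of the disposable-workspace advice, sketched here as a hypothetical `run_generated_code` helper: execute the generated script with its working directory inside a temporary folder, a timeout, and no shell. This limits accidents (stray files, runaway loops), not a determined attacker; untrusted code still belongs in an OS-level sandbox or container.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_generated_code(code: str, timeout: int = 10) -> subprocess.CompletedProcess:
    # The temp dir is the disposable workspace; it is deleted on exit,
    # so anything the script writes relative to cwd disappears with it.
    with tempfile.TemporaryDirectory() as workdir:
        script = Path(workdir) / "task.py"
        script.write_text(code)
        return subprocess.run(
            [sys.executable, str(script)],
            cwd=workdir,
            capture_output=True,
            text=True,
            timeout=timeout,  # stop runaway generated code
        )
```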
Users may not see the dependency and package-install requirements from the registry view alone.
SKILL.md declares Python/pip requirements and pip installs, while the registry section says there is no install spec and no required binaries.
"requires": {"bins": ["python", "pip"]}, "install": [{"kind": "pip", "package": "dashscope"}, {"kind": "pip", "package": "duckduckgo-search"}]
Align registry metadata/install specs with SKILL.md so users can review package installs before enabling the skill.
A user may over-trust the skill's security posture and miss the embedded credential issue.
The test report claims there are no hardcoded keys, but the provided artifacts contain hardcoded API keys and the static scan flags exposed secret literals.
| Security check | No hardcoded keys | ✅ |
Correct the security report, rerun scans after removing secrets, and avoid publishing broad safety claims that do not match the packaged artifacts.
User prompts, code snippets, or project details may be sent to the configured LLM provider as part of normal operation.
The skill is designed to call an external LLM provider for code generation and review.
"provider": "bailian", "model": "qwen3.5-plus", "baseUrl": "https://coding.dashscope.aliyuncs.com/v1"
Avoid sending secrets or proprietary code unless the provider is approved for that data, and document the external data flow clearly.
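As a last line of defense on that data flow, prompts can be scrubbed for secret-looking literals before they leave the machine. A minimal sketch, with an illustrative pattern matching the `sk-...` token shapes from the findings; this is best-effort hygiene, not a substitute for getting the provider approved for proprietary code.

```python
import re

# Illustrative pattern covering the "sk-..." token shapes quoted above.
SECRET_RE = re.compile(r"sk-[A-Za-z0-9_-]{8,}")

def redact(prompt: str) -> str:
    # Scrub obvious secrets before the prompt is sent to the external provider.
    return SECRET_RE.sub("[REDACTED]", prompt)
```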
