Auto Coding

Verdict: Warn. Audited by ClawScan on May 10, 2026.

Overview

The coding workflow is mostly coherent, but the package contains hardcoded LLM API keys and reuses local Nanobot credentials without matching registry declarations.

Review before installing. Do not run the bundled model-test scripts unless the hardcoded API keys have been removed and rotated. Confirm which LLM provider and Nanobot credentials will be used, run generated code only in a sandbox or disposable workspace, and avoid sending private code or secrets to external providers.

Findings (6)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1 — Hardcoded third-party LLM API keys

What this means

Running bundled test or benchmark scripts could use or expose someone else's LLM account credentials, create billing or abuse risk, and indicates unsafe credential handling.

Why it was flagged

The package includes real-looking third-party LLM API tokens, and the same file uses these values as Bearer tokens for Bailian/MiniMax API calls.

Skill content
API_KEYS = {"bailian": "sk-sp-...", "minimax": "sk-api-..."}
Recommendation

Remove all embedded API keys, rotate/revoke the exposed tokens, and load credentials only from user-approved environment variables or config paths that are declared in metadata.
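
The environment-variable approach could look like this — a minimal sketch, assuming variable names such as `BAILIAN_API_KEY` that the skill's metadata would declare (the names are illustrative, not from the package):

```python
import os

def load_api_key(provider: str) -> str:
    """Read the key for `provider` from an environment variable
    (e.g. BAILIAN_API_KEY) instead of hardcoding it in the package."""
    var = f"{provider.upper()}_API_KEY"
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Missing credential: set {var} before running")
    return key
```

Failing loudly when the variable is unset avoids silently falling back to a bundled key.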

Finding 2 — Undeclared reuse of local Nanobot credentials

What this means

The skill may spend quota on the user's existing Nanobot LLM account and send coding prompts through that configured provider without the registry clearly signaling this credential use.

Why it was flagged

This is local auth/profile access for LLM API credentials, but the registry metadata lists no primary credential and no required config path.

Skill content
Reuse nanobot's LLM configuration (read from `~/.nanobot/config.json`)
Recommendation

Declare the config path and credential requirement, show which provider/model will be used, and require clear user approval before using stored LLM credentials.
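
An approval gate around the stored credentials could be sketched as follows — the `provider`/`model` keys are an assumed config schema, not verified against the actual `~/.nanobot/config.json` format:

```python
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".nanobot" / "config.json"

def load_llm_config(approved: bool) -> dict:
    """Load stored LLM credentials only after explicit user approval.
    The config schema (provider/model keys) is assumed, not verified."""
    if not approved:
        raise PermissionError(
            "User approval required to reuse nanobot LLM credentials"
        )
    with CONFIG_PATH.open() as f:
        cfg = json.load(f)
    # Surface which provider/model will be billed before any call is made.
    print(f"Using provider={cfg.get('provider')} model={cfg.get('model')}")
    return cfg
```

The key point is that the skill refuses to touch the credential file until the user has opted in.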

Finding 3 — Local execution of generated code

What this means

If generated code is unsafe, tests or execution could affect local files, network resources, or the workspace.

Why it was flagged

Generating runnable code and running tests is central to this skill, which means user- or model-generated code may execute locally.

Skill content
4. **Implement code** - generate runnable code; 5. **Test and verify** - run tests to confirm the code works
Recommendation

Use a sandbox or disposable workspace, review generated code before execution, and avoid granting broad filesystem or network access for untrusted tasks.
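
A throwaway-workspace runner could be sketched like this — it limits the filesystem blast radius and bounds runtime, but it is not a full sandbox (containers or VMs are needed for genuinely untrusted code):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_generated_code(code: str, timeout: int = 10) -> subprocess.CompletedProcess:
    """Execute generated code in a disposable directory with a timeout.
    NOT a security boundary: use containers/VMs for untrusted code."""
    with tempfile.TemporaryDirectory() as workdir:
        script = Path(workdir) / "generated.py"
        script.write_text(code)
        return subprocess.run(
            [sys.executable, str(script)],
            cwd=workdir,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
```

Running from a temporary working directory means any files the generated code writes disappear with the directory.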

Finding 4 — Install requirements missing from registry metadata

What this means

Users may not see the dependency and package-install requirements from the registry view alone.

Why it was flagged

SKILL.md declares Python/pip requirements and pip installs, while the registry section says there is no install spec and no required binaries.

Skill content
"requires":{"bins":["python","pip"]},"install":[{"kind":"pip","package":"dashscope"},{"kind":"pip","package":"duckduckgo-search"}]
Recommendation

Align registry metadata/install specs with SKILL.md so users can review package installs before enabling the skill.
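
Aligned registry metadata could look like this — a hypothetical fragment, assuming the registry accepts the same `requires`/`install` schema quoted from SKILL.md above:

```json
{
  "requires": { "bins": ["python", "pip"] },
  "install": [
    { "kind": "pip", "package": "dashscope" },
    { "kind": "pip", "package": "duckduckgo-search" }
  ]
}
```

With this declared, the registry view shows the pip installs before the user enables the skill.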

Finding 5 — Security report contradicts packaged artifacts

What this means

A user may over-trust the skill's security posture and miss the embedded credential issue.

Why it was flagged

The test report claims there are no hardcoded keys, but the provided artifacts contain hardcoded API keys and the static scan flags exposed secret literals.

Skill content
| Security check | No hardcoded keys | ✅ |
Recommendation

Correct the security report, rerun scans after removing secrets, and avoid publishing broad safety claims that do not match the packaged artifacts.
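
Regenerating the security table from an actual scan, rather than asserting it, could start from a check like this — the regexes are illustrative patterns for the key prefixes seen in this package, not an exhaustive secret scanner:

```python
import re

# Token shapes seen in this package; illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"sk-(sp|api)-[A-Za-z0-9]+"),      # Bailian / MiniMax style keys
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]{20,}"),  # raw Bearer tokens
]

def find_secret_literals(text: str) -> list:
    """Flag strings that look like embedded credentials so the
    'no hardcoded keys' claim is backed by a scan, not an assertion."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

A dedicated tool (e.g. a secret-scanning linter in CI) is the more robust choice; this only illustrates the shape of the check.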

Finding 6 — External data flow to the configured LLM provider

What this means

User prompts, code snippets, or project details may be sent to the configured LLM provider as part of normal operation.

Why it was flagged

The skill is designed to call an external LLM provider for code generation and review.

Skill content
"provider": "bailian", "model": "qwen3.5-plus", "baseUrl": "https://coding.dashscope.aliyuncs.com/v1"
Recommendation

Avoid sending secrets or proprietary code unless the provider is approved for that data, and document the external data flow clearly.
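
One mitigation is redacting secret-looking values before a prompt leaves the workspace — a sketch with hypothetical patterns (the skill itself defines no such filter):

```python
import re

REDACTION = "[REDACTED]"
# Hypothetical patterns for values that should never reach the provider.
SENSITIVE = [
    re.compile(r"sk-[A-Za-z0-9-]{8,}"),               # API-key-like tokens
    re.compile(r"(?i)(password|secret)\s*=\s*\S+"),   # secret assignments
]

def redact(prompt: str) -> str:
    """Strip secret-looking values from a prompt before it is sent
    to the external LLM provider configured for this skill."""
    for pattern in SENSITIVE:
        prompt = pattern.sub(REDACTION, prompt)
    return prompt
```

Redaction is best-effort; the stronger control remains not routing proprietary code to an unapproved provider at all.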