v2.1.2

Auto Coding

Review

ClawScan verdict for this skill. Analyzed May 1, 2026, 7:08 AM.

Analysis

The skill is a coherent coding assistant, but it exposes live-looking API keys and uses local Nanobot LLM credentials that are not clearly declared in the registry.

Guidance

Review carefully before installing. The coding workflow itself is coherent, but the embedded API keys should be removed and rotated, credential use should be declared, and generated code should be run only after review in a limited workspace.

Findings (6)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Abnormal behavior control

Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.

Human-Agent Trust Exploitation
Severity: Medium | Confidence: High | Status: Concern
TEST_REPORT.md
Security check | No hardcoded keys | ✅

The test report claims there are no hardcoded secrets, while the provided artifacts contain exposed API keys.

User impact: A user may incorrectly trust the package's security posture and miss the credential-exposure risk.
Recommendation: Treat the security claims as stale or incorrect until the embedded keys are removed and the security report is regenerated.
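That re-check can be done locally before trusting the report. A minimal sketch, assuming a key prefix like the `sk-` tokens quoted below (the pattern is illustrative, not exhaustive; a real audit should use a dedicated secret scanner):

```python
import re
from pathlib import Path

# Illustrative pattern for provider-style tokens such as "sk-..."; real
# scanners ship much broader rule sets than this single regex.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{8,}")

def scan_for_key_literals(root: str) -> list[tuple[str, str]]:
    """Return (file, match) pairs for key-like strings in the package tree."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".py", ".md", ".json"}:
            for match in KEY_PATTERN.findall(path.read_text(errors="ignore")):
                hits.append((str(path), match))
    return hits
```

If this returns any hits, the "no hardcoded keys" claim in TEST_REPORT.md is demonstrably stale.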
Agentic Supply Chain Vulnerabilities
Severity: Low | Confidence: High | Status: Note
SKILL.md
"requires":{"bins":["python","pip"]},"install":[{"package":"dashscope"},{"package":"duckduckgo-search"}]

The skill metadata asks for Python/pip and unpinned PyPI packages, while the registry section says there is no install spec.

User impact: Installing the skill may add external packages whose versions and behavior can change over time.
Recommendation: Review the dependencies before installing, pin versions where possible, and align the registry metadata with the SKILL.md requirements.
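One way to pin after review is to record the versions the resolver actually chose. A small sketch using the two package names from the SKILL.md excerpt above (it reports, but does not install, anything):

```python
from importlib import metadata

# Build a pinned requirements line for each declared dependency, or flag it
# if the package is not present in the current environment.
lines = []
for pkg in ("dashscope", "duckduckgo-search"):
    try:
        lines.append(f"{pkg}=={metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        lines.append(f"{pkg}: not installed")
print("\n".join(lines))
```

Writing those `pkg==version` lines into a requirements file turns the unpinned install spec into a reproducible one.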
Unexpected Code Execution
Severity: Low | Confidence: Medium | Status: Note
SKILL.md
Test verification - run tests to ensure the code works

The skill expects generated code or tests to be run as part of validating coding output.

User impact: Generated code may interact with local files, network resources, or tools if executed without review.
Recommendation: Inspect generated code first and run it only in a controlled workspace or sandbox.
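As a concrete shape for that workflow, a minimal sketch (the function name and timeout are illustrative) that executes already-reviewed code in a throwaway working directory:

```python
import pathlib
import subprocess
import sys
import tempfile

def run_in_scratch_dir(code: str, timeout: int = 30) -> subprocess.CompletedProcess:
    """Run the given Python source in a temporary working directory.

    This contains stray relative-path file writes; it is NOT a security
    sandbox and does not restrict network or absolute-path access.
    """
    with tempfile.TemporaryDirectory() as scratch:
        script = pathlib.Path(scratch) / "generated.py"
        script.write_text(code)
        return subprocess.run(
            [sys.executable, str(script)],
            cwd=scratch,
            capture_output=True,
            text=True,
            timeout=timeout,
        )

# The relative write lands inside the scratch directory, which is then deleted.
result = run_in_scratch_dir("open('out.txt', 'w').write('contained')\nprint('ok')")
```

The caveat in the docstring is the important part: changing the working directory limits accidents, not a hostile script; stronger isolation needs containers or OS-level sandboxing.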
Permission boundary

Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.

Identity and Privilege Abuse
Severity: Critical | Confidence: High | Status: Concern
9model_test.py
API_KEYS = { "bailian": "sk-sp-f5a...", "minimax": "sk-api-YOuc..." }

The distributed source embeds live-looking LLM provider API tokens instead of using placeholders or user-supplied credentials.

User impact: Anyone who obtains the package could abuse the exposed keys, and running the included test code may use someone else's provider account or incur costs.
Recommendation: Remove the keys from the package, revoke or rotate the exposed credentials, use environment/config placeholders, and declare credential requirements explicitly.
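The placeholder pattern the recommendation describes can be sketched like this, mirroring the `API_KEYS` shape from the quoted excerpt; the environment-variable names are assumptions, not part of the original package:

```python
import os

# Read provider keys from the environment instead of hardcoding them.
# BAILIAN_API_KEY / MINIMAX_API_KEY are illustrative names.
API_KEYS = {
    "bailian": os.environ.get("BAILIAN_API_KEY", ""),
    "minimax": os.environ.get("MINIMAX_API_KEY", ""),
}

# Fail loudly at startup rather than silently calling a provider without a key.
missing = [name for name, key in API_KEYS.items() if not key]
print("providers missing credentials:", missing)
```

With this shape, nothing secret ships in the package, and the registry can declare the required variables explicitly.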
Identity and Privilege Abuse
Severity: High | Confidence: High | Status: Concern
OPTIMIZATION_SUMMARY.md
config_path = Path.home() / ".nanobot" / "config.json" ... api_key = provider_config.get("apiKey", "")

The design explicitly reads API keys from the user's persistent Nanobot configuration to call the configured LLM provider.

User impact: Installing and using the skill can cause your existing LLM provider credentials to be used for generated-code tasks, billing, and provider-side logging.
Recommendation: Install only if you trust the package, use least-privilege or disposable provider credentials where possible, and require the registry metadata to declare this credential access clearly.
Sensitive data protection

Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.

Insecure Inter-Agent Communication
Severity: Low | Confidence: High | Status: Note
9model_test.py
api_base = "https://coding.dashscope.aliyuncs.com/v1" ... api_base = "https://api.minimaxi.com/anthropic/v1"

The code sends prompts and model requests to external LLM provider endpoints.

User impact: Task prompts, generated code, and related context may leave the local environment and be processed by external providers.
Recommendation: Avoid putting secrets or private code into prompts unless you trust the configured provider and its data handling.
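A lightweight guard matching that advice is to scrub key-shaped strings from a prompt before it leaves the machine. A sketch, with the usual caveat that a single regex will not catch every credential format:

```python
import re

# Illustrative pattern for "sk-..." style tokens like those in 9model_test.py.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{8,}")

def redact_prompt(prompt: str) -> str:
    """Replace key-like substrings before sending the prompt upstream."""
    return KEY_PATTERN.sub("[REDACTED]", prompt)
```

This does not make the external call private; it only reduces the chance that a pasted credential rides along with the task context.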