suspicious.dynamic_code_execution
- Location
- scripts/analyze.sh:60
- Finding
- Dynamic code execution detected.
Advisory
- Audited by
- Static analysis on May 10, 2026.
- Detected
- suspicious.dynamic_code_execution
- Scope
- Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

- Risk
- Private source code, and any secrets inside that code, could leave the local environment even when the user has not set Anthropic or OpenAI API keys.
- Description
- The prompt built by this script includes analyzed file contents, and this fallback sends that prompt through the OpenClaw LLM gateway whenever the CLI is available.
- Evidence
- `result=$(echo "$prompt" | openclaw llm --raw 2>/dev/null) || result=""`
- Recommendation
- Make LLM use explicit, add a documented local-only/no-LLM mode (sketched below), and clearly disclose the OpenClaw gateway destination before scanning sensitive repositories.
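One possible shape for such a mode, as a sketch rather than the skill's actual code: a single opt-in variable gates every outbound path, so "no flag set" reliably means local-only. The variable name `ANALYZE_ALLOW_LLM` is an assumption, not something the script defines.

```bash
# Sketch only: gate the OpenClaw fallback behind one explicit opt-in.
# ANALYZE_ALLOW_LLM is a hypothetical variable, not in the real script.
run_llm_fallback() {
  local prompt="$1" result=""
  if [ "${ANALYZE_ALLOW_LLM:-0}" != "1" ]; then
    echo "LLM fallback disabled; using local heuristics only" >&2
    return 1                     # caller proceeds with heuristic analysis
  fi
  # Only after the explicit opt-in may the prompt leave the machine.
  result=$(echo "$prompt" | openclaw llm --raw 2>/dev/null) || return 1
  printf '%s\n' "$result"
}
```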

- Risk
- A user could run the skill on sensitive code believing it will stay local, while the OpenClaw LLM fallback may still transmit code content.
- Description
- The documented fallback behavior is incomplete: `scripts/analyze.sh` also tries `openclaw llm` when the CLI is available, so users may wrongly infer that unsetting provider keys prevents outbound LLM use.
- Evidence
- "If no LLM API key is set, the tool falls back to heuristic analysis (less accurate but still useful)."
- Recommendation
- Align SKILL.md, README, SECURITY.md, and runtime behavior; explicitly tell users when any LLM path will be used and how to force heuristic-only analysis.
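Until the docs and runtime agree, a cautious user can at least verify the fallback condition themselves. A minimal sketch; the final invocation is illustrative, since this advisory does not document the actual CLI arguments of `analyze.sh`:

```bash
# Unsetting provider keys is not enough: the openclaw fallback fires
# whenever the CLI is on PATH, regardless of provider key settings.
unset ANTHROPIC_API_KEY OPENAI_API_KEY
if command -v openclaw >/dev/null 2>&1; then
  echo "warning: openclaw CLI found on PATH; code may still be sent out" >&2
fi
bash scripts/analyze.sh   # illustrative invocation; actual args not shown
```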

- Risk
- Configured provider keys authorize external API calls and may incur cost while sending selected code for analysis.
- Description
- The script uses Anthropic or OpenAI API keys for the expected LLM analysis; the visible artifacts show no hardcoded keys or intentional key logging.
- Evidence
- `-H "x-api-key: ${ANTHROPIC_API_KEY}" ... -H "Authorization: Bearer ${OPENAI_API_KEY}"`
- Recommendation
- Use limited-scope provider keys, avoid scanning secret-heavy repositories with LLM mode enabled, and unset keys when local heuristic analysis is desired.

- Risk
- A hostile repository could make the report or suggested patches misleading if the model follows instructions hidden in code comments.
- Description
- User/repository code is embedded directly into the LLM prompt, so malicious comments in analyzed code could try to influence the generated findings or fixes.
- Evidence
- **File:** \`${file_path}\` ... ${file_content} ... Respond with ONLY a JSON object
- Recommendation
- Treat analyzed code as untrusted prompt content, keep strict JSON parsing, and require human review before acting on generated findings or patches.
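Strict parsing can be enforced with the python3 stdlib the skill already requires. A hedged sketch, with illustrative variable names, that drops any response that is not exactly one JSON object:

```bash
# Accept the LLM response only if it parses as a single JSON object;
# anything else (prose, injected instructions, partial JSON) is discarded.
parsed=$(printf '%s' "$result" | python3 -c '
import json, sys
try:
    obj = json.loads(sys.stdin.read())
except ValueError:
    raise SystemExit(1)
if not isinstance(obj, dict):
    raise SystemExit(1)
json.dump(obj, sys.stdout)
') || { echo "discarding non-JSON LLM output" >&2; parsed=""; }
```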

- Risk
- Users relying on registry metadata may miss that the skill depends on local command-line tools and can use LLM credentials and network access.
- Description
- The README documents runtime tools and optional credentials, while the registry metadata lists no required binaries, env var declarations, or install spec.
- Evidence
- Requirements: bash (4.0+), python3 (stdlib only — no pip installs), curl (for LLM API calls), ANTHROPIC_API_KEY or OPENAI_API_KEY
- Recommendation
- Declare runtime binaries, optional credential names, and network capability metadata so users can assess the environment requirements before installation.
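A declaration along these lines would close the gap. The field names below are hypothetical, since this advisory does not show the registry's actual metadata schema:

```yaml
# Hypothetical install-spec metadata; field names are assumptions.
requires:
  binaries: [bash, python3, curl]   # bash 4.0+, python3 stdlib only
env:
  optional: [ANTHROPIC_API_KEY, OPENAI_API_KEY]
capabilities:
  network: true                     # outbound LLM API / gateway calls
```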