suspicious.dynamic_code_execution
- Location
- examples/interleaved-thinking/examples/03_full_optimization.py:995
- Finding
- Dynamic code execution detected.
Advisory. Audited by static analysis on May 10, 2026.
Detected: suspicious.dynamic_code_execution, suspicious.prompt_injection_instructions
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If you choose to run the bundled code, there is no clear upstream source to trust or audit.
Package provenance is unclear. This matters because the manifest bundles many scripts and examples, even though no automatic install or execution path is shown.
Source: unknown. Homepage: none.
Treat the skill as reference material unless you have reviewed the scripts you intend to execute; run examples in a sandbox and verify provenance if possible.
Running that example with unsafe input could evaluate unintended Python expressions.
The static scan found Python eval in an example file. Even with restricted builtins, eval can be risky if expressions are influenced by untrusted users or model output.
result = eval(expression, {"__builtins__": {}}, allowed_names)
Do not run the example on untrusted input; replace eval with a safer expression parser or a tightly validated allowlist if adapting the code.
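One way to follow that advice is to parse the expression with Python's `ast` module and walk only an explicit set of node types. This is a hypothetical sketch, not code from the skill: it assumes the expressions are simple arithmetic over a small set of allowlisted variable names.

```python
# Safer replacement sketch for the flagged eval call: parse with ast and
# permit only numeric constants, allowlisted names, and basic arithmetic.
import ast
import operator

_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_eval(expression: str, allowed_names: dict):
    """Evaluate a restricted arithmetic expression without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name):
            if node.id in allowed_names:
                return allowed_names[node.id]
            raise ValueError(f"name not allowed: {node.id}")
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        # anything else (attribute access, calls, imports) is rejected
        raise ValueError(f"node not allowed: {type(node).__name__}")
    return walk(ast.parse(expression, mode="eval"))
```

Unlike `eval` with empty `__builtins__` (which can still be escaped through attribute access), this rejects any node type it does not recognize, so `safe_eval("x * 2 + 1", {"x": 3})` succeeds while `safe_eval("__import__('os')", {})` raises.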
If you use these patterns without limits, an agent may save or reuse sensitive or incorrect information across tasks.
The skill purposefully teaches persistent memory and filesystem-based context patterns. These are coherent with the skill's stated purpose, but persisted context can accumulate private data or poisoned instructions if implemented without guardrails.
Memory architectures range from simple scratchpads to sophisticated temporal knowledge graphs... The file-system-as-memory pattern enables just-in-time context loading
When implementing memory, define allowed paths, retention, deletion controls, review steps, and treat stored memories as untrusted context.
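Those safeguards can be sketched concretely. The following is an illustrative example, not code from the skill: paths, retention window, and the untrusted-context wrapper are all assumptions chosen for demonstration.

```python
# Sketch of the suggested safeguards: confine memory files to an allowed
# root, expire old entries (retention + deletion control), and tag anything
# read back as untrusted so embedded instructions are not followed blindly.
import time
from pathlib import Path

MEMORY_ROOT = Path("/tmp/agent_memory")   # assumed allowed path
RETENTION_SECONDS = 7 * 24 * 3600         # assumed retention window

def save_memory(name: str, text: str) -> Path:
    MEMORY_ROOT.mkdir(parents=True, exist_ok=True)
    path = (MEMORY_ROOT / name).resolve()
    # refuse writes that escape the allowed root (e.g. "../secrets")
    if MEMORY_ROOT.resolve() not in path.parents:
        raise ValueError(f"path outside memory root: {path}")
    path.write_text(text)
    return path

def load_memories() -> list[str]:
    out = []
    now = time.time()
    for f in sorted(MEMORY_ROOT.glob("*")):
        if now - f.stat().st_mtime > RETENTION_SECONDS:
            f.unlink()  # deletion control: expire stale entries
            continue
        # treat recalled text as untrusted context, not as instructions
        out.append(f"[untrusted memory: {f.name}]\n{f.read_text()}")
    return out
```

The key design choice is resolving the path before writing, so traversal via `..` or symlinked names cannot land a memory file outside the allowed root.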
If implemented without safeguards, background or self-spawned agents could keep working, consuming resources, or changing files after the user expects the task to stop.
The skill discusses background agents, persistent snapshots, and self-spawning as hosted-agent infrastructure patterns. This is purpose-aligned guidance, not shown as automatic behavior, but it is a sensitive design area.
Background coding agents run in remote sandboxed environments... filesystem snapshots for session persistence... self-spawning agents for parallel task execution
Require explicit user approval for background workers or self-spawning, and add time limits, resource limits, logging, sandboxing, and a clear stop/cleanup mechanism.
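The controls listed above can be combined in a small wrapper. This is a hedged sketch under assumed names (`BackgroundWorker`, `user_approved`), not the skill's own infrastructure: it gates startup on explicit approval, enforces a wall-clock limit, logs activity, and exposes a stop/cleanup path.

```python
# Illustrative background-worker wrapper: explicit approval gate, time
# limit, logging, and a cooperative stop/cleanup mechanism.
import logging
import threading
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("background-worker")

class BackgroundWorker:
    def __init__(self, task, time_limit: float):
        self.task = task                    # callable run on each tick
        self.time_limit = time_limit        # wall-clock limit in seconds
        self.stop_event = threading.Event()
        self.thread = None

    def start(self, user_approved: bool):
        # approval gate: never spawn background work silently
        if not user_approved:
            raise PermissionError("background work requires explicit user approval")
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        deadline = time.monotonic() + self.time_limit
        while not self.stop_event.is_set() and time.monotonic() < deadline:
            self.task()
            log.info("worker tick")
        log.info("worker exiting (stop requested or time limit reached)")

    def stop(self):
        # clear stop/cleanup path: signal the loop, then wait for it to exit
        self.stop_event.set()
        if self.thread:
            self.thread.join()
```

Resource limits and sandboxing would sit outside this sketch (e.g. cgroups or a container runtime); the point here is that the worker cannot start without approval and cannot outlive its deadline or a stop request.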