Agent Skills Context

Advisory · Audited by static analysis on May 10, 2026.

Overview

Detected: suspicious.dynamic_code_execution, suspicious.prompt_injection_instructions


Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

If you choose to run the bundled code, you may not have a clear upstream source to trust or audit.

Why it was flagged

The package's provenance is unclear. This matters because the manifest shows many bundled scripts and examples, even though no automatic install or execution path is shown.

Skill content
Source: unknown
Homepage: none
Recommendation

Treat the skill as reference material unless you have reviewed the scripts you intend to execute; run examples in a sandbox and verify provenance if possible.
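One lightweight way to follow this recommendation is to pin a digest for every bundled script you actually review, so any later change to the package is detectable. A minimal sketch (function names and the `.py`-only scope are assumptions, not part of the skill):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def pin_reviewed_scripts(skill_dir: str) -> dict[str, str]:
    """Record a digest for every bundled .py file so later edits are detectable."""
    root = Path(skill_dir)
    return {str(p.relative_to(root)): sha256_of(p) for p in sorted(root.rglob("*.py"))}
```

Comparing the recorded digests before each run gives a crude provenance check even when no upstream source is available.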

What this means

Running that example on untrusted input could evaluate unintended Python expressions.

Why it was flagged

The static scan found Python eval in an example file. Even with restricted builtins, eval can be risky if expressions are influenced by untrusted users or model output.

Skill content
result = eval(expression, {"__builtins__": {}}, allowed_names)
Recommendation

Do not run the example on untrusted input; replace eval with a safer expression parser or tightly validated allowlist if adapting the code.
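A safer expression parser of the kind recommended here can be sketched with the standard-library `ast` module: parse the expression, then walk the tree and permit only numeric constants, allowlisted names, and arithmetic operators. This is an illustrative replacement, not the skill's own code:

```python
import ast
import operator

# Operators the evaluator will accept; everything else is rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_eval(expression: str, allowed_names: dict[str, float]) -> float:
    """Evaluate simple arithmetic over allowlisted names without eval()."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name) and node.id in allowed_names:
            return allowed_names[node.id]
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"disallowed syntax: {ast.dump(node)}")
    return walk(ast.parse(expression, mode="eval"))
```

Unlike `eval` with emptied builtins, this rejects function calls, attribute access, and dunder tricks outright rather than trying to fence them in.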

What this means

If you use these patterns without limits, an agent may save or reuse sensitive or incorrect information across tasks.

Why it was flagged

The skill purposefully teaches persistent memory and filesystem-based context patterns. These patterns are consistent with the skill's purpose, but persisted context can accumulate private data or poisoned instructions if implemented without safeguards.

Skill content
Memory architectures range from simple scratchpads to sophisticated temporal knowledge graphs... The file-system-as-memory pattern enables just-in-time context loading
Recommendation

When implementing memory, define allowed paths, retention, deletion controls, review steps, and treat stored memories as untrusted context.
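The allowed-paths, retention, and deletion controls above can be sketched as a small file-backed store that refuses to write outside one directory and expires old records. Class and parameter names here are hypothetical, not from the skill:

```python
import json
import time
from pathlib import Path

class ScopedMemory:
    """File-backed memory confined to one allowed directory, with a
    retention window and explicit deletion. A minimal sketch only."""

    def __init__(self, root: str, retention_seconds: float = 7 * 86400):
        self.root = Path(root).resolve()
        self.root.mkdir(parents=True, exist_ok=True)
        self.retention = retention_seconds

    def _path(self, key: str) -> Path:
        p = (self.root / f"{key}.json").resolve()
        if self.root not in p.parents:  # reject ../ escapes from the allowed path
            raise ValueError(f"key escapes allowed path: {key}")
        return p

    def save(self, key: str, value: object) -> None:
        self._path(key).write_text(json.dumps({"t": time.time(), "v": value}))

    def load(self, key: str):
        p = self._path(key)
        if not p.exists():
            return None
        rec = json.loads(p.read_text())
        if time.time() - rec["t"] > self.retention:  # expired: delete and report absent
            p.unlink()
            return None
        return rec["v"]  # NOTE: treat loaded memories as untrusted context

    def delete(self, key: str) -> None:
        self._path(key).unlink(missing_ok=True)
```

The key point is that retrieval returns data, never instructions: anything read back should be re-validated before it influences agent behavior.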

What this means

If implemented without safeguards, background or self-spawned agents could keep working, consuming resources, or changing files after the user expects the task to stop.

Why it was flagged

The skill discusses background agents, persistent snapshots, and self-spawning as hosted-agent infrastructure patterns. This is purpose-aligned guidance, not shown as automatic behavior, but it is a sensitive design area.

Skill content
Background coding agents run in remote sandboxed environments... filesystem snapshots for session persistence... self-spawning agents for parallel task execution
Recommendation

Require explicit user approval for background workers or self-spawning, and add time limits, resource limits, logging, sandboxing, and a clear stop/cleanup mechanism.
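Those controls can be sketched in a few lines: a worker that refuses to start without an explicit approval flag, enforces a wall-clock limit, logs activity, and honors a stop signal. This is an illustrative pattern, not code from the skill; all names are assumptions:

```python
import logging
import threading
import time

def run_background_worker(task, *, approved: bool, time_limit_s: float,
                          stop: threading.Event) -> threading.Thread:
    """Run `task` repeatedly in the background until stopped or timed out."""
    if not approved:
        raise PermissionError("background work requires explicit user approval")
    deadline = time.monotonic() + time_limit_s

    def loop():
        logging.info("worker started (limit %.1fs)", time_limit_s)
        while not stop.is_set() and time.monotonic() < deadline:
            task()                 # one bounded unit of work per pass
            time.sleep(0.01)
        logging.info("worker stopped; running cleanup")

    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```

The caller keeps the `stop` event, so the user always has a cleanup path: setting it halts the loop regardless of remaining budget.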

Findings (4)

critical

suspicious.dynamic_code_execution

Location
examples/interleaved-thinking/examples/03_full_optimization.py:995
Finding
Dynamic code execution detected.
warn

suspicious.prompt_injection_instructions

Location
docs/compression.md:243
Finding
Prompt-injection style instruction pattern detected.
warn

suspicious.prompt_injection_instructions

Location
docs/gemini_research.md:8
Finding
Prompt-injection style instruction pattern detected.
warn

suspicious.prompt_injection_instructions

Location
examples/llm-as-judge-skills/README.md:169
Finding
Prompt-injection style instruction pattern detected.