Agent Skills Context

Review

Audited by ClawScan on May 10, 2026.

Overview

Prompt-injection indicators were detected in the submitted artifacts (ignore-previous-instructions, system-prompt-override, unicode-control-chars); human review is required before treating this skill as clean.

This looks suitable as a reference skill for agent-system design. Before running any bundled examples, review the code, especially the eval-based example, and use a sandbox. If you apply the memory, filesystem, or background-agent patterns, require explicit user approval and define path limits, retention/deletion rules, logging, and stop controls. Given the injection indicators noted above, this skill requires human review even though the model response was benign.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Unclear provenance

What this means

If you choose to run the bundled code, you may not have a clear upstream source to trust or audit.

Why it was flagged

The package's provenance is unclear. This matters because the manifest lists many bundled scripts and examples, although no automatic install or execution path is shown.

Skill content
Source: unknown
Homepage: none
Recommendation

Treat the skill as reference material unless you have reviewed the scripts you intend to execute; run examples in a sandbox and verify provenance if possible.
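One way to follow the sandboxing advice above is to run each bundled example in an isolated subprocess with a stripped environment and a hard timeout. This is a minimal sketch, not a real sandbox; `run_example_sandboxed` and the default timeout are illustrative assumptions, and a container or VM gives far stronger isolation.

```python
import subprocess
import sys

def run_example_sandboxed(script_path, timeout_s=10):
    """Run a reviewed example script in a child process with light isolation.

    Illustrative helper, not from the reviewed skill: -I runs Python in
    isolated mode (no user site-packages, no PYTHON* env vars), env={}
    strips inherited environment variables, and timeout_s kills runaways.
    """
    result = subprocess.run(
        [sys.executable, "-I", script_path],
        env={},                  # do not leak the caller's environment
        capture_output=True,     # keep output for logging and review
        text=True,
        timeout=timeout_s,       # raises TimeoutExpired on overrun
    )
    return result.returncode, result.stdout, result.stderr
```

This limits accidental damage but does not block filesystem or network access; treat it as a first layer before a proper sandbox.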

Finding 2: Python eval in a bundled example

What this means

Running that example on untrusted input could cause unintended Python expressions to be evaluated.

Why it was flagged

The static scan found Python eval in an example file. Even with restricted builtins, eval can be risky if expressions are influenced by untrusted users or model output.

Skill content
result = eval(expression, {"__builtins__": {}}, allowed_names)
Recommendation

Do not run the example on untrusted input; replace eval with a safer expression parser or tightly validated allowlist if adapting the code.
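One possible safer replacement is to parse the expression with Python's `ast` module and evaluate only an allowlisted set of node types and operators. This is a minimal sketch under that assumption; `safe_eval` and `_OPS` are illustrative names, not part of the reviewed skill.

```python
import ast
import operator

# Allowlist of arithmetic operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_eval(expression, allowed_names):
    """Evaluate a small arithmetic expression using only allowed names."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name):
            if node.id in allowed_names:
                return allowed_names[node.id]
            raise ValueError(f"name not allowed: {node.id}")
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Function calls, attribute access, subscripts, etc. all land here.
        raise ValueError(f"disallowed syntax: {type(node).__name__}")
    return _eval(ast.parse(expression, mode="eval"))
```

Unlike eval with empty builtins, this rejects call and attribute nodes outright, so sandbox-escape tricks that rebuild builtins from object attributes never execute.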

Finding 3: Persistent memory and filesystem context patterns

What this means

If you use these patterns without limits, an agent may save or reuse sensitive or incorrect information across tasks.

Why it was flagged

The skill purposefully teaches persistent-memory and filesystem-based context patterns. These are coherent with the skill's purpose, but persisted context can accumulate private data or poisoned instructions if implemented without constraints.

Skill content
Memory architectures range from simple scratchpads to sophisticated temporal knowledge graphs... The file-system-as-memory pattern enables just-in-time context loading
Recommendation

When implementing memory, define allowed paths, retention periods, deletion controls, and review steps, and treat stored memories as untrusted context.
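A minimal sketch of those controls, assuming a file-backed store: entries are confined to one allowed directory, expire after a retention window, and can be deleted explicitly. `MemoryStore` and its methods are illustrative names, not from the reviewed skill.

```python
import json
import time
from pathlib import Path

class MemoryStore:
    """Illustrative file-backed memory with path limits and retention."""

    def __init__(self, root, retention_seconds):
        self.root = Path(root).resolve()      # the only allowed directory
        self.retention = retention_seconds

    def _path(self, key):
        # Confine every entry to the allowed root; reject traversal keys.
        p = (self.root / f"{key}.json").resolve()
        if self.root not in p.parents:
            raise ValueError(f"path escapes allowed root: {key}")
        return p

    def save(self, key, value):
        self.root.mkdir(parents=True, exist_ok=True)
        self._path(key).write_text(json.dumps({"t": time.time(), "value": value}))

    def load(self, key):
        p = self._path(key)
        if not p.exists():
            return None
        entry = json.loads(p.read_text())
        if time.time() - entry["t"] > self.retention:
            p.unlink()  # expired: enforce retention by deleting on read
            return None
        # Callers should treat the returned value as untrusted context,
        # never as instructions to follow.
        return entry["value"]

    def delete(self, key):
        self._path(key).unlink(missing_ok=True)
```

The traversal check and the read-time expiry are the two controls the recommendation calls out; a real implementation would add logging and a human review step before memories feed back into prompts.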

Finding 4: Background and self-spawning agent patterns

What this means

If implemented without safeguards, background or self-spawned agents could keep working, consuming resources, or changing files after the user expects the task to stop.

Why it was flagged

The skill discusses background agents, persistent snapshots, and self-spawning as hosted-agent infrastructure patterns. This is purpose-aligned guidance, not shown as automatic behavior, but it is a sensitive design area.

Skill content
Background coding agents run in remote sandboxed environments... filesystem snapshots for session persistence... self-spawning agents for parallel task execution
Recommendation

Require explicit user approval for background workers or self-spawning, and add time limits, resource limits, logging, sandboxing, and a clear stop/cleanup mechanism.
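A minimal sketch of those controls, assuming a thread-based worker: construction fails without an explicit approval flag, the run loop enforces a wall-clock time limit, and a stop event gives callers a clean shutdown path. `BackgroundWorker` and its API are illustrative, not from the reviewed skill.

```python
import threading
import time

class BackgroundWorker:
    """Illustrative background worker with approval, time limit, and stop."""

    def __init__(self, task, time_limit_s, approved=False):
        if not approved:
            # Never start background work without an explicit user opt-in.
            raise PermissionError("explicit user approval required")
        self.task = task
        self.time_limit_s = time_limit_s
        self.stop_event = threading.Event()
        self.thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        deadline = time.monotonic() + self.time_limit_s
        while not self.stop_event.is_set() and time.monotonic() < deadline:
            self.task()        # one bounded unit of work per iteration
            time.sleep(0.01)   # yield between iterations
        self.stop_event.set()  # mark finished so callers can clean up

    def start(self):
        self.thread.start()

    def stop(self):
        self.stop_event.set()
        self.thread.join()
```

Resource limits, logging, and sandboxing would sit around this in a real deployment; the point of the sketch is that approval, a deadline, and a stop signal are cheap to require from the start.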