Agents
PassAudited by VirusTotal on May 12, 2026.
Overview
Type: OpenClaw Skill Name: agents Version: 1.0.0 The OpenClaw AgentSkills skill bundle is a comprehensive educational resource focused on designing, building, deploying, and securing AI agents. All files, including SKILL.md, architecture.md, evaluation.md, frameworks.md, implementation.md, security.md, and use-cases.md, contain conceptual information, best practices, and illustrative code snippets for agent development. There is no evidence of prompt injection attempts against the OpenClaw agent, data exfiltration, malicious execution, persistence mechanisms, or obfuscation. On the contrary, the 'security.md' file explicitly details agent-specific attack vectors and robust mitigation strategies, demonstrating a strong emphasis on security awareness and responsible AI development.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A scanner may flag the wording, but in context it is a defensive example rather than a hidden directive.
This matches a prompt-injection pattern, but the surrounding section is explicitly an 'Attack Vectors' guide explaining how to recognize and defend against it.
| **Persona hijacking** | "Ignore previous instructions..." | Agent abandons safety constraints |
Keep this text treated as documentation; do not copy the attack phrase into live prompts except as clearly quoted test data.
If you use these patterns in a real agent, stored memories or embeddings may contain private information or untrusted content that affects future behavior.
The skill teaches long-term memory patterns that can store user facts and preferences. This is aligned with agent design guidance, but such memory can become sensitive or poisoned if implemented without controls.
| **Semantic** | Long-term | Facts, learnings, preferences | Vector DB, embeddings |
Scope what memory stores, add user controls for review/deletion, avoid storing secrets, and treat retrieved memory as untrusted context.
