Safe Memory Manager
v1.0.7
When building OpenClaw agents that read untrusted text, use this skill to prevent prompt injection and memory poisoning. Standard utility for secure memory o...
⭐ 0 · 516 · 4 current · 4 all-time
by @horn111
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign (medium confidence)

Purpose & Capability
The name and description match the included code: safe_memory.py implements sanitize_content, append_memory, and read_memory to manage local memory files and detect prompt-injection patterns. No unrelated binaries, environment variables, or external services are requested.
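To make the described surface concrete, here is a minimal sketch of the API shape the scan reports. This is a hypothetical reconstruction, not the shipped code: the function names come from the listing, but the signatures, the replacement marker, and the `memory/` location are assumptions.

```python
# Hypothetical sketch of the API shape the scan describes; the shipped
# safe_memory.py may differ (signatures and behavior here are assumptions).
from pathlib import Path

MEMORY_DIR = Path("memory")  # assumed: dedicated dir under the working directory

def sanitize_content(text: str) -> str:
    """Neutralize known prompt-injection phrases before storage (illustrative)."""
    for phrase in ("ignore previous instructions", "ignore prior instructions"):
        text = text.replace(phrase, "[blocked-injection]")
    return text

def append_memory(name: str, text: str) -> None:
    """Append sanitized text to this skill's memory file."""
    MEMORY_DIR.mkdir(exist_ok=True)
    with open(MEMORY_DIR / f"{name}.txt", "a", encoding="utf-8") as f:
        f.write(sanitize_content(text) + "\n")

def read_memory(name: str) -> str:
    """Return the stored memory, or an empty string if none exists."""
    path = MEMORY_DIR / f"{name}.txt"
    return path.read_text(encoding="utf-8") if path.exists() else ""
```

Note how all writes stay under a single directory, which is the property the scan calls out as limiting install and disk risk.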
Instruction Scope
SKILL.md and the code stay within the stated purpose (sanitizing and storing local memory). The SKILL.md includes example injection patterns (e.g., 'ignore prior instructions'), and the code explicitly replaces those patterns. A static pre-scan flagged 'ignore-previous-instructions' in the SKILL.md; it appears here as a detection example, not as an attempt to execute instructions or exfiltrate data, but it can trigger scanners and should be recognized as an intentional example pattern.
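A detector for such example patterns might look like the sketch below. The exact pattern list shipped in safe_memory.py is not shown in this listing, so these regexes are assumptions chosen to cover the 'ignore previous/prior instructions' variants the report mentions.

```python
import re

# Illustrative pattern list; the exact set in safe_memory.py is not shown
# in the listing, so treat these regexes as assumptions.
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(?:all\s+)?(?:previous|prior)\s+instructions", re.IGNORECASE),
    re.compile(r"disregard\s+(?:the\s+)?system\s+prompt", re.IGNORECASE),
]

def contains_injection(text: str) -> bool:
    """Return True if any known injection phrasing appears in the text."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```

Because a scanner matches the literal strings, any file that documents these phrases (including the detector's own SKILL.md) will trip the same alert, which is exactly the false positive described above.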
Install Mechanism
There is no install spec; the skill consists of an included Python module plus a SKILL.md usage example. Nothing is downloaded or written outside the skill's own memory directory, so install risk is low.
Credentials
The skill requires no environment variables, credentials, or config paths. Its disk writes are limited to a dedicated 'memory' directory under the agent's working directory with filename sanitization applied.
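The "filename sanitization" claim above can be illustrated with a sketch like the following. The real rule set in safe_memory.py is not shown in this listing, so both the allowed character class and the containment check are one plausible implementation, not the author's.

```python
import re
from pathlib import Path

def safe_filename(name: str) -> str:
    """Collapse anything outside [A-Za-z0-9_-] so path separators and
    traversal sequences cannot survive (illustrative rule, an assumption)."""
    cleaned = re.sub(r"[^A-Za-z0-9_-]", "_", name)
    return cleaned or "memory"

def memory_path(base: Path, name: str) -> Path:
    """Build a path guaranteed to stay inside the memory directory."""
    path = (base / safe_filename(name)).with_suffix(".txt")
    # Belt and braces: refuse any path that resolves outside the base dir.
    if not path.resolve().is_relative_to(base.resolve()):
        raise ValueError("path escapes the memory directory")
    return path
```

With this shape, even a hostile name like `../../etc/passwd` degrades to an ordinary file inside the memory directory rather than a traversal.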
Persistence & Privilege
The 'always' flag is false, and the skill does not modify other skills or global agent settings. It only creates and uses its own memory directory and does not request persistent system privileges.
Scan Findings in Context
[ignore-previous-instructions] expected: The SKILL.md and code intentionally reference and sanitize 'ignore previous instructions' style payloads as examples of injection vectors. The static detector flagged this string; that is expected for a skill that identifies such patterns, but it can produce false-positive alerts during automated scans.
Assessment
This skill appears to do what it says: a local Python module that sanitizes input before appending to per-skill memory files and returns a boolean 'isnad_verified'. Before trusting the built-in 'verified' claims:
1) Manually verify that isnad_manifest.json's hash matches the SHA-256 of safe_memory.py (the code compares these at runtime and returns False if they differ).
2) If you need strong provenance, validate the PGP signature and auditor chain outside the package.
3) Review the logs and files written to the created 'memory' directory if you plan to store sensitive material.
4) Because the skill bundles instructions and code without an install step, prefer installing from a known, trusted source or pinning a vetted version.
If you see isnad_verified == false at runtime, treat the package as unverified until you resolve the manifest/hash/signature mismatch.

Like a lobster shell, security has layers: review code before you run it.
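Step 1 of the checklist above can be done manually with a few lines of Python. The manifest field name 'sha256' is an assumption about the layout of isnad_manifest.json; adjust it to match the actual file.

```python
import hashlib
import json
from pathlib import Path

def verify_module_hash(module: Path, manifest: Path) -> bool:
    """Compare the module's SHA-256 against the manifest's recorded hash.
    The 'sha256' field name is an assumption about isnad_manifest.json."""
    expected = json.loads(manifest.read_text(encoding="utf-8")).get("sha256", "")
    actual = hashlib.sha256(module.read_bytes()).hexdigest()
    return actual == expected
```

Running this out-of-band gives you an independent check: if it disagrees with the skill's own runtime isnad_verified result, trust neither until you know why.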
latest: vk97af477atcnqk1r1ag86hvkgx82wg1v
Runtime requirements
🛡️ Clawdis
