Memory Guard
Verdict: Suspicious. Audited by ClawScan on May 10, 2026.
Overview
Memory Guard fits its stated purpose, but its shell script has flaws that can cause it to falsely trust memory files or to execute injected code from crafted file names or registry entries.
Review the script before installing. If you use it, protect the .memory-guard directory separately, ensure python3 and hashing tools are installed, avoid running it on untrusted workspaces or crafted file names, and do not rely on the provenance stamp until it is fixed.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A malicious workspace or file name could make the guard run code with the user's local privileges.
The command interpolates a user-supplied file path directly into Python source instead of passing it as data. Crafted file names or poisoned registry entries containing quotes/newlines could cause unintended Python code execution when the tool is invoked.
```
python3 -c "
import json
d = json.load(open('$HASH_FILE'))
if '$target' in d:
    d['$target']['hash'] = '$h' ..."
```

Recommendation: Pass file names and paths to Python via argv or environment variables, use JSON escaping, and avoid constructing executable code from file names or registry content.
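A minimal sketch of the fix, with variable names (`HASH_FILE`, `target`, `h`) mirroring the audited snippet: untrusted values travel to Python as argv, so a hostile file name is plain data, never source code.

```shell
# Hostile file name that would break out of the interpolated snippet above.
HASH_FILE="$(mktemp)"
target='evil'\''); import os; os.system("id"); #'
h="abc123"

echo '{}' > "$HASH_FILE"

# "python3 -" reads the program from stdin; everything after it is argv.
python3 - "$HASH_FILE" "$target" "$h" <<'PY'
import json, sys
hash_file, target, h = sys.argv[1:4]
d = json.load(open(hash_file))
d.setdefault(target, {})["hash"] = h   # the name is a dict key, not code
json.dump(d, open(hash_file, "w"))
print("updated", len(d), "entry")
PY
```

Because the script is a quoted here-document and the values arrive via `sys.argv`, no shell or Python parsing is ever applied to the file name.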
The tool can give users a false sense that memory files were verified when the required runtime is missing.
Verification depends on python3 output, while the registry metadata declares no required binaries. If python3 is absent, the loop may check no files and still reach the clean-result path.
```
done < <(python3 -c "
import json
for k in json.load(open('$HASH_FILE')):
    print(k)
" 2>/dev/null)
...
echo "All files verified. No tampering detected."
```

Recommendation: Declare python3 and hashing utilities as required binaries, fail closed when dependencies are missing, and treat zero checked files as an error.
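A sketch of failing closed; the file names and loop body are illustrative stand-ins for the registry-driven verification loop, not the audited script's actual code.

```shell
# Refuse to run at all if a dependency is missing.
require() {
    command -v "$1" >/dev/null 2>&1 || {
        echo "memory-guard: missing required tool: $1" >&2
        exit 2
    }
}
require python3   # a real script would also require its hashing utility

workdir="$(mktemp -d)"
printf 'soul\n' > "$workdir/SOUL.md"
printf 'agents\n' > "$workdir/AGENTS.md"

checked=0
for f in "$workdir"/SOUL.md "$workdir"/AGENTS.md; do
    [ -e "$f" ] || continue
    checked=$((checked + 1))      # count every file actually examined
done

if [ "$checked" -eq 0 ]; then
    # Zero files checked must never fall through to the clean-result path.
    echo "memory-guard: 0 files checked; verification inconclusive" >&2
    exit 2
fi
echo "All files verified. No tampering detected."
```

The key design point is that the clean message is only reachable when both the dependency check and the non-zero coverage check have passed.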
Someone who can modify the workspace may be able to alter both memory files and the baseline, causing the guard to trust tampered content.
The persistent hash registry is stored in the same workspace and then used as the source of truth for future verification. No signing, read-only storage, or separate protection is shown.
```
GUARD_DIR="${MEMORY_GUARD_DIR:-.memory-guard}"
HASH_FILE="$GUARD_DIR/hashes.json"
...
d = json.load(open('$HASH_FILE'))
```

Recommendation: Store the baseline outside the protected workspace, sign it, commit it to a reviewed repository, or otherwise protect .memory-guard from the same writers being monitored.
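One possible shape for that protection, with assumed paths: keep the baseline and an HMAC key outside the monitored workspace, and refuse the baseline unless its signature checks out.

```shell
# A temp dir stands in for a real out-of-workspace location such as
# ~/.local/state/memory-guard (path is an assumption, not from the skill).
BASELINE_DIR="$(mktemp -d)"
KEY_FILE="$BASELINE_DIR/key"
HASH_FILE="$BASELINE_DIR/hashes.json"

head -c 32 /dev/urandom > "$KEY_FILE"
echo '{"SOUL.md": {"hash": "abc"}}' > "$HASH_FILE"

# HMAC-SHA256 of the baseline, keyed from outside the workspace.
sign() {
    python3 -c 'import hmac, hashlib, sys
key = open(sys.argv[1], "rb").read()
data = open(sys.argv[2], "rb").read()
print(hmac.new(key, data, hashlib.sha256).hexdigest())' "$KEY_FILE" "$HASH_FILE"
}

sign > "$HASH_FILE.sig"

# Later, before trusting the baseline:
if [ "$(sign)" = "$(cat "$HASH_FILE.sig")" ]; then
    echo "baseline signature OK"
else
    echo "baseline tampered; refusing to verify" >&2
    exit 2
fi
```

A workspace writer who can rewrite hashes.json still cannot forge the signature without the key, so tampering with the baseline becomes detectable rather than silently trusted.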
Users and agents may believe memory entries have provenance when they do not, weakening the skill's integrity claims.
The stamp command records that provenance was added, but the actual stamp string is empty, so it only prepends a blank line rather than the promised provenance header.
```
local stamp=""
...
echo "$stamp" > "$tmp"
cat "$file" >> "$tmp"
...
log_action "Stamped $file with provenance (confidence=$confidence)"
```

Recommendation: Implement the documented provenance header, include the rationale/confidence/timestamp values, and verify the file actually contains the stamp before logging success.
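A sketch of that recommendation; the function name, header format, and field names are assumptions, since the skill's documented header format is not shown in the findings.

```shell
stamp_file() {
    file="$1"; rationale="$2"; confidence="$3"
    ts="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
    # A non-empty header carrying the promised fields.
    stamp="<!-- provenance: rationale=\"$rationale\" confidence=$confidence ts=$ts -->"
    tmp="$(mktemp)"
    printf '%s\n' "$stamp" > "$tmp"
    cat "$file" >> "$tmp"
    mv "$tmp" "$file"
    # Verify the stamp actually landed before claiming success.
    head -n 1 "$file" | grep -q 'provenance:' || {
        echo "stamp failed for $file" >&2
        return 1
    }
    echo "Stamped $file with provenance (confidence=$confidence)"
}

f="$(mktemp)"
echo "memory entry" > "$f"
stamp_file "$f" "user-confirmed" 0.9
```

The success log line only fires after the file has been re-read and found to contain the header, closing the gap between what is logged and what was written.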
Future agent sessions may stop or refuse workspace work after normal edits until a human reviews and accepts the changes.
The skill intentionally asks users to add a startup rule that changes the agent's stopping condition. This is aligned with memory-integrity monitoring, but it makes the guard authoritative over future session flow.
```
Before reading any workspace files, run memory-guard verify. If any critical
file (SOUL.md, AGENTS.md) fails verification, STOP and alert human.
```

Recommendation: Only add this startup rule if you want that behavior, and keep a documented human review process for accepting legitimate memory-file changes.
