HedgehogMemory
Audited by ClawScan on May 14, 2026.
Overview
Review recommended: this is a coherent memory library, but it is designed to keep verbatim session history indefinitely and reuse it in future prompts, with optional external LLM summarization.
Install only if you want durable cross-session memory. Set a dedicated HEDGEHOG_MEMORY_PATH, do not commit secrets or sensitive transcripts, avoid external summarizers for confidential work, and plan how you will inspect, back up, redact, or delete origin.json if needed.
Publisher note
Pure Python library for persistent AI agent memory. No network access required unless using OpenAI summarizer (optional env var). Uses only local file storage (origin.json).
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Private conversation details, secrets, or misleading instructions could be retained indefinitely and later influence the agent.
The skill encourages saving complete session logs permanently and reusing them across sessions. That is core to the product, but it is unbounded persistent memory with no documented deletion, redaction, or per-commit approval controls.
Memory is NEVER deleted... The verbatim original is always recoverable at L4... Apply this pattern every session... full_context=full_session_log
Use a dedicated memory path, avoid saving secrets or credentials, review origin.json regularly, and add deletion/redaction and explicit approval controls before broad use.
Committed session content may leave the local machine if an external summarizer is used.
When the optional OpenAI summarizer is configured, the text being summarized is placed into a prompt and sent to the configured model provider. LiteLLM support similarly forwards prompts to the selected backend.
resp = self.client.chat.completions.create(model=self.model, messages=[{"role": "user", "content": prompt}], ...)
Use the built-in keyword summarizer or a trusted local provider for sensitive data, and disclose provider use to users whose data may be summarized.
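For sensitive data, a purely local frequency-based summarizer avoids sending any text off the machine. This is a minimal sketch of that style of summarizer, not the skill's actual implementation; the stopword list and function name are assumptions.

```python
import re
from collections import Counter

# Small illustrative stopword list; a real one would be much larger.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it", "that", "for"}

def keyword_summary(text, top_n=5):
    """Rank words by frequency, skipping stopwords and very short tokens,
    and return the top terms as a comma-separated line. Runs entirely
    locally, so nothing leaves the machine."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return ", ".join(w for w, _ in counts.most_common(top_n))
```

Summaries like this are far cruder than LLM output, but for confidential transcripts the privacy trade-off may be worth it.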
If configured, the key authorizes summarization requests and can incur usage charges on the provider account.
The optional credential is expected for the OpenAI integration and there is no artifact evidence of hardcoding or logging it, but it does grant access to a provider account.
OPENAI_API_KEY ... required: false ... OpenAI API key for quality LLM-based summarization
Store the key only in the environment, use least-privilege provider credentials where possible, and rotate it if exposed.
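The environment-only handling recommended above can be sketched as follows; both helper names are hypothetical and not part of the skill's API.

```python
import os

def get_openai_key():
    """Read the optional key from the environment only, never from config
    files or source code. Returns None when unset or empty, letting the
    caller fall back to a local summarizer."""
    return os.environ.get("OPENAI_API_KEY") or None

def mask(key):
    """Render a key safely for logs: keep a short prefix, hide the rest,
    so the full credential never appears in log output."""
    if not key:
        return "<unset>"
    return key[:6] + "…" if len(key) > 6 else "***"
```

Logging only `mask(key)` keeps accidental credential leaks out of the very session logs this skill persists.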
A crash or interruption during save could corrupt or truncate the memory file despite the documentation claim.
The durability claim is not supported by the provided storage implementation, which directly opens origin.json for writing and dumps JSON rather than using a temp file plus atomic replace. This overstates the reliability of the persistent memory store.
Single-file storage — all memory in one `origin.json`. Atomic writes, no corruption.
Implement true atomic writes, backups, or recovery checks, or remove the 'atomic/no corruption' claim from the documentation.
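A true atomic write, as recommended, can be sketched as a temp-file-plus-replace pattern; the function name is illustrative, not the library's actual API.

```python
import json
import os
import tempfile

def atomic_save(data, path):
    """Write JSON to a temp file in the same directory, fsync it, then
    atomically replace the target. A crash mid-write leaves the old
    origin.json intact instead of a truncated or corrupt file."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes reach disk before the rename
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)  # clean up the temp file on any failure
        raise
```

The temp file must live in the same directory as the target, because `os.replace` is only atomic within a single filesystem.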
