HedgehogMemory

Review audited by ClawScan on May 14, 2026.

Overview

Review recommended: this is a coherent memory library, but it is designed to keep verbatim session history indefinitely and reuse it in future prompts, with optional external LLM summarization.

Install only if you want durable cross-session memory. Set a dedicated HEDGEHOG_MEMORY_PATH, do not commit secrets or sensitive transcripts, avoid external summarizers for confidential work, and plan how you will inspect, back up, redact, or delete origin.json if needed.
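Pointing the library at a dedicated location can be done before it is initialized. A minimal sketch, assuming only that the library reads the HEDGEHOG_MEMORY_PATH environment variable documented above (the directory layout here is illustrative):

```python
import os
from pathlib import Path

# Keep agent memory in one dedicated, easy-to-audit directory
# (the exact path is illustrative; pick one per agent or project).
memory_dir = Path.home() / ".hedgehog-memory" / "work-agent"
memory_dir.mkdir(parents=True, exist_ok=True)

# Set before the library is imported/initialized so it picks the path up.
os.environ["HEDGEHOG_MEMORY_PATH"] = str(memory_dir / "origin.json")
```

A per-agent directory also makes the later inspect/back-up/redact/delete steps a matter of operating on one known file.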

Publisher note

Pure Python library for persistent AI agent memory. No network access required unless using OpenAI summarizer (optional env var). Uses only local file storage (origin.json).

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1 — Unbounded persistent memory

What this means

Private conversation details, secrets, or misleading instructions could be retained indefinitely and later influence the agent.

Why it was flagged

The skill encourages saving complete session logs permanently and reusing them across sessions. That is core to the product, but it is unbounded persistent memory with no documented deletion, redaction, or per-commit approval controls.

Skill content
Memory is NEVER deleted... The verbatim original is always recoverable at L4... Apply this pattern every session... full_context=full_session_log
Recommendation

Use a dedicated memory path, avoid saving secrets or credentials, review origin.json regularly, and add deletion/redaction and explicit approval controls before broad use.
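Since the skill provides no redaction controls of its own, a review pass has to be done externally. A minimal sketch of what such a pass could look like, assuming only that `origin.json` is ordinary JSON (the secret patterns are illustrative, not exhaustive):

```python
import json
import re
from pathlib import Path

# Common credential shapes (illustrative): OpenAI-style keys,
# AWS access key IDs, and "password=..."/"token=..." pairs.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)(password|token|secret)\s*[:=]\s*\S+"),
]

def redact(value):
    """Recursively replace secret-shaped substrings in any JSON value."""
    if isinstance(value, str):
        for pat in SECRET_PATTERNS:
            value = pat.sub("[REDACTED]", value)
        return value
    if isinstance(value, list):
        return [redact(v) for v in value]
    if isinstance(value, dict):
        return {k: redact(v) for k, v in value.items()}
    return value

def redact_memory_file(path):
    """Load origin.json, redact secret-shaped strings, write it back."""
    p = Path(path)
    data = json.loads(p.read_text(encoding="utf-8"))
    p.write_text(json.dumps(redact(data), indent=2), encoding="utf-8")
```

Pattern-based redaction catches only known shapes; a periodic manual read of the file is still the backstop.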

Finding 2 — External summarizer can send committed content off-machine

What this means

Committed session content may leave the local machine if an external summarizer is used.

Why it was flagged

When the optional OpenAI summarizer is configured, the text being summarized is placed into a prompt and sent to the configured model provider. LiteLLM support similarly forwards prompts to the selected backend.

Skill content
resp = self.client.chat.completions.create(model=self.model, messages=[{"role": "user", "content": prompt}], ...)
Recommendation

Use the built-in keyword summarizer or a trusted local provider for sensitive data, and disclose provider use to users whose data may be summarized.
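For context, this is roughly what a purely local keyword summarizer does. The sketch below is not HedgehogMemory's implementation, just an illustration of the technique: score sentences by how many frequent non-stopword terms they contain, and keep the top ones without any network call:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
             "it", "that", "for", "on", "with", "as", "was", "this"}

def keyword_summary(text, max_sentences=2):
    """Return the highest-scoring sentences, in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)
    # Rank sentences by the summed frequency of their terms.
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w]
                           for w in re.findall(r"[a-z']+", sentences[i].lower())),
    )
    keep = sorted(ranked[:max_sentences])
    return " ".join(sentences[i] for i in keep)
```

Quality is below an LLM summary, but nothing leaves the machine, which is the trade-off the recommendation points at.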

Finding 3 — Optional OPENAI_API_KEY credential

What this means

If configured, use of the key incurs provider charges and authorizes summarization requests against that account.

Why it was flagged

The optional credential is expected for the OpenAI integration and there is no artifact evidence of hardcoding or logging it, but it does grant access to a provider account.

Skill content
OPENAI_API_KEY ... required: false ... OpenAI API key for quality LLM-based summarization
Recommendation

Store the key only in the environment, use least-privilege provider credentials where possible, and rotate it if exposed.
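A minimal sketch of environment-only key handling, plus a sanity check that the key never ends up inside the memory file (the helper names are hypothetical, not part of the skill's API):

```python
import os

def get_openai_key():
    """Read the key from the environment only; never hardcode it.
    Returning None lets callers fall back to a local summarizer."""
    key = os.environ.get("OPENAI_API_KEY")
    if key is not None and not key.strip():
        raise ValueError("OPENAI_API_KEY is set but empty")
    return key

def assert_key_not_persisted(memory_path, key):
    """The provider key must never appear in origin.json."""
    if key and key in open(memory_path, encoding="utf-8").read():
        raise RuntimeError("API key found in memory file; rotate and redact")
```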

Finding 4 — Atomic-write claim not supported by the implementation

What this means

A crash or interruption during save could corrupt or truncate the memory file despite the documentation claim.

Why it was flagged

The durability claim is not supported by the provided storage implementation, which directly opens origin.json for writing and dumps JSON rather than using a temp file plus atomic replace. This overstates the reliability of the persistent memory store.

Skill content
Single-file storage — all memory in one `origin.json`. Atomic writes, no corruption.
Recommendation

Implement true atomic writes, backups, or recovery checks, or remove the 'atomic/no corruption' claim from the documentation.
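The standard fix the recommendation refers to is a temp-file-plus-`os.replace` write. A sketch of what that looks like (not the library's code):

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Dump JSON to a temp file in the same directory, fsync it, then
    atomically replace the target. A crash mid-write leaves either the
    old file or the new one, never a truncated mix."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp_path, path)  # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise
```

The temp file must live in the same directory as the target, since `os.replace` is only atomic within one filesystem.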