# hedgehog-memory

Radial memory architecture for AI agents — infinite persistent memory with hierarchical compression. Never deletes, only compresses. Origin always in context.

HedgehogMemory gives AI agents infinite persistent memory using a radial compression architecture. Memory is organized as Lines of Nodes — each Node stores the same content at 5 abstraction levels (L0–L4). The L0 one-liner of every node is always loaded at session start (~200 tokens total), so the agent always knows what it knows.

**Key guarantee:** memory is NEVER deleted. Old context is only compressed into smaller abstractions; the verbatim original is always recoverable at L4.

## Install

```bash
openclaw skills install hedgehog-memory
```

Or via pip:

```bash
pip install hedgehog-memory
pip install "hedgehog-memory[openai]"  # with OpenAI summarizer (recommended)
```
## Abstraction levels

| Level | Max length | Use case |
|---|---|---|
| L0 | 80 chars | One-liner, always in context |
| L1 | 200 chars | Navigation preview |
| L2 | 600 chars | Detailed summary |
| L3 | 1800 chars | Full context summary |
| L4 | unlimited | Verbatim original |
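To make the table concrete, here is a minimal sketch of what a Node holding one memory at all five levels might look like. The names (`Node`, `levels`, `set_level`) are illustrative, not the library's actual API; only the per-level budgets come from the table above.

```python
# Sketch only: one memory stored at all 5 abstraction levels.
# Class and attribute names are hypothetical, not hedgehog-memory's API.
from dataclasses import dataclass, field

LEVEL_BUDGETS = [80, 200, 600, 1800, None]  # L0..L4; None = unlimited


@dataclass
class Node:
    topic: str
    levels: list[str] = field(default_factory=lambda: [""] * 5)

    def set_level(self, level: int, text: str) -> None:
        """Store text at a level, enforcing that level's character budget."""
        budget = LEVEL_BUDGETS[level]
        if budget is not None and len(text) > budget:
            raise ValueError(f"L{level} summary exceeds {budget} chars")
        self.levels[level] = text


node = Node(topic="Async Python debugging")
node.set_level(0, "Debugged asyncio task cancellation")  # fits the 80-char L0 budget
node.set_level(4, "full session transcript...")          # L4 is unlimited
```

The radial idea is that every level is the *same* content at a different zoom: L0 is cheap enough to keep every node's one-liner permanently in context, while L4 preserves the verbatim original.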
## Quick start

```python
import os

from radial_memory import ContextWindowManager

mgr = ContextWindowManager(
    base_path=os.environ.get("HEDGEHOG_MEMORY_PATH", "./memory_store")
)

# SESSION START: get the ~200-token origin overview (all L0 summaries)
overview = mgr.reset()
print(overview)  # inject this into your system prompt

# LOAD: find relevant past context by query
result = mgr.load("Python async patterns")
if result.found:
    print(result.content)            # L1 summary by default
    result = result.drill_deeper()   # go to L2
    full = result.load_full_state()  # get the verbatim original (L4)

# COMMIT: save the current session to memory
mgr.commit(
    topic="Async Python debugging session",
    full_context="Complete session transcript goes here...",
    tags=["python", "async", "debugging"]
)
```
## With the OpenAI summarizer

```python
import os

from radial_memory import ContextWindowManager
from radial_memory.summarizer import OpenAISummarizer

summarizer = OpenAISummarizer(
    api_key=os.environ["OPENAI_API_KEY"],
    model="gpt-4o-mini"
)

mgr = ContextWindowManager(
    base_path=os.environ.get("HEDGEHOG_MEMORY_PATH", "./memory_store"),
    summarizer=summarizer
)
```
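The summarizer is pluggable, but the interface a custom summarizer must expose isn't documented above. As an assumption, suppose it is a single `summarize(text, max_chars)` method; an offline fallback that never calls an API might then look like:

```python
# Hypothetical interface: a single summarize(text, max_chars) method.
# The protocol ContextWindowManager actually expects may differ; check
# the hedgehog-memory docs before relying on this shape.
class TruncatingSummarizer:
    """Offline fallback: no API calls, just clips text to the budget."""

    def summarize(self, text: str, max_chars: int) -> str:
        if len(text) <= max_chars:
            return text
        # Cut at the last word boundary that fits, then mark the cut
        clipped = text[: max_chars - 1].rsplit(" ", 1)[0]
        return clipped + "…"
```

A truncating fallback produces much worse L0–L3 summaries than an LLM, but it keeps the level budgets intact, which is why the OpenAI summarizer is the recommended install.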
## Session pattern

Apply this pattern every session:

```python
# 1. SESSION START
overview = mgr.reset()
# overview = all L0 one-liners for every stored node (~200 tokens)
# Inject overview into your system prompt / context window

# 2. QUERY - find relevant past context
result = mgr.load(query=user_request)
if result.found:
    context = result.content         # L1 summary, ~200 chars
    # Need more detail?
    result = result.drill_deeper()   # L2, ~600 chars
    result = result.drill_deeper()   # L3, ~1800 chars
    full = result.load_full_state()  # L4, verbatim original

# 3. WORK - perform the task with full context available

# 4. COMMIT - persist the session to memory
mgr.commit(
    topic="Brief description of this session",
    full_context=full_session_log,
    tags=["topic1", "topic2"]
)
```
## Status

```python
report = mgr.status_report()
# Returns: total lines, total nodes, last commit timestamp
print(report)
```
## Storage

Memory is persisted to disk, rooted at `origin.json`. Atomic writes, no corruption.
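The library's own write path isn't shown, but the standard way to get crash-safe "atomic writes, no corruption" behavior is the write-temp-then-rename pattern, sketched here under that assumption (function name and layout are illustrative):

```python
import json
import os
import tempfile


def atomic_write_json(path: str, data: dict) -> None:
    """Write JSON to a temp file in the same directory, then rename it
    into place. os.replace is atomic on POSIX and Windows, so a reader
    never observes a half-written file even if the process crashes."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes hit the disk
        os.replace(tmp, path)     # atomic swap into place
    except BaseException:
        os.unlink(tmp)            # clean up the temp file on failure
        raise
```

The temp file must live in the same directory as the target: `os.replace` is only atomic within a single filesystem.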