🧠 Supermemory
Local graph-based memory plugin for OpenClaw with entity extraction, user profiles, and automatic forgetting, inspired by Supermemory.
Install
openclaw plugins install clawhub:openclaw-memory-supermemory
Latest Release
Compatibility
{
"builtWithOpenClawVersion": "2026.3.26",
"pluginApiRange": ">=2026.3.26"
}
Capabilities
{
"bundledSkills": [],
"capabilityTags": [
"executes-code",
"kind:memory"
],
"channels": [],
"commandNames": [],
"configSchema": true,
"configUiHints": false,
"executesCode": true,
"hooks": [],
"httpRouteCount": 0,
"materializesDependencies": false,
"pluginKind": "memory",
"providers": [],
"runtimeId": "openclaw-memory-supermemory",
"serviceNames": [],
"setupEntry": false,
"toolNames": []
}
Verification
{
"hasProvenance": false,
"scanStatus": "clean",
"scope": "artifact-only",
"sourceCommit": "https://github.com/ivanvmoreno/supermemory-openclaw/commit/c71f33c7ad8653a07167033c8d964895879996d8",
"sourceRepo": "ivanvmoreno/supermemory-openclaw",
"sourceTag": "https://github.com/ivanvmoreno/supermemory-openclaw/releases/tag/v0.3.1",
"summary": "Validated package structure and linked the release to source metadata.",
"tier": "source-linked"
}
Tags
{
"latest": "0.3.1"
}
🧠 Supermemory OpenClaw Plugin
Local graph-based memory plugin for OpenClaw, inspired by Supermemory. Runs entirely on your machine with no cloud dependencies.
Disclaimer: This is an independent project. It is not affiliated with, endorsed by, or maintained by the Supermemory team. The name reflects architectural inspiration, not a partnership.
Features
- LLM Fact Extraction – Extracts discrete, entity-centric facts from each conversation turn via an LLM subagent, matching Supermemory's cloud approach locally.
- Graph Memory – Automatic entity extraction, relationship tracking (Updates / Extends / Derives), and memory versioning with `parent_memory_id` chains.
- User Profiles – Static long-term facts plus dynamic recent context, automatically maintained and injected into the system prompt. Static memories (`is_static`) are protected from decay.
- Automatic Forgetting – Temporal expiration for time-bound facts (including absolute dates like "January 15"), decay for low-importance unused memories, and contradiction resolution.
- Hybrid Search – BM25 keyword search (FTS5) plus graph-augmented multi-hop retrieval with MMR diversity re-ranking. Superseded memories are filtered at the query level, and vector similarity (sqlite-vec) is used when available.
- Auto-Recall – Injects relevant memories and the user profile before every AI turn via the `before_prompt_build` hook (see the sketch after this list).
- OpenClaw Runtime Integration – Registers memory tools, a built-in memory search manager, and a pre-compaction memory flush plan when the host API supports them.
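As a rough illustration of the auto-recall flow, the sketch below shows the shape of a `before_prompt_build` handler. The hook name comes from the feature list above; the context type and helper functions are hypothetical stand-ins, not the real OpenClaw plugin API.

```typescript
// Minimal sketch of auto-recall. Only the hook name `before_prompt_build`
// is taken from this README; every other name here is an illustrative
// assumption, not the actual OpenClaw interface.
interface PromptBuildContext {
  lastUserMessage: string;
  injectSystemContext(text: string): void;
}

declare function buildUserProfile(): Promise<string>;
declare function hybridSearch(query: string, opts: { limit: number }): Promise<string[]>;

async function onBeforePromptBuild(ctx: PromptBuildContext): Promise<void> {
  const profile = await buildUserProfile();                               // static + dynamic facts
  const memories = await hybridSearch(ctx.lastUserMessage, { limit: 10 }); // maxRecallResults default
  ctx.injectSystemContext([profile, ...memories].join("\n"));             // prepended to the next turn
}
```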
How It Works
flowchart LR
    subgraph input ["Conversation"]
        A[User message] --> B[AI response]
    end
    subgraph extract ["Memory Engine"]
        C[Extract discrete facts via LLM]
        C --> D[Deduplicate]
        D --> E[Classify & embed]
    end
    subgraph graph ["Knowledge Graph"]
        F["Link entities\n(people, projects)"]
        F --> G{Relationship detection}
        G --> H["Updates – new fact\nsupersedes old"]
        G --> I["Extends – enriches\nexisting fact"]
        G --> J["Derives – inferred\nconnection"]
    end
    subgraph recall ["Recall"]
        K["User Profile\n(static + dynamic facts)"]
        L["Hybrid Search\n(vector + keyword + graph)"]
        K --> M[Inject into next AI turn]
        L --> M
    end
    B --> C
    E --> F
    J --> K
    H --> K
    I --> K
- You talk to your AI normally. Share preferences, mention projects, discuss problems.
- Auto-capture uses your configured LLM to extract discrete facts from the last conversation turn (both user and assistant messages).
- Graph engine links each extracted fact to entities and detects relationships (see the sketch after this list):
  - Updates – "Iván moved to Copenhagen" supersedes "Iván lives in Madrid"
  - Extends – "Iván leads a research team of 4" enriches "Iván is an AI Scientist at Santander"
  - Derives – Inferred connections from shared entities
- Auto-recall injects your user profile + relevant memories before each AI turn.
- Automatic forgetting cleans up expired time-bound facts and decays unused low-importance memories.
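To make the three relationship types concrete, here is a compact TypeScript sketch of how such a classification could be structured. The helper names and heuristics are assumptions; the plugin's actual logic lives in src/graph-engine.ts.

```typescript
// Illustrative classification of how a new fact relates to an existing
// memory that shares entities with it. Helpers are assumed, not real code.
interface Fact {
  text: string;
  entities: string[];
}

declare function sharesSubject(a: Fact, b: Fact): boolean; // same primary entity
declare function contradicts(a: Fact, b: Fact): boolean;   // mutually exclusive claims

type Relationship = "updates" | "extends" | "derives";

function classifyRelationship(newFact: Fact, existing: Fact): Relationship {
  if (sharesSubject(newFact, existing) && contradicts(newFact, existing)) {
    return "updates";  // new fact supersedes the old one
  }
  if (sharesSubject(newFact, existing)) {
    return "extends";  // enriches an existing fact about the same entity
  }
  return "derives";    // connection inferred from shared secondary entities
}
```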
Quick Start
Step 1: Install the plugin
openclaw plugins install openclaw-memory-supermemory
Step 2: Configure OpenClaw
Edit ~/.openclaw/openclaw.json and add both the memory slot and the plugin entry:
{
plugins: {
// REQUIRED: Assign this plugin to the memory slot
slots: {
memory: "openclaw-memory-supermemory"
},
// RECOMMENDED: Suppress the auto-load security warning
allow: ["openclaw-memory-supermemory"],
// Plugin configuration
entries: {
"openclaw-memory-supermemory": {
enabled: true,
config: {
embedding: {
provider: "openai",
model: "text-embedding-3-small",
apiKey: "${OPENAI_API_KEY}" // reads from env var
},
autoRecall: true,
autoCapture: true
}
}
}
}
}
Important: The `slots.memory` line is required; without it, OpenClaw won't use the plugin even if it's installed.
Step 3: Restart OpenClaw
Restart the OpenClaw gateway for the plugin to load.
Step 4: Verify it works
openclaw supermemory stats
You should see output like:
Total memories: 0
Active memories: 0
Superseded memories: 0
Entities: 0
Relationships: 0
Vector search: unavailable
Zero counts are normal on first run. `Vector search: unavailable` is expected; see Vector Search below.
Embedding Providers
You need an embedding provider for semantic search. Choose one:
OpenAI (recommended for simplicity)
embedding: {
provider: "openai",
model: "text-embedding-3-small",
apiKey: "${OPENAI_API_KEY}"
}
Set the environment variable before starting OpenClaw:
export OPENAI_API_KEY="sk-..."
Ollama (fully local, no API key)
Install Ollama and pull a model:
ollama pull nomic-embed-text
embedding: {
provider: "ollama",
model: "nomic-embed-text"
}
Other OpenAI-compatible providers
Any provider with an OpenAI-compatible /v1/embeddings endpoint works:
embedding: {
provider: "openai",
model: "your-model-name",
apiKey: "${YOUR_API_KEY}",
baseUrl: "https://your-provider.com/v1"
}
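For reference, an OpenAI-compatible embedding request reduces to a single POST against the provider's `/embeddings` route. The sketch below is illustrative, not the plugin's actual code:

```typescript
// One POST to `${baseUrl}/embeddings` per text. The request/response shape
// follows the standard OpenAI embeddings API.
async function embed(
  text: string,
  { baseUrl = "https://api.openai.com/v1", apiKey = "", model = "text-embedding-3-small" } = {},
): Promise<number[]> {
  const res = await fetch(`${baseUrl}/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model, input: text }),
  });
  if (!res.ok) throw new Error(`Embedding request failed: ${res.status}`);
  const json = await res.json();
  return json.data[0].embedding; // e.g. 1536 floats for text-embedding-3-small
}
```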
Supported models (auto-detected dimensions)
| Model | Provider | Dimensions |
|---|---|---|
| `nomic-embed-text` | Ollama | 768 |
| `text-embedding-3-small` | OpenAI | 1536 |
| `text-embedding-3-large` | OpenAI | 3072 |
| `mxbai-embed-large` | Ollama | 1024 |
| `all-minilm` | Ollama | 384 |
| `snowflake-arctic-embed` | Ollama | 1024 |
For other models, set `embedding.dimensions` explicitly.
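Conceptually, auto-detection amounts to a lookup table keyed by model name, with `embedding.dimensions` as the explicit override. A minimal sketch (the table mirrors the one above; the function itself is an assumption):

```typescript
// Known model dimensions, mirroring the compatibility table above.
const KNOWN_DIMENSIONS: Record<string, number> = {
  "nomic-embed-text": 768,
  "text-embedding-3-small": 1536,
  "text-embedding-3-large": 3072,
  "mxbai-embed-large": 1024,
  "all-minilm": 384,
  "snowflake-arctic-embed": 1024,
};

function resolveDimensions(model: string, configured?: number): number {
  const dims = configured ?? KNOWN_DIMENSIONS[model];
  if (dims === undefined) {
    throw new Error(`Unknown embedding model "${model}": set embedding.dimensions explicitly`);
  }
  return dims;
}
```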
AI Tools
The AI uses these tools autonomously:
| Tool | Description |
|---|---|
| `memory_search` | Hybrid search across all memories (vector + keyword + graph) |
| `memory_store` | Save information with automatic entity extraction, relationship detection, and an optional `isStatic` flag for permanent facts |
| `memory_forget` | Delete memories by ID or search query |
| `memory_profile` | View/rebuild the automatically maintained user profile |
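For illustration only, a `memory_store` call from the model might carry arguments like the following; the actual tool schema is not documented here, so the field names are a guess based on the description above:

```typescript
// Hypothetical memory_store arguments (field names assumed, not documented).
const storeArgs = {
  content: "Prefers TypeScript over Python for CLI tooling",
  isStatic: true, // mark as a permanent fact, protected from decay
};
```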
CLI Commands
openclaw supermemory stats # Show memory statistics
openclaw supermemory search <query> # Search memories
openclaw supermemory search "rust" --limit 5
openclaw supermemory profile # View user profile
openclaw supermemory profile --rebuild # Force rebuild profile
openclaw supermemory wipe --confirm # Delete all memories
Verifying Memories
After chatting with the AI, you can verify memories are being captured:
# Check memory counts increased
openclaw supermemory stats
# Search for something you mentioned
openclaw supermemory search "your topic"
# View your auto-built profile
openclaw supermemory profile
Vector Search
The plugin uses FTS5 keyword search + graph traversal by default. Vector similarity search requires sqlite-vec, which is bundled with OpenClaw's built-in memory system but not automatically available to external plugins.
If your OpenClaw build includes sqlite-vec, the plugin will detect and use it automatically.
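A plausible sketch of that detection, assuming the `sqlite-vec` npm package and a better-sqlite3 connection (the plugin's real probe may differ):

```typescript
// Optional sqlite-vec loading: if the extension is present, vector search
// turns on; otherwise the plugin falls back to FTS5 keyword + graph search.
import Database from "better-sqlite3";

const db = new Database("supermemory.db");
let vectorSearchAvailable = false;
try {
  const sqliteVec = await import("sqlite-vec");
  sqliteVec.load(db); // registers the vec0 virtual table module
  vectorSearchAvailable = true;
} catch {
  // sqlite-vec not bundled with this build: keyword + graph search only
}
```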
Troubleshooting
"plugins.allow is empty" warning
Suppress it by adding:
plugins: {
allow: ["openclaw-memory-supermemory"]
}
Configuration Reference
| Option | Type | Default | Description |
|---|---|---|---|
| `embedding.provider` | string | `"ollama"` | Embedding provider (`ollama`, `openai`, etc.) |
| `embedding.model` | string | `"nomic-embed-text"` | Embedding model name |
| `embedding.apiKey` | string | – | API key (cloud providers only; supports `${ENV_VAR}` syntax) |
| `embedding.baseUrl` | string | – | Custom API base URL |
| `embedding.dimensions` | number | auto | Vector dimensions (auto-detected for known models) |
| `autoCapture` | boolean | `true` | Auto-capture memories from conversations |
| `captureMode` | string | `"extract"` | `"extract"` (LLM fact extraction) or `"off"` (disable auto-capture) |
| `autoRecall` | boolean | `true` | Auto-inject memories + profile into context |
| `profileFrequency` | number | `50` | Rebuild user profile every N interactions |
| `entityExtraction` | string | `"pattern"` | Current implementation is pattern-based; `"llm"` is reserved and currently behaves the same as `"pattern"` |
| `forgetExpiredIntervalMinutes` | number | `60` | Minutes between forgetting cleanup runs |
| `temporalDecayDays` | number | `90` | Days before low-importance unused memories decay |
| `maxRecallResults` | number | `10` | Max memories injected per auto-recall |
| `vectorWeight` | number | `0.5` | Weight for vector similarity in hybrid search |
| `textWeight` | number | `0.3` | Weight for BM25 keyword search |
| `graphWeight` | number | `0.2` | Weight for graph-augmented retrieval |
| `dbPath` | string | `~/.openclaw/memory/supermemory.db` | SQLite database path |
| `captureMaxChars` | number | `2000` | Max message length for auto-capture |
| `debug` | boolean | `false` | Enable verbose logging |
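To make the three weight options concrete: each retrieval channel yields a normalized score, the final ranking uses their weighted sum, and MMR re-ranking is applied afterwards for diversity. A sketch under those assumptions:

```typescript
// Weighted combination of the three hybrid-search channels. Scores are
// assumed normalized to [0, 1]; defaults match the table above.
interface ChannelScores {
  vector: number; // cosine similarity (when sqlite-vec is available)
  text: number;   // normalized FTS5 BM25 score
  graph: number;  // multi-hop entity-graph proximity
}

function hybridScore(
  s: ChannelScores,
  w = { vector: 0.5, text: 0.3, graph: 0.2 },
): number {
  return w.vector * s.vector + w.text * s.text + w.graph * s.graph;
}
```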
Fact Extraction
By default, the plugin uses your configured LLM to extract discrete, entity-centric facts from each conversation turn.
Input conversation:
"Caught up with Ivรกn today. He's working at Santander as an AI Scientist now, doing research on knowledge graphs. He lives in Madrid and mentioned a deadline next Tuesday for a paper submission."
Extracted memories:
- Ivรกn works at Santander as an AI Scientist
- Ivรกn researches knowledge graphs
- Ivรกn lives in Madrid
- Ivรกn has a paper submission deadline next Tuesday
Each fact is stored as a separate memory with automatic entity linking, relationship detection (Updates/Extends/Derives), and temporal expiration.
Set `captureMode: "off"` to disable auto-capture entirely.
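The extraction step itself can be pictured as one subagent call per turn that returns structured facts. The prompt wording and output schema below are assumptions, not the plugin's actual prompt:

```typescript
// Illustrative shape of LLM fact extraction: one subagent call per
// conversation turn, parsed into discrete entity-centric facts.
interface ExtractedFact {
  text: string;        // "Iván works at Santander as an AI Scientist"
  entities: string[];  // ["Iván", "Santander"]
  expiresAt?: string;  // ISO date for time-bound facts ("deadline next Tuesday")
}

declare function callSubagent(prompt: string): Promise<string>;

async function extractFacts(turn: string): Promise<ExtractedFact[]> {
  const prompt =
    "Extract discrete, entity-centric facts from this conversation turn as a " +
    "JSON array of {text, entities, expiresAt?} objects:\n\n" + turn;
  return JSON.parse(await callSubagent(prompt));
}
```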
Architecture
openclaw-memory-supermemory/
├── index.ts                 # Plugin entry
├── openclaw.plugin.json     # Plugin manifest (kind: "memory")
├── tests/
│   └── integration/
│       └── longmemeval/
│           ├── fixtures/    # Bundled LongMemEval test artifacts
│           ├── README.md    # Test layout and artifact notes
│           └── run.ts       # Local OpenClaw integration battery / benchmark runner
└── src/
    ├── config.ts            # Config parsing + defaults
    ├── db.ts                # SQLite: memories, entities, relationships, profiles
    ├── embeddings.ts        # Ollama + OpenAI-compatible embedding providers
    ├── fact-extractor.ts    # LLM fact extraction via OpenClaw subagent
    ├── graph-engine.ts      # Entity extraction, relationship detection, temporal parsing
    ├── memory-text.ts       # Injected/synthetic memory filtering and prompt-safe sanitization
    ├── search.ts            # Hybrid search (vector + FTS5 + graph)
    ├── profile-builder.ts   # Static + dynamic user profile
    ├── forgetting.ts        # Temporal decay, expiration, cleanup
    ├── tools.ts             # Agent tools (search, store, forget, profile)
    ├── hooks.ts             # Auto-recall + guarded auto-capture hooks
    └── cli.ts               # CLI commands
Storage
All data stored in a single SQLite database:
- `memories` – Text, embeddings, importance, category, expiration, access tracking, `is_static`, `parent_memory_id`
- `entities` – Extracted entities (people, projects, tech, emails, URLs)
- `entity_mentions` – Links between memories and entities
- `relationships` – Graph edges (updates / extends / derives)
- `profile_cache` – Cached static + dynamic user profile
- `memories_fts` – FTS5 virtual table for keyword search
- `memories_vec` – sqlite-vec virtual table for vector similarity (when available)
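As a condensed sketch of that layout in DDL form (not the plugin's actual migrations), the core table plus the FTS5 index might look like this; column names follow the bullet list, everything else is illustrative:

```typescript
// Core memories table plus FTS5 index, sketched from the list above.
import Database from "better-sqlite3";

const db = new Database("supermemory.db");
db.exec(`
  CREATE TABLE IF NOT EXISTS memories (
    id               INTEGER PRIMARY KEY,
    text             TEXT NOT NULL,
    embedding        BLOB,
    importance       REAL,
    category         TEXT,
    expires_at       TEXT,
    last_accessed_at TEXT,
    is_static        INTEGER DEFAULT 0,
    parent_memory_id INTEGER REFERENCES memories(id)
  );
  CREATE VIRTUAL TABLE IF NOT EXISTS memories_fts
    USING fts5(text, content='memories', content_rowid='id');
`);
```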
LongMemEval Integration
The repo includes a LongMemEval runner that evaluates this plugin through a real local OpenClaw agent invocation while keeping benchmark state isolated from your normal ~/.openclaw profile.
# One example per main LongMemEval category + one abstention case
bun run test:integration:longmemeval
# Run the whole bundled oracle fixture
bun run test:integration:longmemeval --preset full
# Run the official LongMemEval evaluator afterwards
bun run test:integration:longmemeval --run-official-eval --official-repo /tmp/LongMemEval
The runner auto-loads the repo-root `.env.local` and `.env` files before reading env defaults; start from `.env.sample`. The only supported runner env defaults are `LONGMEMEVAL_SOURCE_STATE_DIR` and `LONGMEMEVAL_OFFICIAL_REPO`.
What the runner does:
- Uses the bundled oracle fixture by default, or a file passed via `--data-file`
- Creates an isolated `~/.openclaw-<profile>` profile
- Copies auth and model metadata from `LONGMEMEVAL_SOURCE_STATE_DIR` (default: `~/.openclaw`)
- Imports each benchmark instance into a fresh plugin DB
- Asks the benchmark question through `openclaw agent --local`
- Writes a `predictions.jsonl` file plus a run summary JSON