Code Plugin · Executes code · source-linked

🧠 Supermemory

Local graph-based memory plugin for OpenClaw with entity extraction, user profiles, and automatic forgetting — inspired by Supermemory

Community code plugin. Review compatibility and verification before install.
openclaw-memory-supermemory · runtime id openclaw-memory-supermemory
Install
openclaw plugins install clawhub:openclaw-memory-supermemory
Latest Release
Version 0.3.1
Compatibility
{
  "builtWithOpenClawVersion": "2026.3.26",
  "pluginApiRange": ">=2026.3.26"
}
Capabilities
{
  "bundledSkills": [],
  "capabilityTags": [
    "executes-code",
    "kind:memory"
  ],
  "channels": [],
  "commandNames": [],
  "configSchema": true,
  "configUiHints": false,
  "executesCode": true,
  "hooks": [],
  "httpRouteCount": 0,
  "materializesDependencies": false,
  "pluginKind": "memory",
  "providers": [],
  "runtimeId": "openclaw-memory-supermemory",
  "serviceNames": [],
  "setupEntry": false,
  "toolNames": []
}
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
ℹ
Purpose & Capability
Name/description (graph memory, entity extraction, auto-forgetting) match the code and SKILL.md: the plugin implements extraction, graph storage, search, forgetting, and auto-recall. The only potential mismatch is that the registry metadata lists no required env vars while the SKILL.md and config show an optional embedding.apiKey (e.g., ${OPENAI_API_KEY}) for cloud embedding providers. That is expected (the API key is optional and only needed if you choose a cloud provider).
ℹ
Instruction Scope
Runtime instructions stay within the plugin's purpose: configure the plugin in OpenClaw, choose an embedding provider (local Ollama or an OpenAI-compatible remote provider), and let it auto-capture/auto-recall conversation turns. The plugin runs an OpenClaw subagent to extract facts (it sets an extraction system prompt) and writes memories to a local SQLite DB. Important scope notes: auto-capture processes all conversation turns (user and assistant) and pattern-extracts emails, phone numbers, and URLs, which are then stored; extraction uses a subagent system prompt (normal for this functionality), and the plugin deletes extraction transcripts on completion where possible.
✓
Install Mechanism
No install spec is provided in the manifest (instruction-only install via OpenClaw plugin mechanism). There are no external arbitrary download URLs or extract steps in the package. All network calls in the source are to configurable embedding endpoints (local Ollama default or user-specified OpenAI-compatible baseUrl).
ℹ
Credentials
The plugin does not demand unrelated credentials. It may optionally read environment variables via config placeholders (e.g., ${OPENAI_API_KEY} or custom baseUrl var) if you configure a cloud embedding provider. This is proportionate to its need to compute embeddings. Users should note that choosing a cloud provider causes conversation snippets (or fragments) to be sent to that provider for embedding, which can leak sensitive content to third parties.
ℹ
Persistence & Privilege
The plugin is not force-included (always: false) and is user-invocable. It stores its data in a SQLite DB under ~/.openclaw/memory/supermemory.db by default and runs a background forgetting cycle. This persistent local storage is expected for a memory plugin, but it means captured content (including emails, phone numbers, URLs, and any extracted facts) is persisted locally in plain SQLite (no built-in encryption). Auto-recall injects stored memories into the system prompt before AI turns by default — a legitimate feature, but a privacy surface to consider.
Scan Findings in Context
[system-prompt-override] expected: The fact-extraction pipeline uses an OpenClaw subagent and passes a dedicated system prompt (EXTRACTION_SYSTEM_PROMPT) to steer extraction. This is expected behavior for a fact-extraction subagent and explains the scanner's finding.
Assessment
This plugin is internally coherent and implements a local graph memory with optional cloud embeddings; it is not trying to do unrelated actions. Before installing, consider the following in plain terms:
  • Where embeddings are computed: choose Ollama/local if you want truly local processing. If you configure OpenAI or another cloud provider and set an API key (e.g., OPENAI_API_KEY), portions of your conversations will be sent to that external service for embedding.
  • Local storage: memories (including emails, phone numbers, URLs, and extracted facts) are kept in an SQLite DB at ~/.openclaw/memory/supermemory.db by default and are not encrypted by the plugin. Treat that file as sensitive and back it up / restrict file permissions if needed.
  • Auto-capture / auto-recall: defaults inject stored memories into AI context automatically. If you are concerned about accidental capture or recall of secrets, disable autoCapture or autoRecall in the plugin config, reduce captureMaxChars, or set captureMode to "off".
  • Sensitive data: the entity extraction explicitly recognizes emails, phone numbers, and URLs and will store them. Avoid sharing secrets in chat, or configure filters to prevent storing secrets.
  • Cleanup: the CLI exposes a wipe command (requires --confirm) to delete memories; you can also adjust forgetting/decay settings.
  • Review configuration: when adding the plugin to OpenClaw, ensure you understand any ${ENV_VAR} placeholders in your config (parsing will throw if you use a placeholder but don't set the env var).
If you want extra assurance, audit the plugin source (present in the package) or run it in a sandboxed environment and prefer local embedding models.
Verification
{
  "hasProvenance": false,
  "scanStatus": "clean",
  "scope": "artifact-only",
  "sourceCommit": "https://github.com/ivanvmoreno/supermemory-openclaw/commit/c71f33c7ad8653a07167033c8d964895879996d8",
  "sourceRepo": "ivanvmoreno/supermemory-openclaw",
  "sourceTag": "https://github.com/ivanvmoreno/supermemory-openclaw/releases/tag/v0.3.1",
  "summary": "Validated package structure and linked the release to source metadata.",
  "tier": "source-linked"
}
Tags
{
  "latest": "0.3.1"
}

🧠 Supermemory OpenClaw Plugin

Local graph-based memory plugin for OpenClaw — inspired by Supermemory. Runs entirely on your machine with no cloud dependencies.

Disclaimer: This is an independent project. It is not affiliated with, endorsed by, or maintained by the Supermemory team. The name reflects architectural inspiration, not a partnership.

Features

  • LLM Fact Extraction — Extracts discrete, entity-centric facts from each conversation turn via an LLM subagent, matching Supermemory's cloud approach locally.
  • Graph Memory — Automatic entity extraction, relationship tracking (Updates / Extends / Derives), and memory versioning with parent_memory_id chains.
  • User Profiles — Static long-term facts plus dynamic recent context, automatically maintained and injected into the system prompt. Static memories (is_static) are protected from decay.
  • Automatic Forgetting — Temporal expiration for time-bound facts (including absolute dates like "January 15"), decay for low-importance unused memories, and contradiction resolution.
  • Hybrid Search — BM25 keyword search (FTS5) plus graph-augmented multi-hop retrieval with MMR diversity re-ranking. Superseded memories are filtered at the query level. Vector similarity (sqlite-vec) is used when available.
  • Auto-Recall — Injects relevant memories and the user profile before every AI turn via the before_prompt_build hook.
  • OpenClaw Runtime Integration — Registers memory tools, a built-in memory search manager, and a pre-compaction memory flush plan when the host API supports them.
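
The MMR diversity re-ranking mentioned in Hybrid Search can be sketched roughly as follows. This is an illustrative TypeScript sketch, not the plugin's actual implementation; the `Candidate` shape, the `similarity` callback, and the default `lambda` are assumptions made for the example.

```typescript
// Illustrative Maximal Marginal Relevance (MMR) re-ranking sketch.
// Assumptions (not from the plugin source): candidates carry a precomputed
// relevance score, and `similarity` returns a value in [0, 1].
type Candidate = { id: string; relevance: number };

function mmrRerank(
  candidates: Candidate[],
  similarity: (a: string, b: string) => number,
  lambda = 0.7, // trade-off: 1 = pure relevance, 0 = pure diversity
  k = 10,
): Candidate[] {
  const selected: Candidate[] = [];
  const pool = [...candidates];
  while (selected.length < k && pool.length > 0) {
    let bestIdx = 0;
    let bestScore = -Infinity;
    for (let i = 0; i < pool.length; i++) {
      const c = pool[i];
      // Penalize candidates similar to anything already selected.
      const maxSim = selected.length
        ? Math.max(...selected.map((s) => similarity(c.id, s.id)))
        : 0;
      const score = lambda * c.relevance - (1 - lambda) * maxSim;
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    }
    selected.push(pool.splice(bestIdx, 1)[0]);
  }
  return selected;
}
```

The idea is that a highly relevant memory that near-duplicates an already-selected one gets demoted in favor of a less relevant but more distinct memory.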

How It Works

flowchart LR
    subgraph input ["💬 Conversation"]
        A[User message] --> B[AI response]
    end

    subgraph extract ["🧠 Memory Engine"]
        C[Extract discrete facts via LLM]
        C --> D[Deduplicate]
        D --> E[Classify & embed]
    end

    subgraph graph ["🔗 Knowledge Graph"]
        F["Link entities\n(people, projects)"]
        F --> G{Relationship detection}
        G --> H["🔄 Updates — new fact\nsupersedes old"]
        G --> I["➕ Extends — enriches\nexisting fact"]
        G --> J["🔮 Derives — inferred\nconnection"]
    end

    subgraph recall ["🔎 Recall"]
        K["User Profile\n(static + dynamic facts)"]
        L["Hybrid Search\n(vector + keyword + graph)"]
        K --> M[Inject into next AI turn]
        L --> M
    end

    B --> C
    E --> F
    J --> K
    H --> K
    I --> K
  1. You talk to your AI normally. Share preferences, mention projects, discuss problems.
  2. Auto-capture uses your configured LLM to extract discrete facts from the last conversation turn (both user and assistant messages).
  3. Graph engine links each extracted fact to entities and detects relationships:
    • Updates — "Iván moved to Copenhagen" supersedes "Iván lives in Madrid"
    • Extends — "Iván leads a research team of 4" enriches "Iván is an AI Scientist at Santander"
    • Derives — Inferred connections from shared entities
  4. Auto-recall injects your user profile + relevant memories before each AI turn.
  5. Automatic forgetting cleans up expired time-bound facts and decays unused low-importance memories.
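
A toy version of the relationship detection in step 3 might look like the sketch below. It is hypothetical: the real graph engine does not necessarily reduce facts to subject/attribute/value triples, and the heuristics here are illustrative only.

```typescript
// Hypothetical sketch of Updates / Extends / Derives classification.
// The Fact shape and the rules below are assumptions for illustration,
// not the plugin's actual data model or logic.
type Fact = {
  subject: string;    // primary entity, e.g. "Iván"
  attribute: string;  // e.g. "lives_in"
  value: string;      // e.g. "Madrid"
  entities: string[]; // all entities mentioned by the fact
};

function classifyRelationship(
  oldFact: Fact,
  newFact: Fact,
): "updates" | "extends" | "derives" | "unrelated" {
  if (oldFact.subject === newFact.subject) {
    // Same entity, same attribute, different value: the new fact
    // supersedes the old one ("lives in Madrid" -> "lives in Copenhagen").
    if (oldFact.attribute === newFact.attribute && oldFact.value !== newFact.value) {
      return "updates";
    }
    // Same entity, new information: enriches the existing picture.
    return "extends";
  }
  // Different subjects but a shared entity: record an inferred connection.
  if (newFact.entities.some((e) => oldFact.entities.includes(e))) {
    return "derives";
  }
  return "unrelated";
}
```
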

Quick Start

Step 1: Install the plugin

openclaw plugins install openclaw-memory-supermemory

Step 2: Configure OpenClaw

Edit ~/.openclaw/openclaw.json and add both the memory slot and the plugin entry:

{
  plugins: {
    // REQUIRED: Assign this plugin to the memory slot
    slots: {
      memory: "openclaw-memory-supermemory"
    },
    // RECOMMENDED: Suppress the auto-load security warning
    allow: ["openclaw-memory-supermemory"],
    // Plugin configuration
    entries: {
      "openclaw-memory-supermemory": {
        enabled: true,
        config: {
          embedding: {
            provider: "openai",
            model: "text-embedding-3-small",
            apiKey: "${OPENAI_API_KEY}"    // reads from env var
          },
          autoRecall: true,
          autoCapture: true
        }
      }
    }
  }
}

Important: The slots.memory line is required — without it, OpenClaw won't use the plugin even if it's installed.

Step 3: Restart OpenClaw

Restart the OpenClaw gateway for the plugin to load.

Step 4: Verify it works

openclaw supermemory stats

You should see output like:

Total memories:      0
Active memories:     0
Superseded memories: 0
Entities:            0
Relationships:       0
Vector search:       unavailable

Zero counts are normal on first run. Vector search: unavailable is expected — see Vector Search below.

Embedding Providers

You need an embedding provider for semantic search. Choose one:

OpenAI (recommended for simplicity)

embedding: {
  provider: "openai",
  model: "text-embedding-3-small",
  apiKey: "${OPENAI_API_KEY}"
}

Set the environment variable before starting OpenClaw:

export OPENAI_API_KEY="sk-..."
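
The ${OPENAI_API_KEY} placeholder is resolved from the environment when the config is parsed. A minimal sketch of that resolution, assuming a simple ${VAR} syntax (the plugin's actual parser may differ, though it is documented to throw when the referenced variable is unset):

```typescript
// Illustrative ${ENV_VAR} placeholder resolution sketch (not the plugin's
// actual parser). Plain values pass through; unset variables throw.
function resolvePlaceholder(
  value: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const match = /^\$\{([A-Za-z0-9_]+)\}$/.exec(value);
  if (!match) return value; // plain literal, use as-is
  const resolved = env[match[1]];
  if (resolved === undefined) {
    throw new Error(`Environment variable ${match[1]} is not set`);
  }
  return resolved;
}
```
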

Ollama (fully local, no API key)

Install Ollama and pull a model:

ollama pull nomic-embed-text

Then configure:

embedding: {
  provider: "ollama",
  model: "nomic-embed-text"
}

Other OpenAI-compatible providers

Any provider with an OpenAI-compatible /v1/embeddings endpoint works:

embedding: {
  provider: "openai",
  model: "your-model-name",
  apiKey: "${YOUR_API_KEY}",
  baseUrl: "https://your-provider.com/v1"
}
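
For reference, a request to such an endpoint has roughly this shape. The helper below is an illustrative sketch (not part of the plugin); the field names follow the public OpenAI embeddings API.

```typescript
// Sketch of the request an OpenAI-compatible /v1/embeddings endpoint expects.
// The plugin's internal client may construct this differently.
function buildEmbeddingRequest(
  baseUrl: string,
  model: string,
  input: string,
  apiKey?: string,
) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/embeddings`,
    method: "POST" as const,
    headers: {
      "Content-Type": "application/json",
      // Local providers like Ollama don't need an Authorization header.
      ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
    },
    body: JSON.stringify({ model, input }),
  };
}
```
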

Supported models (auto-detected dimensions)

Model                    Provider   Dimensions
nomic-embed-text         Ollama     768
text-embedding-3-small   OpenAI     1536
text-embedding-3-large   OpenAI     3072
mxbai-embed-large        Ollama     1024
all-minilm               Ollama     384
snowflake-arctic-embed   Ollama     1024

For other models, set embedding.dimensions explicitly.
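
A dimension lookup equivalent to the table above might look like this sketch (illustrative only; the plugin's internal table may cover more models):

```typescript
// Known model -> embedding dimension map, mirroring the table above.
const KNOWN_DIMENSIONS: Record<string, number> = {
  "nomic-embed-text": 768,
  "text-embedding-3-small": 1536,
  "text-embedding-3-large": 3072,
  "mxbai-embed-large": 1024,
  "all-minilm": 384,
  "snowflake-arctic-embed": 1024,
};

function resolveDimensions(model: string, explicit?: number): number {
  // An explicit embedding.dimensions setting always wins.
  if (explicit !== undefined) return explicit;
  const dims = KNOWN_DIMENSIONS[model];
  if (dims === undefined) {
    throw new Error(`Unknown model "${model}": set embedding.dimensions explicitly`);
  }
  return dims;
}
```
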

AI Tools

The AI uses these tools autonomously:

Tool            Description
memory_search   Hybrid search across all memories (vector + keyword + graph)
memory_store    Save information with automatic entity extraction, relationship detection, and an optional isStatic flag for permanent facts
memory_forget   Delete memories by ID or search query
memory_profile  View/rebuild the automatically maintained user profile

CLI Commands

openclaw supermemory stats              # Show memory statistics
openclaw supermemory search <query>     # Search memories
openclaw supermemory search "rust" --limit 5
openclaw supermemory profile            # View user profile
openclaw supermemory profile --rebuild  # Force rebuild profile
openclaw supermemory wipe --confirm     # Delete all memories

Verifying Memories

After chatting with the AI, you can verify memories are being captured:

# Check memory counts increased
openclaw supermemory stats

# Search for something you mentioned
openclaw supermemory search "your topic"

# View your auto-built profile
openclaw supermemory profile

Vector Search

The plugin uses FTS5 keyword search + graph traversal by default. Vector similarity search requires sqlite-vec, which is bundled with OpenClaw's built-in memory system but not automatically available to external plugins.

If your OpenClaw build includes sqlite-vec, the plugin will detect and use it automatically.

Troubleshooting

"plugins.allow is empty" warning

Suppress it by adding:

plugins: {
  allow: ["openclaw-memory-supermemory"]
}

Configuration Reference

Option                         Type      Default                             Description
embedding.provider             string    "ollama"                            Embedding provider (ollama, openai, etc.)
embedding.model                string    "nomic-embed-text"                  Embedding model name
embedding.apiKey               string    —                                   API key (cloud providers only; supports ${ENV_VAR} syntax)
embedding.baseUrl              string    —                                   Custom API base URL
embedding.dimensions           number    auto                                Vector dimensions (auto-detected for known models)
autoCapture                    boolean   true                                Auto-capture memories from conversations
captureMode                    string    "extract"                           "extract" (LLM fact extraction) or "off" (disable auto-capture)
autoRecall                     boolean   true                                Auto-inject memories + profile into context
profileFrequency               number    50                                  Rebuild user profile every N interactions
entityExtraction               string    "pattern"                           Pattern-based; "llm" is reserved and currently behaves the same as "pattern"
forgetExpiredIntervalMinutes   number    60                                  Minutes between forgetting cleanup runs
temporalDecayDays              number    90                                  Days before low-importance unused memories decay
maxRecallResults               number    10                                  Max memories injected per auto-recall
vectorWeight                   number    0.5                                 Weight for vector similarity in hybrid search
textWeight                     number    0.3                                 Weight for BM25 keyword search
graphWeight                    number    0.2                                 Weight for graph-augmented retrieval
dbPath                         string    ~/.openclaw/memory/supermemory.db   SQLite database path
captureMaxChars                number    2000                                Max message length for auto-capture
debug                          boolean   false                               Enable verbose logging
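
The three weights combine the per-signal scores linearly. A sketch of how such a blend could work (the plugin's exact normalization and ranking formula are not shown here, so treat this as an assumption):

```typescript
// Illustrative linear blend of the three retrieval signals using the
// default weights from the configuration reference above.
function hybridScore(
  vectorSim: number,  // cosine similarity, when sqlite-vec is available
  textScore: number,  // normalized BM25 keyword score
  graphScore: number, // graph-augmented retrieval score
  weights = { vector: 0.5, text: 0.3, graph: 0.2 },
): number {
  return (
    weights.vector * vectorSim +
    weights.text * textScore +
    weights.graph * graphScore
  );
}
```

Raising vectorWeight relative to the others favors semantic matches; raising graphWeight favors memories connected through shared entities.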

Fact Extraction

By default, the plugin uses your configured LLM to extract discrete, entity-centric facts from each conversation turn.

Input conversation:

"Caught up with Iván today. He's working at Santander as an AI Scientist now, doing research on knowledge graphs. He lives in Madrid and mentioned a deadline next Tuesday for a paper submission."

Extracted memories:

  • Iván works at Santander as an AI Scientist
  • Iván researches knowledge graphs
  • Iván lives in Madrid
  • Iván has a paper submission deadline next Tuesday

Each fact is stored as a separate memory with automatic entity linking, relationship detection (Updates/Extends/Derives), and temporal expiration.

Set captureMode: "off" to disable auto-capture entirely.

Architecture

openclaw-memory-supermemory/
├── index.ts                    # Plugin entry
├── openclaw.plugin.json        # Plugin manifest (kind: "memory")
├── tests/
│   └── integration/
│       └── longmemeval/
│           ├── fixtures/       # Bundled LongMemEval test artifacts
│           ├── README.md       # Test layout and artifact notes
│           └── run.ts          # Local OpenClaw integration battery / benchmark runner
└── src/
    ├── config.ts               # Config parsing + defaults
    ├── db.ts                   # SQLite: memories, entities, relationships, profiles
    ├── embeddings.ts           # Ollama + OpenAI-compatible embedding providers
    ├── fact-extractor.ts       # LLM fact extraction via OpenClaw subagent
    ├── graph-engine.ts         # Entity extraction, relationship detection, temporal parsing
    ├── memory-text.ts          # Injected/synthetic memory filtering and prompt-safe sanitization
    ├── search.ts               # Hybrid search (vector + FTS5 + graph)
    ├── profile-builder.ts      # Static + dynamic user profile
    ├── forgetting.ts           # Temporal decay, expiration, cleanup
    ├── tools.ts                # Agent tools (search, store, forget, profile)
    ├── hooks.ts                # Auto-recall + guarded auto-capture hooks
    └── cli.ts                  # CLI commands

Storage

All data stored in a single SQLite database:

  • memories — Text, embeddings, importance, category, expiration, access tracking, is_static, parent_memory_id
  • entities — Extracted entities (people, projects, tech, emails, URLs)
  • entity_mentions — Links between memories and entities
  • relationships — Graph edges (updates / extends / derives)
  • profile_cache — Cached static + dynamic user profile
  • memories_fts — FTS5 virtual table for keyword search
  • memories_vec — sqlite-vec virtual table for vector similarity (when available)

LongMemEval Integration

The repo includes a LongMemEval runner that evaluates this plugin through a real local OpenClaw agent invocation while keeping benchmark state isolated from your normal ~/.openclaw profile.

# One example per main LongMemEval category + one abstention case
bun run test:integration:longmemeval

# Run the whole bundled oracle fixture
bun run test:integration:longmemeval --preset full

# Run the official LongMemEval evaluator afterwards
bun run test:integration:longmemeval --run-official-eval --official-repo /tmp/LongMemEval

The runner auto-loads repo-root .env.local and .env before reading env defaults. Start from .env.sample. The only supported runner env defaults are LONGMEMEVAL_SOURCE_STATE_DIR and LONGMEMEVAL_OFFICIAL_REPO.

What the runner does:

  1. Uses the bundled oracle fixture by default, or a file passed via --data-file
  2. Creates an isolated ~/.openclaw-<profile> profile
  3. Copies auth and model metadata from LONGMEMEVAL_SOURCE_STATE_DIR (default: ~/.openclaw)
  4. Imports each benchmark instance into a fresh plugin DB
  5. Asks the benchmark question through openclaw agent --local
  6. Writes a predictions.jsonl file plus a run summary JSON