Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Memento

v0.6.0

Local persistent memory for OpenClaw agents. Captures conversations, extracts structured facts via LLM, and auto-recalls relevant knowledge before each turn.

by Benjamin RAIBAUD (@braibaud)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for braibaud/memento.

Prompt preview: Install & Setup
Install the skill "Memento" (braibaud/memento) from ClawHub.
Skill page: https://clawhub.ai/braibaud/memento
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install memento

ClawHub CLI


npx clawhub@latest install memento
Security Scan

VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (local persistent memory + LLM extraction) match the code. The declared data paths (~/.engram/conversations.sqlite and JSONL backups), optional LLM API keys, and npm-based install are all consistent with that purpose. No unrelated credentials, binaries, or unexpected system paths are required.
Instruction Scope
Runtime instructions and SKILL.md are explicit about behavior: capture every conversation, store locally, and only send text to an external LLM when `autoExtract` (opt-in) is enabled. Migration tooling can read user workspaces (via a user-provided migration-config.json or MEMENTO_WORKSPACE_MAIN) to bootstrap the KB — this is powerful and documented, but it means large local files (including potential secrets) can be ingested if the user opts into migration. The docs warn about cloud LLM leakage and recommend local Ollama for air-gapped operation.
Install Mechanism
Install uses an npm package (@openclaw/memento). That is proportionate for a TypeScript/Node plugin. Package files (package.json, package-lock.json, source files) are present; no arbitrary external download URLs or opaque extract steps are declared in SKILL.md.
Credentials
No required env vars; optional env vars map directly to supported LLM providers (ANTHROPIC_API_KEY, OPENAI_API_KEY, MISTRAL_API_KEY, MEMENTO_API_KEY) and migration settings. CLAUDE_CODE_OAUTH_TOKEN is listed as an OpenClaw internal token that may be auto-used when running inside OpenClaw — this is expected for a plugin that delegates model routing to the host, but users should know platform-level tokens may be consulted when Memento runs inside OpenClaw.
Persistence & Privilege
always:false and user-invocable:true. The plugin stores data locally and registers capture/recall hooks (normal for a memory plugin). It does not demand always-on inclusion or system-wide config changes beyond its own data files.
Assessment
What to consider before installing:

  • Defaults are privacy-first, but extraction that sends text to cloud LLMs is opt-in (extraction.autoExtract defaults to false). Keep autoExtract off if you do not want any conversation text sent to external providers.
  • For fully air-gapped operation, run a local Ollama model and set extractionModel to an ollama/* model; no cloud API key is needed.
  • Migration is powerful: the migrate tooling can read workspace files specified in ~/.engram/migration-config.json or via MEMENTO_WORKSPACE_MAIN. Only run migration if you trust the configured paths and have reviewed which files will be ingested; these can include large or sensitive local files.
  • Data is stored at ~/.engram/conversations.sqlite and ~/.engram/segments/*.jsonl. Inspect, back up, or encrypt these files if needed.
  • The plugin delegates model routing to OpenClaw when run inside the platform and may use platform tokens (CLAUDE_CODE_OAUTH_TOKEN) for routing; verify your OpenClaw auth policy if you want to limit which models and providers are used.
  • The install is via npm (@openclaw/memento). As with any third-party package, consider reviewing the package source or installing in a sandbox before granting it access to production workspaces.

Confidence note: High. The repository, SKILL.md, and changelog are internally consistent. The main risks are user-configured behaviors (enabling autoExtract or running migration) rather than silent or unexpected access.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧠 Clawdis

Install

Install Memento plugin (npm):

npm i -g @openclaw/memento

latest: vk977xng613wpxc99800v76ywv9822gf8
836 downloads · 0 stars · 9 versions · updated 14h ago
v0.6.0 · MIT-0

Memento — Local Persistent Memory for OpenClaw Agents

Memento gives your agents long-term memory. It captures conversations, extracts structured facts using an LLM, and auto-injects relevant knowledge before each AI turn.

All stored data stays on your machine — no cloud sync, no subscriptions. Extraction uses your configured LLM provider; use a local model (Ollama) for fully air-gapped operation.

⚠️ Privacy note: When autoExtract is enabled, conversation segments are sent to your configured LLM provider for fact extraction. If you use a cloud provider (Anthropic, OpenAI, Mistral), that text leaves your machine. For fully local operation, set extractionModel to ollama/<model> and keep Ollama running locally.

What It Does

  1. Captures every conversation turn, buffered per session
  2. Extracts structured facts (preferences, decisions, people, action items) via configurable LLM (opt-in — see Privacy section)
  3. Recalls relevant facts before each AI turn using FTS5 keyword search + optional semantic embeddings (BGE-M3)
  4. Respects privacy — facts are classified as shared, private, or secret based on content, with hard overrides for sensitive categories (medical, financial, credentials)
  5. Cross-agent knowledge — shared facts flow between agents with provenance tags; private/secret facts never cross boundaries

Quick Start

Install the plugin, restart your gateway, and Memento starts capturing automatically. Extraction is off by default — enable it explicitly when ready.

Optional: Semantic Search

Download a local embedding model for richer recall:

mkdir -p ~/.node-llama-cpp/models
curl -L -o ~/.node-llama-cpp/models/bge-m3-Q8_0.gguf \
  "https://huggingface.co/gpustack/bge-m3-GGUF/resolve/main/bge-m3-Q8_0.gguf"

Environment Variables

All environment variables are optional — you only need the one matching your chosen LLM provider:

Variable                  When Needed
ANTHROPIC_API_KEY         Using anthropic/* models for extraction
OPENAI_API_KEY            Using openai/* models for extraction
MISTRAL_API_KEY           Using mistral/* models for extraction
MEMENTO_API_KEY           Generic fallback for any provider
MEMENTO_WORKSPACE_MAIN    Migration only: path to agent workspace for bootstrapping

No API key needed for ollama/* models (local inference).
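As a sketch, the mapping between keys and providers can be expressed as a simple selection in a shell profile. The anthropic model name matches the Configuration example below; the other model names here are illustrative placeholders, not values mandated by Memento:

```shell
#!/bin/sh
# Sketch: choose an extraction model based on which optional key is set.
# Only the key matching your chosen provider needs to exist.
if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
  model="anthropic/claude-sonnet-4-6"
elif [ -n "${OPENAI_API_KEY:-}" ]; then
  model="openai/some-model"        # placeholder name
elif [ -n "${MISTRAL_API_KEY:-}" ]; then
  model="mistral/some-model"       # placeholder name
else
  # No key set: fall back to local Ollama (no API key required).
  model="ollama/some-local-model"  # placeholder name
fi
echo "extractionModel=$model"
```

Whatever branch applies, the resulting value is what you would place in the extractionModel config field.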

Configuration

Add to your openclaw.json under plugins.entries.memento.config:

{
  "memento": {
    "autoCapture": true,
    "extractionModel": "anthropic/claude-sonnet-4-6",
    "extraction": {
      "autoExtract": true,
      "minTurnsForExtraction": 3
    },
    "recall": {
      "autoRecall": true,
      "maxFacts": 20,
      "crossAgentRecall": true,
      "autoQueryPlanning": false
    }
  }
}

autoExtract: true is an explicit opt-in (default: false). When enabled, conversation segments are sent to the configured extractionModel for LLM-based fact extraction. Omit or set to false to keep everything local.

autoQueryPlanning: true is an explicit opt-in (default: false). When enabled, a fast LLM call runs before each recall search to expand the query with synonyms and identify relevant categories — improving precision at the cost of one extra LLM call per turn.
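For fully local operation, the same config block can point extraction at Ollama and leave both opt-ins off. This is a minimal sketch; the model name is an illustrative placeholder for whatever ollama/* model you have pulled locally:

```json
{
  "memento": {
    "autoCapture": true,
    "extractionModel": "ollama/your-local-model",
    "extraction": {
      "autoExtract": false
    },
    "recall": {
      "autoRecall": true,
      "maxFacts": 20
    }
  }
}
```

With autoExtract false, the extractionModel value is inert until you explicitly enable extraction.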

Data Storage

Memento stores all data locally:

Path                               Contents
~/.engram/conversations.sqlite     Main database: conversations, facts, embeddings
~/.engram/segments/*.jsonl         Human-readable conversation backups
~/.engram/migration-config.json    Optional: migration workspace paths (only for bootstrapping)
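Because everything lives under ~/.engram, backup is a single archive step. A hedged sketch (the destination filename is arbitrary):

```shell
#!/bin/sh
# Sketch: archive Memento's local data directory if it exists.
SRC="$HOME/.engram"
DEST="engram-backup-$(date +%Y%m%d).tar.gz"
if [ -d "$SRC" ]; then
  # -C keeps paths inside the archive relative to $HOME.
  tar czf "$DEST" -C "$HOME" .engram
  echo "backed up to $DEST"
else
  echo "nothing to back up: $SRC does not exist yet"
fi
```

Encrypting the resulting archive (e.g. with your usual backup tooling) covers the "encrypt if needed" advice from the assessment above.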

Privacy & Data Flow

Feature                         Data leaves machine?    Details
autoCapture (default: true)     ❌ No                   Writes to local SQLite + JSONL only
autoExtract (default: false)    ⚠️ Yes, if cloud LLM    Sends conversation text to the configured provider. Use ollama/* for local.
autoRecall (default: true)      ❌ No                   Reads from local SQLite only
Secret facts                    ❌ Never                Filtered from extraction context; never sent to any LLM
Migration                       ❌ No                   Reads local workspace files, writes to local SQLite

Migration (Bootstrap from Existing Memory Files)

Migration is an optional, one-time process to seed Memento from existing agent memory/markdown files. It is user-initiated only — never runs automatically.

What it reads

Migration reads only the files you explicitly list in the config. It does not scan your filesystem, read arbitrary files, or access anything outside the configured paths.

Setup

  1. Create ~/.engram/migration-config.json or set MEMENTO_WORKSPACE_MAIN:
{
  "agents": [
    {
      "agentId": "main",
      "workspace": "/path/to/your-workspace",
      "paths": ["MEMORY.md", "memory/*.md"]
    }
  ]
}
  2. Always dry-run first to verify exactly which files will be read:
npx tsx src/extraction/migrate.ts --all --dry-run

The dry-run prints every file path it would read — review this before proceeding.

  3. Run the actual migration:
npx tsx src/extraction/migrate.ts --all
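Before the dry-run, you can sanity-check that the config exists and see which workspaces it names. This is a rough sketch using grep; a JSON-aware tool like jq would be more robust:

```shell
#!/bin/sh
# Sketch: confirm the migration config is present and list configured workspaces.
CFG="$HOME/.engram/migration-config.json"
if [ -f "$CFG" ]; then
  echo "config found at $CFG; configured workspaces:"
  grep -o '"workspace"[^,}]*' "$CFG" || echo "(no workspace entries found)"
else
  echo "no config at $CFG: migration is inert"
fi
```

If the config is absent and MEMENTO_WORKSPACE_MAIN is unset, there is nothing for the migration tooling to read.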

Security notes

  • Migration only reads files matching the glob patterns you configure
  • Extracted facts inherit visibility classification (shared/private/secret)
  • Secret-classified facts are never sent to cloud LLM providers
  • Migration config file is optional — if absent, migration is completely inert
  • The migration script has no network access beyond the configured extraction LLM

Architecture

  • Capture layer — hooks message:received + message:sent, buffers multi-turn segments
  • Extraction layer — async LLM extraction with deduplication, occurrence tracking, temporal state transitions (previous_value), and knowledge graph relations (including causal edges with causal_weight)
  • Storage layer — SQLite schema v7 (better-sqlite3) with FTS5 full-text search + optional vector embeddings; knowledge graph (fact_relations with causal_weight), multi-layer clusters, and temporal transition tracking (previous_value)
  • Recall layer — optional LLM query planning pre-pass (autoQueryPlanning), multi-factor scoring (recency × frequency × category weight), 1-hop graph traversal with causal edge 1.5× boost, injected via before_prompt_build hook
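The FTS5 keyword-search step in the recall layer can be sketched with the sqlite3 CLI. The table and column names here are illustrative, not Memento's actual schema v7; this only demonstrates the MATCH mechanism:

```shell
#!/bin/sh
# Minimal FTS5 sketch: index two facts, then run a keyword MATCH query.
# Requires an sqlite3 build with FTS5 enabled (standard in modern releases).
result=$(sqlite3 :memory: "
CREATE VIRTUAL TABLE facts USING fts5(content);
INSERT INTO facts(content) VALUES ('user prefers dark mode');
INSERT INTO facts(content) VALUES ('project deadline moved to Friday');
SELECT content FROM facts WHERE facts MATCH 'deadline';
")
echo "$result"
```

Memento layers recency, frequency, and category-weight scoring plus graph traversal on top of this kind of keyword match.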

Requirements

  • OpenClaw 2026.2.20+
  • Node.js 18+
  • An API key for your preferred LLM provider (for extraction — not needed if extraction is disabled or using Ollama)
  • Optional: GPU for accelerated embedding search (falls back to CPU gracefully)

Install

# From ClawHub
clawhub install memento

# Or for local development
git clone https://github.com/braibaud/Memento
cd Memento
npm install

Note: better-sqlite3 includes native bindings that compile during npm install. This is expected behavior for SQLite access.
