Memento
Pass. Audited by VirusTotal on May 11, 2026.
Overview
Type: OpenClaw Skill
Name: memento
Version: 0.6.0

The OpenClaw Memento plugin is designed for local, privacy-first memory management, but it exhibits suspicious characteristics due to its broad local filesystem scanning and the inherent risk of sending data to external LLMs. The `src/cli/deep-consolidate.ts` script scans `homedir()` for agent databases in `~/.openclaw/workspace-*/` and in every non-dot directory under `~/`; this discovery step is plausibly legitimate, but it gives the plugin a wide scope of local file access. In addition, the core functionality in `src/extraction/extractor.ts` and `src/consolidation/relation-sweep.ts` sends conversation segments or fact summaries to external cloud LLM providers (e.g., Anthropic, OpenAI) when one is configured. That behavior is transparent and opt-in, and the code includes explicit privacy safeguards such as `secret` fact filtering in `src/extraction/classifier.ts` and `relation-sweep.ts`, but it still carries an inherent data-exfiltration risk. Because these capabilities, while plausibly aligned with the stated purpose, introduce high-risk behaviors, the skill is classified as suspicious rather than benign.
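To make the scope of the filesystem scan concrete, the discovery pattern described above can be sketched as follows. This is an illustrative reconstruction, not the actual `deep-consolidate.ts` code; the function name `findWorkspaceDirs` is an assumption, and the real script's matching rules may differ.

```typescript
// Hypothetical sketch of the directory-discovery pattern the report describes:
// ~/.openclaw/workspace-*/ plus every non-dot top-level directory under $HOME.
import { readdirSync, existsSync } from "fs";
import { homedir } from "os";
import { join } from "path";

function findWorkspaceDirs(root: string = homedir()): string[] {
  const results: string[] = [];

  // Candidate agent databases under ~/.openclaw/workspace-*/
  const openclawDir = join(root, ".openclaw");
  if (existsSync(openclawDir)) {
    for (const entry of readdirSync(openclawDir)) {
      if (entry.startsWith("workspace-")) {
        results.push(join(openclawDir, entry));
      }
    }
  }

  // Every non-dot directory directly under the root is also considered.
  for (const entry of readdirSync(root, { withFileTypes: true })) {
    if (entry.isDirectory() && !entry.name.startsWith(".")) {
      results.push(join(root, entry.name));
    }
  }
  return results;
}
```

Even this simplified version illustrates why the behavior is flagged: every non-hidden directory in the home folder falls inside the scan's scope.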
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Past conversations may become persistent context for future agent responses, including across sessions and agents depending on visibility settings.
The plugin persistently stores conversation-derived memory and reuses it in future prompts, which is expected for this skill but can influence later agent behavior if incorrect or sensitive facts are captured.
Captures every conversation turn... auto-recalls relevant knowledge before each AI turn
Review the stored memory, disable autoCapture or autoRecall if unwanted, and periodically delete or correct facts that should not affect future work.
If a cloud provider is configured, conversation text used for extraction may leave the local machine.
The skill clearly discloses that enabling extraction can transmit conversation text to a configured model provider; this is purpose-aligned but sensitive.
When `autoExtract` is enabled, conversation segments are sent to your configured LLM provider for fact extraction.
Use a local Ollama model for fully local operation, or only enable extraction with a provider whose data handling you trust.
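As a hedged illustration, a fully local configuration might look like the sketch below. The flags `autoCapture`, `autoRecall`, and `autoExtract` appear in this report; the surrounding key names, the provider block shape, and the file location are assumptions rather than documented settings, so check the skill's own docs before copying.

```json
{
  "memento": {
    "autoCapture": true,
    "autoRecall": true,
    "autoExtract": true,
    "provider": {
      "name": "ollama",
      "model": "llama3",
      "baseUrl": "http://localhost:11434"
    }
  }
}
```

With a local Ollama endpoint configured this way, extraction calls would stay on-machine; pointing `provider` at a cloud vendor instead is what triggers the data-egress concern above.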
Configured provider credentials may be used to run extraction or query-planning calls.
The plugin may use provider API keys or OpenClaw routing credentials for LLM extraction. This is expected for provider integration, but users should know which credentials are available to the skill.
optionalEnv: ... ANTHROPIC_API_KEY ... OPENAI_API_KEY ... MISTRAL_API_KEY ... CLAUDE_CODE_OAUTH_TOKEN
Prefer scoped provider keys, avoid enabling unused providers, and verify that extraction is disabled when cloud use is not desired.
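One quick way to verify which of the listed credentials a skill process could see is to check them directly against the environment. This is a generic sketch; the variable names come from the skill's `optionalEnv` list, while the helper itself is hypothetical.

```typescript
// Report which of the skill's optional provider credentials are present
// in a given environment (defaults to the current process environment).
const optionalEnv = [
  "ANTHROPIC_API_KEY",
  "OPENAI_API_KEY",
  "MISTRAL_API_KEY",
  "CLAUDE_CODE_OAUTH_TOKEN",
];

function visibleCredentials(
  env: Record<string, string | undefined> = process.env
): string[] {
  // A credential counts as visible if it is set to a non-empty value.
  return optionalEnv.filter((name) => Boolean(env[name]));
}
```

Running this before enabling the skill shows exactly which keys it would be able to use, which helps decide whether any should be unset or scoped down first.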
Memory consolidation or relation-building could run in the background and alter stored facts or graph relationships.
The documentation describes scheduled/background memory maintenance. It appears purpose-aligned, but it means the plugin may continue processing stored memory outside an active conversation if scheduled.
Deep Sleep │ cron (3 AM) ──► deepConsolidate ──► decay + merge + refresh
Check whether any cron or scheduler integration is enabled, and disable scheduled consolidation if you only want manual memory processing.
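The documented "cron (3 AM)" schedule means consolidation can fire while no conversation is active. As a purely illustrative sketch (the plugin's real scheduler may behave differently), the next deep-consolidation run time implied by that schedule can be computed like this:

```typescript
// Given the documented daily 3 AM schedule, compute when the next
// deepConsolidate run would fire after a given moment (local time).
function nextRun(after: Date, hour: number = 3): Date {
  const next = new Date(after);
  next.setHours(hour, 0, 0, 0);
  // If 3 AM has already passed today, the job fires tomorrow.
  if (next.getTime() <= after.getTime()) {
    next.setDate(next.getDate() + 1);
  }
  return next;
}
```

Knowing the next scheduled run is useful when auditing: stored facts inspected at 2 AM may already have been decayed or merged by 4 AM.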
