Memory Dreaming

Verdict: Warn. Audited by ClawScan on May 10, 2026.

Overview

Memory Dreaming is a coherent memory tool, but it warrants review: it can persist secrets and full chat histories, and it can send chat text to external LLM providers.

Review this skill carefully before installing. It is not clearly malicious, but you should avoid putting passwords or API keys into MEMORY.md, limit conversation archiving to specific channels, configure exclusions and retention, and avoid silent cron jobs or external summarisation until you are comfortable with what data will be stored or sent.

Findings (5)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Raw credentials persisted to long-term memory

What this means

Passwords, API keys, or other account secrets could remain in agent-readable memory files indefinitely and be exposed through backups, future prompts, archives, or other tooling.

Why it was flagged

The skill tells the agent to integrate credentials and connection details into long-term memory. That means secrets may be stored in plain Markdown/JSON files and reused across future sessions.

Skill content
Integrate if: ... "It's a credential, connection detail, or configuration that isn't already captured (\"New API key for service Y\")"
Recommendation

Do not store raw credentials in MEMORY.md or daily notes. Store references to a password manager or vault entry instead, and add explicit no-secrets rules, redaction, encryption, or access controls before use.
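One way to enforce a no-secrets rule is to scan candidate entries before anything is written to MEMORY.md or daily notes. A minimal Node.js sketch; the patterns and function names are illustrative, not part of the skill:

```javascript
// Reject memory entries that look like they contain raw secrets.
// These patterns are illustrative; extend them for your own key formats.
const SECRET_PATTERNS = [
  /api[_-]?key\s*[:=]\s*\S+/i,              // "API_KEY=..." style assignments
  /sk-[A-Za-z0-9]{20,}/,                    // OpenAI-style bearer tokens
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,     // PEM private keys
  /password\s*[:=]\s*\S+/i,
];

function containsSecret(text) {
  return SECRET_PATTERNS.some((re) => re.test(text));
}

// Refuse to persist raw secrets; the caller should store a vault reference instead.
function sanitizeEntry(text) {
  if (containsSecret(text)) {
    throw new Error('Refusing to persist a raw secret; store a vault reference instead.');
  }
  return text;
}
```

Under this rule, an entry like "New API key for service Y: see the vault entry" passes, while a raw token is rejected before it ever reaches a Markdown or JSON memory file.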

Finding 2: Broad conversation archiving

What this means

Private conversations from multiple channels may be copied into long-lived archive files and later reused as agent context.

Why it was flagged

The documented workflow can archive all available conversations into persistent local archives. The skill purpose explains this, but the scope is broad and may include private or sensitive chats.

Skill content
node scripts/conversation-archive.js --all           # archive everything
Recommendation

Run discovery first, archive only selected channels/groups, configure excludePatterns, and define a retention/deletion policy before using --all or scheduling archiving.
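A safer default than --all is an allow-list combined with exclude patterns. A minimal sketch of the filtering step; the includeChannels/excludePatterns config shape is an assumption, not the skill's documented schema:

```javascript
// Decide which discovered channels to archive. The config shape here
// (includeChannels / excludePatterns) is illustrative only.
const config = {
  includeChannels: ['team-eng', 'project-alpha'],
  excludePatterns: [/^dm-/, /private/i], // never archive DMs or private rooms
};

function shouldArchive(channel, cfg = config) {
  // Exclusions win over inclusions, so a mistaken allow-list entry
  // cannot pull in a private conversation.
  if (cfg.excludePatterns.some((re) => re.test(channel))) return false;
  return cfg.includeChannels.includes(channel);
}
```

Running discovery first and feeding its output through a filter like this keeps the archive scoped to channels you have deliberately opted in.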

Finding 3: Chat transcripts sent to external LLM APIs

What this means

Archived conversations may be processed under the external provider's privacy, retention, and billing terms; regex redaction may not catch every sensitive item.

Why it was flagged

The external data flow is clearly disclosed and purpose-aligned, but it moves chat transcript content outside the local workspace.

Skill content
Conversation summariser sends conversation text to an external LLM API (OpenRouter or OpenAI) for summarisation. This means: Your chat transcripts are sent to a third-party API
Recommendation

Use a self-hosted/local model for sensitive chats, configure extra exclusions, and get consent from affected users before summarising private conversations through third-party APIs.
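For sensitive chats, the summariser can be pointed at a self-hosted endpoint and made to fail closed when none is configured. A sketch under assumed environment variable names (LOCal_LLM_BASE_URL and ALLOW_EXTERNAL_SUMMARISATION are inventions for this example, not part of the skill):

```javascript
// Prefer a self-hosted endpoint for sensitive chats; fall back to the
// hosted provider only when explicitly opted in.
function resolveSummariserEndpoint(env = process.env) {
  if (env.LOCAL_LLM_BASE_URL) {
    // e.g. an Ollama or llama.cpp server listening on localhost
    return { baseURL: env.LOCAL_LLM_BASE_URL, external: false };
  }
  if (env.ALLOW_EXTERNAL_SUMMARISATION === 'true') {
    return { baseURL: 'https://openrouter.ai/api/v1', external: true };
  }
  throw new Error('No local endpoint configured and external summarisation not opted in.');
}
```

With this shape, transcripts only leave the workspace when someone has set the opt-in flag deliberately, rather than as a silent default.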

Finding 4: Silent scheduled background maintenance

What this means

If installed, memory decay, pruning, archiving, and summarisation can continue while the user is inactive and without routine chat notifications.

Why it was flagged

The cron templates are optional and user-installed, but they are designed for unattended background memory maintenance.

Skill content
`delivery: none` keeps it silent — dream cycles are background work
Recommendation

Install scheduled jobs only after reviewing their scope, keep audit logs enabled, prefer visible summaries for early runs, and disable cron jobs when no longer needed.
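For early runs, a visible variant of the cron template could look like the following. The field names mirror the quoted delivery: none snippet, but the exact schema is an assumption:

```javascript
// Cron job spec for a dream cycle kept visible during a trial period.
// Field names are modelled on the quoted `delivery: none` snippet;
// the exact schema is an assumption.
const dreamCycleJob = {
  schedule: '0 3 * * *',   // 03:00 daily
  task: 'memory dream cycle',
  delivery: 'summary',     // post a visible summary instead of running silently
  auditLog: true,          // keep a record of what was pruned or archived
  expires: '2026-06-10',   // auto-disable once the review period ends
};
```

Once the summaries show the job is behaving as expected, delivery can be switched back to none, with the audit log left enabled.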

Finding 5: Provider API key usage and billing

What this means

A configured provider key can authorize API usage and billing for transcript summarisation.

Why it was flagged

The summariser reads OpenRouter/OpenAI API keys from environment variables or .env files to call the configured LLM provider. This is expected for the feature, though the registry metadata lists no required credentials.

Skill content
return process.env.OPENROUTER_API_KEY || process.env.OPENAI_API_KEY;
Recommendation

Use a dedicated, least-privileged API key with spending limits, store it securely, and remove it when external summarisation is not needed.
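The quoted fallback can be wrapped so a missing key fails loudly rather than silently disabling the feature. A sketch; only the environment variable names come from the skill, and the guard is an illustrative addition:

```javascript
// Resolve the summariser key, mirroring the skill's fallback order
// (OpenRouter first, then OpenAI), but fail loudly when neither is set
// so it is obvious which account will be billed.
function resolveSummariserKey(env = process.env) {
  const key = env.OPENROUTER_API_KEY || env.OPENAI_API_KEY;
  if (!key) {
    throw new Error('No summariser API key configured; external summarisation is disabled.');
  }
  return key;
}
```

Pointing these variables at a dedicated, spend-limited key makes it straightforward to revoke or rotate when external summarisation is no longer needed.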