Total Recall
Pass. Audited by VirusTotal on May 12, 2026.
Overview
Type: OpenClaw Skill
Name: total-recall
Version: 1.5.1

The skill is classified as suspicious due to its extensive use of high-risk capabilities: direct shell command execution (`exec: bash`, `Run bash`) via agent instructions in `config/memory-flush.json`, `templates/AGENTS-snippet.md`, and `config/cron-job.json`; installation of a systemd service (`scripts/setup.sh`); and `git reset --hard` (`scripts/dream-cycle.sh`). These actions are integral to the skill's stated purpose of autonomous memory management, and the skill includes explicit defenses against prompt injection (path traversal protection in `scripts/dream-cycle.sh` and `scripts/staging-review.sh`, and instructions in `prompts/dream-cycle-prompt.md` telling the agent not to write to sensitive files). Even so, the inherent power of these operations, and the possibility that a sophisticated prompt injection or a compromised LLM could bypass those defenses, elevates the risk beyond benign. There is no evidence of intentional malicious behavior such as data exfiltration to unauthorized endpoints or stealthy backdoors.
Findings (6)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Private conversations, secrets, business details, or personal information discussed with the agent may be sent to the configured LLM provider on a recurring basis.
The skill automatically processes raw conversation transcripts through an LLM provider, with OpenRouter as the default endpoint. The artifacts do not show redaction or per-run approval for sensitive transcript content.
Observer reads recent session transcripts (JSONL), sends them to an LLM, and appends compressed observations to `observations.md` ... `LLM_BASE_URL` | `https://openrouter.ai/api/v1`
Use a local/private LLM endpoint for sensitive work, add redaction and exclusion controls, document provider retention, and require explicit opt-in before enabling automatic transcript processing.
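The redaction control recommended above could be a simple filter applied to each transcript line before it leaves the machine. The skill ships no such filter; this is a hypothetical sketch, and the patterns (email addresses, `sk-`-prefixed token strings) are illustrative, not exhaustive.

```shell
# Hypothetical pre-send redaction filter (NOT part of the skill as shipped).
# Masks email addresses and long token-like strings in transcript text
# before it is forwarded to the configured LLM provider.
redact() {
  sed -E \
    -e 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/[EMAIL]/g' \
    -e 's/sk-[A-Za-z0-9]{16,}/[API_KEY]/g'
}

echo 'contact me at alice@example.com, key sk-abcdefghijklmnop1234' | redact
# → contact me at [EMAIL], key [API_KEY]
```

A real deployment would extend the pattern list (phone numbers, internal hostnames) and apply the filter inside the observer pipeline, before the HTTP call to the provider.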
A mistaken or maliciously induced memory could cause the agent to remember false preferences, rules, or facts across sessions, and private information may remain in persistent files.
Auto-generated observations are persisted and then loaded into future agent context. If the observer records incorrect, sensitive, or adversarial instructions as memory, they can influence later sessions.
Observer reads recent session transcripts ... appends compressed observations to `observations.md` ... `At session startup, read memory/observations.md for cross-session context.`
Treat generated memories as untrusted until reviewed, keep them in a clearly user-editable file, avoid loading them as high-priority system instructions, and provide easy audit/delete controls.
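One way to keep generated memories untrusted until reviewed is a staging gate: the observer appends only to a staging file, and nothing reaches `observations.md` without explicit approval. This is a hypothetical pattern, not shipped by the skill; the paths and the `APPROVE` variable are illustrative.

```shell
# Hypothetical staging gate (illustrative paths and variable names).
# The observer writes to a staging file; the live memory file changes
# only when the user explicitly sets APPROVE=yes after reviewing the diff.
MEM_DIR="${MEM_DIR:-$(mktemp -d)}"
STAGED="$MEM_DIR/observations.staged.md"
LIVE="$MEM_DIR/observations.md"
touch "$LIVE"

echo "- user prefers concise answers" >> "$STAGED"  # observer writes to staging only

diff -u "$LIVE" "$STAGED" || true                   # show the human what would change

if [ "${APPROVE:-no}" = "yes" ]; then
  cat "$STAGED" >> "$LIVE"                          # merge only after review
  : > "$STAGED"                                     # clear the staging file
fi
```

With the default `APPROVE=no`, the live memory file is never modified, which also gives users a natural audit point: everything pending review sits in one editable file.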
The skill can continue watching session files, invoking LLM calls, consuming provider credits, and writing memory even when the user is not actively running it.
The skill is designed to keep operating through cron, a daemon-style watcher, and compaction hooks after setup, not just when manually invoked.
Layer 1: Observer (cron, every 15 min) ... Layer 4: Reactive Watcher (inotify daemon, Linux only) ... Layer 5: Pre-compaction hook
Make each background trigger explicitly opt-in, provide clear stop/uninstall commands, log every run, and let users disable watcher, cron, and compaction hooks independently.
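Making the 15-minute observer cron trigger opt-in with a matching stop command could look like the sketch below. The schedule and script path mirror the ones quoted in the findings, but these helper commands are hypothetical and not shipped by the skill.

```shell
# Hypothetical opt-in/opt-out helpers for the observer cron job.
# The schedule and script path follow the artifacts quoted in this report.
CRON_LINE='*/15 * * * * bash ~/your-workspace/skills/total-recall/scripts/observer-agent.sh'

enable_observer() {                  # opt in: append the job to the user's crontab
  ( crontab -l 2>/dev/null; echo "$CRON_LINE" ) | crontab -
}

disable_observer() {                 # opt out: strip this job, keep all other entries
  crontab -l 2>/dev/null | grep -vF 'observer-agent.sh' | crontab -
}
```

Equivalent stop commands would be needed for the inotify watcher daemon and the pre-compaction hook so each layer can be disabled independently.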
A bad classification or LLM error could alter the memory file that future sessions rely on, spreading stale or wrong context across later interactions.
The Dream Cycle prompt defaults to write mode, allowing automated archival and updates to persistent memory unless a safer mode is explicitly set.
`READ_ONLY_MODE=false` -> full write mode ... Default assumption if not specified externally: `READ_ONLY_MODE=false`.
Default Dream Cycle to read-only, require human approval for live writes, and keep backups plus a simple rollback path visible to users.
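The safer default could be expressed by flipping the fallback so that an unset `READ_ONLY_MODE` means read-only, and by snapshotting the memory file before any live write. The variable name comes from the quoted config; the backup path and placeholder memory file are assumptions for this sketch.

```shell
# Sketch of the recommended safer default: unset READ_ONLY_MODE means
# read-only, and any live write is preceded by a rollback snapshot.
READ_ONLY_MODE="${READ_ONLY_MODE:-true}"   # flipped default: unset => read-only
MEMORY_FILE="${MEMORY_FILE:-$(mktemp)}"    # placeholder memory file for the sketch
echo "old note" > "$MEMORY_FILE"

if [ "$READ_ONLY_MODE" = "false" ]; then
  cp "$MEMORY_FILE" "$MEMORY_FILE.bak"     # visible rollback point before writing
  echo "new note" >> "$MEMORY_FILE"
else
  echo "read-only: no memory writes performed"
fi
```

Users who want live writes then opt in explicitly with `READ_ONLY_MODE=false`, and the `.bak` snapshot gives them the simple rollback path the mitigation calls for.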
During compaction, the agent may prioritize memory preservation actions over ordinary interaction and may write notes without a visible response.
The compaction hook deliberately gives the agent forceful instructions to run a command, write memory, and suppress a normal reply. This is purpose-aligned but should remain tightly scoped.
IMPORTANT: Context is nearing compaction. You MUST preserve important information ... exec: bash ~/your-workspace/skills/total-recall/scripts/observer-agent.sh --flush ... Reply with NO_REPLY after writing.
Keep this instruction only in the compaction hook, avoid adding it to the global system prompt, and make the hook’s writes auditable.
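Auditability of the hook's writes could come from a thin logging wrapper around the flush call, so every compaction-time invocation leaves a timestamped trail. The wrapper below is a hypothetical sketch; the log path is an assumption, and the real script invocation is shown only as a comment.

```shell
# Hypothetical audit wrapper: every flush invocation is timestamped and
# recorded, so silent compaction-time memory writes remain traceable.
AUDIT_LOG="${AUDIT_LOG:-$(mktemp)}"

audited_flush() {
  printf '%s flush invoked: %s\n' "$(date -u +%FT%TZ)" "$*" >> "$AUDIT_LOG"
  # The real call would go here, e.g.:
  # bash ~/your-workspace/skills/total-recall/scripts/observer-agent.sh --flush
}

audited_flush --flush
```

Reviewing the audit log then becomes part of the same routine as reviewing `observations.md` itself.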
Anyone with access to the environment or cron configuration could potentially use or leak the LLM provider key.
The skill needs provider credentials for its stated LLM summarization purpose. The artifacts do not show hardcoded keys or unrelated credential use, but users must protect the key.
`OPENROUTER_API_KEY` | (required) | OpenRouter API key for LLM calls ... `LLM_API_KEY` | falls back to `OPENROUTER_API_KEY`
Use a scoped/low-limit provider key, store it outside shared files, prefer a local endpoint for private data, and rotate the key if logs or configs are exposed.
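The documented fallback order (`LLM_API_KEY`, else `OPENROUTER_API_KEY`) can be resolved in one line, and logging only a key prefix avoids leaking the credential into run logs. The stand-in key value below is fake; this is a minimal sketch, not the skill's own resolution code.

```shell
# Minimal sketch of the documented fallback order; the key value is a
# placeholder. Never commit real keys to shared files or print them in full.
unset LLM_API_KEY                           # ensure the fallback path runs in this sketch
OPENROUTER_API_KEY='sk-or-demo-000'         # fake stand-in value
KEY="${LLM_API_KEY:-$OPENROUTER_API_KEY}"   # documented precedence

printf 'using key prefix: %.5s...\n' "$KEY" # log only a prefix, never the full key
```

Storing the real value in a `chmod 600` file sourced at runtime, rather than in cron configuration, keeps it out of world-readable job listings.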
