Automated daily memory backfill for OpenClaw sessions
Pass. Audited by ClawScan on May 10, 2026.
Overview
This skill is transparent about processing OpenClaw session logs into memory files, but users should note that it handles sensitive chat history, can call LLM providers, and can be scheduled to run repeatedly.
Install only if you want OpenClaw session history used to rebuild persistent memory. Start with `compare` or a small `--today` run, review the generated memory files, avoid unattended cron until you trust the results, and use LLM summarization only with a backend whose privacy and retention terms you accept.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Conversation history, tool outputs, and summaries may become durable memory that influences future agent behavior.
The code defaults show the tool reads OpenClaw session logs and writes persistent OpenClaw memory files.
```python
DEFAULT_SESSIONS_DIR = Path.home() / '.openclaw' / 'agents' / 'main' / 'sessions'
DEFAULT_MEMORY_DIR = Path.home() / '.openclaw' / 'workspace' / 'memory'
```
Review generated memory files before relying on them, prefer smaller scopes such as `--today` or `--incremental` first, and delete or edit memory entries that should not persist.
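A manual review pass can be as simple as listing what the tool wrote, newest first. This is a hypothetical sketch, not part of the skill: the default path comes from the skill's documented `DEFAULT_MEMORY_DIR`, and the helper name `list_memory_files` is made up for illustration.

```python
from pathlib import Path

# Path follows the skill's documented default; adjust if configured differently.
MEMORY_DIR = Path.home() / '.openclaw' / 'workspace' / 'memory'

def list_memory_files(memory_dir: Path = MEMORY_DIR) -> list:
    """Return generated memory files, newest first, for manual review."""
    if not memory_dir.is_dir():
        return []
    return sorted(
        (p for p in memory_dir.iterdir() if p.is_file()),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )

# Print each memory file so it can be opened, edited, or deleted before
# the agent relies on it.
for path in list_memory_files():
    print(path)
```

Files appear in modification order, so the entries most likely produced by the latest run are reviewed first.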
Even with secret sanitization, private session context may be processed by the selected model/provider.
LLM summarization can send existing memory and session-derived content to the configured OpenClaw model or to an optional direct provider backend.
With `--preserve`: Existing content is **passed to the LLM** with instructions to incorporate it into the new summary
Use simple extraction when you do not want LLM processing, and use `--summarize` only with providers and retention policies you trust.
If direct backends are used, the tool can consume provider quota and submit summary requests under the user's API account.
Optional direct summarization modes use provider API keys, which is expected for those backends but gives the tool delegated access to those accounts.
| Backend | Description | Key |
|---|---|---|
| `anthropic` | Direct Anthropic API via openai package | `ANTHROPIC_API_KEY` |
| `openai` | Direct OpenAI API | `OPENAI_API_KEY` |
Use provider keys only when needed, keep them out of logs, monitor usage, and prefer the default OpenClaw backend if that better matches your privacy expectations.
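The delegated-access point can be made concrete with a sketch of how a backend might be selected from the environment. This is an assumed flow, not the skill's actual code; `pick_backend` is a hypothetical name, and the key variables match the table above:

```python
import os

def pick_backend() -> str:
    """Choose a summarization backend based on which provider key is set.

    Whichever key is present grants the tool request access under that
    account, so keys should be exported only when a direct backend is wanted.
    """
    if os.environ.get('ANTHROPIC_API_KEY'):
        return 'anthropic'
    if os.environ.get('OPENAI_API_KEY'):
        return 'openai'
    return 'openclaw'  # default backend; no provider key required
```

With no provider keys exported, the tool falls back to the default OpenClaw backend, which is the lower-exposure configuration the recommendation above prefers.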
If scheduled, the tool may keep updating memory after installation without a fresh manual command each time.
The skill documents recurring automation, but the provided artifacts show this as a user-configured workflow rather than hidden self-persistence.
Automated daily memory sync via cron/heartbeat
Only add cron or heartbeat automation intentionally, keep logs, and remove the scheduled job if you no longer want automatic memory updates.
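A user-configured job of this kind is typically a single crontab entry. The command name and paths below are placeholders, not the skill's actual CLI; the pattern is what matters: the job is visible in `crontab -l`, its output is logged, and removing the line stops the automation.

```shell
# Run the backfill daily at 03:00; command name and paths are illustrative.
0 3 * * * memory-backfill --incremental >> "$HOME/.openclaw/backfill.log" 2>&1
```

To stop automatic memory updates, open `crontab -e` and delete the line.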
A future dependency version change could alter behavior or introduce dependency risk.
The setup instructions install unpinned PyPI packages. This is normal for a Python CLI, but package versions and provenance are not locked by the artifact.
```shell
pip install click
...
# Optional: for direct API summarization
pip install openai
```
Install in a virtual environment and consider pinning package versions if using this in a sensitive workflow.
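A pinned, isolated install can look like the following. The version numbers are examples only, chosen for illustration; pick versions you have actually vetted:

```shell
# Create an isolated environment so the skill's dependencies stay contained.
python -m venv .venv
source .venv/bin/activate

# Pin exact versions rather than installing whatever is latest on PyPI.
pip install 'click==8.1.7' 'openai==1.55.3'
```

Pinning does not verify provenance on its own, but it prevents a future release from silently changing the tool's behavior between runs.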
