Self-improving Agent Memory Upgrade (SurrealDB)
WarnAudited by ClawScan on May 10, 2026.
Overview
This is a coherent memory-system skill, but it needs Review because it can run scheduled background jobs, execute a remote shell installer, send memory files to OpenAI, and inject stored memory into future prompts.
Review this skill carefully before installing. Prefer manual SurrealDB installation, avoid the curl|sh path, keep the database on localhost with changed credentials, audit MEMORY.md and memory/*.md for secrets, and leave cron jobs and auto-injection disabled until you have verified exactly what they run and where the resulting memory will be injected.
Findings (6)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The skill may keep processing memory and calling external services on a schedule after setup, even when the user is not actively invoking it.
Scheduled jobs that run in the agent session, read files, and contact external APIs are persistent autonomous behavior. Other artifacts claim isolated sessions, so the actual containment is ambiguous.
Two cron jobs are registered in the main agent session... can read workspace files and contact external APIs on a schedule.
Before enabling, verify the cron jobs in OpenClaw, confirm they are opt-in and isolated, run extraction manually once, and disable any scheduled jobs you do not want.
If invoked, the skill can execute whatever the remote installer returns on the user's machine.
This code path pipes a remote installer script directly into a shell. That is a high-impact supply-chain and code-execution pattern, even if user-invoked.
await execAsync("curl -sSf https://install.surrealdb.com | sh", { timeout: 300000 });Install SurrealDB manually from a trusted release, avoid the automatic installer path, and review any UI button or repair action that triggers binary installation.
Incorrect, stale, or maliciously written memory entries could influence later responses.
Injecting retrieved memory into the system prompt is an intended feature, but it means stored or extracted content can steer future agent behavior.
When enabled, every user message triggers... Formatted context is injected into the agent's system prompt
Enable auto-injection only after reviewing the knowledge graph, start with low fact limits, and keep memory files free of untrusted instructions.
Applying the integration can change the local OpenClaw installation and agent UI behavior.
Patching OpenClaw source is a high-impact mutation, but the documentation says it is optional and gated by an apply flag.
`scripts/integrate-openclaw.sh` uses `sed` to patch OpenClaw source files... Nothing is changed unless you pass `--apply`.
Run only in dry-run first, inspect the diff, keep a git backup, and apply only on a development copy if possible.
Anything stored in those memory files may be sent to OpenAI and later reused as agent context.
The file scope is documented and purpose-aligned, but it can expose private notes or secrets and feeds a persistent memory graph.
`scripts/extract-knowledge.py` reads your memory files and sends their content to OpenAI... What gets sent: `MEMORY.md` and all `memory/YYYY-MM-DD.md` files
Audit memory files for secrets before extraction, use a minimal-scope OpenAI key, and disable extraction if you do not want this data sent externally.
A network-exposed database with default credentials could allow unauthorized memory access or modification.
Default local database credentials are disclosed and fit a local dev setup, but they are unsafe if the service is exposed beyond localhost.
SurrealDB is configured with `root/root` by default... should be changed immediately
Bind SurrealDB to 127.0.0.1 only and change root/root credentials before any shared, networked, or production use.
