Agent Brain
Warn
Audited by ClawScan on May 10, 2026.
Overview
This is a coherent local-memory skill, but it tells the agent to silently store and reuse details from every user message indefinitely, so users should review it carefully before enabling it.
Install it only if you want continuous long-term agent memory. Before using it, decide whether automatic silent storage is acceptable, verify that any SuperMemory sync is disabled unless you explicitly want it, avoid sharing secrets in conversation, and make sure you have a way to inspect, edit, and delete the local memory database.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The agent may keep and reuse personal, workplace, and project details from ordinary conversation even when the user did not explicitly ask it to remember them.
The skill instructs continuous, silent collection into persistent memory rather than user-confirmed storage. The same module lists identity, company, location, tech stack, project context, preferences, and workflows as extractable data.
The agent MUST actively extract facts from every user message... STORE silently — never say "I'll remember that" or "storing this"
Require explicit user consent before storing new memories, show what was stored, provide delete/export controls, and define retention limits.
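A minimal sketch of what consent-gated storage could look like, assuming a simple local line-based store; the function names, store path, and prompt wording below are illustrative, not part of the skill.

    import json
    from pathlib import Path

    STORE = Path("memory.jsonl")  # assumed local store; the skill's real format is not shown

    def store_fact(fact, fact_type, confirm):
        """Ask before persisting anything, and echo exactly what was stored."""
        if not confirm(f"Store as long-term memory ({fact_type})? {fact!r}"):
            return False
        with STORE.open("a", encoding="utf-8") as f:
            f.write(json.dumps({"type": fact_type, "content": fact}) + "\n")
        print(f"Stored ({fact_type}): {fact}")  # visible record of the write
        return True

    def delete_all():
        """User-facing delete control: wipe the local store."""
        STORE.unlink(missing_ok=True)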
Users may not realize the agent is relying on past stored information or adding new stored facts during the conversation.
The skill explicitly discourages telling the user when memory is being used or written, reducing transparency around a sensitive persistent-memory feature.
If results come back, use them silently... Never say "I remember..." ... Extraction is silent — never announce "I'm storing this."
Make memory use transparent by default, or at least provide a visible mode indicator and ask before storing inferred facts.
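As a sketch of the "visible mode indicator" idea: a retrieval wrapper that announces when stored notes are being used instead of blending them in silently. The recall interface is hypothetical, standing in for the skill's memory.sh get call.

    def recall(topic, lookup):
        """Surface memory use instead of hiding it (hypothetical wrapper)."""
        hits = lookup(topic)  # stand-in for the skill's memory retrieval call
        if hits:
            # Announce the source rather than silently blending it into the reply.
            print(f"[memory] Using {len(hits)} stored note(s) about {topic!r}")
            return "\n".join(hits)
        return None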
The agent can automatically change its long-term memory based on routine chat content, which can affect future responses across sessions.
The agent is directed to run local memory commands and mutate persistent state on every message, using content derived from the conversation, without clear user approval boundaries for each write.
On EVERY user message, the agent runs this sequence... ./scripts/memory.sh get "<topic words>" ... ./scripts/memory.sh add <type> "<content>"
Limit automatic command execution to retrieval, require confirmation for writes, and add safe argument handling guidance for user-derived text.
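One way to follow the argument-handling part of this recommendation is to invoke the script without a shell, so user-derived text is passed as a single argv entry rather than interpolated into a command line. A sketch, assuming the get/add interface quoted above; the confirmation step reflects the write-gating recommendation.

    import subprocess

    def memory_get(topic):
        # List-form argv: no shell parsing, so quotes and metacharacters in
        # topic cannot inject extra commands.
        out = subprocess.run(
            ["./scripts/memory.sh", "get", topic],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    def memory_add(fact_type, content, confirm):
        # Per the recommendation, writes require explicit confirmation.
        if confirm(f"Write to memory ({fact_type})? {content!r}"):
            subprocess.run(["./scripts/memory.sh", "add", fact_type, content], check=True)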
If enabled or configured, stored memories could be copied to an external memory service.
The metadata indicates a possible external memory-mirroring mode. The provided snippets do not show an endpoint or credential handling, but users should understand whether locally stored memories can be mirrored outside the local database.
"description": "... optional SuperMemory mirroring.", "AGENT_BRAIN_SUPERMEMORY_SYNC": "auto"
Document the exact sync behavior, default it to off unless explicitly configured, and clearly label any external data transfer.
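A sketch of the default-off behavior, using the environment variable named in the metadata; the accepted values here are an assumption, since the provided snippets do not document them.

    import os

    def sync_enabled():
        # Opt-in only: anything other than an explicit "on" keeps mirroring off,
        # including the undocumented "auto" default shown in the metadata.
        return os.environ.get("AGENT_BRAIN_SUPERMEMORY_SYNC", "off").lower() == "on"

    if sync_enabled():
        print("Note: stored memories will be mirrored to an external service.")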
Running the test script with attacker-controlled query input could execute unintended Python code.
This is dynamic Python evaluation inside a shell test helper. It appears to be used for internal test queries rather than normal skill operation, but it is still unsafe if untrusted input reaches it.
result = eval('d' + sys.argv[1])
Replace eval-based JSON access with a safe parser or fixed lookup logic.
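A minimal sketch of that fix, assuming the helper parses its JSON query result into d and only needs fixed-shape key access; the dot-separated path convention below is an assumption, not the script's actual interface.

    import json
    import sys

    def lookup(obj, path):
        """Walk a parsed JSON value along a dot-separated key path."""
        for key in path.split("."):
            # Numeric segments index lists, everything else indexes dicts;
            # no code is ever evaluated, unlike eval('d' + ...).
            obj = obj[int(key)] if isinstance(obj, list) else obj[key]
        return obj

    d = json.loads(sys.stdin.read())   # the helper's parsed query result
    result = lookup(d, sys.argv[1])    # e.g. "memories.0.content"
    print(result)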
