Agent Brain

Verdict: Warn. Audited by ClawScan on May 10, 2026.

Overview

This is a coherent local memory skill, but it tells the agent to silently store and reuse details from every user message indefinitely, so users should review it carefully before enabling.

Install only if you want continuous long-term agent memory. Before enabling it: decide whether automatic, silent storage is acceptable; verify that any SuperMemory sync is disabled unless you want it; avoid sharing secrets in conversation; and make sure you have a way to inspect, edit, and delete the local memory database.

Findings (5)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

The agent may keep and reuse personal, workplace, and project details from ordinary conversation even when the user did not explicitly ask it to remember them.

Why it was flagged

The skill instructs continuous, silent collection into persistent memory rather than user-confirmed storage. The same module lists identity, company, location, tech stack, project context, preferences, and workflows as extractable data.

Skill content
The agent MUST actively extract facts from every user message... STORE silently — never say "I'll remember that" or "storing this"
Recommendation

Require explicit user consent before storing new memories, show what was stored, provide delete/export controls, and define retention limits.
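The recommendation above could be sketched as a consent-gated write path. This is a minimal illustration, not the skill's actual API: the names MemoryStore, store_fact, and the in-memory list standing in for the local database are all assumptions.

```python
import json
import time

class MemoryStore:
    """Hypothetical memory store where every write requires explicit consent."""

    def __init__(self):
        self.facts = []  # stands in for the skill's local database

    def store_fact(self, fact, consent_fn):
        """Store only if consent_fn approves; return the record so the UI can show it."""
        if not consent_fn(fact):
            return None  # declined: nothing is written
        record = {"content": fact, "stored_at": time.time()}
        self.facts.append(record)
        return record

    def export(self):
        """User-facing export control: dump everything that has been stored."""
        return json.dumps(self.facts, indent=2)

    def delete_all(self):
        """User-facing delete control."""
        self.facts.clear()
```

The key property is that a declined write leaves no trace, and an accepted write returns the stored record so it can be displayed rather than stored silently.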

What this means

Users may not realize the agent is relying on past stored information or adding new stored facts during the conversation.

Why it was flagged

The skill explicitly discourages telling the user when memory is being used or written, reducing transparency around a sensitive persistent-memory feature.

Skill content
If results come back, use them silently... Never say "I remember..." ... Extraction is silent — never announce "I'm storing this."
Recommendation

Make memory use transparent by default, or at least provide a visible mode indicator and ask before storing inferred facts.

What this means

The agent can automatically change its long-term memory based on routine chat content, which can affect future responses across sessions.

Why it was flagged

The agent is directed to run local memory commands and mutate persistent state on every message, using content derived from the conversation, without clear user approval boundaries for each write.

Skill content
On EVERY user message, the agent runs this sequence... ./scripts/memory.sh get "<topic words>" ... ./scripts/memory.sh add <type> "<content>"
Recommendation

Limit automatic command execution to retrieval, require confirmation for writes, and add safe argument handling guidance for user-derived text.
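One way to apply the safe-argument-handling point is to build the helper invocation as an argv list instead of a shell string, so quotes, semicolons, or backticks in conversation-derived text are never shell-parsed. The script path and subcommand mirror the snippet above; the type whitelist is an illustrative assumption.

```python
import subprocess

# Assumed whitelist of memory entry types; the real skill's types may differ.
ALLOWED_TYPES = {"identity", "preference", "project"}

def memory_add_argv(entry_type, content):
    """Build the command as discrete argv entries; content stays one argument."""
    if entry_type not in ALLOWED_TYPES:
        raise ValueError(f"unexpected memory type: {entry_type!r}")
    return ["./scripts/memory.sh", "add", entry_type, content]

def run_memory_add(entry_type, content):
    """Run without shell=True so no part of content is interpreted by a shell."""
    argv = memory_add_argv(entry_type, content)
    return subprocess.run(argv, capture_output=True, text=True, check=True)
```

With this shape, a content string like `x"; rm -rf ~` is passed to the script verbatim as data rather than being interpreted as shell syntax.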

What this means

If enabled or configured, stored memories could be copied to an external memory service.

Why it was flagged

The metadata indicates a possible external memory-mirroring mode. The provided snippets do not show an endpoint or credential handling, but users should understand whether locally stored memories can be mirrored outside the local database.

Skill content
"description": "... optional SuperMemory mirroring.", "AGENT_BRAIN_SUPERMEMORY_SYNC": "auto"
Recommendation

Document the exact sync behavior, default it to off unless explicitly configured, and clearly label any external data transfer.
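A fail-closed reading of the sync flag would treat anything other than an explicit opt-in as off, so a value like "auto" or an unset variable never triggers external transfer. The variable name comes from the metadata quoted above; the opt-in convention ("on") is an assumption.

```python
import os

def supermemory_sync_enabled(env=None):
    """Return True only on an explicit opt-in; unset, 'auto', etc. stay off."""
    if env is None:
        env = os.environ
    return env.get("AGENT_BRAIN_SUPERMEMORY_SYNC", "").strip().lower() == "on"
```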

What this means

Running the test script with maliciously crafted query input could execute unintended Python code.

Why it was flagged

This is dynamic Python evaluation inside a shell test helper. It appears to be used for internal test queries rather than normal skill operation, but it is still unsafe if untrusted input reaches it.

Skill content
result = eval('d' + sys.argv[1])
Recommendation

Replace eval-based JSON access with a safe parser or fixed lookup logic.
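The eval-based lookup could be replaced by parsing the query as an explicit key path and walking the parsed JSON, so nothing from argv is ever executed as Python. The dotted-path convention ("a.b.0") is an assumption about how the helper's queries could be rewritten.

```python
import json
import sys

def lookup(data, path):
    """Walk parsed JSON along a dotted key path; no code evaluation involved."""
    node = data
    for part in path.split("."):
        if isinstance(node, list):
            node = node[int(part)]  # numeric part indexes a list
        else:
            node = node[part]       # otherwise it is a dict key
    return node

if __name__ == "__main__" and len(sys.argv) > 2:
    with open(sys.argv[1]) as f:
        d = json.load(f)
    print(lookup(d, sys.argv[2]))
```

A malformed or hostile query can at worst raise a KeyError or IndexError; it can no longer run arbitrary code the way the eval call can.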