Brain CMS

Audited by ClawScan on May 10, 2026.

Overview

Brain CMS does what its local-memory purpose implies, but it persistently rewrites instructions for future agent sessions and consolidates and indexes private memory in ways users should review before installing.

Install only if you want this skill to change your workspace memory architecture. Back up AGENTS.md and your memory folder first, review the REM/NREM changes before accepting them, consider excluding sensitive anchors or logs from indexing, and make sure you are comfortable with the Python package and Ollama model downloads.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Undisclosed persistent modification of AGENTS.md

What this means

Because the workspace instruction file was changed, future agent sessions may follow Brain CMS boot, search, and sleep-cycle routines even after the initial setup task is over.

Why it was flagged

The installer persists Brain CMS operating instructions into AGENTS.md, changing future agent behavior. SKILL.md's install overview lists memory/ and memory_brain/ but not this workspace instruction-file modification.

Skill content
agents_path = WORKSPACE / "AGENTS.md"
...
agents_path.write_text(content + cms_note)
...
"**Sleep:** NREM on shutdown"
Recommendation

Clearly disclose the AGENTS.md modification, ask for explicit confirmation before changing it, back up the original file, and provide an uninstall or restore command.
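A minimal sketch of the recommended install/restore flow, assuming a hypothetical `workspace` root and backup filename (neither is defined by the skill itself): confirmation is required before writing, the original file is backed up, and an uninstall path restores it.

```python
import shutil
from pathlib import Path

def install_note(workspace: Path, cms_note: str, confirmed: bool) -> bool:
    """Append the CMS note to AGENTS.md only if confirmed; keep a backup for restore."""
    agents = workspace / "AGENTS.md"
    backup = workspace / "AGENTS.md.pre-brain-cms"  # hypothetical backup name
    if not confirmed:
        return False
    if agents.exists() and not backup.exists():
        shutil.copy2(agents, backup)  # preserve the original for uninstall
    content = agents.read_text(encoding="utf-8") if agents.exists() else ""
    agents.write_text(content + cms_note, encoding="utf-8")
    return True

def restore(workspace: Path) -> bool:
    """Undo the AGENTS.md change by restoring the pre-install backup."""
    agents = workspace / "AGENTS.md"
    backup = workspace / "AGENTS.md.pre-brain-cms"
    if backup.exists():
        shutil.move(str(backup), str(agents))
        return True
    return False
```

The `confirmed` flag stands in for whatever explicit-consent prompt the installer uses; the point is that the write never happens implicitly.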

Finding 2: Unreviewed LLM-generated writes to persistent memory

What this means

A misleading log entry, prompt-injection text, or LLM hallucination could become persistent memory and influence later agent work.

Why it was flagged

The REM cycle reads weekly logs, asks a local LLM to extract facts, appends the generated facts to persistent schema files, and then reindexes them, without a human review step in the code.

Skill content
weekly_logs = load_weekly_logs()
updates = extract_updates(weekly_logs, list(schemas.keys()))
...
with open(schema_path, "a", encoding="utf-8") as f: f.write(section)
Recommendation

Require review of proposed REM updates before writing them, store a diff or backup, and treat daily logs as untrusted input when generating persistent memory.
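One way to gate the REM append, sketched with a hypothetical `apply_rem_update` helper (the function name and backup convention are illustrative, not part of the skill): the proposed change is shown as a diff, written only on approval, and the prior contents are kept as a rollback point.

```python
import difflib
from pathlib import Path

def apply_rem_update(schema_path: Path, section: str, approve) -> bool:
    """Show a diff of the proposed REM append and write only if approved.

    `approve` is a callable that receives the diff text and returns True/False,
    e.g. an interactive prompt. LLM output derived from daily logs is treated
    as untrusted until a human accepts the diff.
    """
    old = schema_path.read_text(encoding="utf-8") if schema_path.exists() else ""
    new = old + section
    diff = "".join(difflib.unified_diff(
        old.splitlines(keepends=True), new.splitlines(keepends=True),
        fromfile=str(schema_path), tofile=f"{schema_path} (proposed)",
    ))
    if not approve(diff):
        return False  # rejected: persistent memory is untouched
    backup = schema_path.with_suffix(schema_path.suffix + ".bak")
    backup.write_text(old, encoding="utf-8")  # rollback point
    schema_path.write_text(new, encoding="utf-8")
    return True
```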

Finding 3: ANCHORS.md indexed despite a comment saying otherwise

What this means

High-significance anchor memories may be indexed and retrieved more broadly than the user expects.

Why it was flagged

The comment says ANCHORS should be excluded, but the actual code only skips daily logs and INDEX.md, so ANCHORS.md can be embedded into the vector store and reused in semantic search.

Skill content
# Auto-detect schema files (all .md files in memory/ except daily logs, INDEX, ANCHORS)
...
if name[0].isdigit() or name in ("INDEX.md",):
    continue
files.append(rel)
Recommendation

Either exclude ANCHORS.md by default or explicitly disclose that anchors are indexed, and add user-configurable include/exclude rules.
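A sketch of a filter that matches the comment's stated intent and adds a user-configurable exclude set (the default set and function name are assumptions, not the skill's actual code):

```python
from pathlib import Path

# Default matches the comment's intent: skip INDEX and ANCHORS, not just INDEX.
DEFAULT_EXCLUDE = {"INDEX.md", "ANCHORS.md"}

def indexable_files(memory_dir: Path, exclude=DEFAULT_EXCLUDE):
    """Return schema files to embed, skipping daily logs (digit-prefixed
    filenames) and any explicitly excluded names."""
    files = []
    for path in sorted(memory_dir.glob("*.md")):
        name = path.name
        if name[0].isdigit() or name in exclude:
            continue
        files.append(name)
    return files
```

Passing a different `exclude` set lets a user opt anchors back in deliberately rather than having them indexed by default.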

Finding 4: Under-declared install downloads and binary requirements

What this means

Users may not realize installation will download packages/models and rely on local tooling before reading SKILL.md closely.

Why it was flagged

The skill documents shell-based setup and downloads Python packages and Ollama models, while the supplied registry metadata says there is no install spec and no required binaries. The behavior is expected for the purpose but under-declared.

Skill content
requires:
  bins: ["python3", "ollama"]
install:
  - kind: shell ... pip install lancedb numpy pyarrow requests --quiet
  - kind: shell ... ollama pull nomic-embed-text && ollama pull llama3.2:3b
Recommendation

Declare the required binaries and install steps in registry metadata, pin dependency versions or hashes where practical, and make the install path explicit.
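A registry-metadata sketch of what fuller declaration could look like. Field names follow the snippet above; the version pins are placeholders, not verified releases, and the registry's actual schema may differ.

```yaml
requires:
  bins: ["python3", "ollama"]
install:
  - kind: shell
    # Pin exact, audited versions instead of installing latest.
    run: pip install "lancedb==X.Y.Z" "numpy==X.Y.Z" "pyarrow==X.Y.Z" "requests==X.Y.Z" --quiet
  - kind: shell
    # Model pulls are large downloads; declaring them here surfaces that before install.
    run: ollama pull nomic-embed-text && ollama pull llama3.2:3b
```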