Brain Memory System

v1.0.1

Unified cognitive memory system inspired by human brain architecture. Provides episodic memory (hippocampus), semantic facts (neocortex), procedural memory w...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.

Security scan: VirusTotal: Benign · OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (a unified memory system) matches the files and runtime instructions: local SQLite schema, Python DB helpers, attention/consolidation/health logic, and a Bash CLI. Environment variables relate to DB paths, agent identity, and optional LLM config — all relevant to the stated functionality.
Instruction Scope
Runtime instructions (SKILL.md and scripts) operate on local files (brain.db, optional facts DB, SESSION_STATE) and run consolidation, attention, and procedure evolution. The only external network activity described is calls to a configurable LLM endpoint during `proc evolve`. The skill asserts 'no data is sent externally except LLM API calls' — this appears accurate, but those LLM calls would transmit procedure context/episodes to whatever BRAIN_LLM_URL you configure, so verify that endpoint and credentials before enabling.
Install Mechanism
There is no packaged install or remote download; the SKILL.md instructs local initialization (sqlite3 < schema.sql) and linking the provided brain.sh into ~/.local/bin. No archives or external installers are fetched by the skill itself. This is low-risk, but note the install writes a symlink into your home (~/.local/bin) and creates/uses local DB files.
Credentials
The skill requests no required platform secrets in the registry. It documents a single optional credential (BRAIN_LLM_KEY) used only for `proc evolve`, which is proportionate. Other env vars are DB paths, agent identity, and LLM URL/model — all justified. The skill provides a local fallback when no LLM key is present. Review defaults (e.g., default LLM URL) before setting keys.
Persistence & Privilege
The skill does not request 'always' or any elevated platform privileges. It writes its own config (brain.conf) and DB files in the skill directory and links a CLI into ~/.local/bin — reasonable for a CLI utility. Autonomous model invocation is allowed by platform default but is not combined with broad credentials or always:true flags.
Assessment
This package is internally consistent with its claimed purpose and appears to be a local CLI backed by SQLite. Before installing:

  1. Verify the skill's source and trustworthiness (it will create a symlink in ~/.local/bin and a local brain.db).
  2. If you plan to use `brain proc evolve`, review scripts/evolve.py to confirm what data will be sent to the LLM, and set BRAIN_LLM_URL to a trusted endpoint; do not expose sensitive secrets to a third-party LLM.
  3. If you do not want any external network calls, avoid setting BRAIN_LLM_KEY and use the local fallback evolution.
  4. Inspect the code yourself (evolve.py, facts.py, consolidate.py) if you keep sensitive data in the workspace (SESSION_STATE, facts DB), because those paths are read/writable by the skill.
  5. Consider running initially in an isolated account or container, and back up any data the skill will manage.

Minor note: some SQL/text strings in the code contain odd quoting/formatting that looks like a bug rather than malicious behavior; nothing in the provided files indicates covert exfiltration.


Latest: vk978wq7a2qr36y1hxygt686tgn82zkgm


SKILL.md

Cognitive Brain

Unified memory system modeled on human brain architecture. One CLI (brain) for all memory operations.

Architecture

| System | Brain Region | What it does |
|---|---|---|
| Episodic | Hippocampus | Time-stamped experiences with emotional tags |
| Semantic | Neocortex | Structured facts (entity/key/value with FTS5) |
| Procedural | Cerebellum | Versioned workflows that evolve from failures |
| Attention | Thalamus | Score incoming info → store/summarize/discard |
| Consolidation | Sleep replay | Batch-process episodes → extract facts |
| Health | Soul erosion | Detect memory drift, conflicts, flatness |

Installation

# 1. Initialize the database
sqlite3 brain.db < scripts/schema.sql

# 2. Link the CLI
ln -sf "$(pwd)/scripts/brain.sh" ~/.local/bin/brain
chmod +x scripts/brain.sh

# 3. (Optional) Migrate existing daily logs
python3 scripts/migrate-daily-logs.py --dir /path/to/memory/ --db brain.db

Environment Variables

| Variable | Default | Purpose |
|---|---|---|
| `BRAIN_DB` | `<skill>/brain.db` | Path to brain database |
| `BRAIN_AGENT` | `margot` | Agent identity for scoping |
| `BRAIN_FACTS_DB` | `memory/facts.db` | Legacy facts database path |
| `BRAIN_LLM_URL` | Google Gemini endpoint | OpenAI-compatible chat completions URL |
| `BRAIN_LLM_KEY` | (none) | API key for the LLM provider; needed only for `proc evolve`, which falls back to local evolution if unset |
| `BRAIN_LLM_MODEL` | `gemini-2.5-flash` | Model name for evolution reasoning |
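A helper that resolves these variables might look like the sketch below. The defaults come from the table above; the function itself is an illustrative assumption, not the skill's actual code (which is in scripts/brain.sh and the Python helpers).

```python
import os

def brain_config(skill_dir="."):
    """Resolve Brain env vars, falling back to the documented defaults."""
    return {
        "db": os.environ.get("BRAIN_DB", os.path.join(skill_dir, "brain.db")),
        "agent": os.environ.get("BRAIN_AGENT", "margot"),
        "facts_db": os.environ.get("BRAIN_FACTS_DB", "memory/facts.db"),
        "llm_url": os.environ.get("BRAIN_LLM_URL", ""),   # real skill defaults to a Gemini endpoint
        "llm_key": os.environ.get("BRAIN_LLM_KEY"),       # None → local fallback for `proc evolve`
        "llm_model": os.environ.get("BRAIN_LLM_MODEL", "gemini-2.5-flash"),
    }
```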

Credentials & Scope

Required for brain proc evolve only:

  • BRAIN_LLM_KEY — Your API key for the LLM provider. Set via env var or brain config set key <value>.
  • No credentials are auto-discovered or read from platform stores.
  • Without a key, proc evolve falls back to local pattern-based evolution (no LLM needed).

Data scope:

  • All data stays in your brain.db file (local SQLite).
  • brain facts reads/writes BRAIN_FACTS_DB (default: facts.db in skill directory).
  • brain wm reads/writes SESSION_STATE (default: SESSION-STATE.md in workspace root).
  • No data is sent externally except LLM API calls during proc evolve.

Quick Reference

Store & Recall

brain store "Fixed the deploy pipeline" --title "Deploy Fix" --emotion relieved --importance 8
brain ingest "Docker OOM at 3 AM" --title "OOM Event" --source mqtt  # attention-gated
brain recall "deploy pipeline" --type all --limit 5
brain episodes 2026-03-15
brain emotions 7
brain important 8 14

Facts (Semantic Memory)

brain facts get Darian favorite_movie
brain facts set Mae birthday "September 12" --category date --permanent
brain facts search "SSH" --limit 5
brain facts list --entity Darian --limit 10
brain facts stats

Procedures (Cerebellum)

brain proc create deploy-api --title "Deploy API" --steps '["Pull latest","Run tests","Deploy"]'
brain proc success deploy-api
brain proc fail deploy-api --step 2 --error "Tests timed out" --fix "Increased timeout to 60s"
brain proc evolve deploy-api           # LLM rewrites steps from failure patterns
brain proc evolve deploy-api --dry-run # preview without applying
brain proc history deploy-api          # full evolution timeline
brain proc list

Attention Filter

brain filter "GPU temperature 72°C" --source mqtt    # → discard (routine)
brain filter "SSH brute force from new IP" --source security  # → store (novel threat)
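The two filter calls above can be reproduced with a toy rule-based scorer. The keyword weights and thresholds below are assumptions for illustration; the actual rules live in scripts/attention.py.

```python
def attention_score(text, source="general"):
    """Toy thalamic filter: weight novelty/threat keywords, boost
    security-sourced events, and map the total to an action.
    Weights and thresholds are illustrative assumptions."""
    text_l = text.lower()
    score = 0
    for keyword, weight in [("brute force", 5), ("error", 3), ("fail", 3), ("new", 2)]:
        if keyword in text_l:
            score += weight
    if source == "security":  # security events get extra salience
        score += 3
    if score >= 6:
        return "store"
    if score >= 3:
        return "summarize"
    return "discard"
```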

Consolidation

brain consolidate --dry-run    # preview what would be processed
brain consolidate              # run sleep replay
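As a rough illustration of the episode-to-fact extraction that consolidation performs, a minimal pass might scan episode text for possessive statements. The regex and output shape are assumptions; consolidate.py's real heuristics are richer.

```python
import re

def consolidate(episodes):
    """Toy sleep-replay pass: pull "X's <key> is <value>" statements
    out of episode text as candidate semantic facts."""
    facts = []
    for body in episodes:
        match = re.search(r"(\w+)'s (\w+) is ([\w ]+)", body)
        if match:
            facts.append({
                "entity": match.group(1),
                "key": match.group(2),
                "value": match.group(3),
            })
    return facts
```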

Health (Soul Erosion Detection)

brain health           # 7-metric scored report
brain health -v        # verbose with all details
brain health --json    # machine-readable for crons

Configuration

brain config show              # current LLM config
brain config set model gpt-4o  # change model
brain config set url http://localhost:11434/v1/chat/completions  # switch to Ollama

Multi-Agent

brain --agent bud store "Patrol complete" --title "Bud Patrol" --importance 3
brain --agent bud proc list   # sees own + shared procedures
brain who                     # show all agents in the system

Procedure Evolution Flow

The core innovation — procedures that rewrite themselves from failure patterns:

  1. Record failures with step-level granularity: brain proc fail <slug> --step N --error "desc"
  2. At 3+ failures, brain suggests evolution
  3. brain proc evolve <slug> analyzes patterns:
    • Repeat offender steps (same step failing multiple times)
    • Brittle chains (consecutive step failures)
    • Error keyword clustering (timeout, auth, permission, etc.)
  4. LLM synthesizes and rewrites steps — adds pre-checks, reorders, annotates with [vN: reason]
  5. Local fallback if LLM unavailable — pattern-matching inserts defensive steps
  6. Full version history preserved: brain proc history <slug>
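The local fallback in step 5 can be sketched as a pattern-matcher: when one step accumulates 3+ failures with timeout-like errors, insert a defensive pre-check before it. The rules below are assumptions for illustration, not evolve.py's actual logic.

```python
from collections import Counter

def evolve_local(steps, failures):
    """Toy no-LLM evolution: insert a pre-check before any step that
    has failed 3+ times on timeouts. Each failure is a dict with
    1-based 'step' and an 'error' string."""
    fail_counts = Counter(f["step"] for f in failures)
    evolved = []
    for n, step in enumerate(steps, start=1):
        timed_out = any(
            "timeout" in f["error"].lower() or "timed out" in f["error"].lower()
            for f in failures if f["step"] == n
        )
        if fail_counts.get(n, 0) >= 3 and timed_out:
            evolved.append(f"[v2: repeated timeouts] Pre-check: verify dependencies respond before '{step}'")
        evolved.append(step)
    return evolved
```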

Health Metrics

Seven metrics, each scored 1-10:

| Metric | What it detects |
|---|---|
| Memory Freshness | Time since last recorded episode |
| Consolidation Debt | Backlog of unprocessed episodes |
| Importance Calibration | Everything rated 8+? Nothing is important |
| Emotional Diversity | Flatlined to one emotion = loss of range |
| Fact Consistency | Contradictory facts = identity fragmentation |
| Procedure Health | Success rates dropping on learned behaviors |
| Recording Cadence | Silent days creating memory gaps |
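One of these metrics, importance calibration, can be sketched as follows: the larger the share of episodes rated 8+, the lower the score, since uniformly high ratings mean nothing stands out. The formula and thresholds are illustrative assumptions, not erosion.py's actual math.

```python
def importance_calibration(ratings):
    """Toy 1-10 score for the Importance Calibration metric."""
    if not ratings:
        return 10  # nothing recorded yet; the freshness metric covers that case
    share_high = sum(1 for r in ratings if r >= 8) / len(ratings)
    return max(1, round(10 * (1 - share_high)))
```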

Schema

Database: SQLite with WAL mode, FTS5 full-text search, foreign keys.

Tables: episodes, episodes_fts, facts, facts_fts, procedures, procedure_history, working_memory, consolidation_log, brain_meta.

Initialize with: sqlite3 brain.db < scripts/schema.sql
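The full schema ships in scripts/schema.sql; a trimmed sketch of the episodes table and its FTS5 index can be exercised from Python's stdlib sqlite3. Column names here are assumptions based on the CLI flags above, not the real schema.

```python
import sqlite3

# Minimal stand-in for scripts/schema.sql: one content table plus its
# external-content FTS5 shadow table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
PRAGMA foreign_keys = ON;
CREATE TABLE episodes (
    id INTEGER PRIMARY KEY,
    ts TEXT DEFAULT CURRENT_TIMESTAMP,
    title TEXT,
    body TEXT,
    emotion TEXT,
    importance INTEGER
);
CREATE VIRTUAL TABLE episodes_fts
    USING fts5(title, body, content=episodes, content_rowid=id);
""")
conn.execute(
    "INSERT INTO episodes (title, body, emotion, importance) VALUES (?, ?, ?, ?)",
    ("Deploy Fix", "Fixed the deploy pipeline", "relieved", 8),
)
# External-content FTS5 tables must be populated explicitly.
conn.execute("INSERT INTO episodes_fts (rowid, title, body) SELECT id, title, body FROM episodes")
rows = conn.execute(
    "SELECT title FROM episodes_fts WHERE episodes_fts MATCH 'deploy'"
).fetchall()
```

The query returns the stored episode by full-text match, mirroring what `brain recall "deploy pipeline"` does against the real database.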

Files

| File | Purpose |
|---|---|
| `scripts/brain.sh` | Main CLI dispatcher |
| `scripts/schema.sql` | Database schema |
| `scripts/attention.py` | Thalamic attention filter (rule-based scoring) |
| `scripts/consolidate.py` | Sleep replay consolidation pipeline |
| `scripts/erosion.py` | Soul erosion health metrics |
| `scripts/evolve.py` | Procedure evolution engine (LLM + local fallback) |
| `scripts/facts.py` | Semantic fact storage wrapper |
| `scripts/migrate-daily-logs.py` | Import existing daily markdown logs |
