Code Plugin · source linked

KongBrain v0.4.4

Graph-backed persistent memory engine for OpenClaw. Replaces the default context window with SurrealDB + vector embeddings that learn across sessions.

kongbrain · runtime kongbrain · by @42u
Community code plugin. Review compatibility and verification before install.
openclaw plugins install clawhub:kongbrain
Latest release: v0.4.4 · Download zip

Capabilities

configSchema: Yes
Executes code: Yes
HTTP routes: 0
Plugin kind: context-engine
Runtime ID: kongbrain

Compatibility

Built with OpenClaw version: 2026.3.23
Min gateway version: 2026.3.23
Plugin API range: >=2026.3.23
Plugin SDK version: 2026.3.23
Security Scan
VirusTotal: Pending. View report →
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The plugin's stated purpose (graph-backed persistent memory using SurrealDB and embeddings) matches the code and declared requirements in SKILL.md/openclaw.plugin.json (SURREAL_* env vars, 'surreal' binary, embedding model). However the registry summary at the top of the package listing incorrectly reports 'no required env vars' and 'no required binaries' — that metadata mismatch is confusing and should be resolved before trusting automatic installation checks.
Instruction Scope
The runtime instructions and code show that the plugin participates in system-prompt assembly (via hooks such as before-prompt-build), persistently stores every turn and tool result in a graph DB, and documents that it may include git diffs and instruction files in the system prompt with no size limit. That behavior is plausible for a context engine, but it increases the risk that local secrets, large files, or sensitive repo diffs will be sent to whichever LLM provider is configured. The SKILL.md also contains a detected 'system-prompt-override' pattern, and the codebase includes hooks that can inject directives into prompts. This is a legitimate feature for adaptive reasoning, but it is also a high-value attack surface for prompt injection and unintended data leakage.
Install Mechanism
The skill is instruction-heavy but is distributed as an npm package (package.json present) and relies on dependencies including node-llama-cpp and surrealdb. There is no dangerous arbitrary-URL download install, but the embedding model (BGE-M3) is automatically downloaded from Hugging Face on first run — this network download and a ~420MB model placed locally is expected but worth noting. No postinstall scripts appear in package.json, lowering install-time risk; still, the plugin will write files to ~/.kongbrain and create DB tables in SurrealDB.
Credentials
Requested environment variables (SURREAL_URL, SURREAL_USER, SURREAL_PASS, SURREAL_NS, SURREAL_DB and optional embedding model path) are proportional for a plugin that runs a SurrealDB-backed memory store and local embedding model. The mismatch between the top-level registry listing (which claimed 'none') and the SKILL.md/openclaw.plugin.json listing of SURREAL_* vars is concerning because automated permission checks may be bypassed or misleading. The plugin will also create and read files in the user's home (~/.kongbrain) which is reasonable for persisted weights and model artifacts but should be expected by the user.
Persistence & Privilege
always:false (good), and the plugin writes only to its own files and to the configured SurrealDB. However it gains elevated blast radius by: (a) registering hooks that can modify system prompts and include arbitrary local content in prompts; (b) storing verbatim tool outputs and other unbounded content persistently; (c) downloading/creating models and weights under ~/.kongbrain. These behaviors are coherent with a context engine but combine to increase risk if the plugin or its config is misused.
Scan Findings in Context
[system-prompt-override] expected: The plugin intentionally modifies and augments the system prompt to implement compaction and 'cognitive checks'. That is expected for a context engine but the pattern is a sensitive capability: it can be used to inject persistent directives or accidentally surface local secrets into LLM API calls. Treat this as a high-risk feature to audit rather than a simple false positive.
What to consider before installing
KongBrain is plausibly what it claims (a SurrealDB + embedding-based context engine), but it also has several high-impact behaviors you should review before installing:

- Metadata mismatch: the published registry summary claims no env/binary requirements, while SKILL.md and openclaw.plugin.json require SurrealDB and SURREAL_* credentials. Do not trust the top-level registry metadata; verify required env vars before installing.
- Local DB and persistent storage: it writes data to SurrealDB and to ~/.kongbrain (weights, handoff files, model artifacts). Expect persistent copies of all conversation turns, tool outputs, and extracted 'memories'. If you have sensitive data in conversations or allow tools that read files, that data will be stored unless you change configuration.
- System-prompt and data-inclusion risk: the plugin participates in system-prompt assembly and can include git diffs, instruction files, and tool outputs (some unbounded in size) in prompts sent to your LLM provider. This can leak repository contents or secrets to whichever remote LLM you use. If you use a remote cloud LLM, prefer running SurrealDB and model downloads locally and audit what gets included in prompts.
- Model download/network: the embedding model auto-downloads from Hugging Face on first run. If you want to avoid network fetches, pre-provision the model and set the embedding.modelPath config.

Audit points before installing:

1. Review hooks (src/hooks/) and any code that reads files or runs git to understand what gets included in prompts.
2. Run the plugin in an isolated/sandboxed environment first (e.g., a dedicated VM or container) with a local SurrealDB bound to 127.0.0.1.
3. Change default DB credentials and do not expose SurrealDB publicly.
4. Consider disabling or limiting persistent storage of raw tool outputs and large git diffs via config, if possible.
5. If you need high assurance, review the code (it is included), focusing on before-prompt-build, before-tool-call, and the hooks that write to the DB.

If you aren't comfortable auditing the code or cannot run it in an isolated profile, treat this plugin as risky to install in an environment with sensitive code or secrets.

Verification

Tier: source linked
Scope: artifact only
Summary: Validated package structure and linked the release to source metadata.
Commit: 05cc779c927f
Tag: v0.4.4
Provenance: No
Scan status: pending

Tags

latest
0.4.4
<div align="center">

KongBrain

npm ClawHub GitHub Stars License: MIT Node.js SurrealDB OpenClaw Tests

A graph-backed cognitive engine for OpenClaw.

Quick Start | Architecture | How It Works | Tools | Development

</div>

OpenClaw ships with a lobster brain. It works. Lobsters have survived 350 million years, but they also solve problems by walking backwards and occasionally eating each other.

When a conversation gets too long, the lobster brain does what lobsters do best: it panics, truncates everything before message 47, and carries on like nothing happened. Your carefully explained architecture? Gone. That bug you described in detail twenty minutes ago? Never heard of it.

KongBrain is a brain transplant. You're replacing that crustacean context window with a primate cortex backed by a graph database, vector embeddings, and the kind of persistent memory that lets your AI remember what you said last Tuesday. And judge you for it.

Apes remember. Apes use tools. Apes hold grudges about your code style and learn from them. Lobsters eat garbage off the ocean floor and forget about it immediately.

The surgery takes about 2 minutes. No anesthesia required.

Persistent memory graph. Vector-embedded, self-scoring, wired to learn across sessions. It extracts skills from what worked, traces causal chains through what broke, reflects on its own failures, and earns an identity through real experience. Every session compounds on the last.

Your assistant stops forgetting. Then it starts getting smarter.


What Changes

| | Lobster Brain (default) | Ape Brain (KongBrain) |
| --- | --- | --- |
| Memory | Sliding window. Old messages fall off a cliff. | Graph-persistent. Every turn, concept, skill, and causal chain stored with vector embeddings. |
| Recall | Whatever fits in the context window right now. | Cosine similarity + graph expansion + learned attention scoring across your entire history. |
| Adaptation | Same retrieval budget every turn, regardless of intent. | 10 intent categories. Simple question? Minimal retrieval. Complex debugging? Full graph search + elevated thinking. |
| Learning | None. Every session starts from zero. | Skills extracted from successful workflows, causal chains graduated into reusable procedures, corrections remembered permanently. |
| Self-awareness | Thermostat-level. | Periodic cognitive checks grade its own retrieval quality, detect contradictions, suppress noise, and extract your preferences. Eventually graduates a soul document. |
| Compaction | LLM-summarizes your conversation mid-flow (disruptive). | Graph retrieval IS the compaction. No interruptions, no lossy summaries. |

Quick Start

From zero to ape brain in under 5 minutes.

1. Install OpenClaw (if you haven't already)

npm install -g openclaw

2. Start SurrealDB

Install SurrealDB via your platform's package manager (see surrealdb.com/install):

macOS:

brew install surrealdb/tap/surreal

Linux — see https://surrealdb.com/docs/surrealdb/installation for your distro.

Then start it locally (change the credentials before use):

surreal start --user youruser --pass yourpass --bind 127.0.0.1:8042 surrealkv:~/.kongbrain/surreal.db

Or with Docker:

docker run -d --name surrealdb -p 127.0.0.1:8042:8000 \
  -v ~/.kongbrain/surreal-data:/data \
  surrealdb/surrealdb:latest start \
  --user youruser --pass yourpass surrealkv:/data/surreal.db

Security note: Always bind to 127.0.0.1 (not 0.0.0.0) unless you need remote access. Never use default credentials in production.

3. Install KongBrain

From ClawHub (recommended):

openclaw plugins install clawhub:kongbrain

From npm:

openclaw plugins install kongbrain

Note: Bare openclaw plugins install kongbrain checks ClawHub first, then falls back to npm. Use the clawhub: prefix to install from ClawHub explicitly.

4. Activate

Add to your OpenClaw config (~/.openclaw/openclaw.json):

{
  "plugins": {
    "allow": ["kongbrain"],
    "slots": {
      "contextEngine": "kongbrain"
    }
  }
}

5. Talk to your ape

openclaw tui

That's it. KongBrain uses whatever LLM provider and model you already have configured in OpenClaw (Anthropic, OpenAI, Google, Ollama, whatever). No separate API keys needed for the brain itself.

The BGE-M3 embedding model (~420MB) downloads automatically on first startup from Hugging Face. All database tables and indexes are created automatically on first run. No manual setup required.

<details> <summary><strong>Configuration Options</strong></summary>

All options have sensible defaults. Override via plugin config or environment variables:

| Option | Env Var | Default |
| --- | --- | --- |
| surreal.url | SURREAL_URL | ws://127.0.0.1:8042/rpc |
| surreal.user | SURREAL_USER | (required) |
| surreal.pass | SURREAL_PASS | (required) |
| surreal.ns | SURREAL_NS | kong |
| surreal.db | SURREAL_DB | memory |
| embedding.modelPath | KONGBRAIN_EMBEDDING_MODEL | Auto-downloaded BGE-M3 Q4_K_M |
| embedding.dimensions | (none) | 1024 |

Full config example:

{
  "plugins": {
    "allow": ["kongbrain"],
    "slots": {
      "contextEngine": "kongbrain"
    },
    "entries": {
      "kongbrain": {
        "config": {
          "surreal": {
            "url": "ws://127.0.0.1:8042/rpc",
            "user": "youruser",
            "pass": "yourpass",
            "ns": "kong",
            "db": "memory"
          }
        }
      }
    }
  }
}
</details>

Architecture

The IKONG Pillars

KongBrain's cognitive architecture follows five functional pillars:

| Pillar | Role | What it does |
| --- | --- | --- |
| Intelligence | Adaptive reasoning | Intent classification, complexity estimation, thinking depth, orchestrator preflight |
| Knowledge | Persistent memory | Memory graph, concepts, skills, reflections, identity chunks, core memory tiers |
| Operation | Execution | Tool orchestration, skill procedures, causal chain tracking, artifact management |
| Network | Graph traversal | Cross-pillar edge following, neighbor expansion, causal path walking |
| Graph | Persistence | SurrealDB storage, BGE-M3 vector search, HNSW indexes, embedding pipeline |

A 6th pillar, Persona, is unlocked at soul graduation: "You have a Soul, an identity grounded in real experience. Be unique, be genuine, be yourself."

Structural Pillars

The graph entity model in SurrealDB:

| Pillar | Table | What it anchors |
| --- | --- | --- |
| 1. Agent | agent | Who is operating (name, model) |
| 2. Project | project | What we're working on (status, tags) |
| 3. Task | task | Individual sessions as units of work |
| 4. Artifact | artifact | Files and outputs tracked across sessions |
| 5. Concept | concept | Semantic knowledge nodes extracted from sessions |

On startup, the agent bootstraps the full chain: Agent → owns → Project, Agent → performed → Task, Task → task_part_of → Project, Session → session_task → Task. Graph expansion traverses these edges during retrieval.

The Knowledge Graph

SurrealDB with HNSW vector indexes (1024-dim cosine). Everything is embedded and queryable.

| Table | What it stores |
| --- | --- |
| turn | Every conversation message (role, text, embedding, token count, model, usage) |
| memory | Compacted episodic knowledge (importance 0-10, confidence, access tracking) |
| skill | Learned procedures with steps, preconditions, success/failure counts |
| reflection | Metacognitive lessons (efficiency, failure patterns, approach strategy) |
| causal_chain | Cause-effect patterns (trigger, outcome, chain type, success, confidence) |
| identity_chunk | Agent self-knowledge fragments (source, importance, embedding) |
| monologue | Thinking traces preserved across sessions |
| core_memory | Tier 0 (always loaded) + Tier 1 (session-pinned) directives |
| soul | Emergent identity document, earned through graduation |
<details> <summary><strong>Adaptive Reasoning</strong>, per-turn intent classification and budget allocation</summary>

Every turn gets classified by intent and assigned an adaptive config:

| Intent | Thinking | Tool Limit | Token Budget | Retrieval Share |
| --- | --- | --- | --- | --- |
| simple-question | low | 3 | 4K | 10% |
| code-read | medium | 5 | 6K | 15% |
| code-write | high | 8 | 8K | 20% |
| code-debug | high | 10 | 8K | 20% |
| deep-explore | medium | 15 | 6K | 15% |
| reference-prior | medium | 5 | 10K | 25% |
| meta-session | low | 2 | 3K | 7% (skip retrieval) |
| multi-step | high | 12 | 8K | 20% |
| continuation | low | 8 | 4K | skip retrieval |
Fast path: Short inputs (<20 chars, no ?) skip classification entirely. Confidence gate: Below 0.40 confidence, falls back to conservative config.
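As a rough TypeScript sketch of those two rules (the function names and the conservative fallback values are ours, not KongBrain's actual API):

```typescript
// Hypothetical sketch of the preflight fast path and confidence gate.
// The conservative fallback config below is an assumption for illustration.
interface IntentConfig {
  thinking: "low" | "medium" | "high";
  toolLimit: number;
  tokenBudget: number;
}

const CONSERVATIVE: IntentConfig = { thinking: "medium", toolLimit: 5, tokenBudget: 6000 };

function pickConfig(
  input: string,
  classify: (text: string) => { intent: string; confidence: number },
  table: Record<string, IntentConfig>,
): IntentConfig {
  // Fast path: short inputs (<20 chars, no "?") skip classification entirely.
  if (input.length < 20 && !input.includes("?")) return CONSERVATIVE;
  const { intent, confidence } = classify(input);
  // Confidence gate: below 0.40, fall back to the conservative config.
  if (confidence < 0.4) return CONSERVATIVE;
  return table[intent] ?? CONSERVATIVE;
}
```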

</details> <details> <summary><strong>Context Injection Pipeline</strong></summary>
  1. Embed user input via BGE-M3 (or hit prefetch cache at 0.85 cosine threshold)
  2. Vector search across 6 tables (turn, identity_chunk, concept, memory, artifact, monologue)
  3. Graph expand: fetch neighbors via structural + semantic edges, compute cosine similarity
  4. Score all candidates with WMR (Working Memory Ranker):
    score = W * [similarity, recency, importance, access, neighbor_bonus, utility, reflection_boost]
    
  5. Budget trim: inject Tier 0/1 core memory first (15% of context), then ranked results up to 21% retrieval budget
  6. Stage retrieval snapshot for post-hoc quality evaluation
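The WMR scoring step above reduces to a plain dot product of a weight vector with the candidate's signal vector. The weight values below are invented for illustration; KongBrain's fixed weights (later replaced by the learned ACAN scorer) will differ:

```typescript
// Illustrative Working Memory Ranker: score = W · signals.
// [similarity, recency, importance, access, neighbor_bonus, utility, reflection_boost]
type Signals = [number, number, number, number, number, number, number];

const W: Signals = [0.35, 0.15, 0.15, 0.1, 0.1, 0.1, 0.05]; // made-up weights

function wmrScore(signals: Signals, weights: Signals = W): number {
  return signals.reduce((sum, s, i) => sum + s * weights[i], 0);
}

// Candidates are ranked by descending score before the budget trim.
function rankCandidates<T extends { signals: Signals }>(candidates: T[]): T[] {
  return [...candidates].sort((a, b) => wmrScore(b.signals) - wmrScore(a.signals));
}
```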
</details> <details> <summary><strong>ACAN</strong>, learned cross-attention scorer</summary>

A ~130K-parameter cross-attention network that replaces the fixed WMR weights once enough data accumulates.

  • Activation: 5,000+ labeled retrieval outcomes
  • Training: Pure TypeScript SGD with manual backprop, 80 epochs
  • Staleness: Retrains when data grows 50%+ or weights age > 7 days
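A toy version of that training loop, reduced to a single linear scorer with a hand-derived gradient (ACAN itself is a ~130K-parameter cross-attention network, so this only illustrates the "pure TypeScript SGD with manual backprop" idea):

```typescript
// One SGD step on a squared-error loss for a linear scorer.
function sgdStep(w: number[], x: number[], target: number, lr = 0.1): number[] {
  const pred = w.reduce((s, wi, i) => s + wi * x[i], 0);
  const grad = pred - target; // d/dpred of 0.5 * (pred - target)^2
  return w.map((wi, i) => wi - lr * grad * x[i]);
}

// Train on one labeled retrieval outcome until the prediction converges.
function train(x: number[], target: number, epochs: number): number[] {
  let w = x.map(() => 0);
  for (let e = 0; e < epochs; e++) w = sgdStep(w, x, target);
  return w;
}
```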
</details> <details> <summary><strong>Soul & Graduation</strong>, earned identity, not assigned</summary>

The agent earns an identity document through accumulated experience. Graduation requires all 7 thresholds met AND a quality score >= 0.6:

| Signal | Threshold |
| --- | --- |
| Sessions completed | 15 |
| Reflections stored | 10 |
| Causal chains traced | 5 |
| Concepts extracted | 30 |
| Memory compactions | 5 |
| Monologue traces | 5 |
| Time span | 3 days |

Quality scoring from 4 real performance signals: retrieval utilization (30%), skill success rate (25%), critical reflection rate (25%), tool failure rate (20%).
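Assuming the four signals combine as a plain weighted sum with the tool failure rate inverted (the exact normalization is our guess; only the weights come from the text), the quality gate reduces to:

```typescript
// Hypothetical soul-graduation quality score from the four signals above.
// Tool failure rate counts against quality, so it is inverted here.
function qualityScore(m: {
  retrievalUtilization: number;   // 0..1
  skillSuccessRate: number;       // 0..1
  criticalReflectionRate: number; // 0..1
  toolFailureRate: number;        // 0..1
}): number {
  return (
    0.3 * m.retrievalUtilization +
    0.25 * m.skillSuccessRate +
    0.25 * m.criticalReflectionRate +
    0.2 * (1 - m.toolFailureRate)
  );
}
```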

Maturity stages: nascent (0-3/7) → developing (4/7) → emerging (5/7) → maturing (6/7) → ready (7/7 + quality gate). The agent and user are notified at each stage transition.

Soul evolution: Every 10 sessions after graduation, the soul is re-evaluated against new experience and revised if the agent has meaningfully changed.

Soul document structure: Working style, self-observations, earned values (grounded in specific evidence), revision history. Seeded as Tier 0 core memory, loaded every single turn.

</details> <details> <summary><strong>Reflection System</strong>, metacognitive self-correction</summary>

Triggers at session end when metrics indicate problems:

| Condition | Threshold |
| --- | --- |
| Retrieval utilization | < 20% average |
| Tool failure rate | > 20% |
| Steering candidates | any detected |
| Context waste | > 0.5% of context window |

The LLM generates a 2-4 sentence reflection: root cause, error pattern, what to do differently. Stored with importance 7.0, deduped at 0.85 cosine similarity.
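The 0.85-cosine dedup check can be sketched as follows. Vectors are plain arrays here, where BGE-M3 would supply 1024-dim embeddings:

```typescript
// Standard cosine similarity between two vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// A new reflection is a duplicate if any stored one is >= threshold similar.
function isDuplicate(candidate: number[], stored: number[][], threshold = 0.85): boolean {
  return stored.some((v) => cosine(candidate, v) >= threshold);
}
```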

</details>

How It Works

Every Turn

User Input
    |
    v
Preflight ──────── Intent classification (25ms, zero-shot BGE-M3 cosine)
    |                  10 categories: simple-question, code-read, code-write,
    |                  code-debug, deep-explore, reference-prior, meta-session,
    |                  multi-step, continuation, unknown
    v
Prefetch ────────── Predictive background vector searches (LRU cache, 5-min TTL)
    |
    v
Context Injection ─ Vector search -> graph expand -> 6-signal scoring -> budget trim
    |                  Searches: turns, concepts, memories, artifacts, identity, monologues
    |                  Scores: similarity, recency, importance, access, neighbor, utility
    |                  Budget: 21% of context window reserved for retrieval
    v
Agent Loop ──────── LLM + tool execution
    |                  Planning gate: announces plan before touching tools
    |                  Smart truncation: preserves tail of large tool outputs
    v
Turn Storage ────── Every message embedded + stored + linked via graph edges
    |                  responds_to, part_of, mentions, produced
    v
Quality Eval ────── Measures retrieval utilization (text overlap, trigrams, unigrams)
    |                  Tracks tool success, context waste, feeds ACAN training
    v
Memory Daemon ───── Worker thread extracts 9 knowledge types via LLM:
    |                  causal chains, monologues, concepts, corrections,
    |                  preferences, artifacts, decisions, skills, resolved memories
    v
Postflight ──────── Records orchestrator metrics (non-blocking)

Between Sessions

At session end, KongBrain runs a combined extraction pass: skill graduation, metacognitive reflection, causal chain consolidation, soul graduation check, and soul evolution. A handoff note is written so the next session wakes up knowing what happened.

At session start, a wake-up briefing is synthesized from the handoff, recent monologues, soul content (if graduated), and identity state, then injected as inner speech so the agent knows who it is and what it was doing.

<details> <summary><strong>Memory Daemon</strong>, background knowledge extraction</summary>

A worker thread running throughout the session. Batches turns every ~12K tokens, calls the configured LLM to extract:

  • Causal chains: trigger/outcome sequences with success/confidence
  • Monologue traces: thinking blocks that reveal problem-solving approach
  • Concepts: semantic nodes (architecture patterns, domain terms)
  • Corrections: user-provided fixes (importance: 9)
  • Preferences: behavioral rules learned from feedback
  • Artifacts: file paths created or modified
  • Decisions: important conclusions reached
  • Skills: multi-step procedures (if 5+ tool calls in session)
  • Resolved memories: completed tasks and confirmed facts
</details>

Tools

Three tools are registered for the LLM:

  • recall Search graph memory by query
  • core_memory Read/write persistent core directives (tiered: always-loaded vs session-pinned)
  • introspect Inspect database state, verify memory counts, run diagnostics, check graduation status, migrate workspace files

Performance

KongBrain is aggressively optimized for token efficiency and latency, informed by analysis of the Claude Code source.

DB Query Batching

All graph operations use batched multi-statement queries (queryBatch). A single assemble() call makes ~5 round-trips to SurrealDB instead of ~337 individual queries:

| Operation | Before | After |
| --- | --- | --- |
| vectorSearch (7 tables) | 7 queries | 1 batched |
| graphExpand (26 edge types x N nodes) | 130-208 queries | 1-2 batched (per hop) |
| queryCausalContext (8 edge types x N nodes) | 80-120 queries | 1-2 batched (per hop) |
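The batching idea, sketched in TypeScript: instead of one round-trip per table, concatenate the per-table statements and send them once. The SurrealQL shape below is illustrative; KongBrain's actual queryBatch statements may differ.

```typescript
// Tables searched per turn (per the context-injection pipeline).
const SEARCH_TABLES = ["turn", "identity_chunk", "concept", "memory", "artifact", "monologue"];

// Build one multi-statement query string; the whole string is sent in a
// single request, and results come back as one array per statement.
function buildBatchedVectorSearch(tables: string[], k: number): string {
  return tables
    .map(
      (t) =>
        `SELECT *, vector::similarity::cosine(embedding, $q) AS score FROM ${t} ` +
        `ORDER BY score DESC LIMIT ${k};`,
    )
    .join("\n");
}
```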

Token Estimation

Token counting is aligned with the Anthropic API's actual tokenizer characteristics:

  • 4 bytes/token for prose/code (not the common 3.2-3.5 underestimate)
  • 2 bytes/token for JSON content (denser single-char tokens)
  • 33% safety margin on aggregate estimates
  • 2000 tokens for images/documents (matching API billing)
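Those heuristics reduce to a small byte-based estimator. The helper below is our sketch, not KongBrain's actual implementation:

```typescript
// Byte-based token estimate: 4 bytes/token for prose/code, 2 bytes/token
// for JSON, a flat 2000-token charge per image, and a 33% safety margin
// applied to the aggregate. Part shape is illustrative.
const IMAGE_TOKENS = 2000;

function estimateTokens(parts: { text: string; isJson?: boolean; images?: number }[]): number {
  let tokens = 0;
  for (const p of parts) {
    const bytes = Buffer.byteLength(p.text, "utf8");
    tokens += bytes / (p.isJson ? 2 : 4);
    tokens += (p.images ?? 0) * IMAGE_TOKENS;
  }
  return Math.ceil(tokens * 1.33); // 33% safety margin on the aggregate
}
```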

Context Window Efficiency

Every turn, old messages are surgically stripped to save tokens while preserving recent context:

  • Thinking blocks replaced with [thinking] marker (saves 1-5k tokens each)
  • Old tool results content-cleared to stubs (saves 20-80k tokens/session)
  • Old assistant filler collapsed to first line (saves 5-15k/session)
  • Images in old messages replaced with [image] marker (saves 2k tokens each)
  • System prompt additions capped at 8% of context window with priority trimming

Structured Output

All internal LLM calls (memory extraction, cognitive checks, soul generation, skill extraction) use json_schema structured output when the provider supports it. This eliminates markdown fencing, preamble text, and parsing failures.

Embedding Reuse

User message embeddings computed at ingest time are stashed in session state and reused during context retrieval, eliminating 1-4 redundant BGE-M3 inference calls per turn.
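A minimal sketch of that stash-and-reuse pattern (the cache shape is illustrative):

```typescript
// Embeddings computed at ingest are stashed in session state; retrieval
// reads the stash instead of re-running the embedding model.
class SessionEmbeddings {
  private cache = new Map<string, number[]>();
  constructor(private embed: (text: string) => number[]) {}

  get(text: string): number[] {
    const hit = this.cache.get(text);
    if (hit) return hit; // reuse: skips a redundant inference call
    const vec = this.embed(text);
    this.cache.set(text, vec);
    return vec;
  }
}
```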

Development

git clone https://github.com/42U/kongbrain.git
cd kongbrain
pnpm install
pnpm build
pnpm test

Link your local build to OpenClaw:

openclaw plugins install . --link

Then set plugins.slots.contextEngine to "kongbrain" in ~/.openclaw/openclaw.json and run openclaw.

Contributing

  1. Clone the repo and install dependencies (pnpm install)
  2. Make your changes
  3. Build (pnpm build) and run tests (pnpm test)
  4. Open a PR against master

The lobster doesn't accept contributions. The ape does.


<div align="center">

MIT License | Built by 42U

</div>