Cognitive Memory

v1.0.8

Intelligent multi-store memory system with human-like encoding, consolidation, decay, and recall. Use when setting up agent memory, configuring remember/forget triggers, enabling sleep-time reflection, building knowledge graphs, or adding audit trails. Replaces basic flat-file memory with a cognitive architecture featuring episodic, semantic, procedural, and core memory stores. Supports multi-agent systems with shared read, gated write access model. Includes philosophical meta-reflection that deepens understanding over time. Covers MEMORY.md, episode logging, entity graphs, decay scoring, reflection cycles, evolution tracking, and system-wide audit.

27 · 8.7k · 69 current · 74 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description (multi-store memory, reflection, audit) align with the templates and scripts provided, and the skill requests no external credentials or binaries, which is coherent. However, the skill both claims strict guardrails ("APPROVAL FIRST" for system file changes) and includes an init script that auto-initializes git and commits with 'Approval: auto': a mismatch between the stated guardrails and the actual install-time behavior.
Instruction Scope
Runtime instructions tell the agent or operator to run local shell scripts, modify a global config (~/.clawdbot/clawdbot.json or moltbot.json), and append blocks to AGENTS.md. The SKILL.md also states that the reflection engine must never read code, configs, or transcripts, yet it instructs the agent to edit system config files. The reflection flow includes invisible internal phases (prompting the model with system-style instructions), which increases the risk of prompt injection or unexpected LLM behavior if not carefully controlled.
Install Mechanism
There is no external download or install step (no network fetches), which reduces supply-chain risk. However, the package includes multiple shell scripts (init and upgrade) that perform local filesystem operations and create and commit to a .git repository. The upgrade scripts are present but not shown here; inspect them before executing. The init script auto-commits files with an 'Approval: auto' message, which may contradict the skill's stated guardrails.
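Before running the bundled scripts, a quick pattern scan can surface the flagged behaviors. The helper below is a generic sketch (not part of the skill); it assumes the scripts live in a directory you pass in, such as the `scripts/` folder mentioned elsewhere in this review.

```shell
# scan_scripts DIR: print lines in shell scripts under DIR that match
# patterns worth reviewing before execution: git operations, deletions,
# network fetches, and the 'Approval: auto' marker flagged above.
scan_scripts() {
  for script in "$1"/*.sh; do
    [ -f "$script" ] || continue
    echo "== $script =="
    grep -nE 'git (init|add|commit)|Approval: auto|rm -|curl |wget ' "$script" \
      || echo "  (no flagged lines)"
  done
}
```

Running `scan_scripts scripts` before install lets you see every auto-commit and destructive line with its line number, rather than trusting the description.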
Credentials
The skill does not request any environment variables, credentials, or external tokens, which is appropriate for a local-memory helper. It does, however, instruct edits to user-level configuration files outside the workspace (e.g., ~/.clawdbot/clawdbot.json); that is within scope for a memory integration, but it still requires user consent and careful review.
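If you do allow edits to a user-level config, take a timestamped backup first and make the change by hand. This backup helper is a generic sketch, not something the skill ships:

```shell
# backup_config FILE: copy FILE to a timestamped .bak before you (or the
# skill) edit it, so the previous contents can always be restored.
backup_config() {
  cfg="$1"
  [ -f "$cfg" ] || { echo "no config at $cfg" >&2; return 1; }
  cp "$cfg" "$cfg.bak.$(date +%s)" && echo "backed up $cfg"
}
```

For example, run `backup_config ~/.clawdbot/clawdbot.json` before adding any memorySearch settings, then diff the backup against the edited file to confirm the exact change.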
Persistence & Privilege
The skill is not marked always:true and does not request elevated system privileges. It does create persistent artifacts (workspace files, memory files, a .git repo, and an audit log) and includes upgrade scripts that can modify those artifacts. Combined with autonomous model invocation being allowed by default on the platform, this increases the blast radius if the skill is later invoked without explicit human checks; review how and when the agent will be allowed to run these routines.
Scan Findings in Context
[system-prompt-override] expected: The package includes LLM 'System Prompt' style content (routing and reflection prompts) which legitimately looks like prompt templates for classification and internal phases. That matches the skill's purpose. Still, these same embedded system-level prompts are a recognized injection pattern and warrant extra scrutiny because they instruct the model's internal behavior (e.g., 'Phases 1-4 invisible', 'Return ONLY valid JSON', 'System Prompt').
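One practical mitigation for prompts like 'Return ONLY valid JSON' is to validate the model's output before acting on it, so malformed or injected output fails closed. The guard below is a generic sketch (it shells out to python3 for parsing), not part of the skill:

```shell
# require_json: succeed only if stdin is valid JSON. A cheap guard before
# acting on model output that was instructed to 'Return ONLY valid JSON'.
require_json() {
  if python3 -c 'import json,sys; json.load(sys.stdin)' 2>/dev/null; then
    echo "valid JSON"
  else
    echo "rejected: not JSON" >&2
    return 1
  fi
}
```

Piping the routing or reflection response through `require_json` before any write step means free-text output (including injected instructions) is rejected instead of executed.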
What to consider before installing
Things to check before running or installing this skill:
- Review the scripts first. Open scripts/init_memory.sh and the upgrade_to_*.sh files and verify every line: they create directories, copy templates, and (critically) initialize and commit a git repo with an 'Approval: auto' commit. If you want manual approval for system changes, remove or edit the auto-commit behavior.
- Back up your workspace and configs. The init script writes files into whatever workspace you point it at and may suggest edits to ~/.clawdbot/clawdbot.json. Run init in an isolated test folder first, not directly against your real home or work config.
- Inspect upgrade scripts before running. Upgrade scripts can modify or overwrite files. Only run them if you understand the changes they will make.
- Consider the config change request carefully. The quick-setup asks you to add memorySearch settings to ~/.clawdbot/clawdbot.json (or moltbot.json). Editing global agent configs is plausible for this feature, but confirm you want the change and prefer to make it manually so you control the exact contents.
- Be cautious about embedded prompts. The skill contains extensive system-level prompt templates and explicit instructions for invisible internal phases. These are needed for the reflection architecture but are also a common vector for prompt injection or unexpected model behavior. If you rely on this skill, constrain when it can run and review the prompt templates.
- Decide who can approve reflections and writes. The design encourages the agent to request 'tokens' and self-penalize or self-reward; ensure your human-in-the-loop process is enforced so the agent cannot autonomously run reflection and writes without your explicit approval.
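After a trial run of the init script in an isolated folder, you can audit the resulting repo's history for exactly the commits the review flags. The helper below is a generic sketch, not part of the skill:

```shell
# audit_auto_commits REPO: list commits whose message contains
# 'Approval: auto', i.e. changes that were committed without a
# manual approval step.
audit_auto_commits() {
  git -C "$1" log --pretty='%h %s' | grep 'Approval: auto' || echo "none found"
}
```

Pointing this at the sandbox workspace shows every self-approved commit at a glance; if anything appears that you did not expect, edit the scripts before using them against a real workspace.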

Like a lobster shell, security has layers — review code before you run it.

Cognitive · Memory · latest · self-evolving
