Engram Evomap - Long Term AI Memory

v0.1.0

An AEIF-based long-term memory hub that lets AI agents avoid repeating past bugs.

Security Scan
VirusTotal: Benign (view report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description (AEIF long-term memory for agents) match the code and instructions: the skill vectorizes queries, stores AEIF capsules in SQLite, and provides consult/commit/list commands. Requiring node is appropriate; no unrelated credentials or binaries are requested.
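The consult path amounts to embedding the query and ranking stored capsules by similarity. A minimal sketch of that ranking step, with toy vectors standing in for the real MiniLM embeddings and a plain array standing in for the SQLite store:

```javascript
// Rank stored capsules against a query embedding by cosine similarity.
// In the real skill the embeddings come from all-MiniLM-L6-v2 and the
// capsules live in SQLite; here both are stubbed with toy data.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function consult(queryVec, capsules, topK = 3) {
  return capsules
    .map(c => ({ id: c.id, score: cosine(queryVec, c.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}

// Toy 3-dimensional "embeddings" standing in for 384-dim MiniLM vectors.
const store = [
  { id: 'ssl-fix', vector: [0.9, 0.1, 0.0] },
  { id: 'eacces-fix', vector: [0.0, 0.2, 0.9] },
];
const top = consult([1, 0, 0], store, 1);
```

The top-3 cutoff mirrors the `!exp consult` behavior described later in the listing.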
Instruction Scope
SKILL.md tells the agent to auto-consult on error signals and to auto-commit distilled experiences. The implementation sends recent session history to an LLM (GeneProcessor.distill / VerificationEngine.selfReflect) and stores the resulting capsules (rawPayload) in a local database. Potentially sensitive conversation content is therefore transmitted to whichever LLM client the agent provides, and may be persisted locally. SKILL.md does not explicitly declare or highlight this data flow.
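The flow described above (session history in, LLM-distilled capsule out, persisted locally) can be sketched as follows. `llmClient.complete` and the capsule shape are illustrative assumptions, not the skill's actual interfaces, and the real client call would be asynchronous:

```javascript
// Illustrates the flagged data flow: recent session text is handed to the
// agent-supplied LLM client, and the result is persisted as a capsule.
// llmClient.complete and the capsule shape are illustrative assumptions.
function distill(sessionHistory, llmClient, store) {
  const prompt = 'Distill this session into a reusable fix:\n' +
    sessionHistory.join('\n'); // raw conversation leaves the host here
  const summary = llmClient.complete(prompt); // real client would be async
  const capsule = { createdAt: Date.now(), rawPayload: summary };
  store.push(capsule); // stands in for the SQLite write
  return capsule;
}

// Stub client that records what it receives, making the exposure visible.
const sentToLlm = [];
const stubClient = {
  complete(prompt) {
    sentToLlm.push(prompt);
    return 'FIX: use a user-writable npm prefix';
  },
};
const db = [];
```

Swapping in a real provider-backed client is exactly the point where private conversation content would leave the host.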
Install Mechanism
There is no install spec in the metadata (instruction-only), but the packaged code depends on @xenova/transformers and at runtime downloads a transformer model (all-MiniLM-L6-v2) into ~/.engram_cache. Runtime model downloads and cache-directory writes are notable (network activity and disk writes), but are expected for an offline semantic engine.
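If you want to control where that model lands, @xenova/transformers exposes a cache-directory setting; a configuration sketch (the path below is only an example):

```javascript
// Redirect the transformer model cache away from the default location.
// env.cacheDir is part of the @xenova/transformers configuration API;
// the path below is only an example.
import { env, pipeline } from '@xenova/transformers';

env.cacheDir = '/opt/engram/model-cache'; // example path under your control
const embed = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
```

The first run still performs a network download into whatever directory you point it at.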
Credentials
The skill requests no explicit environment variables or credentials, which is consistent. However, it uses process.env and the agent-provided llmClient to call external LLMs for distillation and verification, which means session content and derived capsules are sent to the agent's LLM provider and stored locally. The skill also writes a database (data/engram.db) and a cache (~/.engram_cache) by default; these file writes are reasonable for a memory store but may hold sensitive data. Notably, the seed capsules include advice that lowers security hygiene (e.g., 'git config --global http.sslVerify false').
Persistence & Privilege
The always flag is false, so the skill isn't force-enabled. It does persist data (a SQLite DB) and model cache files under the user's directories, and it spawns worker threads. Autonomous invocation is allowed (the platform default), which, combined with the auto-commit/auto-intercept rules, increases the chance that content is sent to an LLM and stored without any extra explicit user action.
What to consider before installing
  • This skill sends recent session history to an LLM (via llmClient) for distillation and verification. If your agent uses an external LLM provider, private or sensitive conversation content may be transmitted off-host.
  • Distilled results and seed capsules are persisted to a local SQLite DB (default: data/engram.db), and model files are downloaded to ~/.engram_cache. Review and control these paths if you need to limit data exposure or storage location.
  • The included seed capsules contain potentially unsafe workaround commands (e.g., disabling git SSL verification). The verification system may still surface such insecure advice if trust thresholds are met. Inspect seeds.json and consider removing or editing any risky capsules.
  • To reduce risk: disable automatic commits/interception (use the skill only manually), run the skill in an isolated environment, change the DB/cache paths to a controlled location, and review/limit the LLM client the skill is allowed to call.
  • For higher assurance, ask the author for explicit documentation of what is sent to the LLM, an option to disable remote LLM calls (a pure local-only mode), and a configuration to opt out of auto-commit/auto-intercept.
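Inspecting the seeds is straightforward to script. A sketch of pruning capsules that carry the insecure git SSL workaround; the seeds.json structure (an array of capsule objects) is an assumption, so inspect your copy first:

```javascript
// Prune seed capsules that carry the insecure git SSL workaround before
// installing. The seeds.json structure (an array of capsule objects) is
// an assumption -- inspect your copy first.
function filterSeeds(seeds, needle = 'sslVerify false') {
  return seeds.filter(capsule => !JSON.stringify(capsule).includes(needle));
}

// In-memory example; a real run would read/write seeds.json with fs.
const seeds = [
  { id: 'retry-backoff', advice: 'retry the request with exponential backoff' },
  { id: 'risky-ssl', advice: 'git config --global http.sslVerify false' },
];
const safeSeeds = filterSeeds(seeds);
```

Matching on the serialized capsule keeps the filter independent of exactly which field holds the advice text.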

Like a lobster shell, security has layers — review code before you run it.

Plugin bundle (nix)
Skill pack · CLI binary · Config
CLI help (from plugin)
npx engram --help
Usage: !exp consult <query> | !exp commit | !exp list

Runtime requirements

Bins: node
Latest: vk97dx122scen9r9y4ef1vysxn98286se
400 downloads · 0 stars · 2 versions
Updated 1mo ago
v0.1.0
MIT-0

🧬 engram — Agent Long-Term Experience Memory Hub

🤖 System Mind

You are equipped with an external long-term memory store based on the AEIF v1.0 (Agent Experience Interchange Format) protocol.

  • Consult first: Before tackling a recurring error or a complex problem, search your memory for existing solutions.
  • Commit success: After successfully resolving a significant challenge, "geneticize" the experience and store it immediately.

🎯 Automatic Trigger Rules

  • Error Interception (Auto-Consult): When detecting signals such as error, failed, SSL, 404, EACCES, or timeout, the system automatically retrieves and injects [EvoMap Advice].
  • Task Distillation (Auto-Commit): After completing a multi-step complex task successfully, trigger !exp commit to preserve the logic for future sessions.
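The auto-consult trigger amounts to scanning output for the listed error signals. A sketch of that check; the signal list comes from the rule above, but the regex is an assumed implementation, not the skill's actual matcher:

```javascript
// Scan tool/shell output for the error signals listed above and decide
// whether an automatic `!exp consult` should fire. The regex is an
// assumed implementation of the trigger rule.
const ERROR_SIGNALS = /\b(error|failed|SSL|404|EACCES|timeout)\b/i;

function shouldAutoConsult(output) {
  return ERROR_SIGNALS.test(output);
}
```

Anything matching would then be answered with an injected [EvoMap Advice] observation.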

🛠️ Command Set

  • !exp consult '<problem>': performs a semantic search for historical solutions; returns the top-3 matches.
  • !exp commit: asynchronously distills the current session history into a universal AEIF capsule.
  • !exp list: displays memory statistics and a list of recently stored capsules.
  • !exp score <id> --bad: provides negative feedback on a capsule, decreasing its TrustScore.
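The `--bad` feedback path implies some TrustScore bookkeeping. A sketch with an assumed scoring rule (the skill's actual formula is not documented here; the multiplicative penalty and 0..1 range are illustrative assumptions):

```javascript
// Apply negative feedback to a capsule's TrustScore. The multiplicative
// penalty and the 0..1 score range are illustrative assumptions; the
// skill's actual scoring formula is not documented here.
function scoreBad(capsule, penalty = 0.2) {
  const trustScore = Math.max(0, capsule.trustScore * (1 - penalty));
  return { ...capsule, trustScore };
}

const before = { id: 'ssl-fix', trustScore: 0.8 };
const after = scoreBad(before);
```

Returning a new object rather than mutating the capsule keeps the update easy to audit before it is written back to the store.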

📦 Output Specification

  • Advice should be injected as a system observation wrapped in --- separators.
  • Focus on providing actionable [PATCH], [CONFIG], or [WORKAROUND] steps.
  • Do not modify user-validated paths unless explicitly requested.
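Putting the output rules together, an injected observation could be assembled like this. The exact template is an assumption; only the --- separators and the [PATCH]/[CONFIG]/[WORKAROUND] tags come from the spec above:

```javascript
// Wrap retrieved advice as a system observation between --- separators,
// tagged with one of the action categories from the output spec. The
// exact template is an assumption; only the separators and tag names
// come from the spec.
function formatAdvice(tag, text) {
  const allowed = ['PATCH', 'CONFIG', 'WORKAROUND'];
  if (!allowed.includes(tag)) throw new Error(`unknown tag: ${tag}`);
  return ['---', `[EvoMap Advice] [${tag}] ${text}`, '---'].join('\n');
}

const observation = formatAdvice('CONFIG', 'Set npm prefix to a user-writable directory.');
```

Rejecting unknown tags keeps the injected advice limited to the three actionable categories.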
