Engram Evomap
v0.1.1
The AEIF-based long-term memory hub for AI agents to prevent repeating bugs.
Security Scan
Scanner: OpenClaw
Verdict: Suspicious (medium confidence)

Purpose & Capability
The code, dependencies, and binaries line up with the declared purpose: a Node-based local embedding pipeline (@xenova/transformers), a SQLite-backed capsule store (better-sqlite3), worker threads for embedding, and APIs to consult, commit, and list experiences. The requested runtime (node) and the included packages are proportionate to a local semantic memory hub.
Instruction Scope
SKILL.md instructs the agent to auto-consult on error signals and to auto-commit distilled session experiences. The implementation will (a) scan runtime content for error signals and inject system advice, and (b) distill and persist session history into a local DB. Those instructions grant the skill broad discretion to read and store conversational context (including recent session content) without additional explicit user prompts or consent.
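The auto-consult trigger described above amounts to a signal scan over recent session text. A minimal sketch of the idea (the regexes and function name are illustrative assumptions, not the skill's actual implementation):

```javascript
// Hypothetical sketch of an error-signal scanner like the one SKILL.md
// describes: watch runtime content for error-like patterns and, on a hit,
// consult the capsule store for relevant past experiences.
const ERROR_SIGNALS = [
  /\b(?:Type|Reference|Syntax)Error\b/,    // common JS exception names
  /\bTraceback \(most recent call last\)/, // Python stack trace header
  /\b(?:ENOENT|EACCES|ECONNREFUSED)\b/,    // Node/libc error codes
];

function hasErrorSignal(text) {
  return ERROR_SIGNALS.some((re) => re.test(text));
}

console.log(hasErrorSignal("TypeError: x is not a function")); // true
console.log(hasErrorSignal("All tests passed"));               // false
```

Note that any pattern list like this runs against whatever conversational context the agent holds, which is what gives the skill its broad read access.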
Install Mechanism
There is no platform install spec, but package.json includes standard npm dependencies (including native better-sqlite3 and @xenova/transformers). The embedding code downloads model artifacts at runtime via the transformers pipeline (network model fetch). No arbitrary personal servers or URL shorteners are used. Model download and native module installation are moderate risk operations and require network and build tool availability.
Credentials
The skill declares no required env vars or credentials (good). It does read process.env.NODE_ENV and process.platform for environment fingerprinting and writes to a default DB path (data/engram.db) and a cache dir under the user home (~/.engram_cache). While it does not request secrets, it will capture and persist session content (via distillation) and thus can inadvertently store sensitive data from conversations or context passed to its llmClient. The code also mutates the imported transformers' env object (env.allowLocalModels = false), which is inconsistent with its comment and worth reviewing.
Persistence & Privilege
The skill persists data to disk (data/engram.db and ~/.engram_cache) and stores full capsule JSON (rawPayload) and session-derived embeddings. It supports asynchronous auto-commit of session history. Although 'always' is false, autonomous invocation is allowed and could cause the agent to store conversation content without explicit per-commit consent. Seed capsules include insecure workaround commands (e.g., disabling git SSL verification), and the hybrid boosting + relaxed thresholds could surface those insecure suggestions as actionable advice.
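One mitigation for the boosting concern above is to gate injection on a minimum trust score, so that retrieval rank alone cannot surface an insecure workaround. A hedged sketch (the capsule shape, `trustScore` field, and threshold are assumptions, not the skill's actual schema):

```javascript
// Hypothetical capsule records; `trustScore` and `advice` mirror the
// review's description, but the real schema may differ.
const capsules = [
  { advice: "git config http.sslVerify false", trustScore: 0.2 },
  { advice: "pin better-sqlite3 to a prebuilt version", trustScore: 0.9 },
];

// Only surface capsules at or above a minimum trust score, regardless of
// how much hybrid boosting raised their retrieval rank.
function injectable(capsules, minTrust = 0.7) {
  return capsules.filter((c) => c.trustScore >= minTrust);
}

console.log(injectable(capsules).map((c) => c.advice));
```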
What to consider before installing
This skill appears to implement a local AEIF memory hub and mostly matches its description, but it carries several practical risks you should weigh before installing:
- Privacy: It will distill and store session history (rawPayload) in data/engram.db and a cache under your home directory. Any sensitive content present in session history could be persisted. If you plan to use it, review or change the storage paths and back up/secure the DB.
- Auto-commit / Auto-intercept: The skill auto-consults on error signals and can auto-commit distilled experiences. If you don't want the agent to persist conversations automatically, disable or modify the auto-commit behavior (or avoid invoking !exp commit). Prefer running it with manual commit only.
- Insecure suggestions: Seed capsules include explicit insecure workarounds (e.g., git http.sslVerify false). Although the code has a verification engine, early drafts are stored with a default trustScore, and the boosting mechanism may still surface unsafe commands. Audit the seed data and capsule contents before enabling automatic injection.
- Network & model downloads: The embedding worker downloads models via @xenova/transformers at runtime. If you require offline-only operation, ensure models are available locally and verify the transformer's configuration (the code sets env.allowLocalModels = false despite a comment that suggests the opposite).
- External LLM usage: The distillation and verification steps call an llmClient (injected at runtime). If that client is configured to use a remote API, session history will be sent to the external LLM. Confirm where llmClient runs and whether it transmits data remotely.
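For the offline-only scenario in the model-download bullet, Transformers.js exposes `env` flags for local model loading. A configuration sketch (verify against the library version you install; note the skill currently sets allowLocalModels to false):

```javascript
// Config fragment for @xenova/transformers (Transformers.js v2 API):
// force model loading from a local directory instead of fetching from
// the Hugging Face Hub at runtime.
import { env, pipeline } from "@xenova/transformers";

env.allowLocalModels = true;     // the skill sets this to false today
env.allowRemoteModels = false;   // fail fast instead of downloading
env.localModelPath = "./models"; // directory with pre-fetched model files

const embed = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
```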
Recommendations before installing:
- Audit and sanitize data/seeds (data/seeds/seeds.json) to remove any insecure commands you don't want suggested.
- Run the package in an isolated environment or container first, and inspect the DB contents after commits.
- Disable or sandbox automatic commit/interception behavior until you are comfortable with what gets stored and suggested.
- If you use a remote llmClient, treat the distillation pipeline as a potential data exfiltration vector and avoid sending sensitive session content to an external API.
If you want, I can point out the exact lines/files that implement storage, auto-commit, model download, and the seeds that contain insecure commands so you can review or patch them.
Plugin bundle (nix)
Skill pack · CLI binary · Config

CLI help (from plugin)
npx engram --help
Usage: !exp consult <query> | !exp commit | !exp list

latest · vk97384k21y7q277am3s2yvc80s829n8z
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
Runtime requirements
Bins: node
