Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Memoria
v3.34.0 · The most advanced memory system for AI agents: 24 cognitive layers, knowledge graph, procedural learning, dialectic queries, AI self-observation, auto skill...
⭐ 2 · 250 · 0 current · 0 all-time
by @Nitix_ @nieto42
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
Medium confidence

Purpose & Capability
The name/description (local-first, multi-layer memory) aligns with the code and the declared providers: SQLite, Ollama, and optional remote fallbacks (OpenAI/Anthropic/OpenRouter). However, the feature list includes 'auto skill creation' and continuous real-time capture, which are more powerful than a passive memory store. These are plausible for an advanced memory system, but they materially expand what the plugin can do (create/modify files, auto-configure OpenClaw). Review auto-skill.ts and any code that writes into the extensions/plugins directories.
Instruction Scope
SKILL.md and INSTALL.md instruct the operator to run a remote install script (curl | bash), and the plugin registers hooks that read conversation content and explicit workspace files (USER.md, COMPANY.md, projects/*), write memoria.db and optional markdown outputs, and migrate older DBs (cortex.db, facts.json). Those actions are consistent with the stated purpose, but the install instructions and hooks give the plugin broad file access inside the OpenClaw workspace and can auto-configure openclaw.json. The README/SKILL.md warns about this, but the scope is significant and should be inspected.
Install Mechanism
INSTALL.md includes a curl -fsSL raw.githubusercontent.com ... | bash installer flow. raw.githubusercontent.com is a common host, but piping a remote script to bash is high-risk: it runs arbitrary commands with the user's privileges. The rest of the install (pulling large local models via Ollama, cloning the repo, npm install) is expected for a local-LLM-first plugin, but the remote-script install path and its auto-configuration behavior raise the risk and deserve manual review before use.
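A safer pattern than piping the remote installer straight into bash is fetch, inspect, then run. The sketch below builds a stand-in installer locally so the inspection step is demonstrable; substitute the real URL from INSTALL.md and the real install.sh when applying this.

```shell
# Real flow: download instead of piping (URL is the one given in INSTALL.md):
#   curl -fsSL https://raw.githubusercontent.com/<owner>/memoria/main/install.sh -o install.sh
# Stand-in installer, for illustration only:
cat > install.sh <<'EOF'
#!/bin/sh
echo "configuring OpenClaw..."
printf '{"plugins":["memoria"]}' > openclaw.json
EOF

# Surface config writes and privilege escalation before executing:
grep -nE 'openclaw\.json|\.openclaw|sudo|chmod|curl|wget' install.sh

# Read it end-to-end (less install.sh), and only then:
#   bash install.sh
```

The grep is a quick triage, not a substitute for reading the whole script; an installer can trivially obfuscate writes past a pattern match.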
Credentials
Declared environment variables are optional and proportional: OPENAI_API_KEY and OPENROUTER_API_KEY as fallbacks make sense given the plugin's support for remote providers. OPENCLAW_WORKSPACE is runtime-provided. The plugin also reads files in the declared workspace (USER.md, COMPANY.md, projects/*) — that is justified by the 'identity-aware' features and is documented in SKILL.md/Security.md.
Persistence & Privilege
The skill does not request always:true, and uses standard plugin hooks (before_prompt_build, after_tool_call, agent_end, after_compaction). It writes a persistent DB (memoria.db) into the workspace and can auto-migrate older memory files. The 'auto skill creation' capability and the install script's auto-configure step mean the plugin may modify OpenClaw configuration and create files — expected for the stated functionality but worth auditing before granting the plugin persistent presence and autonomous invocation.
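The hook and persistence surface described above can be audited mechanically before granting the plugin persistent presence. This sketch greps a stand-in source tree (created here for illustration); run the same greps against the real Memoria checkout, whose file names and layout are assumptions.

```shell
# Stand-in plugin source, for illustration only:
mkdir -p memoria-src
cat > memoria-src/hooks.ts <<'EOF'
register("before_prompt_build", recall);
register("after_tool_call", capture);
db.open("memoria.db");
EOF

# Which lifecycle hooks does it register?
grep -rnE 'before_prompt_build|after_tool_call|agent_end|after_compaction' memoria-src/

# What does it persist, migrate, or reconfigure?
grep -rnE 'memoria\.db|cortex\.db|facts\.json|openclaw\.json' memoria-src/
```

Any hit outside the workspace paths the scan lists (for example writes under ~/.openclaw unrelated to registering this plugin) is worth reading in full.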
Scan Findings in Context
[pre-scan:no-findings] expected: The static pre-scan reported no injection signals. That does not eliminate risk: the package includes an install.sh referenced in INSTALL.md (download via raw.githubusercontent.com | bash) and runtime hooks that process conversation content and workspace files — manual review of those scripts and auto-configure logic is still recommended.
What to consider before installing
This plugin appears to implement a sophisticated local-first memory system, and most of its dependencies (SQLite, Ollama, optional cloud fallbacks) match that purpose. However:

1. Do not run the curl | bash installer without inspecting it first: download the installer and read it line by line before execution.
2. Inspect auto-skill.ts and any code that writes to ~/.openclaw or modifies openclaw.json to confirm it only adds the plugin and does not alter unrelated plugins or system-wide configs.
3. Back up your OpenClaw workspace (memoria.db, cortex.db, facts.json, and openclaw.json) before enabling migration or automatic configuration.
4. If you are sensitive about data leakage, avoid supplying remote API keys (OPENAI_API_KEY, OPENROUTER_API_KEY) and prefer local models (Ollama/LM Studio).
5. Enable the plugin in a test or isolated environment first (a container or VM) to validate behavior and resource usage (model pulls, disk writes, DB migration) before deploying in production.

Like a lobster shell, security has layers — review code before you run it.
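The backup step above can be sketched as follows. A throwaway directory stands in for the real workspace here; point WORKSPACE at your actual OpenClaw workspace (and pick a backup destination outside it) when doing this for real.

```shell
# Throwaway workspace with two of the files Memoria may migrate or rewrite:
WORKSPACE="$(mktemp -d)"
printf '{}' > "$WORKSPACE/openclaw.json"
printf '{}' > "$WORKSPACE/facts.json"

BACKUP="$WORKSPACE/pre-memoria-backup.tar.gz"
(
  cd "$WORKSPACE"
  set --                               # collect only the files that exist
  for f in memoria.db cortex.db facts.json openclaw.json; do
    [ -f "$f" ] && set -- "$@" "$f"
  done
  tar czf "$BACKUP" "$@"
)

# Confirm the archive before letting migration touch the originals:
tar tzf "$BACKUP"
```

Guarding with `[ -f ... ]` matters because memoria.db will not exist before the first run and cortex.db only exists on machines with the older format.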
Versions: latest → vk97egjdjy0670r0rzwm1bthfa9841sjz · local, memory, ollama, plugin, sqlite → vk979qkqy8ktknq4p34f37gqdzn83nn0f
Runtime requirements
Environment variables
OPENAI_API_KEY (optional): used as a fallback for LLM extraction and embeddings when local models are unavailable. Never required for default operation.
OPENROUTER_API_KEY (optional): used as a fallback remote LLM provider. Never required for default operation.
OPENCLAW_WORKSPACE (optional): auto-set by the OpenClaw runtime; workspace path for memory files. Do not set manually.
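Since both remote keys are optional, a local-only deployment can simply leave them unset, which closes off the cloud-fallback path entirely. A minimal sketch (variable names are the ones declared above; OPENCLAW_WORKSPACE is set by the runtime and is deliberately not touched):

```shell
# Ensure no remote fallback is configured before launching:
unset OPENAI_API_KEY OPENROUTER_API_KEY

for v in OPENAI_API_KEY OPENROUTER_API_KEY; do
  eval "val=\${$v:-}"
  if [ -n "$val" ]; then
    echo "WARNING: $v is set; remote fallback possible"
  else
    echo "$v unset: local-only"
  fi
done
```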