Openclaw Memory
v2.1.0 · Agent memory with ALMA meta-learning, LLM fact extraction, and full-text search. Observer calls remote LLM APIs (OpenAI/Anthropic/Gemini). ALMA and Indexer w...
by Artale (@arosstale)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw · Verdict: Suspicious (high confidence)

Purpose & Capability
Name/description (ALMA, Observer, Indexer) match the included code: ALMA evolves designs offline, Indexer reads workspace Markdown, and Observer calls remote LLM APIs. The code and README are consistent with the stated purpose.
Instruction Scope
SKILL.md and the code limit scope to: (a) an offline ALMA and Indexer that use an in‑memory DB and workspace Markdown files, and (b) an Observer that sends sanitized conversation content to OpenAI/Anthropic/Gemini endpoints. The Observer will transmit conversation content to external LLMs (expected for extraction) — this is functionally appropriate but has privacy/exfiltration implications that users should consider.
Install Mechanism
No install spec in registry (instruction-only skill). package.json and source are present, but there are no runtime dependencies and no external download/install steps that would write arbitrary code to disk beyond a normal npm install. This is low risk from an installer perspective.
Credentials
SKILL.md and observer.ts require an LLM API key (OPENAI_API_KEY, ANTHROPIC_API_KEY, or apiKey in config), but the registry metadata lists no required environment variables or primary credential. The Observer legitimately needs an API key, so the metadata omission is both an incoherence and a risk: users may not be warned that conversation data can be sent to third‑party LLMs and that keys are required.
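The key-resolution order described above (an explicit apiKey in config, or one of the environment variables) could be sketched roughly as follows. The config shape and function name are illustrative assumptions, not the skill's actual API:

```typescript
// Hypothetical sketch of Observer API-key resolution; the real skill's
// config shape may differ. Assumed precedence: explicit config > env vars.
interface ObserverConfig {
  apiKey?: string; // explicit key passed in config (assumed field name)
  provider?: "openai" | "anthropic" | "gemini";
}

function resolveApiKey(
  config: ObserverConfig,
  env: Record<string, string | undefined>,
): string {
  // An explicit key in config wins over environment variables.
  if (config.apiKey) return config.apiKey;
  const key = env.OPENAI_API_KEY ?? env.ANTHROPIC_API_KEY;
  if (!key) {
    // Fail loudly: the registry metadata does not declare these vars,
    // so a missing key is easy to hit on first run.
    throw new Error(
      "Observer requires OPENAI_API_KEY, ANTHROPIC_API_KEY, or config.apiKey",
    );
  }
  return key;
}
```

In real use you would pass process.env as the second argument; keeping it a parameter makes the lookup order easy to verify in isolation.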
Persistence & Privilege
The manifest's always flag is false; the skill does not request permanent platform presence or modify other skills or config. The Indexer reads workspace files but uses an in‑memory DB and does not write persistent system‑wide config. No elevated privileges are requested.
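The workspace Markdown paths the Indexer is documented to read (MEMORY.md, memory/YYYY-MM-DD.md, bank/entities/*.md, bank/opinions.md) can be matched with a predicate like this. The path list comes from the skill's documentation; the matching logic itself is an illustrative sketch, not the Indexer's actual globbing code:

```typescript
// Illustrative matcher for the Markdown paths the Indexer is documented
// to read; the skill's actual glob handling may differ.
const INDEXED_PATTERNS: RegExp[] = [
  /^MEMORY\.md$/,
  /^memory\/\d{4}-\d{2}-\d{2}\.md$/, // daily notes: memory/YYYY-MM-DD.md
  /^bank\/entities\/[^/]+\.md$/,     // one file per entity
  /^bank\/opinions\.md$/,
];

function isIndexedPath(relPath: string): boolean {
  return INDEXED_PATTERNS.some((p) => p.test(relPath));
}
```

A predicate like this is a quick way to audit which files in a workspace would fall inside the Indexer's scope before enabling the skill.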
Scan Findings in Context
[pre-scan-injection-signals-none] expected: Static pre-scan reported no hidden Unicode/prompt-injection characters. The package includes a scanner module (scanner.ts) that detects such characters — inclusion of that scanner is expected and reasonable.
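A detector for hidden Unicode characters of the kind scanner.ts reportedly checks for might look like the following. The character ranges and function name are assumptions for illustration, not the module's actual implementation:

```typescript
// Sketch of a hidden-Unicode detector (assumed behavior of scanner.ts;
// the real module may check a different character set).
// Covers zero-width chars, bidi overrides, word joiners, and BOM.
const HIDDEN_CHARS = /[\u200B-\u200F\u202A-\u202E\u2060-\u2064\uFEFF]/g;

interface Finding {
  index: number;     // position in the input string
  codePoint: string; // e.g. "U+200B"
}

function findHiddenUnicode(text: string): Finding[] {
  const findings: Finding[] = [];
  for (const match of text.matchAll(HIDDEN_CHARS)) {
    findings.push({
      index: match.index!,
      codePoint:
        "U+" +
        match[0].codePointAt(0)!.toString(16).toUpperCase().padStart(4, "0"),
    });
  }
  return findings;
}
```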
[llm-api-calls] expected: observer.ts issues POSTs to openai.com, api.anthropic.com, and generativelanguage.googleapis.com. This matches the Observer's claimed purpose (extracting facts via remote LLMs).
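The cross-provider POSTs the scan flagged could be constructed along these lines. The endpoint hosts match those named in the finding; the extraction prompt, model names, and function shape are illustrative assumptions, not observer.ts's actual code:

```typescript
// Illustrative request builder for the three providers observer.ts
// contacts. Real prompts and model choices in the skill may differ.
type Provider = "openai" | "anthropic" | "gemini";

interface LlmRequest {
  url: string;
  headers: Record<string, string>;
  body: unknown;
}

function buildExtractionRequest(
  provider: Provider,
  apiKey: string,
  conversation: string,
): LlmRequest {
  // Note: the conversation text leaves the machine verbatim in the body.
  const prompt = "Extract durable facts from this conversation:\n" + conversation;
  switch (provider) {
    case "openai":
      return {
        url: "https://api.openai.com/v1/chat/completions",
        headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
        body: { model: "gpt-4o-mini", messages: [{ role: "user", content: prompt }] },
      };
    case "anthropic":
      return {
        url: "https://api.anthropic.com/v1/messages",
        headers: { "x-api-key": apiKey, "anthropic-version": "2023-06-01", "Content-Type": "application/json" },
        body: { model: "claude-3-5-haiku-latest", max_tokens: 1024, messages: [{ role: "user", content: prompt }] },
      };
    case "gemini":
      // Gemini's REST API passes the key as a query parameter.
      return {
        url: `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${apiKey}`,
        headers: { "Content-Type": "application/json" },
        body: { contents: [{ parts: [{ text: prompt }] }] },
      };
  }
}
```

The returned object would then be sent with fetch; keeping request construction pure makes it easy to audit exactly what payload each provider receives.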
What to consider before installing
- The Observer component will send conversation text to third-party LLM endpoints (OpenAI, Anthropic, Gemini) and therefore may expose sensitive or private conversation content. Only enable it if you trust the target LLM and are comfortable with that data flow.
- You must supply an API key (OPENAI_API_KEY / ANTHROPIC_API_KEY or pass apiKey in config) for the Observer to work. The registry metadata does NOT declare these required env vars — do not assume the skill is fully offline.
- If you don't want networked extraction, you can still use ALMA and the Indexer offline; disable or avoid instantiating ObserverAgent.
- Prefer giving a least-privileged key (limited scope or separate account) if possible, and audit any logs at your LLM provider for unexpected requests.
- The Indexer reads specific workspace Markdown paths (MEMORY.md, memory/YYYY-MM-DD.md, bank/entities/*.md, bank/opinions.md). Ensure those files don't contain secrets you don't want included in search or sent to the Observer.
- The codebase is small and matches its README, but the metadata mismatch about required credentials is a real coherence issue — consider asking the publisher to update the skill manifest to declare the required env vars before installing.

Like a lobster shell, security has layers — review code before you run it.
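One way to keep the skill fully offline, per the advice above, is to gate Observer construction on an explicit opt-in. The flag name below is a suggestion for a wrapper on the user's side, not something the skill itself defines:

```typescript
// Hypothetical opt-in gate: only start the networked Observer when the
// user explicitly enables it. ALMA and the Indexer stay usable offline.
interface MemoryConfig {
  enableObserver?: boolean; // suggested flag; not defined by the skill itself
  apiKey?: string;
}

function shouldStartObserver(
  config: MemoryConfig,
  env: Record<string, string | undefined>,
): boolean {
  // Require BOTH an explicit opt-in and a usable key; never start the
  // networked component just because a key happens to be in the environment.
  const hasKey = Boolean(
    config.apiKey ?? env.OPENAI_API_KEY ?? env.ANTHROPIC_API_KEY,
  );
  return config.enableObserver === true && hasKey;
}
```

Defaulting to off means a stray OPENAI_API_KEY in the environment cannot silently turn on network extraction.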
latest: vk973y7p1m2kyn3mb3hk6x38zhx81vt4r
