Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Memory Optimization

v1.0.4

Comprehensive memory management optimization for AI agents. Use when: (1) Agent experiences context compression amnesia, (2) Need to rebuild context quickly...

Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The name and description (memory optimization, KG, TL;DRs, daily cleanup) align with the included scripts and docs (memory_ontology.py, kg_extractor.py, consolidation/decay engines). However, the registry metadata claims 'instruction-only / no required env vars', while the repo contains a full CLI toolset that clearly expects environment configuration (model/API endpoints, KG_DIR, etc.). That mismatch is unexpected and should be justified by the maintainer.
Instruction Scope
SKILL.md explicitly instructs agents to read local files at session start (SOUL.md, USER.md, memory/YYYY-MM-DD.md, MEMORY.md) and to run scripts that process agent session logs (kg_extractor.py --agents-dir agents/). Reading these files is plausible for a memory system, but they can contain sensitive identity data, user preferences, or other agents' data. The instructions also reference LLM embedding/model settings (OPENAI_MODEL / OPENAI_BASE_URL) and scripts that can batch-process directories. Broad file access and batch processing of agents/ are within scope for a KG tool, but they increase risk if run without inspection.
Install Mechanism
There is no external install step (no network download spec). The repository includes many code files (scripts/*.py, shell scripts, tests) bundled with the skill. That means code will be present and executable in the user's workspace when installed — review the code before executing. No direct remote install was specified (good), but included scripts may themselves call external network APIs at runtime.
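Whether the bundled scripts call external network APIs at runtime can be checked before any of them are executed. The sketch below is one heuristic way to do that: it greps the repo's Python files for common network-I/O libraries. The pattern list is an assumption (not exhaustive), and the demo writes a synthetic stand-in file rather than scanning the actual skill repo.

```python
import re
import tempfile
from pathlib import Path

# Heuristic: library names that commonly indicate network I/O in Python code.
# This list is an assumption and will not catch every mechanism.
NETWORK_PATTERNS = re.compile(
    r"\b(requests|urllib|httpx|aiohttp|socket|http\.client)\b"
)

def find_network_calls(repo_dir: Path) -> dict:
    """Return {relative_path: [line_numbers]} for lines that look like network I/O."""
    hits = {}
    for path in repo_dir.rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if NETWORK_PATTERNS.search(line):
                hits.setdefault(str(path.relative_to(repo_dir)), []).append(lineno)
    return hits

# Demo on a synthetic file standing in for the skill's bundled scripts.
with tempfile.TemporaryDirectory() as tmp:
    repo = Path(tmp)
    (repo / "llm_client.py").write_text(
        "import requests\n"
        "def complete(prompt):\n"
        "    return requests.post('https://example.invalid/v1', json={'p': prompt})\n"
    )
    print(find_network_calls(repo))
```

Point `find_network_calls` at the installed skill directory to get a line-by-line list of places worth reading before first run.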
Credentials
Registry metadata lists no required environment variables, yet multiple docs and scripts refer to environment configuration (OPENAI_MODEL, OPENAI_BASE_URL, KG_DIR, and an option to pass an API key). The code includes an LLM client and embedding usage (entity deduplication, preference engine). The declaration mismatch is problematic: the skill likely needs API keys and endpoints to function and may attempt network calls using those values. The CHANGELOG explicitly notes a prior 'CSO Audit' with '1 HIGH API key exposure' and '1 HIGH prompt injection risk', which suggests past or present sensitive handling of credentials and prompts.
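The actual (undeclared) environment requirements can be enumerated mechanically. A minimal sketch, assuming the scripts read configuration via `os.environ`/`os.getenv`: it collects every env-var name a file reads and flags any string shaped like an OpenAI-style key. Both regexes are heuristics, and the demo file is synthetic.

```python
import re
import tempfile
from pathlib import Path

# Heuristic patterns: env-var reads, and strings shaped like OpenAI-style keys.
ENV_READ = re.compile(r"os\.(environ|getenv)\s*[\[\(]\s*['\"]([A-Z_]+)['\"]")
KEY_LIKE = re.compile(r"sk-[A-Za-z0-9]{20,}")

def audit_file(path: Path) -> dict:
    """Report env vars read and hard-coded key-like strings in one file."""
    text = path.read_text()
    return {
        "env_vars": sorted({m.group(2) for m in ENV_READ.finditer(text)}),
        "hardcoded_keys": KEY_LIKE.findall(text),
    }

# Demo on a synthetic config file; run this over the real repo instead.
with tempfile.TemporaryDirectory() as tmp:
    sample = Path(tmp) / "config.py"
    sample.write_text(
        "import os\n"
        "MODEL = os.environ['OPENAI_MODEL']\n"
        "BASE = os.getenv('OPENAI_BASE_URL')\n"
    )
    print(audit_file(sample))
```

Comparing the `env_vars` list against the registry's declared requirements makes the metadata mismatch concrete; a non-empty `hardcoded_keys` list is grounds to reject the release outright.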
Persistence & Privilege
The skill is not marked always:true and does not request special platform privileges in metadata. It does suggest creating/using shared KG files in ~/.openclaw/shared-kg and linking graph.jsonl, which means it expects persistent storage access in the user's home directory — reasonable for a memory/graph tool but be aware of the persistent file paths referenced.
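Before installing, it is worth probing which of those persistent paths already exist on the machine, so any files the skill later creates or links there are attributable to it. A small sketch, assuming the layout named in the docs (`~/.openclaw/shared-kg` and `graph.jsonl`):

```python
from pathlib import Path

# Persistent paths the skill's docs reference (assumed layout).
REFERENCED_PATHS = [
    Path.home() / ".openclaw" / "shared-kg",
    Path.home() / ".openclaw" / "shared-kg" / "graph.jsonl",
]

# Record which of these exist before the skill ever runs.
report = {str(p): p.exists() for p in REFERENCED_PATHS}
for path, exists in report.items():
    print(("exists " if exists else "absent ") + path)
```

Re-running the probe after a sandboxed trial shows exactly what the skill wrote to the home directory.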
What to consider before installing
This package implements a full memory/knowledge-graph toolset (many Python scripts and shell scripts), but the registry metadata understates its runtime needs. Before installing or running it:

1) Inspect scripts/utils/llm_client.py and any code that performs network I/O to identify which environment variables or API keys are actually required; do not supply real credentials until you have reviewed it.

2) Search the repo for hard-coded keys, endpoints, or upload/exfiltration endpoints. The CHANGELOG explicitly mentions a prior security audit with HIGH-severity findings (API key exposure, prompt injection); treat that as a warning and ask the author for remediation or a clean release.

3) Run the code in a sandboxed environment (isolated VM or container) without sensitive files mounted; do not point it at ~/.openclaw, agents/, or other directories with secrets until you are confident.

4) If you plan to use embedding/LLM features, create a limited-scope API key (minimal privileges, cost limits) and rotate it after testing.

5) If you need this functionality but cannot audit the code yourself, prefer an alternative with clearer metadata and declared env requirements, or ask the maintainer to declare required env vars, document network endpoints, remove hard-coded secrets, and provide a security-fix release.
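The sandboxing advice can be partially approximated in-process when a full VM is unavailable: launch any of the skill's scripts through a subprocess with all OPENAI_* variables stripped and HOME pointed at a throwaway directory, so nothing can read or write ~/.openclaw. A POSIX-oriented sketch (on Windows, USERPROFILE would also need overriding); the probe command is a stand-in, not one of the skill's actual scripts:

```python
import os
import subprocess
import sys
import tempfile

# Build a minimal environment: no OPENAI_* variables, and HOME pointed at a
# throwaway directory so nothing can touch the real ~/.openclaw.
with tempfile.TemporaryDirectory() as fake_home:
    clean_env = {
        k: v for k, v in os.environ.items()
        if not k.startswith("OPENAI_") and k != "HOME"
    }
    clean_env["HOME"] = fake_home

    # Stand-in for one of the skill's scripts: it reports what it can see.
    probe = (
        "import os; "
        "print(os.environ.get('OPENAI_API_KEY')); "
        "print(os.environ['HOME'])"
    )
    result = subprocess.run(
        [sys.executable, "-c", probe],
        env=clean_env, capture_output=True, text=True, check=True,
    )
    print(result.stdout)
```

This is a containment aid, not a substitute for a real container or VM: the child process still shares the filesystem outside HOME, so keep sensitive directories out of its working tree.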

Like a lobster shell, security has layers — review code before you run it.

Tags: context · knowledge-graph · latest · memory · optimization · productivity

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
