ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Agent Memory System
v1.0.0 · Agent memory system design assistant. Builds long-term, short-term, and episodic memory architectures. Trigger words: 记忆 (memory), memory, 上下文管理 (context management), 上下文窗口 (context window).
by @sky-lv
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Capability signals
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
OpenClaw
Suspicious
Medium confidence

Purpose & Capability
The name and description (agent memory design assistant) align with the provided implementation guidance: SQLite for long-term storage, a vector store (Chroma) for embeddings, short-term maps, and calls to an embeddings API. However, the SKILL.md contains concrete implementation code that implies runtime dependencies (Node runtime, SQLite bindings, Chroma client, crypto, fetch) and use of an OPENAI_API_KEY, none of which are declared in the skill metadata. This mismatch between declared requirements (none) and implied runtime needs is noteworthy.
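The three tiers the scan describes can be sketched as follows. This is a minimal, dependency-free illustration, not the skill's actual code: the array stands in for the SQLite table, and the queue stands in for the embeddings/Chroma pipeline; all names are hypothetical.

```javascript
// Hypothetical sketch of the described architecture: a volatile short-term
// map, a long-term store (SQLite in the skill; a plain array here so the
// sketch runs without native bindings), and a queue of high-importance
// content destined for the embeddings API and vector store.
class MemorySketch {
  constructor(embedThreshold = 0.8) {
    this.shortTerm = new Map(); // volatile, per-session context
    this.longTerm = [];         // stands in for the SQLite table
    this.toEmbed = [];          // ids whose content would leave the machine
    this.embedThreshold = embedThreshold;
  }

  remember(id, content, importance) {
    this.shortTerm.set(id, content);
    this.longTerm.push({ id, content, importance });
    // Only high-importance content is sent to the external embedding
    // endpoint; this is the boundary worth auditing for sensitive data.
    if (importance >= this.embedThreshold) this.toEmbed.push(id);
  }
}
```

Note how the `toEmbed` boundary makes explicit which data is persisted locally versus sent to an external service, which is the distinction the scan flags below.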
Instruction Scope
The instructions include code that calls an external embeddings API (https://api.openai.com/v1/embeddings) using process.env.OPENAI_API_KEY, persists data to a SQLite DB and a local Chroma store, deletes vector entries, and constructs SQL queries by interpolating user-provided text (content LIKE '%${k}%') — which is vulnerable to SQL injection if executed as-is. The SKILL.md does not limit or sanitize what data gets stored; it instructs storing arbitrary 'content' and metadata, which could lead to sensitive data being persisted or sent to the external embedding endpoint.
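The injection risk above can be made concrete. This sketch contrasts the flagged interpolation pattern with a parameterized alternative; the function names and table name are illustrative, not taken from the skill's source.

```javascript
// Flagged pattern: user text is interpolated directly into the SQL string,
// so a crafted keyword can close the LIKE literal and inject a new clause.
function unsafeQuery(keyword) {
  return `SELECT content FROM memories WHERE content LIKE '%${keyword}%'`;
}
// e.g. unsafeQuery("%' OR '1'='1") breaks out of the quoted literal.

// Parameterized alternative: the SQL structure is fixed and the driver binds
// the value; wildcard wrapping happens in the bound parameter, not the SQL.
function safeQuery(keyword) {
  return {
    sql: 'SELECT content FROM memories WHERE content LIKE ?',
    params: [`%${keyword}%`],
  };
}
```

With any SQLite client that supports placeholders, the `sql`/`params` pair is passed to the driver's prepare/bind API rather than concatenated.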
Install Mechanism
This is an instruction-only skill with no install spec and no code files executed by the platform. That minimizes direct install risk. However, the guidance implies installing/using packages (SQLite client, Chroma client) if a developer implements it — those steps are not provided here.
Credentials
The implementation explicitly uses process.env.OPENAI_API_KEY but the skill declares no required environment variables. Requesting an API key would be proportionate for embeddings, but the omission in metadata is an inconsistency and a potential surprise to users. The skill would also require filesystem write access (SQLite DB and './chroma') — that access is not declared.
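A safer handling of the undeclared key is to fail fast when it is missing and to build the outbound request explicitly, so it is obvious what leaves the machine. The endpoint and payload shape follow the OpenAI embeddings API; the function name and model choice are illustrative assumptions.

```javascript
// Hypothetical sketch: refuse to run without the key, and separate request
// construction from sending so the payload can be audited before any call.
function buildEmbeddingRequest(content, apiKey = process.env.OPENAI_API_KEY) {
  if (!apiKey) {
    throw new Error('OPENAI_API_KEY is not set; refusing to call the embeddings API');
  }
  return {
    url: 'https://api.openai.com/v1/embeddings',
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      // `content` is sent off-machine: audit it for secrets/PII first.
      body: JSON.stringify({ model: 'text-embedding-3-small', input: content }),
    },
  };
}

// Usage: const { url, options } = buildEmbeddingRequest(text);
//        const res = await fetch(url, options);
```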
Persistence & Privilege
The skill is not always-enabled and is user-invocable, and does not request elevated platform privileges. The described behavior stores data to a local DB and vector store (its own data), but does not attempt to modify other skills or global settings.
What to consider before installing
This skill provides a coherent design and sample code for an agent memory system, but there are several red flags you should consider before using it:
- Missing credential declaration: The code calls the OpenAI embeddings endpoint using process.env.OPENAI_API_KEY, yet the skill metadata does not declare any required environment variables. If you supply an API key, the skill's instructions will send content (potentially sensitive) to OpenAI.
- Data persistence and exfiltration risk: The design persists arbitrary 'content' to a SQLite DB and a local Chroma vector store, and will send high-importance content to the external embeddings API. Review what you store and avoid putting secrets (passwords, tokens, PII) into memories.
- Insecure SQL: The sample code builds SQL with string interpolation (content LIKE '%${k}%'), which is vulnerable to SQL injection. If you implement this, use parameterized queries or proper sanitization.
- Missing dependency/runtime notes: The SKILL.md assumes a Node-like environment and libraries (SQLite client, Chroma client, fetch, crypto). The skill metadata doesn't list these — confirm the runtime and add explicit dependency/installation steps before implementation.
Recommendations: only use or implement this guidance if you (or a developer you trust) will:
1) Explicitly declare and protect the OPENAI_API_KEY, and never store secrets in the memory store.
2) Replace string-interpolated SQL with parameterized queries.
3) Add clear dependency and install instructions.
4) Audit what data will be persisted and sent to external services.
5) Verify the linked repository (skill.json references a GitHub repo) to inspect the full source and history.
If you cannot verify these items, treat the skill as potentially unsafe.
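Recommendations 1 and 4 can be partly automated with a pre-persist audit that rejects obvious secrets before content reaches the DB or the embeddings endpoint. The patterns below are a minimal, illustrative sample, not an exhaustive scanner.

```javascript
// Hypothetical sketch: reject content matching common secret formats before
// it is stored or embedded. Patterns are illustrative, not exhaustive.
const SECRET_PATTERNS = [
  /sk-[A-Za-z0-9]{20,}/,                    // OpenAI-style API keys
  /AKIA[0-9A-Z]{16}/,                       // AWS access key IDs
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,     // PEM private keys
];

function assertSafeToStore(content) {
  for (const pattern of SECRET_PATTERNS) {
    if (pattern.test(content)) {
      throw new Error(`refusing to store content matching ${pattern}`);
    }
  }
  return content;
}
```

A check like this is a backstop, not a substitute for reviewing what the agent is allowed to memorize in the first place.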
Runtime requirements: 🧬 Clawdis
