Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Yaoyao Memory V2

v6.0.0

四层渐进式长时记忆系统,让 AI 跨会话保持上下文、沉淀知识、持续进化。 【核心设计:静默自动】AI 自动识别、记录、整理记忆,无需用户确认。 【凭据要求】必须配置 LLM/Embedding API Key 【网络调用】调用外部 LLM API 进行向量化,可选 IMA 云同步 【存储】本地记忆存储在 ~/.o...

0· 34·0 current·0 all-time
MIT-0
Download zip
LicenseMIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Capability signals
CryptoRequires walletCan make purchases
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotalVirusTotal
Suspicious
View report →
OpenClawOpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (long-term memory using LLMs/embeddings) align with the code (embedding/llm/search/memory scripts) and the declared need for LLM/Embedding API keys, local vector DB, and optional cloud sync. However the published registry block in the prompt lists no required env vars while the included _meta.json and SKILL.md explicitly require LLM/EMBEDDING API keys and llm_config.json; this metadata mismatch is inconsistent and unexplained.
!
Instruction Scope
SKILL.md instructs the agent to run many local scripts (scripts/*.py) on user queries and to read/write local memory and config paths (e.g. ~/.openclaw/workspace/memory/, ~/.openclaw/credentials/secrets.env, ~/.config/ima/). The skill design is 'silent automatic'—it will record across sessions without user confirmation—giving it broad scope to capture conversational data. The SKILL.md also contains prompt-injection patterns flagged by the scanner (see scan_findings_in_context), which increases risk that the skill tries to manipulate agent instruction flow. Although 'never record passwords/keys' is stated, the instructions still allow reading global credentials files for configuration, which is a risky combination.
Install Mechanism
There is no explicit install spec in registry metadata, but the bundle contains ~100 code files and a README that suggests an npx install command. The lack of a clear install mechanism combined with many executable scripts is an operational risk (unclear how and when files are written/executed). On the positive side, there are no obvious remote download URLs in the provided files (no runtime fetching of arbitrary archives), but the code does perform network calls to configured LLM endpoints.
!
Credentials
The skill legitimately needs LLM/embedding API keys to function, which SKILL.md/_meta.json and config examples acknowledge. However the skill also expects to read/write the global OpenClaw credentials file (~/.openclaw/credentials/secrets.env) and other global paths (~/.openclaw/, ~/.config/ima/). Requiring access to a global credentials file is disproportionate for a per-skill memory store — it increases the chance the skill can access unrelated tokens/keys. The registry header earlier claiming 'no required env vars' contradicts the code and _meta.json that list LLM_API_KEY and EMBEDDING_API_KEY.
Persistence & Privilege
always:false (good). The skill requests persistent local storage (workspace memory directories) and has auto-promotion and auto-cleanup feature flags; autonomous invocation is allowed (disable-model-invocation:false) which is normal. The concerning part is the 'silent automatic' design: it will record without prompting, and it reads/writes global workspace and credentials paths—this provides it persistent access to local data across sessions and a broad attack surface if misused.
Scan Findings in Context
[ignore-previous-instructions] unexpected: Detected in SKILL.md. A memory skill should not need to include prompt-manipulation phrases; this could be an attempted prompt-injection vector to change agent/system instructions.
[system-prompt-override] unexpected: Detected in SKILL.md. Phrases or patterns that attempt to override the system prompt are unexpected for a helper skill and increase risk that the skill tries to change agent behavior without user consent.
What to consider before installing
What to consider before installing: - Metadata mismatch: The registry header claims no required env vars, but SKILL.md and _meta.json require LLM/EMBEDDING API keys and llm_config.json. Ask the publisher to clarify which credentials/paths are truly required. - Secrets exposure risk: The package reads/writes ~/.openclaw/credentials/secrets.env (a global credentials file). Prefer creating dedicated, scoped API keys and store them in a per-skill config (llm_config.json) rather than allowing the skill access to global secrets.env. Review/limit file permissions. - Silent recording: The skill is explicitly '静默自动' (silent automatic). That means it will persistently record conversational data across sessions without prompting. If you have sensitive conversations, do not enable this until you audit the code and configuration. - Prompt-injection indicators: The SKILL.md contains patterns flagged as prompt-injection attempts. Treat SKILL.md and bundled instructions as untrusted until reviewed. Verify the agent’s policy/guardrails prevent skills from overriding system prompts. - Network endpoints: The code posts to configurable LLM/embedding endpoints using provided API keys. Ensure you use limited-scope/test keys and monitor token usage; consider running with an on-prem or sandbox LLM endpoint first. - Run in sandbox first: Install and run the skill in an isolated environment (VM/container) and use test API keys. Inspect runtime behavior (which files are created/modified, outbound connections, logs) before granting access to real credentials or production data. - Review or restrict files: Manually review scripts that touch ~/.openclaw/credentials and any code that reads/writes memory files. If you proceed, prefer configuring the skill to use a dedicated memory directory and dedicated credential files, and disable IMA cloud sync unless you trust that endpoint. If you want, I can: - Highlight specific files/functions that read credentials or perform network calls so you can focus your code review; - Suggest a minimal safe configuration (paths and scoping) to test the skill in a sandbox; - Produce commands to search the bundle for other suspicious patterns (hard-coded endpoints, exec/system calls, or telemetry).

Like a lobster shell, security has layers — review code before you run it.

latestvk9705snpcwgwk18yfhxca6m66184h790

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Comments