Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Liu Longterm Memory

v1.0.4

Ultimate AI agent memory system for Cursor, Claude, ChatGPT & Copilot. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Vib...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Pending
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description and the included CLI (bin/elite-memory.js) match a local file-based memory tool (init, today, status, backup, restore). However, SKILL.md and the README show examples that reference other CLIs/scripts (memory_recall, memory_store, python3 memory.py) and provider integrations that are not included in this package. The registry metadata and version strings are also slightly inconsistent (package.json says 1.0.3; the registry says 1.0.4). These mismatches suggest the documentation was copied from a broader project and not fully synchronized with the shipped code.
Instruction Scope
The runtime instructions direct the agent to automatically extract facts from conversations and write them to local files (SESSION-STATE.md, MEMORY.md, memory/). That behavior is central to the skill, but it grants the agent the ability to persist potentially sensitive conversation contents to disk. The README/SKILL.md also documents optional LLM batch extraction and remote backups. Those instructions are within the stated purpose (a memory system) but expand scope to writing and (optionally) pushing content to remote Git hosts.
Install Mechanism
No install spec (instruction-only / npm-distributed script). The package contains a small Node CLI (bin/elite-memory.js) with only local file operations and child_process usage for git/zip/tar. There is no external download-from-URL or extract step in the provided bundle.
Credentials
The skill declares no required environment variables, which aligns with the included CLI. The documentation shows optional use of external embedding/LLM providers (e.g., ZHIPUAI_API_KEY). A privacy/credential vector still exists: if a remote is configured, git push uses the host's Git credentials (SSH keys / saved tokens). The skill itself doesn't request secrets, but it can cause their use if the user runs 'backup --git' or the agent triggers a backup.
Persistence & Privilege
The skill sets always:false (good), but autonomous invocation is allowed by default. Combined with the documented 'Agent-Driven Extraction' (the default), which automatically writes conversation facts to files, plus CLI code that can commit and push to a Git remote, this creates a plausible exfiltration path if an agent is allowed to run the skill autonomously and a remote is configured. The skill does not attempt to modify other skills or system-wide agent settings.
Scan Findings in Context
[system-prompt-override] unexpected: A prompt-injection pattern was detected in SKILL.md. Skills do include runtime rules, but a 'system-prompt-override' signature can indicate text that tries to change agent/system behavior beyond the skill's narrow task. The SKILL.md includes explicit agent behavior rules (e.g., 'Write BEFORE responding') — review the file for any instructions that attempt to override higher-level system prompts or evaluation constraints.
What to consider before installing
This package appears to be a legitimate local memory tool (it creates SESSION-STATE.md, MEMORY.md, and daily logs, and supports zip or Git backups), but check several things before installing it or enabling it for autonomous agents:

- Review SKILL.md and README yourself: the docs reference extra commands (memory_recall, memory_store, python memory.py) that are NOT included in the shipped files — assume those examples are external/optional, not present in the package.
- Be cautious about auto-extraction: the skill's default 'Agent-Driven Extraction' will write facts from conversations to local files automatically. If you allow the agent to invoke this skill autonomously, it may persist sensitive user data to disk without explicit prompts each time.
- Git backups can expose memory to remotes: the CLI will run git add/commit/push if you use 'backup --git'. That uses whatever git credentials are configured on the host (SSH keys, stored tokens). Verify your repository remotes before pushing, and never push sensitive data to a public remote.
- Optional external LLM/providers: the README suggests using ZhipuAI or other embedding services that require API keys (e.g., ZHIPUAI_API_KEY). Those are optional, but if you enable them, treat the API key and network calls as sensitive and confirm the endpoints.
- Prompt-injection signal: a 'system-prompt-override' pattern was flagged in SKILL.md. Inspect the skill text for any lines that try to change agent/system prompts or instruct the agent to violate platform constraints.
- Test in an isolated workspace first: run 'npx liu-longterm-memory init' in a disposable directory and examine the created files. Run backups manually instead of letting autonomous agents run the skill until you're comfortable with its behavior.
- If you need full assurance, ask the author for a provenance/source link or a release tarball; verify the package version and compare the repository code before granting the agent automatic invocation.
Overall: functionally coherent for local memory, but the combination of automatic write rules + git push capability + doc-code mismatches + prompt-injection indicator warrants caution.
bin/elite-memory.js:185
Shell command execution detected (child_process).
SKILL.md:166
Prompt-injection style instruction pattern detected.
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

latest · vk977n3c8682ppqbx3e62mk1fbn846bwn


Runtime requirements

🧠 Clawdis
