Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Cognitive Memory Temp

v1.0.0

Intelligent multi-store memory system with human-like encoding, consolidation, decay, and recall. Use when setting up agent memory, configuring remember/forg...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The skill claims to be instruction-only memory tooling, which fits its templates and reflection process, but it also ships runnable shell scripts that initialize a workspace and run git commands. The init script runs git init and git add -A, then commits, which can capture arbitrary files from whatever directory you run it in. Committing the entire workspace and stamping 'Approval: auto' in the commit message is disproportionate to 'setting up memory'; a safer design would copy only the templates and commit those files alone.
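The risk is easy to demonstrate. The sketch below mirrors the pattern the scan describes; it is not the skill's actual script, and the sensitive file name is invented for illustration. It runs entirely inside a throwaway temp directory.

```shell
#!/bin/sh
# Sketch of the flagged pattern (not the skill's real code): running
# 'git add -A' in a non-empty directory stages everything present,
# not just the memory templates the script created.
set -eu

workspace="$(mktemp -d)"
cd "$workspace"

# A pre-existing file the user never meant to version.
echo "API_TOKEN=secret" > .env

git init -q
git add -A        # stages .env along with anything else in the directory
git -c user.email=demo@example.com -c user.name=demo \
    commit -qm "Initialize memory (Approval: auto)"

# .env is now recorded in the new repository's history.
git ls-files
```

Once committed, the file stays in repo history even if later deleted from the working tree, which is why scoping the add to named files matters.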
Instruction Scope
SKILL.md instructs agents to monitor every user message for triggers, self-edit core memory mid-conversation, run multi-phase internal reflections, and read/write many memory files. It also contains many explicit LLM prompts (routing, reflection phases). A static scan flagged 'system-prompt-override' patterns: the skill embeds system-level prompts, which increases prompt-injection risk and could change agent behavior if followed without guardrails. The documentation does say 'NEVER read code/configs/transcripts', but the included scripts still perform powerful disk operations when run.
Install Mechanism
There is no external download/install spec (no network fetches), which reduces supply-chain risk. However, the package includes multiple shell scripts that will be executed by the user to create files, initialize git, copy templates, and modify JSON. Those scripts write to disk and run git; they are local but still require review before running.
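A quick static pass over the bundled scripts will surface those git and disk operations before anything is executed. A sketch under assumptions: the script path and its contents below are stand-ins for whatever the package actually ships; point the grep at the real extracted skill directory.

```shell
#!/bin/sh
# Pre-install audit sketch. The heredoc creates a stand-in for the
# package's scripts/init_memory.sh so the example is self-contained;
# in practice you would grep the downloaded skill directory directly.
set -eu

mkdir -p skill/scripts
cat > skill/scripts/init_memory.sh <<'EOF'
git init
git add -A
git commit -m "Initialize memory (Approval: auto)"
EOF

# Surface every git invocation and auto-approval marker before running.
grep -RnE 'git (init|add|commit)|Approval: auto' skill/scripts
```

Anything this grep reports is a command the scripts will run on your behalf; review each hit before executing the scripts.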
Credentials
The skill requests no environment variables, no external credentials, and declares no config paths. That matches the described on-disk memory workspace behavior and is proportional to its stated purpose.
Persistence & Privilege
always:false (good), but the scripts and instructions give the skill strong file-write power in the chosen workspace and include automated git commits with 'Approval: auto'. If run in an unsafe directory or executed by an automated agent, this could alter or record files beyond the memory store. The skill also encourages autonomous monitoring of messages and self-editing of core memory, increasing impact if invoked without careful user approval.
Scan Findings in Context
[system-prompt-override] expected: The skill includes many LLM system prompts (routing prompts, reflection-phase prompts). That pattern is expected for an architecture that uses internal LLM classification and reflection cycles, but it raises prompt-injection and behavior-control risk because those prompts can change how the agent reasons or what it reveals. Treat these embedded prompts as part of the attack surface and audit them.
What to consider before installing
This skill implements a sophisticated on-disk memory system and includes scripts that create files and initialize a git repo. Before installing or running anything:

1. Inspect the scripts (scripts/init_memory.sh and the upgrade scripts). They run git init and git add -A, then commit; if you run them in your home directory or a project directory, you may inadvertently add sensitive files to the new repo. Run init_memory.sh only with an explicit, dedicated workspace path (e.g., an empty directory you control).
2. Consider editing the init/upgrade scripts to limit git operations: avoid git add -A, commit only the created memory files, remove the 'Approval: auto' behavior, and confirm backups.
3. Review the embedded prompts and routing logic. The skill contains system-like prompts that can change agent behavior or escalate prompt-injection risk; only enable autonomous monitoring if you trust the skill author and have tested it in isolation.
4. If you cannot thoroughly audit the scripts and prompts, run the skill in an isolated container or dedicated workspace, or decline to install it.

Additional useful checks: ensure the agent asks for explicit approval before reflections or system-file changes, and verify that pending-memories proposals require manual commit by the main agent.
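Point (2) above can be sketched as a hardened init script. This is a sketch under assumptions: the template file names, commit message, and layout are invented, and the skill's real script will differ; the point is the three guardrails in the comments.

```shell
#!/bin/sh
# Hardened init sketch (file names are assumptions, not the skill's
# actual layout): refuse non-empty workspaces, stage only the files
# this script creates, and drop the 'Approval: auto' marker.
set -eu

# Default to a fresh temp dir when no workspace argument is given.
workspace="${1:-$(mktemp -d)}"
mkdir -p "$workspace"

# Guardrail 1: refuse directories where pre-existing files could be
# swept into the new repository.
if [ -n "$(ls -A "$workspace")" ]; then
    echo "refusing: $workspace is not empty" >&2
    exit 1
fi

cd "$workspace"
git init -q

# Create only the memory templates this skill needs.
printf '# Core memory\n' > core.md
printf '{}\n' > index.json

# Guardrail 2: stage created files by name; never 'git add -A'.
git add core.md index.json

# Guardrail 3: a plain commit message, no auto-approval marker.
git -c user.email=memory@example.com -c user.name=memory \
    commit -qm "Initialize memory store"
```

Run it against a dedicated, empty directory; the non-empty check makes an accidental run in $HOME fail loudly instead of committing your files.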
references/architecture.md:1009
Prompt-injection style instruction pattern detected.
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.

Like a lobster shell, security has layers — review code before you run it.

Latest: vk977vj246frh2f4e5xbpzjzf49833gmk

