Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

CompoundMind

v0.1.0

Experience distillation engine that turns raw daily memory logs into compounding intelligence. Extracts patterns, generates briefings, tracks growth metrics,...

by Cassh (@cassh100k)
Security Scan

VirusTotal: Suspicious · View report →
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The code (compound_mind.py, distill.py, brief.py, index.py, growth.py) and SKILL.md align with the described purpose: distill memory logs, build a wisdom index, generate briefs, and track growth. However, the registry metadata states 'Required env vars: none / Primary credential: none', while clawpkg.yaml and the code clearly expect an Anthropic API key (ANTHROPIC_API_KEY or COMPOUND_MIND_LLM_KEY) for LLM-based distillation and briefing. This metadata mismatch is an incoherence worth surfacing.
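A minimal sketch of the kind of key lookup the scan describes — the precedence order (COMPOUND_MIND_LLM_KEY first, then ANTHROPIC_API_KEY) is an assumption for illustration, not confirmed from the source:

```python
import os

def get_llm_key():
    """Resolve the optional LLM API key as the scan describes:
    COMPOUND_MIND_LLM_KEY first, falling back to ANTHROPIC_API_KEY.
    Returns None when neither is set (i.e. the LLM path is unavailable)."""
    return os.environ.get("COMPOUND_MIND_LLM_KEY") or os.environ.get("ANTHROPIC_API_KEY")
```

Checking for a function like this in the code is a quick way to confirm the registry's 'no required env vars' claim is wrong.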
Instruction Scope
SKILL.md and the code instruct the skill to read memory files from /root/.openclaw/workspace/memory/ and distill them using Claude Haiku (Anthropic). This means potentially sensitive local memory contents (wallet addresses, API keys, configs) will be sent to an external LLM provider whenever the LLM path is used. SKILL.md's claim that the skill 'Runs entirely local with minimal API calls' is misleading: distillation and briefing use remote LLMs unless you explicitly avoid the LLM code paths. The packaged data/experiences files already contain wallet addresses and env-var mentions, demonstrating the type of sensitive data that could be processed and transmitted.
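If you do run the LLM path, one mitigation is to mask secret-looking strings before any memory text leaves the machine. A sketch, with illustrative patterns guessed from the secret shapes the scan mentions (API-key-like tokens, wallet addresses, env-style key assignments) — tune them to your own data:

```python
import re

# Illustrative patterns, not an exhaustive secret taxonomy.
REDACTIONS = [
    (re.compile(r"sk-[A-Za-z0-9-]{20,}"), "[REDACTED_API_KEY]"),      # API-key-like tokens
    (re.compile(r"0x[0-9a-fA-F]{40}"), "[REDACTED_WALLET]"),          # Ethereum-style addresses
    (re.compile(r"([A-Z][A-Z0-9_]*_KEY)\s*=\s*\S+"), r"\1=[REDACTED]"),  # KEY=value assignments
]

def redact(text: str) -> str:
    """Mask secret-looking substrings before text is handed to a remote LLM."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text
```

Redaction like this is a second line of defense; the first is confirming exactly what distill.py sends.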
Install Mechanism
There is no install spec (instruction-only style), so nothing is downloaded at install time — lower install risk. clawpkg.yaml lists 'anthropic' as a required Python package but no automated installer is provided. Note: the package contains many pre-populated data/experiences files (with potentially sensitive, personal operational data) which increases privacy exposure even before any runtime actions.
Credentials
The code checks for an LLM API key (COMPOUND_MIND_LLM_KEY or ANTHROPIC_API_KEY) even though the registry metadata declares no required env vars. Requesting an Anthropic key is proportionate to the optional LLM features, but it becomes a privacy and credential risk because the skill reads user memory files and could send secrets to the external service. There are also embedded sensitive items in the packaged experience files (wallet addresses, and an inline mention of a POLYMARKET_KEY stored in a script). The skill does not request unrelated third-party credentials, but it will process and surface any secrets found in the memory data.
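Before trusting the packaged data/experiences files, you can sweep them (and your own memory store) for the secret shapes the scan reports. A sketch — the patterns are illustrative assumptions, and `scan_for_secrets` is a hypothetical helper, not part of the skill:

```python
import re
from pathlib import Path

# Illustrative patterns for the kinds of secrets the scan reports finding.
SECRET_PATTERNS = {
    "wallet_address": re.compile(r"0x[0-9a-fA-F]{40}"),
    "api_key_token": re.compile(r"sk-[A-Za-z0-9-]{20,}"),
    "env_key_assignment": re.compile(r"[A-Z][A-Z0-9_]*_KEY\s*="),
}

def scan_for_secrets(root: str) -> list[tuple[str, str]]:
    """Walk a directory (e.g. the packaged data/experiences) and return
    (file, pattern_name) pairs for secret-looking strings found in it."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), name))
    return hits
```

Any hits in data/experiences are candidates for deletion before the skill ever runs.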
Persistence & Privilege
The manifest sets always: false and imposes no special OS restrictions; the skill does not appear to modify other skills or system-wide agent settings. It persists data only in its own data/ subdirectory (briefs, experiences, wisdom.db). It suggests a cron example (user-installable) and a heartbeat hook in clawpkg.yaml, but these are optional and are not grants of privilege on their own.
What to consider before installing
This skill largely does what it says, but there are important privacy and incoherence issues to consider before installing or running it:

- The package includes many pre-populated experience files containing wallet addresses, env-var mentions (e.g., POLYMARKET_KEY), and other operational details. Treat those files as sensitive data and inspect or delete them before running.
- Although SKILL.md claims the skill 'runs entirely local', the distiller and optional LLM briefing paths call Anthropic (Claude Haiku). If you provide ANTHROPIC_API_KEY (or COMPOUND_MIND_LLM_KEY), the skill will send memory contents to a remote LLM. If you do not want that, do not set an LLM key and avoid the --llm flags (or audit distill.py, build_report_llm, and build_briefing_llm to ensure no network calls occur).
- Review distill.py to confirm exactly what text is sent to the LLM (full files vs. redacted summaries). If the code sends raw memory files, remove or redact secrets from your memory store first.
- If you want to use the skill, run it in an isolated sandbox, back up and/or remove sensitive memory files from /root/.openclaw/workspace/memory, and consider deleting the included data/experiences if they contain real operational secrets.
- Prefer the rule-based (local) synthesis paths if you need local-only operation; verify those code paths never import anthropic or perform network I/O.

Useful follow-ups: (1) identify the exact lines in distill.py and brief.py that call the Anthropic API, (2) confirm where the code reads memory files so you know what is transmitted, and (3) apply a minimal patch that disables remote LLM calls and redacts secrets before distillation.
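The 'verify those code paths never import anthropic' check above can be partly automated with a static import scan. A sketch using Python's ast module; note it only catches static `import`/`from ... import` statements, not dynamic imports via `importlib` or `__import__`:

```python
import ast

def imports_anthropic(source: str) -> bool:
    """Return True if the given Python source statically imports the
    anthropic SDK, via `import anthropic` or `from anthropic import ...`."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            if any(alias.name.split(".")[0] == "anthropic" for alias in node.names):
                return True
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] == "anthropic":
                return True
    return False
```

Run it over each of the skill's .py files; any file that imports anthropic should be treated as a remote code path regardless of what SKILL.md claims.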

Like a lobster shell, security has layers — review code before you run it.

Tags: agents · intelligence · latest · learning · memory

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
