Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Supermemory

v0.2.1

Long-term agent memory with atomic fact extraction, relational versioning, semantic search, and entity profiles. Extracts facts from conversations, tracks ho...

0 · 178 · 0 current · 0 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (local-first long-term memory, fact extraction, semantic search) match the SKILL.md functionality (ingest/search/entity/profile commands, a local SQLite DB, embeddings). However, the registry declares no required env vars or config paths, while the SKILL.md explicitly requires an LLM API key (Anthropic by default) or local embedding models, instructs creating ~/.supermemory/memory.db, and installs openclaw-supermemory via pip. This metadata mismatch is notable.
Instruction Scope
The instructions tell the agent to run a local server (supermemory serve), write a persistent DB at ~/.supermemory/memory.db, and repeatedly call ingest on agent responses (supermemory ingest "$RESPONSE_TEXT"). Those steps are within a memory tool's remit, but auto-ingesting agent outputs means data (including sensitive or secret-bearing responses) is processed by an LLM extractor and stored locally; if the extractor uses a cloud LLM API, that data also leaves the device. The SKILL.md references several LLM providers (Anthropic/OpenAI/Cohere), yet the registry declares none of those env requirements.
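One way to blunt the auto-ingest risk described above is to filter obvious secret material out of the response text before it reaches the extractor. A minimal sketch, assuming the `supermemory ingest "$RESPONSE_TEXT"` pattern from the SKILL.md; the redact patterns, variable names, and sample response are illustrative, not part of the skill:

```shell
# Hypothetical pre-ingest filter: drop lines that look like secrets before
# agent output reaches the extractor. Patterns are illustrative only.
redact() {
  grep -vE 'API_KEY|SECRET|TOKEN|PASSWORD|PRIVATE KEY'
}

# Stand-in for an agent response that accidentally contains a credential:
RESPONSE_TEXT='Deploy finished.
ANTHROPIC_API_KEY=sk-ant-example-not-real
All services healthy.'

SAFE_TEXT=$(printf '%s\n' "$RESPONSE_TEXT" | redact)
printf '%s\n' "$SAFE_TEXT"

# Then, instead of: supermemory ingest "$RESPONSE_TEXT"
# run:              supermemory ingest "$SAFE_TEXT"
```

A line-based blocklist like this is a coarse safety net, not a guarantee; it will miss secrets split across lines or in unusual formats.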
Install Mechanism
There is no platform install spec, but SKILL.md instructs users to pip install openclaw-supermemory[local] (PyPI) and references a GitHub repo and plugin. Installation therefore relies on external package distribution (PyPI/GitHub). The missing install spec in the registry is an inconsistency: the platform will not install the skill automatically, so an end user must fetch code from third-party sources. Review the PyPI package and the GitHub repo before running pip.
Credentials
Registry metadata lists no required env vars, but the documentation requires an LLM API key (ANTHROPIC_API_KEY in its examples) or alternative provider keys for embeddings and extraction. Depending on the chosen backend (OpenAI/Cohere/Voyage), additional credentials may be needed. Because extracted facts persist in a home-directory DB, sensitive data may be retained locally. The registry's failure to declare any env or config requirements is a significant omission and should be corrected.
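If keys are supplied and the skill creates its DB, tightening file permissions limits local exposure of the retained facts. A sketch using a temp-directory stand-in so the commands can be tried safely; the real path (~/.supermemory/memory.db) comes from the SKILL.md:

```shell
# Stand-in for ~/.supermemory so this can run without touching the real DB:
DB_DIR=$(mktemp -d)
DB="$DB_DIR/memory.db"
touch "$DB"

chmod 700 "$DB_DIR"   # directory: owner-only access
chmod 600 "$DB"       # database: owner read/write only

ls -l "$DB"

# Prefer a scoped, revocable key over a long-lived production secret, e.g.:
# export ANTHROPIC_API_KEY="<limited test key>"
```

File permissions protect against other local users, not against the skill's own code; they complement, not replace, auditing what the extractor sends upstream.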
Persistence & Privilege
always:false (good). The skill writes a persistent DB (~/.supermemory/memory.db) and can run a local HTTP API on :8642; those are reasonable for a memory service but increase blast radius (stored sensitive info, an open local endpoint). The skill also suggests installing a 'plugin' that enables zero-config auto-injection — that plugin should be audited before enabling automatic operation across agents.
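Before enabling the local HTTP API, it is worth checking what is already bound on port 8642 and, once the server is running, confirming it listens on loopback rather than all interfaces. A portable sketch (the port number comes from the SKILL.md; ss/netstat availability varies by system):

```shell
# Check for an existing listener on the skill's default port (8642).
PORT=8642
if command -v ss >/dev/null 2>&1; then
  BOUND=$(ss -ltn | grep ":$PORT " || true)
else
  BOUND=$(netstat -ltn 2>/dev/null | grep ":$PORT " || true)
fi

if [ -n "$BOUND" ]; then
  # A 0.0.0.0 or [::] address here means the API is reachable off-host.
  MSG="listener on :$PORT -> $BOUND (verify the address is 127.0.0.1)"
else
  MSG="nothing listening on :$PORT yet"
fi
echo "$MSG"
```

Run the same check again after starting `supermemory serve` to see which address the server actually binds.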
What to consider before installing
Before installing or enabling this skill:

1. Verify the PyPI package and GitHub repository contents: run pip install only from trusted sources and inspect the code first.
2. Expect a local DB at ~/.supermemory/memory.db; if you handle sensitive data, consider encrypting the DB or disabling auto-ingest.
3. The SKILL.md requires an LLM API key (Anthropic by default) and may use other provider keys; do not supply high-privilege or long-lived secrets. Use least-privilege or test keys.
4. Running 'supermemory serve' exposes a local HTTP API; ensure appropriate firewall and access controls.
5. Disable automatic ingestion of agent responses until you have audited what the extractor sends to external LLMs, to avoid unintended data exfiltration.
6. Ask the publisher to correct the registry metadata to declare required env vars and config paths; if they cannot, treat the skill as higher risk.

If you need to proceed, run the package in an isolated environment (VM/container) and review network traffic and code first.
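The isolation advice above (inspect first, run in a throwaway environment) can be sketched with a virtualenv; the pip and docker lines are left as comments because they fetch third-party code and need network access. Package name from the SKILL.md; everything else is an assumption:

```shell
# Throwaway sandbox instead of the system Python.
# (--without-pip keeps this sketch fast and portable; drop it for real use
# so pip is available inside the venv.)
SANDBOX=$(mktemp -d)
python3 -m venv --without-pip "$SANDBOX/venv"
. "$SANDBOX/venv/bin/activate"

# Inside the sandbox you would fetch and inspect before running anything:
#   pip download 'openclaw-supermemory[local]' --no-deps -d "$SANDBOX/src"
#   pip install 'openclaw-supermemory[local]'
# Stricter still, run it with networking disabled entirely:
#   docker run --rm --network none -v "$PWD:/work" python:3.12 sh

python -c 'import sys; print(sys.prefix)'   # confirms the venv is active
```

The `pip download --no-deps` step is useful because it fetches the archive without executing any of the package's setup code, so you can read the source before anything runs.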

Like a lobster shell, security has layers — review code before you run it.

latest: vk978fvqdhejhn32m5xbj2wjs1983jyf3

