Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Supermemory

v0.2.1



Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw to install jared-goering/openclaw-supermemory.

Prompt preview: Install & Setup
Install the skill "Supermemory" (jared-goering/openclaw-supermemory) from ClawHub.
Skill page: https://clawhub.ai/jared-goering/openclaw-supermemory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install openclaw-supermemory

ClawHub CLI


npx clawhub@latest install openclaw-supermemory
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description (local-first long-term memory, fact extraction, semantic search) match the SKILL.md functionality (ingest/search/entity/profile, local SQLite DB, embeddings). However, the registry declares no required env vars or config paths, while the SKILL.md explicitly requires an LLM API key (default: Anthropic) or local embedding models, and instructs users to create ~/.supermemory/memory.db and install openclaw-supermemory via pip. This metadata mismatch is notable.
Instruction Scope
The instructions tell the agent to run a local server (supermemory serve), write a persistent DB at ~/.supermemory/memory.db, and repeatedly call ingest on agent responses (supermemory ingest "$RESPONSE_TEXT"). Those steps are within a memory tool's remit, but auto-ingesting agent outputs means data (including sensitive or secret-bearing responses) will be processed by an LLM extractor and stored locally; if the extractor uses a cloud LLM API, that transmits data off-device. The SKILL.md also references several LLM providers (Anthropic/OpenAI/Cohere) but the registry did not declare those env requirements.
Install Mechanism
There is no platform install spec, but SKILL.md instructs users to pip install openclaw-supermemory[local] (PyPI) and references a GitHub repo and plugin. That means installation relies on external package distribution (PyPI/GitHub). The absence of an install spec in the registry is an inconsistency: the skill will not be installed automatically by the platform and an end user would fetch code from third-party sources — review the PyPI package and GitHub repo before running pip.
Credentials
Registry metadata lists no required env vars, but the documentation requires an LLM API key (ANTHROPIC_API_KEY in the example) or alternative provider keys for embeddings and extraction. The skill may need additional credentials depending on the chosen backends (OpenAI/Cohere/Voyage). Persistent storage of extracted facts in a home-directory DB means sensitive data may be retained locally. The lack of declared env/config requirements in the registry is a significant omission and should be corrected.
Persistence & Privilege
always:false (good). The skill writes a persistent DB (~/.supermemory/memory.db) and can run a local HTTP API on :8642; those are reasonable for a memory service but increase blast radius (stored sensitive info, an open local endpoint). The skill also suggests installing a 'plugin' that enables zero-config auto-injection — that plugin should be audited before enabling automatic operation across agents.
What to consider before installing
Before installing or enabling this skill:

  1. Verify the PyPI package and GitHub repository contents; run pip install only from trusted sources and inspect the code.
  2. Expect a local DB at ~/.supermemory/memory.db; if you handle sensitive data, consider encrypting the DB or disabling auto-ingest.
  3. The SKILL.md requires an LLM API key (Anthropic by default) and may use other provider keys; do not supply high-privilege or long-lived secrets, use least-privilege/test keys instead.
  4. Running 'supermemory serve' exposes a local HTTP API; ensure appropriate firewall/access controls.
  5. Disable automatic ingestion of agent responses until you've audited what the extractor sends to external LLMs (avoid unintended data exfiltration).
  6. Ask the publisher to correct the registry metadata to declare required env vars and config paths; if they cannot, treat the skill as higher risk.

If you need to proceed, run the package in an isolated environment (VM/container) and review the network traffic and code first.

Like a lobster shell, security has layers — review code before you run it.

latest: vk978fvqdhejhn32m5xbj2wjs1983jyf3
355 downloads · 0 stars · 1 version
Updated 1mo ago
v0.2.1 · MIT-0

Supermemory

Long-term memory for AI agents. Extracts atomic facts from text, tracks relations between memories (updates, contradicts, extends), embeds locally for semantic search, and auto-builds entity profiles.

Setup

pip install "openclaw-supermemory[local]"   # quoted so the brackets survive shell globbing
supermemory init        # creates ~/.supermemory/memory.db
supermemory serve       # starts API on :8642

Requires an LLM API key for fact extraction (default: Anthropic Haiku).

export ANTHROPIC_API_KEY=sk-...
# or configure via ~/.supermemory/config.yaml
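The layout of the config file is not documented on this page; a minimal sketch of what ~/.supermemory/config.yaml might look like, where every key name (llm, provider, api_key, embeddings, backend) is an assumption for illustration only:

```yaml
# Hypothetical config sketch — key names are assumptions, not documented on this page.
llm:
  provider: anthropic            # fact-extraction backend (default per docs: Anthropic Haiku)
  api_key: ${ANTHROPIC_API_KEY}  # prefer the env var over hard-coding a key here
embeddings:
  backend: local                 # on-device sentence-transformers, per the Architecture notes
```

Check the skill's GitHub repo for the actual schema before relying on these keys.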

Commands

Ingest (extract facts from text)

supermemory ingest "The project deadline moved to April 15. Sarah replaced Tom as lead." \
  --session meeting-notes --agent kit

LLM extracts atomic facts, categorizes them (person, decision, event, insight, preference, project), detects entities, and finds relations to existing memories. When a fact updates an existing one, the old memory is marked superseded.
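The supersede behavior can be sketched with plain SQLite; the table and column names below are illustrative, not the skill's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE memories (
    id INTEGER PRIMARY KEY,
    fact TEXT,
    superseded_by INTEGER REFERENCES memories(id))""")

def ingest_fact(fact, supersedes=None):
    # Insert the new fact; if it updates an older memory, mark that one superseded.
    cur = conn.execute("INSERT INTO memories (fact) VALUES (?)", (fact,))
    new_id = cur.lastrowid
    if supersedes is not None:
        conn.execute("UPDATE memories SET superseded_by = ? WHERE id = ?",
                     (new_id, supersedes))
    return new_id

old = ingest_fact("Tom is project lead")
new = ingest_fact("Sarah is project lead", supersedes=old)

# Default search view: only current (non-superseded) facts survive.
current = [r[0] for r in conn.execute(
    "SELECT fact FROM memories WHERE superseded_by IS NULL")]
```

Flags like --all and --as-of then amount to relaxing or time-filtering this WHERE clause.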

Search

supermemory search "project deadline" --top-k 10
supermemory search "project deadline" --all          # include superseded
supermemory search "project deadline" --as-of 2026-03-01  # time travel

Entity operations

supermemory stats                # counts, categories
supermemory history Sarah        # version timeline
supermemory profile Sarah        # auto-built entity profile

API

GET  /api/health                 # status + memory count
POST /api/search                 # {"query": "...", "top_k": 10}
POST /api/ingest                 # {"text": "...", "session_id": "..."}
GET  /api/entities               # all known entities
GET  /api/entity/{name}          # entity memories + profile
POST /api/search_entities        # entity-aware cross-session search
POST /api/aggregate              # count/sum queries over event clusters
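Calling the HTTP API from Python needs only the standard library. This sketch builds a request against the documented /api/search body shape and assumes the default server address of localhost:8642; it does not send anything:

```python
import json
from urllib import request

def search_request(query, top_k=10, base="http://localhost:8642"):
    # Build a POST /api/search request with the documented JSON body.
    body = json.dumps({"query": query, "top_k": top_k}).encode()
    return request.Request(f"{base}/api/search", data=body,
                           headers={"Content-Type": "application/json"},
                           method="POST")

req = search_request("project deadline", top_k=10)
# Send it once the server is running:
# with request.urlopen(req) as resp:
#     results = json.load(resp)
```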

Search latency: ~32ms warm, ~8s cold start (embedding model load).

Agent integration

Recall at session start

Inject relevant context before the agent processes a message:

supermemory search "current projects and priorities" --top-k 5

Auto-ingest from responses

After meaningful agent turns, extract and store facts:

supermemory ingest "$RESPONSE_TEXT" --session "$SESSION" --agent "$AGENT_ID"
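Because auto-ingest stores everything the agent says, and the extractor may send it to a cloud LLM, one mitigation is to screen responses for obvious secrets before calling ingest. The patterns below are illustrative only; real secret scanning needs a dedicated tool:

```python
import re

# Illustrative patterns only — not an exhaustive secret scanner.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{10,}"),              # API-key-like tokens
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key id shape
]

def safe_to_ingest(text):
    # Skip ingestion when the response looks like it carries a credential.
    return not any(p.search(text) for p in SECRET_PATTERNS)

# Gate the shell call: only run `supermemory ingest` when this returns True.
```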

OpenClaw plugin (zero-config)

Install the supermemory-claw plugin for automatic memory injection and extraction with no agent code changes.

Architecture

  • Storage: SQLite with WAL mode (concurrent reads, single writer)
  • Embeddings: Local sentence-transformers (free, on-device) or API (OpenAI/Cohere/Voyage via litellm)
  • Extraction: LLM-based atomic fact extraction with relation detection (default: Haiku)
  • Entity system: Join tables, aliases, auto-merged profiles across sources
  • Multi-agent: Single DB with agent_id tagging, cross-agent semantic search
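The storage bullets (SQLite in WAL mode, one DB shared across agents via agent_id tagging) can be sketched as follows; this is a minimal illustration, not the skill's actual schema:

```python
import os
import sqlite3
import tempfile

# WAL mode requires a file-backed database (it is ignored for :memory:).
path = os.path.join(tempfile.mkdtemp(), "memory.db")
conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]  # -> "wal"

# Single DB shared by multiple agents, tagged per row.
conn.execute("CREATE TABLE memories (fact TEXT, agent_id TEXT)")
conn.execute("INSERT INTO memories VALUES (?, ?)", ("deadline is April 15", "kit"))
conn.execute("INSERT INTO memories VALUES (?, ?)", ("Sarah is lead", "scout"))

# Cross-agent query: every agent's facts live in one place.
agents = sorted(a for (a,) in conn.execute("SELECT DISTINCT agent_id FROM memories"))
```

WAL's "concurrent reads, single writer" property is what lets several agents search while one ingests.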

Cost

~$0.01-0.02 per ingest (3 LLM calls: extract, relate, profile). Search is free (local embeddings).
