Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

OpenClaw Advanced Memory

v1.0.0

Provides persistent, searchable AI agent memory with real-time capture, vector search, and nightly LLM curation for long-term recall on local hardware.

by Josh (@jtil4201)
Security Scan

VirusTotal: Suspicious (view report)
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description claim persistent, searchable agent memory. The required components (Redis, Qdrant, Ollama, Python libraries) and the included scripts implement exactly that. No unrelated credentials, cloud APIs, or extraneous binaries are requested.
Instruction Scope
SKILL.md and scripts instruct the agent to read OpenClaw transcript files (~/.openclaw/...), buffer them to Redis, index into Qdrant, and run nightly local LLM curation. This is within the stated purpose, but it does mean the skill will read and persist all captured transcripts (including names, decisions, PII) into local DBs — a privacy consideration the user should be aware of.
Install Mechanism
No remote arbitrary-code download URLs in the registry entry, but the package includes an installer script (scripts/install.sh) that: pip-installs dependencies (network fetch), creates Qdrant collections, writes/starts a user systemd unit, and installs cron jobs. These actions are expected for persistent memory infrastructure but are persistent changes to the user environment and require review before running.
Credentials
No environment variables or credentials are requested. All services target localhost. The skill assumes local, unauthenticated Qdrant/Redis/Ollama instances — reasonable for a local-only design but you should ensure those services are secured on multi-user or networked hosts.
Persistence & Privilege
Installer enables a user-level systemd service and adds cron jobs (mem-warm, mem-curate). This is expected for a long-running capture/curation system but does give the skill persistent presence in the user's account — review the systemd unit and crontab changes before installing.
Assessment
This skill appears coherent and implements a local three-tier memory pipeline. Before installing:

  • Review scripts/install.sh, mem-capture.service (the user systemd unit), and the cron entries to confirm you accept those persistent changes.
  • Confirm Ollama, Qdrant, and Redis will run locally and are not exposed to untrusted networks (the scripts assume unauthenticated localhost services).
  • Be aware that mem-capture will read your OpenClaw transcripts (~/.openclaw/...) and mem-curate will extract and permanently store "gems" (including names and decisions) in Qdrant. If this data is sensitive, adjust filters or run in an isolated environment.
  • Consider running the scripts manually (not via install.sh) in a sandboxed account to observe their behavior first.
  • For stricter security, add authentication or network restrictions to Qdrant/Redis, or modify the scripts to redact PII before storage.

Like a lobster shell, security has layers — review code before you run it.

Tags: latest, memory, ollama, qdrant, redis, vector-search
341 downloads · 1 star · 1 version
Updated 7h ago
v1.0.0
MIT-0

OpenClaw Advanced Memory

Three-tier AI agent memory system — real-time capture, vector search, and LLM-curated long-term recall.

What It Does

Gives your OpenClaw agent persistent, searchable memory that survives across sessions:

  • HOT tier — Redis buffer captures conversation turns in real-time (every 30s)
  • WARM tier — Qdrant vector store with chunked, embedded conversations (searchable, 7-day retention)
  • COLD tier — LLM-curated "gems" extracted nightly (decisions, lessons, milestones — stored forever)
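The WARM tier's "chunked, embedded conversations" step implies splitting transcripts into pieces small enough to embed. A minimal sketch of such a chunker follows; the chunk size and overlap are hypothetical parameters, not values documented by the skill:

```python
def chunk_turns(text: str, max_chars: int = 800, overlap: int = 100) -> list[str]:
    """Split transcript text into overlapping chunks for embedding.

    Assumed parameters: the skill's actual chunk size and overlap are
    not documented; 800/100 are illustrative defaults.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Step back by `overlap` so adjacent chunks share context.
        start = end - overlap
    return chunks
```

Overlapping chunks help a vector search hit passages that straddle a chunk boundary, at the cost of some duplicate storage.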

Requirements

  • Qdrant — vector database (Docker recommended)
  • Redis — buffer queue (Docker recommended)
  • Ollama — local embeddings (snowflake-arctic-embed2) + curation LLM (qwen2.5:7b)
  • Python 3.10+ with qdrant-client, redis, requests

No cloud APIs. No subscriptions. Runs entirely on your own hardware.

Setup

# 1. Start Qdrant + Redis (Docker)
docker compose up -d

# 2. Pull Ollama models
ollama pull snowflake-arctic-embed2
ollama pull qwen2.5:7b

# 3. Run the installer
bash scripts/install.sh

The installer sets up Qdrant collections, installs a systemd capture service, and configures cron jobs.

Edit connection hosts at the top of each script if your infra isn't on localhost.
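One common pattern for such a connection-settings block is environment variables with localhost defaults. The variable names below are hypothetical; the actual names at the top of each script may differ:

```python
import os

# Hypothetical connection settings with localhost defaults, matching the
# skill's local-only design. Override via environment variables if your
# Qdrant/Redis/Ollama instances run elsewhere.
QDRANT_URL = os.environ.get("QDRANT_URL", "http://localhost:6333")
REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")
REDIS_PORT = int(os.environ.get("REDIS_PORT", "6379"))
OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434")
```

The default ports shown (6333 for Qdrant, 6379 for Redis, 11434 for Ollama) are those services' standard defaults.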

Usage

# Search your memory
./recall "what did we decide about pricing"
./recall "deployment" --project myproject --tier cold -v

# Check system status
./mem-status

# Force a warm flush or curation run
./warm-now
./curate-now 2026-03-01

Schedules

Component     Schedule                   What It Does
mem-capture   Always running (systemd)   Watches transcripts → Redis
mem-warm      Every 30 min (cron)        Redis → Qdrant warm
mem-curate    Nightly 2 AM (cron)        Warm → LLM curation → Qdrant cold
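The cron half of this schedule would look something like the crontab entries below. The install paths are illustrative, not the skill's documented layout; check your actual crontab (crontab -l) after running the installer:

```shell
# Every 30 minutes: flush the Redis buffer into Qdrant (WARM tier)
*/30 * * * * "$HOME"/.openclaw/skills/memory/mem-warm
# Nightly at 2 AM local time: LLM curation into the COLD tier
0 2 * * * "$HOME"/.openclaw/skills/memory/mem-curate
```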

How Curation Works

Every night, a local LLM (qwen2.5:7b via Ollama) reads the day's conversations and extracts structured gems:

{
  "gem": "Chose DistilBERT over TinyBERT — 99.69% F1, zero false positives",
  "context": "A/B tested both architectures on red team suite",
  "categories": ["decision", "technical"],
  "project": "guardian",
  "importance": "high"
}

Only decisions, milestones, lessons, and people info make the cut. Casual banter and debugging noise get filtered out.
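Since the curation LLM emits free-form JSON, a validation pass before storage is prudent. A sketch that checks a gem against the schema shown in the example above (the required fields and the importance scale are assumptions inferred from that sample, not a documented schema):

```python
# Assumed importance scale; the skill documents only "high" in its example.
ALLOWED_IMPORTANCE = {"low", "medium", "high"}


def is_valid_gem(gem: dict) -> bool:
    """Check a curated gem against the fields shown in the sample record.

    The required schema here is inferred from the example; the skill's
    actual validation (if any) may differ.
    """
    required = {"gem", "context", "categories", "project", "importance"}
    if not required <= set(gem):
        return False
    if not isinstance(gem["categories"], list) or not gem["categories"]:
        return False
    return gem["importance"] in ALLOWED_IMPORTANCE
```

Rejecting malformed records at this point keeps bad LLM output from being stored "forever" in the COLD tier.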
