Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Deep Memory

One-click clone of a production-grade semantic memory system: HOT/WARM/COLD tiered storage + Qdrant vector DB + Neo4j graph DB + qwen3-embedding. Enables cro...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 14 · 0 current installs · 0 all-time installs
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The SKILL.md, README, and scripts implement a Qdrant + Neo4j + local Ollama embedding memory system, which matches the skill description. However, the registry metadata declares no required binaries or environment variables, while SKILL.md clearly requires Docker and Ollama. That mismatch is an inconsistency: the skill actually needs Docker and Ollama to work.
Instruction Scope
The setup script will:

- check that Docker is running
- attempt to install Ollama via brew if it is missing
- pull an Ollama embedding model
- write a docker-compose file into ~/.openclaw/workspace/.lib
- run 'docker compose up' to start Qdrant and Neo4j containers
- create database collections and constraints
- create the HOT/WARM/COLD directories
- copy a Python client into the user's workspace

These actions modify local system state and start network services bound to host ports. The Neo4j container is started with NEO4J_AUTH=none (no authentication). All network calls in the client use localhost only, but exposing these services without authentication on host ports is a security decision you should evaluate.
Install Mechanism
There is no formal install spec in the registry, but the included setup script pulls Docker images (qdrant/qdrant:latest and neo4j:5-community) and runs 'ollama pull' (model download). Using 'latest' for qdrant is fragile and can introduce supply-chain risk. These are downloads from public registries (Docker Hub, Ollama); that is expected for this functionality but still higher-risk than an instruction-only skill because binaries/images and model weights are fetched and executed on your host.
Credentials
The skill requests no secrets or external API credentials and the runtime only contacts localhost endpoints (Ollama, Qdrant, Neo4j). That's proportionate. However, the registry metadata omitted required runtime binaries (docker, ollama) referenced in SKILL.md and the script—another metadata mismatch that could mislead users about prerequisites.
Persistence & Privilege
The skill is not force-included (always:false) and does not request platform-level privileges. It writes files into the user's OpenClaw workspace (~/.openclaw/workspace/.lib), creates Docker volumes and containers, and leaves services running on host ports. Those are normal for a local infrastructure installer but they do create persistent local services that you must manage.
What to consider before installing
This skill largely does what it claims, but review these points before installing:

- The registry metadata does not list Docker or Ollama even though SKILL.md and the setup script require them. Don't assume the environment already meets prerequisites.
- The setup script automatically pulls and runs Docker images and downloads an Ollama model (large downloads). It uses qdrant/qdrant:latest (a mutable tag); consider pinning a specific image tag if you install.
- Neo4j is started with NEO4J_AUTH=none (no password), and both Qdrant and Neo4j are published on host ports (6333/6334 and 7474/7687). On a machine with open network exposure these services could be accessible to others; run on an isolated host or VM, or ensure firewall rules block external access.
- The script will attempt to run 'brew install ollama' automatically if ollama is missing. Expect system-level changes if you allow it.
- If you decide to proceed: inspect scripts/setup.py and the written docker-compose file, run the setup in a controlled environment (local VM or container host), pin image and model versions, and stop and remove the containers and volumes when you no longer need the service.

If you want, I can: (1) show the exact lines to change to pin qdrant and replace NEO4J_AUTH=none, or (2) produce a safe checklist and commands to run the setup in an isolated Docker network or VM.
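If you do proceed, the hardening advice above can be sketched as a compose file. This is a minimal illustration only: the service names, volume names, pinned version tag, and password shown are assumptions, not values taken from the skill's actual docker-compose file.

```yaml
# Illustrative hardening sketch — replace the example tag and password
# with real values before use.
services:
  qdrant:
    image: qdrant/qdrant:v1.12.4        # pin a released tag instead of :latest (tag shown is an example)
    ports:
      - "127.0.0.1:6333:6333"           # bind to loopback so the ports are not reachable from the network
      - "127.0.0.1:6334:6334"
    volumes:
      - qdrant_data:/qdrant/storage
  neo4j:
    image: neo4j:5-community
    environment:
      - NEO4J_AUTH=neo4j/change-this-password   # replace NEO4J_AUTH=none with real credentials
    ports:
      - "127.0.0.1:7474:7474"
      - "127.0.0.1:7687:7687"
    volumes:
      - neo4j_data:/data
volumes:
  qdrant_data:
  neo4j_data:
```

Binding to 127.0.0.1 keeps both services local-only even without firewall rules, which matches how the skill's client actually connects.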

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk971tr6qfk068a5k9cp5fq1a09830387

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Deep Memory Skill 🧠

A production-grade semantic memory system for AI agents. Combines tiered file storage with vector search and graph relationships.

Architecture

┌─────────────────────────────────────┐
│       File Layer (always-on)        │
│  HOT / WARM / COLD Markdown files   │
│  semantic_memory.json               │
└──────────────┬──────────────────────┘
               ↓
┌─────────────────────────────────────┐
│        Vector Layer (Docker)        │
│  Qdrant: semantic similarity search │
│  Collection: semantic_memories      │
│  Dimensions: 4096 (qwen3-embedding) │
└──────────────┬──────────────────────┘
               ↓
┌─────────────────────────────────────┐
│         Graph Layer (Docker)        │
│  Neo4j: entity relationship memory  │
│ Constraints: Memory.key + Entity.id │
└──────────────┬──────────────────────┘
               ↓
┌─────────────────────────────────────┐
│      Embedding Model (Ollama)       │
│  qwen3-embedding:8b (4096 dims)     │
│  Local, free, no API calls          │
└─────────────────────────────────────┘
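The HOT/WARM/COLD file tiers in the diagram suggest a recency-based routing policy. A minimal sketch of how such a policy could work, with age thresholds that are purely illustrative (the skill's real criteria may differ):

```python
from datetime import datetime, timedelta

def tier_for(last_accessed, now=None):
    """Route a memory file to HOT, WARM, or COLD by recency of access.

    The 7-day and 30-day thresholds are illustrative assumptions,
    not values taken from the skill's implementation.
    """
    now = now or datetime.now()
    age = now - last_accessed
    if age <= timedelta(days=7):
        return "HOT"    # actively used: kept in the always-loaded set
    if age <= timedelta(days=30):
        return "WARM"   # recent: loaded on demand
    return "COLD"       # archival: reached only via search
```

A policy like this would run periodically to demote stale files out of the always-on layer.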

Prerequisites

  • Docker Desktop (running)
  • Ollama installed (brew install ollama on macOS)

Usage

Setup (first time)

python3 ~/.openclaw/workspace/skills/deep-memory/scripts/setup.py

Write a memory

from deep_memory import MemorySystem
mem = MemorySystem()
mem.store("user_sir", "Sir prefers direct communication, no pleasantries", tags=["preference", "communication"])

Search memories

results = mem.search("how does Sir like to communicate?", top_k=5)
for r in results:
    print(r['content'], r['score'])
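The scores above come from Qdrant's Cosine distance over the 4096-dim qwen3 embeddings. A dependency-free sketch of that ranking step (an illustration of the math, not the skill's actual client code):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank(query_vec, candidates, top_k=5):
    """Return the top_k (key, score) pairs, highest similarity first.

    `candidates` is an iterable of (key, vector) pairs — a stand-in for
    what the vector store holds.
    """
    scored = [(key, cosine_similarity(query_vec, vec)) for key, vec in candidates]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)[:top_k]
```

In practice Qdrant does this server-side with indexed approximate search; the sketch just shows why results arrive as (content, score) pairs ordered by similarity.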

Joint query (vector + graph)

results = mem.joint_query("investment strategy", entity="Sir", top_k=3)

Setup Flow

When triggered, the setup script will:

  1. Check Docker is running
  2. Check Ollama is installed and pull qwen3-embedding:8b if needed
  3. Start Qdrant container (port 6333/6334)
  4. Start Neo4j container (port 7474/7687)
  5. Create Qdrant collection (semantic_memories, 4096 dims, Cosine)
  6. Create Neo4j constraints (Memory.key, Entity.id)
  7. Create HOT/WARM/COLD directory structure
  8. Copy Python toolkit to workspace
  9. Run end-to-end verification test
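Step 9's end-to-end verification amounts to a store-then-search round trip. The idea can be sketched with an in-memory stand-in that swaps real embeddings for keyword overlap (a deliberate simplification; the actual system goes through Qdrant and Ollama):

```python
class InMemoryMemory:
    """Toy stand-in for MemorySystem: keyword overlap instead of embeddings."""

    def __init__(self):
        self._store = {}

    def store(self, key, content):
        self._store[key] = content

    def search(self, query, top_k=5):
        q = set(query.lower().split())
        scored = [
            {"key": k, "content": c, "score": len(q & set(c.lower().split()))}
            for k, c in self._store.items()
        ]
        scored.sort(key=lambda r: r["score"], reverse=True)
        return scored[:top_k]

# Verification round trip: write one memory, search for it, expect a hit.
mem = InMemoryMemory()
mem.store("test_key", "setup verification sentinel memory")
assert mem.search("verification sentinel")[0]["key"] == "test_key"
```

The real verification step does the same round trip against the live services, which also confirms the collection, constraints, and embedding model are all wired up.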

Agent Integration

In your SOUL.md or AGENTS.md, add:

## Memory Retrieval
Before answering questions about prior work, decisions, or preferences:
1. Run: python3 ~/.openclaw/workspace/.lib/qdrant_memory.py search "<query>"
2. Combine with memory_search tool results
3. Use top results as context

Files

