Skill v1.0.0
ClawScan security
Deep Memory · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Review · Mar 16, 2026, 11:25 AM
- Verdict: Review
- Confidence: medium
- Model: gpt-5-mini
- Summary
- The skill's code and instructions generally implement the advertised memory stack, but there are inconsistencies and security-relevant choices (undeclared requirements, mutable image tags, and unauthenticated services) that you should review before installing.
- Guidance
- This skill largely does what it claims, but review these points before installing:
  - The registry metadata does not list Docker or Ollama even though SKILL.md and the setup script require them. Don't assume the environment already meets the prerequisites.
  - The setup script automatically pulls and runs Docker images and downloads an Ollama model (large downloads). It uses qdrant/qdrant:latest, a mutable tag; pin the image to a specific version if you install.
  - Neo4j is started with NEO4J_AUTH=none (no password), and both Qdrant and Neo4j are published on host ports (6333/6334 and 7474/7687). On a machine with open network exposure, these services could be reachable by others; run on an isolated host or VM, or ensure firewall rules block external access.
  - The script attempts to run 'brew install ollama' automatically if ollama is missing. Expect system-level changes if you allow it.
  - If you proceed: inspect scripts/setup.py and the generated docker-compose file, run the setup in a controlled environment (a local VM or container host), pin image and model versions, and stop and remove the containers and volumes when you no longer need the service.
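Two of the hardening steps above (pinning the Qdrant image, replacing NEO4J_AUTH=none) can be sketched as a docker-compose override file. The service names, version tag, and password below are illustrative assumptions, not values read from the skill's generated compose file:

```shell
# Sketch: write a compose override that pins the Qdrant image and enables
# Neo4j authentication. Service names, the version tag, and the password
# are placeholders; match them to the skill's generated docker-compose file.
cat > docker-compose.override.yml <<'EOF'
services:
  qdrant:
    image: qdrant/qdrant:v1.12.0          # pinned tag instead of :latest
  neo4j:
    environment:
      NEO4J_AUTH: neo4j/change-me-please  # replaces NEO4J_AUTH=none
EOF
grep -n 'qdrant/qdrant\|NEO4J_AUTH' docker-compose.override.yml
```

Compose merges docker-compose.override.yml with the base file automatically, and `image` and mapping-style `environment` entries in the override win. Port bindings, by contrast, are unioned when files merge, so to restrict the services to loopback you would edit the generated file's `ports` entries into the `"127.0.0.1:6333:6333"` form directly rather than overriding them.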
Review dimensions
- Purpose & Capability
- note: SKILL.md, the README, and the scripts implement a Qdrant + Neo4j + local Ollama embedding memory system, which matches the skill description. However, the registry metadata declares no required binaries or environment variables, while SKILL.md clearly requires Docker and Ollama. That metadata mismatch is an inconsistency: the skill actually needs Docker and Ollama to work.
- Instruction Scope
- concern: The setup script will check for Docker, attempt to install Ollama via brew, pull an Ollama model, write a docker-compose file into ~/.openclaw/workspace/.lib, run docker compose up to start Qdrant and Neo4j containers, create database collections and constraints, create HOT/WARM/COLD directories, and copy a Python client into the user's workspace. Those actions modify local system state and start network services bound to host ports. The Neo4j container is started with NEO4J_AUTH=none (no authentication). All network calls in the client use localhost only, but exposing unauthenticated services on host ports is a security decision you should evaluate.
- Install Mechanism
- concern: There is no formal install spec in the registry, but the included setup script pulls Docker images (qdrant/qdrant:latest and neo4j:5-community) and runs 'ollama pull' to download a model. Using 'latest' for Qdrant is fragile and introduces supply-chain risk. These downloads come from public registries (Docker Hub, Ollama); that is expected for this functionality, but it is still higher-risk than an instruction-only skill because images and model weights are fetched and executed on your host.
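The mutable-tag concern above can be addressed before running `docker compose up`. The sketch below rewrites a demo compose file to a pinned release (the path and version are assumptions, not the skill's actual values) and notes how to go further with an immutable digest:

```shell
# Sketch: replace the mutable :latest tag with a pinned release before
# starting the stack. The compose path and version are assumptions.
COMPOSE=demo-compose.yml
printf 'services:\n  qdrant:\n    image: qdrant/qdrant:latest\n' > "$COMPOSE"
sed -i.bak 's|qdrant/qdrant:latest|qdrant/qdrant:v1.12.0|' "$COMPOSE"
grep 'image:' "$COMPOSE"
# For an immutable reference, resolve the tag to a digest (requires Docker):
#   docker pull qdrant/qdrant:v1.12.0
#   docker inspect --format '{{index .RepoDigests 0}}' qdrant/qdrant:v1.12.0
# ...then use the printed qdrant/qdrant@sha256:... form in the compose file.
```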
- Credentials
- note: The skill requests no secrets or external API credentials, and the runtime only contacts localhost endpoints (Ollama, Qdrant, Neo4j). That is proportionate. However, the registry metadata omits the required runtime binaries (docker, ollama) referenced in SKILL.md and the script: another metadata mismatch that could mislead users about prerequisites.
- Persistence & Privilege
- ok: The skill is not force-included (always: false) and requests no platform-level privileges. It writes files into the user's OpenClaw workspace (~/.openclaw/workspace/.lib), creates Docker volumes and containers, and leaves services running on host ports. That is normal for a local infrastructure installer, but it does create persistent local services that you must manage.
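Because the containers and volumes persist after setup, a teardown step like the following removes them when you are done. The compose path and model name are placeholders, not values taken from the skill:

```shell
# Sketch: stop the stack and delete its containers, network, and volumes.
# The compose path and model name are placeholders; adjust to what setup
# actually created on your machine.
teardown() {
  # --volumes also deletes the named volumes holding Qdrant/Neo4j data
  docker compose -f "$1" down --volumes --remove-orphans
  # Optionally also remove the downloaded embedding model, e.g.:
  #   ollama rm <model-name>
}
# Usage (not executed here):
#   teardown ~/.openclaw/workspace/.lib/docker-compose.yml
type teardown >/dev/null && echo "teardown helper defined"
```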
