Aister vector-memory
Pass. Audited by ClawScan on May 1, 2026.
Overview
The skill appears to perform its stated local vector-memory search function, but users should note that it stores memory contents in PostgreSQL, uses a database password, downloads ML dependencies and model files, and can optionally run a background service.
This skill is reasonable for local semantic memory search if you are comfortable maintaining a local PostgreSQL database and embedding service. Before installing, review the Python scripts, use a dedicated low-privilege database user, keep the embedding endpoint local or trusted, avoid indexing secrets, and prefer the documented Docker/isolation setup.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Personal or identity-related memory content can become searchable and may be returned to the agent as context in later tasks.
The reindexer reads named memory/identity/user files and stores their raw content plus embeddings in PostgreSQL.
MEMORY_FILES = ["MEMORY.md", "IDENTITY.md", "USER.md"] ... INSERT INTO memories (content, embedding, metadata, source)
Index only memory files you intend to persist, keep the database local and access-controlled, and avoid storing secrets or highly sensitive private data in those files.
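The reindexing flow described in this finding can be sketched as a pure function that builds the rows the quoted INSERT would store. The file names come from the skill's source; the row layout follows the quoted `INSERT INTO memories (content, embedding, metadata, source)`, while the function name, the metadata shape, and the absence of chunking are simplifying assumptions for illustration:

```python
import json
from pathlib import Path

# Files the skill's reindexer reads (named in the skill's source).
MEMORY_FILES = ["MEMORY.md", "IDENTITY.md", "USER.md"]

def build_rows(base_dir, embed):
    """Return (content, embedding, metadata, source) rows for each memory
    file present under base_dir, mirroring the columns of the INSERT quoted
    in this finding. `embed` maps text to a vector (e.g. a call to the
    local embedding service). Files you remove from MEMORY_FILES, or that
    do not exist, are simply never persisted."""
    rows = []
    for name in MEMORY_FILES:
        path = Path(base_dir) / name
        if not path.exists():
            continue  # skip files you have chosen not to index
        text = path.read_text(encoding="utf-8")
        rows.append((text, embed(text), json.dumps({"file": name}), name))
    return rows
```

Because raw content is stored verbatim alongside the embedding, anything you write into these files is later retrievable by the agent; pruning `MEMORY_FILES` before reindexing is the simplest way to keep sensitive files out of the database.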
If the embedding service URL is changed to an untrusted or remote endpoint, memory contents could be exposed to that service.
Memory chunks are sent over HTTP to the configured embedding service; the default is localhost, but the URL is configurable.
requests.post(f"{EMBEDDING_SERVICE_URL}/embed", json={"texts": texts, "prefix": "passage: "}, timeout=120)
Keep EMBEDDING_SERVICE_URL pointed at a trusted local service unless you explicitly intend to send memory text elsewhere; use isolation and network controls for any remote service.
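One way to enforce the "keep it local" recommendation is a guard that refuses non-local endpoints before any memory text leaves the process. The `EMBEDDING_SERVICE_URL` name is from the skill; the guard function itself is an illustrative assumption, not part of the skill:

```python
from urllib.parse import urlparse

# Hostnames we consider local; extend only if you knowingly trust a host.
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def assert_local_endpoint(url):
    """Raise if the embedding endpoint is not a local address.

    A guard like this could wrap the requests.post call quoted in this
    finding, so a misconfigured EMBEDDING_SERVICE_URL fails loudly instead
    of silently sending memory chunks to a remote service."""
    host = urlparse(url).hostname
    if host not in LOCAL_HOSTS:
        raise ValueError(f"embedding endpoint {url!r} is not local")
    return url
```

This only checks the hostname, not network routing; for a deliberately remote endpoint, prefer container-level egress controls instead of deleting the check.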
A misconfigured database user or reused password could expose or modify the vector-memory database, and setup commands require elevated local privileges.
The skill needs database credentials and privileged setup for PostgreSQL/pgvector, even though this is disclosed as part of installation.
`VECTOR_MEMORY_DB_PASSWORD` — PostgreSQL password for database access ... Installation requires: Root/sudo ... PostgreSQL superuser
Use the recommended dedicated PostgreSQL user with minimal grants, a unique password, chmod 600 on the env file, and preferably the Docker/container setup.
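A minimal-grants setup for the dedicated user can be sketched as a function that emits the SQL a superuser would run once during installation. All names below (`vector_memory_app`, `vector_memory`, `memories`) are illustrative placeholders, not taken from the skill's installer; adjust them to the skill's actual schema:

```python
def minimal_grant_sql(user="vector_memory_app", db="vector_memory", table="memories"):
    """Return setup SQL for a dedicated low-privilege PostgreSQL user.

    Run once as the PostgreSQL superuser, then set the password with
    psql's interactive \\password command so it never lands in shell
    history. Note what is absent: no SUPERUSER, no CREATEDB, no grants
    beyond the one table the skill needs."""
    return [
        f"CREATE USER {user};",
        f"GRANT CONNECT ON DATABASE {db} TO {user};",
        f"GRANT SELECT, INSERT, UPDATE, DELETE ON {table} TO {user};",
    ]
```

Pair this with a unique password in the env file (chmod 600) so that even if the credential leaks, the blast radius is one table rather than the whole cluster.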
Dependency or model changes upstream could affect what code/data is installed in the environment.
Installation uses unpinned Python packages and downloads a large model from HuggingFace, which is expected for this ML search skill but still a supply-chain consideration.
pip install flask psycopg2-binary sentence-transformers numpy requests ... First run will download e5-large-v2 model (~1.3GB) from HuggingFace
Install in a virtual environment or container, pin package/model versions where possible, and review downloaded dependencies if operating in a sensitive environment.
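Pinning turns the unpinned `pip install` line quoted above into a reviewable artifact. A requirements file along these lines would do it; the version numbers below are placeholders, not versions tested against this skill, so record what you actually installed (e.g. via `pip freeze`) rather than copying these:

```
# requirements.txt -- versions here are illustrative placeholders;
# replace them with the output of `pip freeze` from a known-good install.
flask==3.0.*
psycopg2-binary==2.9.*
sentence-transformers==2.7.*
numpy==1.26.*
requests==2.31.*
```

For the model itself, one option is to let the first run download e5-large-v2 once inside an isolated environment, then snapshot that model directory and load it from the local path thereafter, so later runs make no network fetch at all.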
If enabled, the autostart snippet relaunches the embedding service on every login, so it continues consuming local resources long after the initial setup.
The install guide offers an optional autostart snippet that appends a background embedding service launcher to the user's shell profile.
echo '... if ! pgrep -f "embedding_service.py" > /dev/null; then ... nohup ... embedding_service.py ... &' >> ~/.bashrc
Only add the autostart block if you want persistent service behavior; otherwise start and stop the service manually and remove the shell-profile block if no longer needed.
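For manual start/stop, the pgrep check from the quoted shell snippet can be reproduced in Python so a wrapper script can tell whether the service is already up before launching another copy (the function itself is an illustration, not part of the skill):

```python
import subprocess

def embedding_service_running():
    """Mirror the autostart snippet's check: return True if a process
    matching "embedding_service.py" is already running (pgrep exits 0
    on a match, 1 otherwise)."""
    result = subprocess.run(
        ["pgrep", "-f", "embedding_service.py"],
        stdout=subprocess.DEVNULL,
    )
    return result.returncode == 0
```

Starting the service only when this returns False gives you the same dedup behavior as the `.bashrc` block without any persistent shell-profile edit to remember and remove later.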
