Aister vector-memory

Pass. Audited by ClawScan on May 1, 2026.

Overview

The skill appears to perform its stated local vector-memory search function, but users should note that it stores memory contents in PostgreSQL, uses a database password, downloads ML dependencies and model files, and can optionally run a background service.

This skill is reasonable for local semantic memory search if you are comfortable maintaining a local PostgreSQL database and embedding service. Before installing, review the Python scripts, use a dedicated low-privilege database user, keep the embedding endpoint local or trusted, avoid indexing secrets, and prefer the documented Docker/isolation setup.

Findings (5)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Memory contents stored with embeddings in PostgreSQL

What this means

Personal or identity-related memory content can become searchable and may be returned to the agent as context in later tasks.

Why it was flagged

The reindexer reads named memory/identity/user files and stores their raw content plus embeddings in PostgreSQL.

Skill content
MEMORY_FILES = ["MEMORY.md", "IDENTITY.md", "USER.md"] ... INSERT INTO memories (content, embedding, metadata, source)
Recommendation

Index only memory files you intend to persist, keep the database local and access-controlled, and avoid storing secrets or highly sensitive private data in those files.
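A minimal sketch of the selective indexing this recommendation describes. The reduced allowlist and the secret-matching pattern are illustrative assumptions, not taken from the skill's source; a real deployment would tune both to its own file layout and credential formats.

```python
import re

# Illustrative allowlist: index only files you intend to persist.
# Deliberately narrower than the skill's default of MEMORY.md / IDENTITY.md / USER.md.
MEMORY_FILES = ["MEMORY.md"]

# Crude secret filter (pattern is an assumption): skip chunks that look like credentials.
SECRET_RE = re.compile(r"(?i)(api[_-]?key|password|secret)\s*[:=]")

def filter_chunks(chunks):
    """Return only the chunks that do not match the secret pattern."""
    return [c for c in chunks if not SECRET_RE.search(c)]
```

Running the filter before the `INSERT INTO memories` step keeps obvious credential-shaped text out of the database, though it is no substitute for not writing secrets into memory files in the first place.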

Finding 2: Configurable embedding service endpoint

What this means


If the embedding service URL is changed to an untrusted or remote endpoint, memory contents could be exposed to that service.

Why it was flagged

Memory chunks are sent over HTTP to the configured embedding service; the default is localhost, but the URL is configurable.

Skill content
requests.post(f"{EMBEDDING_SERVICE_URL}/embed", json={"texts": texts, "prefix": "passage: "}, timeout=120)
Recommendation

Keep EMBEDDING_SERVICE_URL pointed at a trusted local service unless you explicitly intend to send memory text elsewhere; use isolation and network controls for any remote service.
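One way to enforce this recommendation is a loopback check before any text leaves the process. This is a sketch under assumptions: the variable name `EMBEDDING_SERVICE_URL` comes from the quoted snippet, but the guard itself is not part of the skill.

```python
from urllib.parse import urlparse

# Illustrative default; the skill makes this configurable.
EMBEDDING_SERVICE_URL = "http://localhost:8000"

def is_local(url):
    """True only when the URL's host is a loopback address."""
    return urlparse(url).hostname in ("localhost", "127.0.0.1", "::1")

# Refuse to send memory text anywhere non-local unless that is explicitly intended.
if not is_local(EMBEDDING_SERVICE_URL):
    raise RuntimeError("embedding endpoint is not local; refusing to send memory text")
```

A guard like this fails closed if the URL is later reconfigured to a remote endpoint, turning a silent data exposure into a visible error.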

Finding 3: Database credentials and privileged setup

What this means

A misconfigured database user or a reused password could let others read or modify the vector-memory database, and the setup commands require elevated local privileges.

Why it was flagged

The skill needs database credentials and privileged setup for PostgreSQL/pgvector, even though this is disclosed as part of installation.

Skill content
`VECTOR_MEMORY_DB_PASSWORD` — PostgreSQL password for database access ... Installation requires: Root/sudo ... PostgreSQL superuser
Recommendation

Use the recommended dedicated PostgreSQL user with minimal grants, a unique password, chmod 600 on the env file, and preferably the Docker/container setup.
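The chmod 600 requirement can be verified programmatically. This is an illustrative check, not part of the skill; the exact env-file path is deployment-specific.

```python
import os
import stat

def env_file_is_private(path):
    """True when the file's permission bits are exactly owner read/write (0600)."""
    return stat.S_IMODE(os.stat(path).st_mode) == 0o600
```

Running such a check at service startup (and refusing to read credentials from a group- or world-readable file) catches the common mistake of an env file created with default permissions.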

Finding 4: Unpinned dependencies and large model download

What this means

Dependency or model changes upstream could affect what code/data is installed in the environment.

Why it was flagged

Installation uses unpinned Python packages and downloads a large model from HuggingFace, which is expected for this ML search skill but still a supply-chain consideration.

Skill content
pip install flask psycopg2-binary sentence-transformers numpy requests ... First run will download e5-large-v2 model (~1.3GB) from HuggingFace
Recommendation

Install in a virtual environment or container, pin package/model versions where possible, and review downloaded dependencies if operating in a sensitive environment.
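Where upstream does not publish pins, one mitigation is to record a checksum of the model files on the first trusted download and compare on later runs. The helper below is an illustrative sketch of that pin-by-digest idea, not something the skill ships.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing `sha256_of(model_path)` against the digest you recorded at first install turns a silent upstream change into an explicit mismatch you can investigate.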

Finding 5: Optional shell-profile autostart

What this means

If enabled, the embedding service restarts on every login and continues consuming local resources long after the initial setup.

Why it was flagged

The install guide offers an optional autostart snippet that appends a background embedding service launcher to the user's shell profile.

Skill content
echo '... if ! pgrep -f "embedding_service.py" > /dev/null; then ... nohup ... embedding_service.py ... &' >> ~/.bashrc
Recommendation

Only add the autostart block if you want persistent service behavior; otherwise start and stop the service manually and remove the shell-profile block if no longer needed.
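A quick way to audit whether the autostart block is still present is to scan the shell profile for the launcher. This helper is an illustrative check, not part of the skill:

```python
from pathlib import Path

def autostart_present(profile):
    """Report whether the embedding-service launcher is still in a shell profile."""
    try:
        return "embedding_service.py" in Path(profile).read_text()
    except FileNotFoundError:
        return False
```

If `autostart_present(Path.home() / ".bashrc")` returns True after you have stopped using the skill, remove the appended block by hand so the service no longer launches on login.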