Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

vector-memory

Smart memory search with automatic vector fallback. Uses semantic embeddings when available and falls back to built-in keyword search otherwise. Zero configuration: memory_search works immediately after ClawHub install and improves after an optional sync.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
1 · 3k · 18 current installs · 18 all-time installs
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Overall, the code implements local vector memory plus a fallback keyword search as advertised (smart wrapper, vector implementation, OpenClaw wrapper). The requested capabilities (reading MEMORY.md and memory/ files, writing a local vectors DB, downloading an embedding model and node modules) are consistent with providing local semantic search. One minor inconsistency: the documentation advertises optional env vars MEMORY_DIR and MEMORY_FILE, but vector_memory_local.js and memory.js use hard-coded paths (/config/.openclaw/workspace/...) and never read those variables, so the documented configuration options are not fully implemented.
Instruction Scope
SKILL.md instructs the agent/user to run the provided node scripts (sync, search, status) and optionally an install script. Those instructions map to the included code and reference only workspace memory files and the skill's own files, so the behavior stays within the stated scope (index and search memory). It does, however, index all markdown files under the workspace memory directories and downloads an ~80 MB model on first run — expected for local embedding, but a notable side effect (network and disk usage).
Install Mechanism
There is no formal registry install spec, but an install.sh is included. It clones from a GitHub repository URL containing a placeholder (YOUR_USERNAME); as-is that URL will fail and must be replaced by the installer author. The script runs npm install in vector-memory (pulling @xenova/transformers from npm) and an initial sync that downloads the model. None of this is inherently malicious, but the placeholder repo URL and the network fetches mean you should verify the repo/source before running the installer or any curl | bash command.
Credentials
The skill declares no required environment variables or credentials (and none are required to run). The docs mention optional MEMORY_DIR and MEMORY_FILE env vars, but the code ignores them and uses fixed paths. The references/docs mention a pgvector variant that can use an OpenAI key, but that is optional and only relevant for the pgvector guide; the primary local implementation does not require secrets. The skill will read and write files inside the agent's workspace and create a model cache and vectors_local.json there — appropriate for its purpose but worth noting.
Persistence & Privilege
The skill is not configured as always: true and does not request elevated system privileges. It writes files under the agent workspace (vectors_local.json and a .cache/transformers directory) and will create node_modules when npm install runs — this is expected behavior for a local embedding implementation. It does not modify other skills' configs or system-wide settings.
What to consider before installing
This skill mostly does what it says (local vector search + keyword fallback), but check the following before installing or running it:

  • Verify the source: install.sh contains a placeholder GitHub URL (YOUR_USERNAME). Do not run curl | bash against that script unless you have verified and replaced the URL with a trusted repository. Prefer installing via your platform's vetted mechanism.
  • Model & network downloads: the first sync downloads an ~80 MB model, and npm fetches @xenova/transformers from the registry. Expect network traffic and 80 MB+ of disk usage in the agent workspace.
  • Paths & docs mismatch: the documentation lists optional MEMORY_DIR and MEMORY_FILE env vars, but the implementation uses hard-coded /config/.openclaw/workspace paths (and smart_memory.js checks OPENCLAW_WORKSPACE). If you rely on custom paths, review and adjust the code before running.
  • Workspace access: the skill reads all markdown files under the workspace (MEMORY.md and memory/*.md) and writes vectors_local.json and a .cache folder. Do not install it if your workspace contains sensitive data you do not want indexed or stored in JSON.
  • Shell/templating caution: the skill executes child processes via execSync with constructed command strings. The JS uses JSON.stringify for the query, which reduces injection risk, but if your agent system substitutes {{query}} directly into shell commands (skill.json command templates), ensure the platform escapes inputs properly to avoid command injection.
  • Run in a sandbox first: test in a disposable, isolated workspace (no secrets, small sample memory files) to confirm behavior and model download sources before deploying in a production agent.

If you want to proceed, inspect and replace the install URL with an official repo, confirm the npm dependency source, and consider editing the code to use configurable env vars (and to avoid indexing sensitive files).

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
Download zip
latest: vk979gg8mv3ggqzjn16dqastb2180j84t

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Vector Memory

Smart memory search that automatically selects the best method:

  • Vector search (semantic, high quality) when synced
  • Built-in search (keyword, fast) as fallback

Zero configuration required. Works immediately after install.

Quick Start

Install from ClawHub

npx clawhub install vector-memory

Done! memory_search now works with automatic method selection.

Optional: Sync for Better Results

node vector-memory/smart_memory.js --sync

After sync, searches use neural embeddings for semantic understanding.
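Semantic search of this kind typically embeds each chunk once, then ranks chunks by cosine similarity against the query embedding. A minimal sketch of that ranking primitive (illustrative only; the skill's actual scoring code may differ):

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank pre-embedded chunks against a query embedding, keeping the top K.
function rank(queryVec, chunks, topK = 5) {
  return chunks
    .map((c) => ({ ...c, score: cosine(queryVec, c.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}
```

This is why synced search matches synonyms and related concepts: nearby meanings produce nearby vectors, so they score highly even with no shared keywords.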

How It Works

Smart Selection

// Same call, automatic best method
memory_search("James principles values") 

// If vector ready: finds "autonomy, competence, creation" (semantic match)
// If not ready: uses keyword search (fallback)

Behavior Flow

  1. Check: Is vector index ready?
  2. Yes: Use semantic search (synonyms, concepts)
  3. No: Use built-in search (keywords)
  4. Vector fails: Automatically fall back

Tools

memory_search

Auto-selects best method

Parameters:

  • query (string): Search query
  • max_results (number): Max results (default: 5)

Returns: Matches with path, lines, score, snippet

memory_get

Get full content from file.

memory_sync

Index memory files for vector search. Run after edits.

memory_status

Check which method is active.

Comparison

| Feature  | Built-in | Vector        | Smart Wrapper   |
|----------|----------|---------------|-----------------|
| Synonyms | ❌       | ✅            | ✅ (when ready) |
| Setup    | Built-in | Requires sync | ✅ Zero config  |
| Fallback | N/A      | Manual        | ✅ Automatic    |

Usage

Immediate (no action needed):

node vector-memory/smart_memory.js --search "query"

Better quality (after sync):

# One-time setup
node vector-memory/smart_memory.js --sync

# Now all searches use vector
node vector-memory/smart_memory.js --search "query"

Files

| File                   | Purpose                          |
|------------------------|----------------------------------|
| smart_memory.js        | Main entry (auto-selects method) |
| vector_memory_local.js | Vector implementation            |
| memory.js              | OpenClaw wrapper                 |

Configuration

None required.

Optional environment variables:

export MEMORY_DIR=/path/to/memory
export MEMORY_FILE=/path/to/MEMORY.md
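Note that the security scan above found the shipped code hard-codes its paths and never reads these variables. A sketch of how they could be honored, with fallback defaults assumed from the scan notes (resolveMemoryPaths is a hypothetical helper, not part of the skill):

```javascript
// Resolve memory locations from env vars, falling back to defaults.
// The default paths below are assumptions based on the hard-coded
// /config/.openclaw/workspace paths reported by the scan.
function resolveMemoryPaths(env = process.env) {
  return {
    memoryDir: env.MEMORY_DIR || "/config/.openclaw/workspace/memory",
    memoryFile: env.MEMORY_FILE || "/config/.openclaw/workspace/MEMORY.md",
  };
}
```

Replacing the hard-coded paths with a call like this would make the documented configuration actually take effect.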

Scaling

  • < 1000 chunks: Built-in + JSON (current)
  • > 1000 chunks: Use pgvector (see references/pgvector.md)

References

Files

14 total
