Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

evrmem

v0.1.0

Local Chinese semantic memory search and storage using text2vec embeddings and ChromaDB, supporting RAG-based context augmentation for AI agents.

by ThatsD (@zhzgao)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for zhzgao/evrmem.

Prompt preview (Install & Setup):
Install the skill "evrmem" (zhzgao/evrmem) from ClawHub.
Skill page: https://clawhub.ai/zhzgao/evrmem
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install evrmem

ClawHub CLI


npx clawhub@latest install evrmem
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description (local Chinese semantic memory using text2vec + ChromaDB) matches the instructions: installing a Python package, initializing a local DB, configuring model and data directories, and performing searches/RAG.
Instruction Scope
Runtime instructions tell the agent to run pip install evrmem, run an evrmem init that downloads a ~400MB model, create ~/.evrmem config/data, and optionally set HF_ENDPOINT to a mirror. These steps are within the tool's purpose but permit arbitrary network downloads, writing to the user's home directory, and replacing system Python packages (e.g., forcing a numpy reinstall).
Install Mechanism
There is no formal install spec in the registry; SKILL.md instructs pip installing a third-party package and downloading a large model. Pip installs and model downloads are a moderate supply-chain risk. The suggested mirror domain (https://hf-mirror.com) is not a known official host and could be used to serve malicious or poisoned model binaries if used.
Credentials
The skill does not request secrets or credentials. SKILL.md documents environment variables for configuration (model name, device, data dir, HF_ENDPOINT, disable-network flag). These are reasonable for the function, but HF_ENDPOINT and EVREM_LOCAL_FILES_ONLY materially affect network behavior and trust boundaries.
Persistence & Privilege
The `always` flag is false and autonomous invocation is allowed (normal). The skill will create and persist files under ~/.evrmem and download models to disk. This is expected for a local memory system, but it means the skill will store user data locally and consume significant disk and network resources.
What to consider before installing
This skill appears to do what it says (local Chinese vector memory), but before installing you should:

  1. Inspect the evrmem package source (or its PyPI project) before pip installing.
  2. Install in an isolated virtualenv or container to avoid changing system packages (the instructions may force-reinstall numpy).
  3. Prefer official HuggingFace endpoints; avoid unknown HF mirrors unless you trust them, since mirrors can serve malicious or poisoned models.
  4. Be aware it will download ~400MB of models and write data under ~/.evrmem, which may contain sensitive text you store.
  5. If you need higher assurance, ask the publisher for a homepage or source repo, or request a signed release; without that, the activity is coherent but carries supply-chain and environment-change risks.
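Point 2 above can be sketched as follows. This is a minimal, illustrative virtualenv setup (paths are placeholders); the network-dependent steps are left as comments so you can inspect the package first:

```shell
# Isolate evrmem in a virtualenv so its dependency pins
# (the docs mention a forced "numpy<2" reinstall) cannot
# alter system Python packages.
python3 -m venv "$HOME/.venvs/evrmem"
. "$HOME/.venvs/evrmem/bin/activate"
# Network steps follow (inspect the PyPI package first):
#   pip install evrmem
#   evrmem init   # downloads ~400MB model
echo "venv ready: $VIRTUAL_ENV"
```

Deleting `~/.venvs/evrmem` later removes the install cleanly without touching system Python.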


latest: vk972aph4bwhrecrxq7gtb6tr4183zhaa
84 downloads · 0 stars · 1 version
Updated 4w ago
v0.1.0 · MIT-0

evrmem Skill

Name

evrmem

Description

Local Chinese Vector Memory System. Provides semantic memory search and storage for AI agents using local Chinese embedding models (text2vec) and ChromaDB. Supports RAG-based context augmentation.

When to Use

Use this skill when the user asks to:

  • "Search memories" or "Find related memories"
  • "Save this to memory"
  • "Remember this information"
  • "Search my knowledge base"
  • "Find past notes about X"
  • "Add this to my memory"
  • "What do I know about X"
  • "RAG retrieval" or "context augmentation"
  • Query or recall previous learnings
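The trigger phrases above can be routed to evrmem subcommands with a small dispatcher. This is a hypothetical helper (not part of evrmem), shown only to make the intent-to-command mapping concrete:

```python
def route(utterance: str) -> str:
    """Map a user utterance to an evrmem CLI subcommand."""
    u = utterance.lower()
    # Save-style phrases ("Save this to memory", "Remember this...")
    if any(k in u for k in ("save", "remember", "add this")):
        return "add"
    # Explicit RAG / context-augmentation requests
    if "rag" in u or "context augmentation" in u:
        return "rag"
    # Everything else falls back to semantic search
    return "search"

print(route("Remember this information"))  # -> add
print(route("Find past notes about X"))    # -> search
```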

Prerequisites

Install evrmem and initialize:

pip install evrmem
evrmem init

For China users (mirror):

set HF_ENDPOINT=https://hf-mirror.com   # Windows
# or
export HF_ENDPOINT=https://hf-mirror.com   # Linux/Mac
evrmem init

Core Workflow

1. Semantic Search (Most Common)

from qmd.core.vector_db import vector_db

results = vector_db.search("React form warning", top_k=5)
for r in results:
    print(f"[{r['distance']:.3f}] {r['content'][:80]}")

Or via CLI:

evrmem search "React form warning"
evrmem search "deployment issue" --project myproject

2. Add Memory

memory_id = vector_db.add_memory(
    "React StrictMode causes Form.useForm warning",
    metadata={"project": "mes-demo", "tags": "react,antd"}
)

Or via CLI:

evrmem add "Important finding about X" --project myproject --tags react,bug

3. Structured Query

# Query by project
evrmem query --project mes-demo

# Query by tag
evrmem query --tag react

# List all projects
evrmem query --list-projects

# List all tags
evrmem query --list-tags

4. RAG Retrieval

result = vector_db.rag("how to fix the form warning", top_k=3)
print(result["context"])

Or via CLI:

evrmem rag "how to fix the form warning"
evrmem rag "how to fix the form warning" --prompt

5. Statistics

evrmem stats

Configuration

Create ~/.evrmem/config.yaml:

vector_db:
  persist_directory: "~/.evrmem/data/qmd_memory"

embedding:
  model_name: "shibing624/text2vec-base-chinese"
  device: "cpu"  # or "cuda"
  cache_folder: "~/.evrmem/models"

rag:
  top_k: 5
  min_similarity: 0.5

logging:
  level: "WARNING"
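The same config can be bootstrapped from Python (stdlib only). A minimal sketch, assuming the path conventions above:

```python
import pathlib
import textwrap

# Write the documented default config to ~/.evrmem/config.yaml.
CONFIG = textwrap.dedent("""\
    vector_db:
      persist_directory: "~/.evrmem/data/qmd_memory"

    embedding:
      model_name: "shibing624/text2vec-base-chinese"
      device: "cpu"  # or "cuda"
      cache_folder: "~/.evrmem/models"

    rag:
      top_k: 5
      min_similarity: 0.5

    logging:
      level: "WARNING"
    """)

path = pathlib.Path("~/.evrmem/config.yaml").expanduser()
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(CONFIG, encoding="utf-8")
print(f"wrote {path}")
```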

Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `EVREM_DATA_DIR` | Data directory | `~/.evrmem/data/qmd_memory` |
| `EVREM_MODEL_NAME` | HuggingFace model name | `shibing624/text2vec-base-chinese` |
| `EVREM_LOCAL_MODEL` | Local model path (highest priority) | - |
| `EVREM_DEVICE` | Device for inference | `cpu` |
| `EVREM_TOP_K` | Default retrieval count | `5` |
| `EVREM_MIN_SIM` | Minimum similarity threshold | `0.5` |
| `EVREM_LOG_LEVEL` | Logging level | `WARNING` |
| `EVREM_LOCAL_FILES_ONLY` | Disable network access | `false` |
| `HF_ENDPOINT` | HuggingFace mirror endpoint | - |
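A minimal sketch of setting these variables from Python. Set them before importing evrmem, since configuration like this is typically read at import/startup time (variable names come from the table above):

```python
import os

# Environment values are always strings, even for numbers and booleans.
os.environ["EVREM_DATA_DIR"] = os.path.expanduser("~/.evrmem/data/qmd_memory")
os.environ["EVREM_DEVICE"] = "cpu"
os.environ["EVREM_TOP_K"] = "5"
os.environ["EVREM_LOCAL_FILES_ONLY"] = "true"  # disable network access

print(os.environ["EVREM_DEVICE"])  # -> cpu
```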

Response Format

When reporting search results, use this format:

## evrmem Search Results

**Query:** "user query"
**Results:** N memories found

| Score | Project | Content |
|-------|---------|---------|
| 0.723 | mes-demo | React StrictMode causes Form.useForm warning... |
| 0.681 | docs | Deployment script timeout issue... |

### Top Match
**Project:** mes-demo | **Tags:** react,antd

> React StrictMode causes Form.useForm warning...

When adding memory:

## Memory Saved

**ID:** abc123
**Project:** mes-demo
**Tags:** react
**Content:** React StrictMode causes Form.useForm warning...

Use `evrmem search "React StrictMode"` to retrieve later.

Installation for Agent

If evrmem is not installed:

import subprocess
import sys

# Use the current interpreter's pip to avoid PATH ambiguity
subprocess.run([sys.executable, "-m", "pip", "install", "evrmem"], check=True)
# Initialize on first use (downloads ~400MB model)
subprocess.run(["evrmem", "init"], check=True)

For China users, set mirror before init:

import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
subprocess.run(["evrmem", "init"], check=True)

Edge Cases

  • Model download fails: Set HF_ENDPOINT=https://hf-mirror.com before evrmem init
  • NumPy errors: Run pip install "numpy<2" --force-reinstall
  • Offline/air-gapped: Download model on connected machine, copy ~/.evrmem/models to offline machine, set EVREM_LOCAL_FILES_ONLY=true
  • Empty search results: Try broader terms or check if memories exist with evrmem query --list-projects
  • Similarity too low: Adjust --top-k or lower EVREM_MIN_SIM threshold
  • Slow search: Use CPU by default; set EVREM_DEVICE=cuda if GPU available
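The air-gapped workflow above can be sketched as follows; host names are placeholders, and the copy step must run from the connected machine:

```shell
# On the connected machine: evrmem init populates ~/.evrmem/models,
# then transfer the cache, e.g.:
#   scp -r connected-host:~/.evrmem/models ~/.evrmem/models
mkdir -p "$HOME/.evrmem/models"
export EVREM_LOCAL_FILES_ONLY=true   # forbid any model download
echo "offline mode: $EVREM_LOCAL_FILES_ONLY"
```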
