Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Engram Memory

v2.1.0

Persistent semantic memory for AI agents. Store, search, recall, and forget memories across sessions using Qdrant + FastEmbed.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for escapethefate1991/engrammemory.

Prompt preview: Install & Setup
Install the skill "Engram Memory" (escapethefate1991/engrammemory) from ClawHub.
Skill page: https://clawhub.ai/escapethefate1991/engrammemory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install engrammemory

ClawHub CLI


npx clawhub@latest install engrammemory
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description align with the repository contents: code, scripts, and docs implement a Qdrant + FastEmbed local memory system and an OpenClaw plugin that auto-recalls and auto-captures memories. Minor inconsistency: registry metadata lists this as instruction-only (no install spec) but the package includes many executables, Docker manifests, and server scripts — i.e., it's not purely instruction-only.
Instruction Scope
SKILL.md instructs the agent/operator to run scripts/setup.sh and docker-compose to deploy Qdrant and a FastEmbed service, to add the plugin to openclaw.json, and to enable autoRecall/autoCapture. Those instructions legitimately relate to the memory purpose, but the plugin's lifecycle hooks (before_agent_start/after_agent_response) mean the skill will automatically read conversation content and inject stored memories into agent context (privacy-sensitive behavior). The docs claim context queries are scoped to .context/, but the context tools also try to discover project roots and will read or index files under project directories when initialized.
Install Mechanism
There is no formal install spec in the registry, yet SKILL.md and scripts run a setup that uses docker-compose and will pull Docker images (notably engrammemory/fastembed:1.0.0). Pulling and running third‑party container images from an unverified/unknown publisher is higher risk because those images could run arbitrary code. The repo includes a Dockerfile for fastembed (suggesting you can rebuild), but the default docker-compose references published images.
Credentials
The skill does not request environment variables or cloud credentials in registry metadata. Runtime configuration is local URLs (qdrantUrl, embeddingUrl) and optional model names. This matches the stated local/self‑hosted design. However several scripts and services read config files and may respect environment variables if present (e.g., EMBEDDING_URL / MODEL_NAME) — nothing appears to require unrelated cloud credentials.
Persistence & Privilege
always:false (good). The plugin is designed to integrate into the agent lifecycle and, when enabled, will automatically recall memories before responses and capture conversation content after responses. That autonomous storage behavior is expected for a memory plugin but is privacy‑sensitive; combined with the fact that setup pulls and runs code (containers), it increases the potential blast radius if you haven't audited the code or images.
Scan Findings in Context
[system-prompt-override] unexpected: A prompt-injection pattern was flagged in the SKILL.md pre-scan. The visible SKILL.md explains auto-injection of memories into agent context (which can modify the agent's input), so the detection may be a heuristic match — still worth manual review to ensure the skill does not include instructions to override system prompts or subvert agent controls.
What to consider before installing
  • Do not run setup scripts or docker-compose in a sensitive production environment until you audit them. The provided setup pulls Docker images (engrammemory/fastembed) from an external registry; rebuild images locally from the included Dockerfile if you want to avoid running untrusted images.
  • Review scripts/setup.sh and docker-compose.yml to see exactly what containers and network ports will be created. Prefer running in an isolated VM or disposable machine the first time.
  • Inspect docker/fastembed/Dockerfile and the fastembed service code for any unexpected network calls or telemetry. The README claims "no phone-home", but confirm by grepping the repo for outbound HTTP requests and external domains (e.g., engrammemory.ai) before trusting it.
  • If you are concerned about privacy, disable autoCapture and autoRecall in the plugin config (set autoCapture=false and autoRecall=false), or limit them until you are comfortable with the behavior; check where conversation content is stored (the Qdrant collection) and how retention/forgetting works.
  • Audit mcp/server.py and any server scripts for open network bindings; if you expose an MCP server, bind it to localhost or restrict access via firewall.
  • If you lack the expertise to audit containers and Python/JS code, run the stack in an isolated environment (VM) and observe network egress (e.g., with a network monitor) to ensure nothing phones home.

Taken together: the functionality matches the stated purpose, but because the skill pulls and runs third-party containers and can automatically persist conversation content into a local store, treat it as potentially risky until you have inspected or rebuilt the images and confirmed no unexpected external communications.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97fcxgz58brh947r3v3dy0cg183wscq
166 downloads
0 stars
11 versions
Updated 4w ago
v2.1.0
MIT-0

Engram for OpenClaw Agents

Persistent semantic memory that makes your agent remember across sessions.

What This Provides

Instead of starting fresh every session, your agent will:

  • Remember your preferences, facts, and past decisions
  • Automatically recall relevant context for new conversations
  • Search through stored memories semantically
  • Categorize and organize knowledge by type

Quick Start

# Install the skill
clawhub install engrammemory

# Setup (requires Docker — deploys Qdrant + FastEmbed)
bash scripts/setup.sh

# Store a memory
memory_store "I prefer direct communication style" --category preference

# Search memories
memory_search "communication preferences"

Core Functions

memory_store(text, category, importance)

Save information to long-term memory with semantic embedding.

# Save preferences
memory_store("User prefers TypeScript over JavaScript for new projects", 
            category="preference", importance=0.8)

# Save facts  
memory_store("Database migration completed on 2024-03-15, moved from SQLite to PostgreSQL",
            category="fact", importance=0.7)

# Save decisions
memory_store("Decided to use React Query for state management in the frontend",
            category="decision", importance=0.9)

memory_recall(query, limit, category)

Search stored memories using semantic similarity.

# Find relevant memories
memory_recall("database migration")
memory_recall("frontend preferences", category="preference")
memory_recall("recent decisions", limit=10)

memory_profile(action, key, value, category)

Manage user profile data (static preferences + dynamic context).

# View profile
memory_profile("view")

# Add static preference  
memory_profile("add", "communication_style", "Direct, no fluff", "static")

# Add dynamic context
memory_profile("add", "current_project", "Building memory system", "dynamic")

memory_forget(query, memory_id)

Remove memories by search or specific ID.

memory_forget("old project requirements")
memory_forget(memory_id="uuid-string")  

Memory Categories

  • preference — User preferences, communication style, technical choices
  • fact — Objective information, system states, completed work
  • decision — Important decisions made, rationale, outcomes
  • entity — People, projects, organizations, relationships
  • other — Miscellaneous information that doesn't fit above
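
Category filtering like this presumably maps to a payload filter on the stored category field in Qdrant. A hypothetical sketch of the search request body (Qdrant's standard filter shape; the payload key name `category` and the defaults are assumptions taken from the examples in this README):

```python
def search_body(vector, limit=5, category=None, min_score=0.35):
    """Build a Qdrant vector-search body, optionally filtered by category."""
    body = {
        "vector": vector,
        "limit": limit,
        "score_threshold": min_score,  # maps to minRecallScore
        "with_payload": True,
    }
    if category:
        # Qdrant payload filter: only points whose category matches
        body["filter"] = {"must": [{"key": "category",
                                    "match": {"value": category}}]}
    return body
```

With `category=None` the filter key is omitted entirely, so unfiltered recall searches the whole collection.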

Context System

Engram includes a context management system that gives your agent structured knowledge about any codebase. Initialize a project, and your agent can search and query its architecture, patterns, and APIs.

Context Commands

engram-context — Core Management

# Initialize context for a project
engram-context init /path/to/project --template web-app
engram-context init /path/to/project --template python-api
engram-context init /path/to/project --template generic

# Build search index
engram-context index

# Search context files
engram-context find "authentication patterns"

# Check status
engram-context status

engram-ask — Natural Language Queries

engram-ask "How does authentication work?"
engram-ask "Where are the API endpoints defined?"
engram-ask interactive

engram-semantic — Embedding-Based Search

engram-semantic find "user login process"
engram-semantic index
engram-semantic status

Project Templates

Template     Best for
web-app      Full-stack web apps (React/Vue + Node/Python + DB)
python-api   Python API servers (FastAPI, Django)
generic      Any project type

Context Structure

Each project gets a .context/ directory:

.context/
├── metadata.yaml       # Project configuration
├── architecture.md     # System architecture
├── patterns.md         # Code patterns and standards
├── apis.md             # API documentation
├── development.md      # Development workflows
├── troubleshooting.md  # Common issues and solutions
└── index.db            # Search index (auto-generated)

All context queries are scoped to the current project.

Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   OpenClaw      │    │  FastEmbed      │    │     Qdrant      │
│   Agent         │───▶│  nomic-embed    │───▶│  Vector Store   │
│                 │    │  text-v1.5      │    │  Port 6333      │
└─────────────────┘    └─────────────────┘    └─────────────────┘

Components:

  • Qdrant — Vector database for semantic storage/search
  • FastEmbed — Local embedding model (nomic-embed-text-v1.5)
  • Plugin — OpenClaw integration with auto-recall and capture
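
The store path in the diagram can be sketched end to end with only the standard library: embed the text via the FastEmbed service, then upsert the vector plus payload into Qdrant. The embedding endpoint path and response shape are assumptions (check the actual plugin code); the Qdrant points-upsert body follows Qdrant's standard REST API.

```python
import json
import urllib.request
import uuid

EMBEDDING_URL = "http://localhost:11435"  # FastEmbed service (assumed path below)
QDRANT_URL = "http://localhost:6333"      # Qdrant REST API
COLLECTION = "agent-memory"

def embed_request(text: str) -> dict:
    """Payload for the embedding service (OpenAI-style shape; an assumption)."""
    return {"model": "nomic-ai/nomic-embed-text-v1.5", "input": [text]}

def upsert_request(text, vector, category, importance) -> dict:
    """Qdrant upsert body: one point carrying the memory text as payload."""
    return {"points": [{
        "id": str(uuid.uuid4()),
        "vector": vector,
        "payload": {"text": text, "category": category, "importance": importance},
    }]}

def post_json(url: str, body: dict) -> dict:
    req = urllib.request.Request(url, data=json.dumps(body).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def memory_store(text, category="other", importance=0.5):
    emb = post_json(f"{EMBEDDING_URL}/embeddings", embed_request(text))
    vector = emb["data"][0]["embedding"]  # assumed response shape
    return post_json(f"{QDRANT_URL}/collections/{COLLECTION}/points?wait=true",
                     upsert_request(text, vector, category, importance))
```

Auto-recall runs the same pipeline in reverse: embed the incoming prompt, search the collection, and inject the top hits into context.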

Installation

Prerequisites

  • OpenClaw 2026.3.13+
  • Docker and docker-compose
  • 4GB+ RAM for embedding model
  • 10GB+ storage for vector database

Automated Setup

# 1. Install the skill
clawhub install engrammemory

# 2. Run setup script  
cd ~/.openclaw/workspace/skills/engrammemory
bash scripts/setup.sh

# 3. Follow the configuration prompts
# (The script will generate the exact config to add to openclaw.json)

# 4. Restart OpenClaw gateway
openclaw gateway restart

Manual Setup

If you prefer manual installation or need custom configuration:

  1. Deploy Qdrant + FastEmbed:

    # Copy docker-compose template
    cp config/docker-compose.yml ~/engram-stack/
    cd ~/engram-stack
    docker-compose up -d
    
  2. Configure OpenClaw plugin: Add to ~/.openclaw/openclaw.json:

    {
      "plugins": {
        "allow": ["engram"],
        "slots": {
          "memory": "engram"
        },
        "entries": {
          "engram": {
            "enabled": true,
            "config": {
              "qdrantUrl": "http://localhost:6333",
              "embeddingModel": "nomic-ai/nomic-embed-text-v1.5",
              "collection": "agent-memory", 
              "autoRecall": true,
              "autoCapture": true,
              "maxRecallResults": 5,
              "minRecallScore": 0.35,
              "embeddingUrl": "http://localhost:11435"
            }
          }
        }
      }
    }
    
  3. Restart gateway:

    openclaw gateway restart
    

Configuration Options

Option            Default                          Description
qdrantUrl         http://localhost:6333            Qdrant vector database URL
embeddingUrl      http://localhost:11435           FastEmbed API endpoint
embeddingModel    nomic-ai/nomic-embed-text-v1.5   Embedding model
collection        agent-memory                     Memory collection name
autoRecall        true                             Auto-inject relevant memories
autoCapture       true                             Auto-save important context
maxRecallResults  5                                Max memories per auto-recall
minRecallScore    0.35                             Minimum similarity threshold
profileFrequency  20                               Update profile every N messages
debug             false                            Enable debug logging

Example Configuration

{
  "qdrantUrl": "http://localhost:6333",
  "embeddingUrl": "http://localhost:11435",
  "collection": "agent-memory",
  "autoRecall": true,
  "autoCapture": true
}

Multi-Agent Setup

For multiple agents sharing memory:

{
  "agents": {
    "list": [
      {
        "id": "main",
        "plugins": {
          "memory": {
            "collection": "main-agent-memory"
          }
        }
      },
      {
        "id": "coding-assistant", 
        "plugins": {
          "memory": {
            "collection": "coding-agent-memory"
          }
        }
      }
    ]
  }
}

Usage Examples

Research Assistant

# Save research findings
memory_store("Found that React 18 concurrent rendering improves performance by 15-30% for large lists", 
            category="fact", importance=0.8)

# Later, when discussing performance:
memories = memory_recall("React performance improvements")
# Auto-recalls the research finding

Project Manager

# Save project decisions
memory_store("Team decided to use PostgreSQL over MongoDB for better ACID compliance in financial app",
            category="decision", importance=0.9)

# Track preferences
memory_profile("add", "deployment_preference", "Docker with Kubernetes", "static")

# Later project planning auto-recalls relevant decisions and preferences

Customer Support

# Remember customer preferences
memory_store("Customer prefers email over phone calls for non-urgent issues",
            category="preference", importance=0.7)

# Track issue patterns
memory_store("Billing module API timeout issue affects 15% of enterprise customers",
            category="fact", importance=0.8)

Backup and Migration

Export Memory

# Export all memories as JSON
curl "http://localhost:6333/collections/agent-memory/points/scroll" \
  -H "Content-Type: application/json" \
  -d '{"limit": 10000}' > memory_backup.json
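
The single curl call above caps out at 10000 points. Qdrant's scroll API pages through larger collections via next_page_offset; a minimal export loop (this follows Qdrant's documented REST shape, but verify against your Qdrant version):

```python
import json
import urllib.request

def scroll_body(limit: int, offset=None) -> dict:
    """Request body for one scroll page."""
    body = {"limit": limit, "with_payload": True, "with_vector": True}
    if offset is not None:
        body["offset"] = offset  # resume token from the previous page
    return body

def scroll_all(base="http://localhost:6333", collection="agent-memory", page=1000):
    """Collect every point in the collection by following next_page_offset."""
    points, offset = [], None
    while True:
        req = urllib.request.Request(
            f"{base}/collections/{collection}/points/scroll",
            data=json.dumps(scroll_body(page, offset)).encode(),
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            result = json.load(resp)["result"]
        points.extend(result["points"])
        offset = result.get("next_page_offset")
        if offset is None:  # no more pages
            return points
```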

Import Memory

# Import memories from backup
# Re-import using memory_store
python scripts/memory_store.py --json '{"text": "...", "category": "fact"}'
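
If you prefer to restore without the helper script, a hypothetical sketch that walks the exported JSON and re-stores each point. The backup layout (points wrapped under result.points, text/category/importance in the payload) is an assumption based on the curl export above; `store` stands in for whatever store function you wire it to:

```python
import json

def load_backup(path="memory_backup.json"):
    """Points list from the curl export, wrapped under result.points."""
    with open(path) as f:
        data = json.load(f)
    return data.get("result", {}).get("points", [])

def restore(points, store):
    """Re-store each point via store(text, category, importance)."""
    count = 0
    for p in points:
        payload = p.get("payload", {})
        text = payload.get("text")
        if not text:
            continue  # skip points without recoverable text
        store(text, payload.get("category", "other"),
              payload.get("importance", 0.5))
        count += 1
    return count
```

Re-storing (rather than raw-copying vectors) also re-embeds each memory, which matters if the backup came from a different embedding model.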

Performance Tuning

Vector Quantization (New Feature)

Engram now includes automatic scalar quantization for 4x memory reduction with no recall loss:

  • Memory Usage: Reduces vector storage by ~75% (32-bit → 8-bit per dimension)
  • Search Speed: Unchanged (quantized vectors stay in RAM for fast search)
  • Quality: No degradation (a 0.99 quantile cutoff clips outliers while preserving accuracy)
  • Automatic: Enabled by default in new installations
# Quantization is automatically applied when you run:
bash scripts/setup.sh

Technical Details:

  • Uses int8 scalar quantization with a 0.99 quantile cutoff
  • Compresses 768-dimension vectors from ~3KB to ~768 bytes each
  • Enables storing 4x more memories in the same RAM
  • Fully compatible with existing memory collections
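
The arithmetic behind those numbers is easy to check: int8 stores one byte per dimension instead of a 4-byte float32.

```python
# Storage per 768-dimension vector, before and after int8 quantization.
DIM = 768
float32_bytes = DIM * 4   # 3072 bytes, i.e. the ~3KB cited above
int8_bytes = DIM * 1      # 768 bytes

assert float32_bytes // int8_bytes == 4        # 4x more vectors in the same RAM
assert 1 - int8_bytes / float32_bytes == 0.75  # 75% storage reduction
```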

For Large Memory Sets (10K+ memories)

{
  "config": {
    "maxRecallResults": 3,
    "minRecallScore": 0.45,
    "autoCapture": false
  }
}

For High-Frequency Agents

{
  "config": {
    "profileFrequency": 50,
    "autoRecall": false
  }
}

Troubleshooting

Common Issues

Memory not persisting:

  • Check Qdrant is running: curl http://localhost:6333/collections
  • Verify plugin config in openclaw status

Poor recall quality:

  • Lower minRecallScore (try 0.25)
  • Check embedding model is loaded: curl http://localhost:11435/models
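
Both checks can be run at once with a small standard-library probe. The URLs are the ones cited in the bullets above; the `/models` route on FastEmbed is taken from this README, so verify it exists in your deployment:

```python
import urllib.request
import urllib.error

def reachable(url: str, timeout: float = 2.0) -> bool:
    """True if the endpoint answers at all (any HTTP status counts as up)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server answered, just not with 200
    except (urllib.error.URLError, OSError):
        return False  # connection refused / DNS failure / timeout

for name, url in [("Qdrant", "http://localhost:6333/collections"),
                  ("FastEmbed", "http://localhost:11435/models")]:
    print(f"{name}: {'up' if reachable(url) else 'DOWN'}")
```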

High memory usage:

  • Increase Docker memory limits
  • Reduce maxRecallResults
  • Enable auto-cleanup in config

Debug Mode

{
  "config": {
    "debug": true
  }
}

Enables detailed logging for memory operations.

Advanced Usage

Custom Categories

memory_store("Customer uses advanced React patterns", category="customer_tech_profile")
memory_recall("customer tech preferences", category="customer_tech_profile")

Importance-Based Filtering

# Only recall highly important memories  
memory_recall("project decisions", min_importance=0.8)

Time-Based Queries

# Recent memories (last 30 days)
memory_recall("recent changes", days_back=30)

Security Considerations

  • Local-first: All data stays on your infrastructure
  • No external APIs: Embeddings generated locally
  • Encryption: Use encrypted storage for sensitive data
  • Access control: Configure Qdrant authentication if needed

Contributing

Found a bug or want to add features?

  • GitHub: engram-memory-community
  • Issues: Report bugs and feature requests
  • Docs: Help improve documentation
  • Examples: Share usage patterns

License

MIT License - Use freely in personal and commercial projects.


Transform your agent from stateless to stateful. Install Engram today.

OpenClaw Integration

🟢 Fully Integrated - Engram is now available as native OpenClaw tools.

Available Tools

  • memory_search - Search memories using semantic similarity
  • memory_store - Store text with embeddings in long-term memory

How to Use

Once Engram is set up (FastEmbed service + Qdrant running), these tools are automatically available in OpenClaw sessions:

// Search stored memories
memory_search("FastEmbed integration status", 10, 0.3)

// Store new memories
memory_store("User prefers detailed explanations", "preference", 0.8)

Implementation

  • Plugin Type: Native Python plugin
  • Backend: FastEmbed (localhost:8000) + Qdrant (localhost:6333)
  • No MCP Server Required: Direct integration through OpenClaw's plugin system

See OPENCLAW_INTEGRATION.md for complete technical details.
