OpenClaw Memory Enhancer

Edge-optimized RAG memory system for OpenClaw with semantic search. Automatically loads memory files, provides intelligent recall, and enhances conversations...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
2 current installs · 2 all-time installs
by henry@henryfcb
Security Scan
VirusTotal: Benign (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The described capabilities (local semantic memory, auto-load from a memory/ folder, edge and standard versions) are consistent with a skill that requires only python3. However, the skill bundle contains no code files: SKILL.md references scripts (memory_enhancer_edge.py, memory_enhancer.py) and Python modules that are not included, so the instructions assume you will fetch the code from the GitHub homepage or ClawHub. This is plausible but worth noting.
Instruction Scope
The instructions direct the agent/user to load and automatically read all files from a 'memory/' directory and to store data under ~/.openclaw/workspace/knowledge-base/. That behavior is expected for a memory tool, but it can expose arbitrary local files if the directory is misconfigured or symlinked. The SKILL.md also asserts 'No network requests / No data leaves your device' while simultaneously documenting git clone and pip installs and an optional 'standard' mode that downloads models—this is a contradictory privacy claim that should be clarified.
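One low-effort mitigation for the directory-exposure risk described above is to audit the memory directory for symlinks and escaping paths before letting any auto-loading tool ingest it. The sketch below is a generic, hypothetical helper (audit_memory_dir is not part of the skill):

```python
from pathlib import Path

def audit_memory_dir(memory_dir: str) -> list[str]:
    """Flag entries that could leak files from outside the memory directory."""
    root = Path(memory_dir).resolve()
    warnings = []
    for path in Path(memory_dir).rglob("*"):
        if path.is_symlink():
            # A symlink inside memory/ can point anywhere on disk
            warnings.append(f"symlink: {path} -> {path.resolve()}")
        elif path.is_file() and root not in path.resolve().parents:
            # A file whose resolved location escapes the directory
            warnings.append(f"outside directory: {path}")
    return warnings
```

Running this before each load, and treating a non-empty result as a reason to stop, keeps a misconfigured or symlinked directory from silently feeding arbitrary local files into the memory store.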
Install Mechanism
There is no formal install spec in the bundle. The SKILL.md recommends git clone from a GitHub repo and (optionally) pip install for dependencies. Those are standard and traceable install mechanisms, but they will fetch code and packages from the network; verify the GitHub repository contents and maintainers before cloning or running scripts.
Credentials
The skill requests only python3 and declares no environment variables or credentials, which is proportionate to its stated purpose.
Persistence & Privilege
The skill does not request always: true and does not declare any elevated platform privileges. It is user-invocable and can be invoked autonomously by default, which is the normal platform behavior.
What to consider before installing
This skill is instruction-only and does not include the actual Python files it references; installing it will require cloning the GitHub repo or running pip, which downloads code and possibly models. Before installing or running anything:

  1. Inspect the GitHub repository (https://github.com/henryfcb/openclaw-memory-enhancer): review the code, README, and license.
  2. Confirm which directories the tool will read (memory/ and ~/.openclaw/workspace/knowledge-base/) and ensure it won't be pointed at sensitive system or personal files.
  3. Be aware that the 'edge' path claims zero network activity, but the documented install steps and the 'standard' mode do require network access; expect downloads if you follow those steps.
  4. If you need strong privacy, prefer the edge workflow only after reviewing the edge script to confirm it truly performs no external network calls.
  5. If you are not comfortable auditing the repo, avoid cloning or running the provided scripts, or run them in an isolated environment (VM/container) first.

Like a lobster shell, security has layers — review code before you run it.

Current version: v0.1.0
latest: vk979rwewbwsspz6ydkefs7dqxn81mhx9

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🧠 Clawdis
Bins: python3

SKILL.md

🧠 OpenClaw Memory Enhancer

Give OpenClaw long-term memory - remember important information across sessions and automatically recall relevant context for conversations.

Core Capabilities

| Capability | Description |
| --- | --- |
| 🔍 Semantic Search | Vector similarity search that understands intent, not just keywords |
| 📂 Auto Load | Automatically reads all files from the memory/ directory |
| 💡 Smart Recall | Finds relevant historical memories during conversations |
| 🔗 Memory Graph | Builds connections between related memories |
| 💾 Local Storage | 100% local, no cloud, complete privacy |
| 🚀 Edge Optimized | <10MB memory, runs on Jetson/Raspberry Pi |

Quick Reference

| Task | Command (Edge Version) | Command (Standard Version) |
| --- | --- | --- |
| Load memories | python3 memory_enhancer_edge.py --load | python3 memory_enhancer.py --load |
| Search | --search "query" | --search "query" |
| Add memory | --add "content" | --add "content" |
| Export | --export | --export |
| Stats | --stats | --stats |

When to Use

Use this skill when:

  • You want OpenClaw to remember things across sessions
  • You need to build a knowledge base from chat history
  • You're working on long-term projects that need context
  • You want automatic FAQ generation from conversations
  • You're running on edge devices with limited memory

Don't use when:

  • Simple note-taking apps are sufficient
  • You don't need cross-session memory
  • You have plenty of memory and want maximum accuracy (use the standard version instead)

Versions

Edge Version ⭐ Recommended

Best for: Jetson, Raspberry Pi, embedded devices

python3 memory_enhancer_edge.py --load

Features:

  • Zero dependencies (Python stdlib only)
  • Memory usage < 10MB
  • Lightweight keyword + vector matching
  • Perfect for resource-constrained devices

Standard Version

Best for: Desktop/server, maximum accuracy

pip install sentence-transformers numpy
python3 memory_enhancer.py --load

Features:

  • Uses sentence-transformers for high-quality embeddings
  • Better semantic understanding
  • Memory usage 50-100MB
  • Requires model download (~50MB)

Installation

Via ClawHub (Recommended)

clawhub install openclaw-memory-enhancer

Via Git

git clone https://github.com/henryfcb/openclaw-memory-enhancer.git \
  ~/.openclaw/skills/openclaw-memory-enhancer

Usage Examples

Command Line

# Load existing OpenClaw memories
cd ~/.openclaw/skills/openclaw-memory-enhancer
python3 memory_enhancer_edge.py --load

# Search for memories
python3 memory_enhancer_edge.py --search "voice-call plugin setup"

# Add a new memory
python3 memory_enhancer_edge.py --add "User prefers dark mode"

# Show statistics
python3 memory_enhancer_edge.py --stats

# Export to Markdown
python3 memory_enhancer_edge.py --export

Python API

from memory_enhancer_edge import MemoryEnhancerEdge

# Initialize
memory = MemoryEnhancerEdge()

# Load existing memories
memory.load_openclaw_memory()

# Search for relevant memories
results = memory.search_memory("AI trends report", top_k=3)
for r in results:
    print(f"[{r['similarity']:.2f}] {r['content'][:100]}...")

# Recall context for a conversation
context = memory.recall_for_prompt("Help me check billing")
# Returns formatted memory context

# Add new memory
memory.add_memory(
    content="User prefers direct results",
    source="chat",
    memory_type="preference"
)

OpenClaw Integration

# In your OpenClaw agent
from skills.openclaw_memory_enhancer.memory_enhancer_edge import MemoryEnhancerEdge

class EnhancedAgent:
    def __init__(self):
        self.memory = MemoryEnhancerEdge()
        self.memory.load_openclaw_memory()
    
    def process(self, user_input: str) -> str:
        # 1. Recall relevant memories
        memory_context = self.memory.recall_for_prompt(user_input)
        
        # 2. Enhance prompt with context
        enhanced_prompt = f"""
{memory_context}

User: {user_input}
"""
        
        # 3. Call LLM with enhanced context
        response = call_llm(enhanced_prompt)
        
        return response

Memory Types

| Type | Description | Example |
| --- | --- | --- |
| daily_log | Daily memory files | memory/2026-02-22.md |
| capability | Capability records | Skills, tools |
| core_memory | Core conventions | Important rules |
| qa | Question & answer | Q: How to... A: You should... |
| instruction | Direct instructions | "Remember: always do X" |
| solution | Technical solutions | Step-by-step guides |
| preference | User preferences | "User likes dark mode" |

How It Works

Memory Encoding (Edge Version)

  1. Keyword Extraction: Extract important words from text
  2. Hash Vector: Map keywords to vector positions
  3. Normalization: L2 normalize the vector
  4. Storage: Save to local JSON file
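The four encoding steps above can be sketched as follows. This is an illustrative reimplementation, not the actual memory_enhancer_edge.py code: the md5-based bucketing and the regex tokenizer are assumptions, chosen to match the stated design (128-dim hash vectors, stdlib only).

```python
import hashlib
import math
import re

def encode_text(text: str, dim: int = 128, min_keyword_len: int = 2) -> list[float]:
    """Hash-based encoding: keywords -> fixed-size vector -> L2 normalization."""
    # 1. Keyword extraction: lowercase word tokens above a minimum length
    keywords = [w for w in re.findall(r"\w+", text.lower()) if len(w) >= min_keyword_len]

    # 2. Hash vector: each keyword increments one of `dim` buckets
    vec = [0.0] * dim
    for kw in keywords:
        idx = int(hashlib.md5(kw.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0

    # 3. L2 normalization, so cosine similarity later reduces to a dot product
    norm = math.sqrt(sum(v * v for v in vec))
    if norm > 0:
        vec = [v / norm for v in vec]
    return vec
```

Step 4 (storage) is then just serializing the vectors alongside their source text, e.g. json.dump into the local knowledge-base file.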

Memory Retrieval

  1. Query Encoding: Convert query to same vector format
  2. Keyword Pre-filter: Fast filter by common keywords
  3. Similarity Calculation: Cosine similarity between vectors
  4. Ranking: Return top-k most similar memories
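The retrieval pipeline can be sketched the same way. Again this is an assumption-laden illustration rather than the skill's actual code; the memory record layout ("vector", "keywords" fields) and function signature are invented for the example.

```python
def cosine(a: list[float], b: list[float]) -> float:
    # With L2-normalized vectors, cosine similarity is just the dot product.
    return sum(x * y for x, y in zip(a, b))

def search_memory(query_vec, query_keywords, memories, top_k=5, threshold=0.3):
    """Illustrative retrieval: keyword pre-filter, then cosine ranking."""
    results = []
    for mem in memories:
        # 2. Fast pre-filter: skip memories sharing no keywords with the query
        if query_keywords and not (set(query_keywords) & set(mem["keywords"])):
            continue
        # 3. Cosine similarity between normalized vectors
        sim = cosine(query_vec, mem["vector"])
        if sim >= threshold:
            results.append({"similarity": sim, **mem})
    # 4. Rank and return the top-k most similar memories
    results.sort(key=lambda r: r["similarity"], reverse=True)
    return results[:top_k]
```

The pre-filter is what keeps query latency low on edge devices: full cosine scoring only runs on the small subset of memories that share at least one keyword with the query.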

Privacy Protection

  • All data stored locally in ~/.openclaw/workspace/knowledge-base/
  • No network requests
  • No external API calls
  • No data leaves your device

Technical Specifications

Edge Version

Vector Dimensions: 128
Memory Usage: < 10MB
Dependencies: None (Python stdlib)
Storage Format: JSON
Max Memories: 1000 (configurable)
Query Latency: < 100ms

Standard Version

Vector Dimensions: 384
Memory Usage: 50-100MB
Dependencies: sentence-transformers, numpy
Storage Format: NumPy + JSON
Model Size: ~50MB download
Query Latency: < 50ms

Configuration

Edit these parameters in the code:

self.config = {
    "vector_dim": 128,        # Vector dimensions
    "max_memory_size": 1000,  # Max number of memories
    "chunk_size": 500,        # Content chunk size
    "min_keyword_len": 2,     # Minimum keyword length
}

Troubleshooting

No results found

# Lower the threshold
results = memory.search_memory(query, threshold=0.2)  # Default 0.3

# Increase top_k
results = memory.search_memory(query, top_k=10)  # Default 5

Memory limit reached

The system automatically removes the oldest memories when the limit is reached.

To increase limit:

self.config["max_memory_size"] = 5000  # Increase from 1000

Slow performance

  • Use Edge version instead of Standard
  • Reduce max_memory_size
  • Use keyword pre-filtering (automatic)

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a Pull Request

License

MIT License - See LICENSE file for details.

Acknowledgments

  • Built for the OpenClaw ecosystem
  • Optimized for edge computing devices
  • Inspired by long-term memory systems in AI

Not an official OpenClaw or Moonshot AI product.

Users must provide their own OpenClaw workspace and API keys.

Files: 1 total