Memory Augment

v1.0.0

Long-term memory system for OpenClaw agents. Store, retrieve, and query conversation history and learned information across sessions.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install indigas/claw-memory-augment.

Prompt preview: Install & Setup
Install the skill "Memory Augment" (indigas/claw-memory-augment) from ClawHub.
Skill page: https://clawhub.ai/indigas/claw-memory-augment
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install claw-memory-augment

ClawHub CLI

Package manager switcher

npx clawhub@latest install claw-memory-augment

Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description, SKILL.md, README and the provided Python code all implement a local memory store (store/search/list/delete/export/import/summarize) backed by a YAML file under ~/.memory-augment. There are no unrelated env vars, binaries, or external endpoints requested — the requirements match the stated purpose.
Instruction Scope
SKILL.md instructs the agent to auto-inject relevant memories before turns and shows commands that read/write local storage (~/.memory-augment). That matches the feature set, but auto-injection means stored memories are added to agent context automatically — a privacy concern if users store sensitive data despite the warning in the docs.
Install Mechanism
There is no install spec in the skill bundle (instruction-only install suggested via 'npx clawhub install'), and the code files are included. No network downloads, package installs, or third-party URLs are pulled by the skill itself — low install risk.
Credentials
The skill declares no required environment variables or credentials. It reads/writes files under the user's home directory (~/.memory-augment) which is expected for a local memory store; no unrelated credentials or config paths are requested.
Persistence & Privilege
always:false and user-invocable:true are appropriate. The skill enables auto-injection (auto_inject enabled by default in config), meaning the agent may include stored memories in prompts autonomously — this increases blast radius for accidental data leakage but is consistent with the feature. No evidence the skill modifies other skills or system-wide settings.
Assessment
This skill appears to do what it says: a local, file-backed memory store with semantic-style search. Important things to consider before installing:

  • Privacy: memories are stored as plain YAML under ~/.memory-augment/storage.yaml (unencrypted). Do not store passwords, API keys, PII, or other secrets. The README warns about this, but the storage is not encrypted by default.
  • Auto-injection: the skill can automatically inject memories into the agent's context before turns. That can expose stored content to any prompt/model the agent uses. Disable auto_inject or tighten triggers if you want stricter control.
  • Local files: review or back up the ~/.memory-augment storage and config; file permissions matter (restrict to your user). Consider adding encryption-at-rest or encrypted export/import if you need confidentiality.
  • Audit/config: verify config.yaml defaults and adjust default_expiry, max_memories, and auto_inject settings to match your privacy needs.
  • Code review: the included Python code is self-contained and does not call network endpoints, but review scripts/memory.py to confirm behavior and edge cases (date parsing, expiry handling) before use.

If you need persistence with encryption or cloud sync, this skill does not provide that yet and would require modifications. Otherwise the package is internally consistent and proportionate to its stated purpose.

Like a lobster shell, security has layers — review code before you run it.

Latest version: vk97bd56bj2s207x23yzp7bqym1850ke9
69 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0
MIT-0

Memory Augment Skill

Provide long-term memory for OpenClaw agents. Store conversation history, learned facts, preferences, and context that persists across sessions.

Quick Start

# Install via clawhub
npx clawhub install claw-memory-augment

# Trigger
"Remember that I prefer Python for automation scripts"
"Find all notes about my workspace setup"

Core Features

1. Long-Term Storage

Store any information that should persist:

  • User preferences: Coding style, workspace config, tool choices
  • Learned facts: Project details, technical decisions, patterns
  • Conversation history: Context from past sessions, decisions made
  • Task tracking: Todo items, progress, completed work

2. Semantic Search

Find stored information using natural language:

clawhub memory search "what did I decide about the inbox triage skill?"

3. Automatic Context Injection

Before each turn, automatically inject relevant memories:

{
  "context": {
    "recent_memories": [
      {"topic": "income", "content": "User approved inbox-triage for publishing"},
      {"topic": "workspace", "content": "OpenClaw running on marekserver"}
    ],
    "preferences": {
      "model": "local/qwen3.5-35B-A3B",
      "compute_tracked": true
    }
  }
}
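The injection step can be sketched as budget-capped packing of the top-scoring memories. This is an illustrative sketch, not the skill's published implementation: the `inject_memories` helper and the rough four-characters-per-token estimate are assumptions, with `max_tokens` mirroring the config default.

```python
# Hypothetical sketch of budget-capped context injection. The 4-chars-per-
# token estimate and the helper name are assumptions, not the skill's API.
def inject_memories(memories, max_tokens=5000):
    """Pack the highest-scoring memories into a context block
    without exceeding the configured token budget."""
    context, used = [], 0
    for mem in sorted(memories, key=lambda m: m["score"], reverse=True):
        cost = len(mem["content"]) // 4 + 1  # rough token estimate
        if used + cost > max_tokens:
            break  # budget exhausted; drop lower-scoring memories
        context.append({"topic": mem.get("topic", ""), "content": mem["content"]})
        used += cost
    return {"context": {"recent_memories": context}}

payload = inject_memories([
    {"topic": "income", "content": "User approved inbox-triage for publishing", "score": 0.95},
    {"topic": "workspace", "content": "OpenClaw running on marekserver", "score": 0.80},
])
```

Memories are considered in descending score order, so lower-value entries are the first to be dropped when the budget runs out.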

4. Memory Expiry & Archiving

  • Temporary memories: Auto-expire after 7 days (session notes)
  • Permanent memories: Never expire (user preferences, core facts)
  • Archival: Compress old memories to reduce token usage
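The expiry rule above can be sketched as a timestamp comparison (an assumed implementation; the `is_expired` helper is illustrative, using the ISO-8601 `expires` field from the storage schema):

```python
# Assumed expiry check: expires=None means permanent; otherwise the memory
# is dropped once the current time passes its ISO-8601 expiry timestamp.
from datetime import datetime, timezone

def is_expired(memory, now=None):
    if memory.get("expires") is None:
        return False  # permanent memory never expires
    now = now or datetime.now(timezone.utc)
    expires = datetime.fromisoformat(memory["expires"].replace("Z", "+00:00"))
    return now >= expires

session_note = {"content": "temp note", "expires": "2026-04-22T20:37:00Z"}
preference = {"content": "prefers Python", "expires": None}
check_time = datetime(2026, 4, 23, tzinfo=timezone.utc)
```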

When to Use This Skill

  ✅ Need to remember user preferences across sessions
  ✅ Track conversation context over time
  ✅ Store learnings and decisions for future reference
  ✅ Query past information semantically
  ✅ Maintain agent personality and behavior consistency

  ❌ Not for storing sensitive data (passwords, API keys)
  ❌ Not for real-time data (current weather, live prices)
  ❌ Not for replacing database storage (structured data)

How It Works

Storage Layer

# ~/.memory-augment/storage.yaml
memories:
  - id: uuid-123
    content: "User prefers Python for automation"
    type: preference
    tags: ["coding", "python", "automation"]
    created: "2026-04-15T10:00:00Z"
    expires: null  # permanent
    score: 0.85   # confidence/relevance

  - id: uuid-124
    content: "Approved inbox-triage skill for publishing"
    type: decision
    tags: ["income", "skills", "approval"]
    created: "2026-04-15T20:37:00Z"
    expires: "2026-04-22T20:37:00Z"  # 7 days
    score: 0.95
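A minimal sketch of this storage layer, using the JSON format option so it runs with the standard library alone (the shipped skill defaults to YAML; `load_memories` and `store_memory` are illustrative names, and the demo writes to a throwaway file rather than the real ~/.memory-augment store):

```python
# Illustrative storage-layer sketch using the "format: json" option.
import json
import os
import tempfile
import uuid
from datetime import datetime, timezone

DEFAULT_STORE = os.path.expanduser("~/.memory-augment/storage.json")

def load_memories(path=DEFAULT_STORE):
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return json.load(f).get("memories", [])

def store_memory(content, mem_type="context", tags=(), expires=None, path=DEFAULT_STORE):
    memories = load_memories(path)
    memories.append({
        "id": str(uuid.uuid4()),
        "content": content,
        "type": mem_type,
        "tags": list(tags),
        "created": datetime.now(timezone.utc).isoformat(),
        "expires": expires,  # None means permanent
        "score": 0.85,
    })
    with open(path, "w") as f:
        json.dump({"memories": memories}, f, indent=2)
    return memories[-1]

# Demo against a throwaway file so the real store is untouched:
demo_path = os.path.join(tempfile.mkdtemp(), "storage.json")
saved = store_memory("User prefers Python for automation",
                     mem_type="preference", tags=["coding"], path=demo_path)
```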

Retrieval System

Uses hybrid search (keyword + semantic):

  1. Parse query for keywords
  2. Calculate relevance scores
  3. Return top-K relevant memories
  4. Inject into agent context
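The four steps above can be sketched with plain keyword overlap. This shows only the keyword half of the hybrid search; the semantic component and the skill's actual scoring are not published, so `search` here is an assumption, with `top_k` and `min_score` mirroring the config defaults.

```python
# Keyword-only retrieval sketch (the semantic half is omitted; the real
# skill's scoring is not published).
def search(memories, query, top_k=20, min_score=0.3):
    words = set(query.lower().split())                      # step 1: parse query
    results = []
    for mem in memories:
        text = set(mem["content"].lower().split()) | set(mem.get("tags", []))
        overlap = len(words & text) / max(len(words), 1)    # step 2: relevance
        if overlap >= min_score:
            results.append({**mem, "score": round(overlap, 2)})
    results.sort(key=lambda m: m["score"], reverse=True)
    return results[:top_k]                                  # step 3: top-K

demo_memories = [
    {"content": "Published inbox-triage skill", "tags": ["income", "skills"]},
    {"content": "User is in UTC timezone", "tags": []},
]
hits = search(demo_memories, "income skill decisions")
```

Step 4 (injection into agent context) is handled by the auto-inject layer described earlier.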

Scoring Algorithm

Memories are scored based on:

  • Recency: Newer = higher score
  • Tags match: Query tags vs memory tags
  • Type relevance: Preferences > decisions > context
  • Correction boost: memories the user has confirmed as relevant receive a score boost
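One way these signals could combine into a single score (an illustrative formula, not the skill's published algorithm; the type weights are assumptions, and the 0.95 daily decay mirrors `score_decay` in config.yaml):

```python
# Illustrative scoring sketch: recency decay x type weight + tag match + boost.
# Weights are assumptions; only the 0.95 decay comes from config.yaml.
TYPE_WEIGHT = {"preference": 1.0, "decision": 0.8, "context": 0.6, "learning": 0.6}

def score_memory(memory, query_tags, age_days, decay=0.95, boost=0.0):
    recency = decay ** age_days                                 # newer = higher
    tags = set(memory.get("tags", []))
    tag_match = len(tags & set(query_tags)) / max(len(query_tags), 1)
    type_weight = TYPE_WEIGHT.get(memory.get("type"), 0.5)
    return round(recency * type_weight + tag_match + boost, 3)

decision = {"tags": ["income", "skills"], "type": "decision"}
fresh = score_memory(decision, ["income"], age_days=0)
week_old = score_memory(decision, ["income"], age_days=7)
```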

Configuration

# ~/.memory-augment/config.yaml
storage:
  path: ~/.memory-augment/storage.yaml
  format: yaml  # or json

settings:
  max_memories: 1000
  default_expiry: 7  # days
  score_decay: 0.95  # daily decay factor
  
search:
  top_k: 20
  min_score: 0.3
  include_tags: true

auto_inject:
  enabled: true
  max_tokens: 5000
  inject_before: ["each_turn", "weekly_summary"]
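A sketch of how these defaults might be applied in code. The nested keys mirror config.yaml above; the `load_config` merge helper is illustrative, shown here disabling auto-injection for stricter privacy as the scan report suggests.

```python
# Illustrative config loading: merge user overrides (e.g. parsed from
# config.yaml) over built-in defaults. Helper name is an assumption.
DEFAULTS = {
    "settings": {"max_memories": 1000, "default_expiry": 7, "score_decay": 0.95},
    "search": {"top_k": 20, "min_score": 0.3, "include_tags": True},
    "auto_inject": {"enabled": True, "max_tokens": 5000},
}

def load_config(overrides=None):
    config = {section: dict(values) for section, values in DEFAULTS.items()}
    for section, values in (overrides or {}).items():
        config.setdefault(section, {}).update(values)
    return config

# Disable auto-injection while keeping every other default:
cfg = load_config({"auto_inject": {"enabled": False}})
```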

Memory Types

Preference

User preferences, coding style, tool choices.

type: preference
tags: ["coding", "style"]
content: "Prefers concise code over comments"

Decision

Decisions made, approvals, blocking choices.

type: decision
tags: ["income", "skills"]
content: "Published inbox-triage to clawhub"

Context

Session context, project state, ongoing work.

type: context
tags: ["project", "setup"]
content: "Building memory-augment skill, 60% complete"

Learning

What the agent learned, patterns discovered, corrections.

type: learning
tags: ["pattern", "optimization"]
content: "Sub-agent spawning reduces context by 30%"

Commands

Store Memory

clawhub memory store "Remember my workspace is at /home/marek/.openclaw/workspace"
clawhub memory store "User prefers minimal markdown formatting" --tag preferences

Search Memories

clawhub memory search "what did I decide about income?"
clawhub memory search "all memories about skills" --tag skills

List Memories

clawhub memory list --type decision
clawhub memory list --since "2026-04-14"

Delete Memory

clawhub memory delete <uuid>
clawhub memory delete --tag "temporary" --older-than "7d"

Export/Import

clawhub memory export > memories.json
clawhub memory import < memories.json

Output Format

JSON

{
  "query": "income decisions",
  "results": [
    {
      "id": "uuid-123",
      "content": "Published inbox-triage skill",
      "score": 0.92,
      "tags": ["income", "skills"]
    }
  ],
  "total": 5,
  "took_ms": 45
}

Markdown

## Found 5 memories for "income decisions"

### 🎯 **Published inbox-triage skill** (score: 0.92)
**Type:** decision  
**Tags:** income, skills  
**Created:** 2026-04-15  
**Content:** Published inbox-triage skill to clawhub for passive income

Limitations

  • Token budget: Context injection respects the agent's 48k token ceiling and the configured auto_inject max_tokens limit
  • Search accuracy: Semantic search may miss nuanced queries
  • Privacy: Do not store sensitive data (passwords, secrets)
  • Sync: Local storage only (no cloud sync yet)
  • Expiry: Temporary memories auto-expire (configurable)

Integration

With Inbox Triage

# Inject triage context when discussing messages
auto_inject:
  triggers:
    - "inbox"
    - "messages"
    - "notification"
  memories:
    - "inbox-triage skill is complete and ready for publishing"

With Cron Manager

# Weekly memory summary
cron:
  schedule: "0 0 * * 0"  # Sunday midnight
  action: "memory summarize --output weekly-summary.md"

With Weather Alert

# Memory context for weather queries
auto_inject:
  triggers:
    - "weather"
    - "forecast"
  memories:
    - "User is in UTC timezone"
    - "Prefers concise weather summaries"

Iteration

Track search quality:

# Correct a bad search result
echo "CORRECT: uuid-123 - relevant to income query" >> ~/.memory-augment/corrections.log
echo "INCORRECT: uuid-124 - should not have matched" >> ~/.memory-augment/corrections.log

The system learns from corrections to improve scoring.
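How that learning loop could work is sketched below; the log format matches the `echo` commands above, but the `apply_corrections` helper and the ±0.05 adjustment are illustrative assumptions:

```python
# Illustrative correction feedback: nudge scores up for CORRECT entries
# and down for INCORRECT ones. The 0.05 step is an assumption.
def apply_corrections(memories, log_lines, step=0.05):
    by_id = {m["id"]: m for m in memories}
    for line in log_lines:
        verdict, _, rest = line.partition(": ")     # "CORRECT" / "INCORRECT"
        mem_id = rest.split(" - ")[0].strip()
        if mem_id in by_id:
            delta = step if verdict == "CORRECT" else -step
            by_id[mem_id]["score"] = round(by_id[mem_id]["score"] + delta, 2)
    return memories

demo = [{"id": "uuid-123", "score": 0.85}, {"id": "uuid-124", "score": 0.95}]
apply_corrections(demo, [
    "CORRECT: uuid-123 - relevant to income query",
    "INCORRECT: uuid-124 - should not have matched",
])
```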


Roadmap

Shipped:

  • Basic storage system
  • Semantic search implementation
  • Automatic context injection

Planned:

  • Multi-source sync (cloud backup)
  • Encrypted storage for sensitive data
  • Collaborative memories (shared between agents)

Built for the OpenClaw ecosystem.
