Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Lovefromio Elite Longterm Memory

v1.2.5

Ultimate AI agent memory system. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Works with Claude, Cursor, GPT, OpenClaw...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for lovefromio/lovefromio-elite-longterm-memory.

Prompt preview: Install & Setup
Install the skill "Lovefromio Elite Longterm Memory" (lovefromio/lovefromio-elite-longterm-memory) from ClawHub.
Skill page: https://clawhub.ai/lovefromio/lovefromio-elite-longterm-memory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: OPENAI_API_KEY
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install lovefromio/lovefromio-elite-longterm-memory

ClawHub CLI

Package manager switcher

npx clawhub@latest install lovefromio-elite-longterm-memory
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign (View report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description (long-term memory with WAL, LanceDB, git-notes, cloud backup) aligns with included files and the CLI (bin/elite-memory.js) which creates SESSION-STATE.md, MEMORY.md and daily logs. Requesting OPENAI_API_KEY is reasonable for the LanceDB/OpenAI provider. Minor inconsistencies: registry metadata lists no homepage/source while package.json/README reference a GitHub repo and SKILL.md/package.json versions differ slightly from meta.json/registry version—this mismatch reduces provenance confidence but doesn't itself indicate malicious intent.
Instruction Scope
SKILL.md instructs creating files in the workspace (expected) but also explicitly instructs editing global agent config files (~/.openclaw/openclaw.json and ~/.clawdbot/clawdbot.json) to enable memorySearch/plugins. Those are system-wide agent settings outside the skill's own directory and changing them can affect other skills and agent behavior. The doc also recommends optional integrations (Mem0, SuperMemory) that would transmit extracted facts/decisions to external services. SKILL.md references additional env vars (MEM0_API_KEY, SUPERMEMORY_API_KEY) and external CLI tools that are not declared as required—these are optional but relevant because they enable sending memory off-host.
Install Mechanism
There is no install spec that downloads arbitrary archives; this is instruction-first and includes a small CLI script (bin/elite-memory.js) and package.json with optionalDependency mem0ai. No suspicious remote download URLs or extract steps were found. The install surface is limited to standard npm usage if the user chooses to install the package or optional deps.
Credentials
The skill declares a single required env var OPENAI_API_KEY which matches the documented use (OpenAI provider for LanceDB memory search). However SKILL.md and README also instruct use of MEM0_API_KEY and SUPERMEMORY_API_KEY for optional cloud integrations — these are not listed as required but, if provided, allow uploading conversation-derived facts to third parties. The bin script reads HOME and checks paths (e.g. ~/.clawdbot/memory/lancedb) but does not read additional secrets; overall env/credential requests are proportional but optional cloud keys introduce privacy risk.
Persistence & Privilege
always:false (good). But the skill's runtime instructions encourage modifying agent-global configuration files to enable memorySearch and plugins, which effectively grants the memory system persistent, cross-agent influence if the user applies those changes. The skill itself does not appear to autonomously modify those configs (it gives instructions to the user), but recommending edits to other skills' configs is a privilege escalation vector that users should consciously approve.
What to consider before installing
This package appears to implement a local, file-based long-term memory system (creates SESSION-STATE.md, MEMORY.md and daily logs) and its CLI is small and readable. Before installing or providing API keys, you should:

  1. Confirm provenance — the registry metadata lacks a homepage while package.json/README mention a GitHub repo; open that repo and verify code/maintainer.
  2. Review the small CLI (bin/elite-memory.js) yourself — it only writes/reads markdown files and checks HOME for paths, but verify it matches the published source.
  3. Be careful about editing global agent config files (~/.openclaw/openclaw.json, ~/.clawdbot/clawdbot.json) — doing so will change agent-wide behavior and could expose chat context to integrated services.
  4. Treat optional cloud integrations (Mem0, SuperMemory) as sensitive: only supply MEM0_API_KEY or SUPERMEMORY_API_KEY if you trust the vendor and understand their data retention/usage policies, because they will receive extracted facts/decisions.
  5. If you need stronger guarantees, run this in an isolated workspace or container, avoid enabling cloud backups, and audit any optional dependencies before installing.

If you want, I can: (a) point out exact lines in the files that send or log data, (b) help craft safer configuration instructions that avoid touching global agent configs, or (c) fetch and summarize the referenced GitHub repo to improve provenance confidence.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧠 Clawdis
Env: OPENAI_API_KEY
latest: vk971kpdwpz801h1qf6cnj38ke585nfgt
142 downloads · 0 stars · 2 versions
Updated 4h ago
v1.2.5 · MIT-0

Elite Longterm Memory 🧠

The ultimate memory system for AI agents. Combines 6 proven approaches into one bulletproof architecture.

Never lose context. Never forget decisions. Never repeat mistakes.

Architecture Overview

┌─────────────────────────────────────────────────────────────────┐
│                    ELITE LONGTERM MEMORY                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐             │
│  │   HOT RAM   │  │  WARM STORE │  │  COLD STORE │             │
│  │             │  │             │  │             │             │
│  │ SESSION-    │  │  LanceDB    │  │  Git-Notes  │             │
│  │ STATE.md    │  │  Vectors    │  │  Knowledge  │             │
│  │             │  │             │  │  Graph      │             │
│  │ (survives   │  │ (semantic   │  │ (permanent  │             │
│  │  compaction)│  │  search)    │  │  decisions) │             │
│  └─────────────┘  └─────────────┘  └─────────────┘             │
│         │                │                │                     │
│         └────────────────┼────────────────┘                     │
│                          ▼                                      │
│                  ┌─────────────┐                                │
│                  │  MEMORY.md  │  ← Curated long-term           │
│                  │  + daily/   │    (human-readable)            │
│                  └─────────────┘                                │
│                          │                                      │
│                          ▼                                      │
│                  ┌─────────────┐                                │
│                  │ SuperMemory │  ← Cloud backup (optional)     │
│                  │    API      │                                │
│                  └─────────────┘                                │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

The 6 Memory Layers

Layer 1: HOT RAM (SESSION-STATE.md)

From: bulletproof-memory

Active working memory that survives compaction. Write-Ahead Log protocol.

# SESSION-STATE.md — Active Working Memory

## Current Task
[What we're working on RIGHT NOW]

## Key Context
- User preference: ...
- Decision made: ...
- Blocker: ...

## Pending Actions
- [ ] ...

Rule: Write BEFORE responding. Triggered by user input, not agent memory.

Layer 2: WARM STORE (LanceDB Vectors)

From: lancedb-memory

Semantic search across all memories. Auto-recall injects relevant context.

# Auto-recall (happens automatically)
memory_recall query="project status" limit=5

# Manual store
memory_store text="User prefers dark mode" category="preference" importance=0.9

Layer 3: COLD STORE (Git-Notes Knowledge Graph)

From: git-notes-memory

Structured decisions, learnings, and context. Branch-aware.

# Store a decision (SILENT - never announce)
python3 memory.py -p $DIR remember '{"type":"decision","content":"Use React for frontend"}' -t tech -i h

# Retrieve context
python3 memory.py -p $DIR get "frontend"

Layer 4: CURATED ARCHIVE (MEMORY.md + daily/)

From: OpenClaw native

Human-readable long-term memory. Daily logs + distilled wisdom.

workspace/
├── MEMORY.md              # Curated long-term (the good stuff)
└── memory/
    ├── 2026-01-30.md      # Daily log
    ├── 2026-01-29.md
    └── topics/            # Topic-specific files

Layer 5: CLOUD BACKUP (SuperMemory) — Optional

From: supermemory

Cross-device sync. Chat with your knowledge base.

export SUPERMEMORY_API_KEY="your-key"
supermemory add "Important context"
supermemory search "what did we decide about..."

Layer 6: AUTO-EXTRACTION (Mem0) — Recommended

NEW: Automatic fact extraction

Mem0 automatically extracts facts from conversations. 80% token reduction.

npm install mem0ai
export MEM0_API_KEY="your-key"

const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });

// Conversations auto-extract facts
await client.add(messages, { user_id: "user123" });

// Retrieve relevant memories
const memories = await client.search(query, { user_id: "user123" });

Benefits:

  • Auto-extracts preferences, decisions, facts
  • Deduplicates and updates existing memories
  • 80% reduction in tokens vs raw history
  • Works across sessions automatically

Quick Setup

1. Create SESSION-STATE.md (Hot RAM)

cat > SESSION-STATE.md << 'EOF'
# SESSION-STATE.md — Active Working Memory

This file is the agent's "RAM" — survives compaction, restarts, distractions.

## Current Task
[None]

## Key Context
[None yet]

## Pending Actions
- [ ] None

## Recent Decisions
[None yet]

---
*Last updated: [timestamp]*
EOF

2. Enable LanceDB (Warm Store)

In ~/.openclaw/openclaw.json:

{
  "memorySearch": {
    "enabled": true,
    "provider": "openai",
    "sources": ["memory"],
    "minScore": 0.3,
    "maxResults": 10
  },
  "plugins": {
    "entries": {
      "memory-lancedb": {
        "enabled": true,
        "config": {
          "autoCapture": false,
          "autoRecall": true,
          "captureCategories": ["preference", "decision", "fact"],
          "minImportance": 0.7
        }
      }
    }
  }
}

3. Initialize Git-Notes (Cold Store)

cd ~/clawd
git init  # if not already
python3 skills/git-notes-memory/memory.py -p . sync --start

4. Verify MEMORY.md Structure

# Ensure you have:
# - MEMORY.md in workspace root
# - memory/ folder for daily logs
mkdir -p memory

5. (Optional) Setup SuperMemory

export SUPERMEMORY_API_KEY="your-key"
# Add to ~/.zshrc for persistence

Agent Instructions

On Session Start

  1. Read SESSION-STATE.md — this is your hot context
  2. Run memory_search for relevant prior context
  3. Check memory/YYYY-MM-DD.md for recent activity

During Conversation

  1. User gives concrete detail? → Write to SESSION-STATE.md BEFORE responding
  2. Important decision made? → Store in Git-Notes (SILENTLY)
  3. Preference expressed? → memory_store with importance=0.9

On Session End

  1. Update SESSION-STATE.md with final state
  2. Move significant items to MEMORY.md if worth keeping long-term
  3. Create/update daily log in memory/YYYY-MM-DD.md

Memory Hygiene (Weekly)

  1. Review SESSION-STATE.md — archive completed tasks
  2. Check LanceDB for junk: memory_recall query="*" limit=50
  3. Clear irrelevant vectors: memory_forget id=<id>
  4. Consolidate daily logs into MEMORY.md

The WAL Protocol (Critical)

Write-Ahead Log: Write state BEFORE responding, not after.

| Trigger | Action |
|---|---|
| User states preference | Write to SESSION-STATE.md → then respond |
| User makes decision | Write to SESSION-STATE.md → then respond |
| User gives deadline | Write to SESSION-STATE.md → then respond |
| User corrects you | Write to SESSION-STATE.md → then respond |

Why? If you respond first and crash/compact before saving, context is lost. WAL ensures durability.

Example Workflow

User: "Let's use Tailwind for this project, not vanilla CSS"

Agent (internal):
1. Write to SESSION-STATE.md: "Decision: Use Tailwind, not vanilla CSS"
2. Store in Git-Notes: decision about CSS framework
3. memory_store: "User prefers Tailwind over vanilla CSS" importance=0.9
4. THEN respond: "Got it — Tailwind it is..."

Maintenance Commands

# Audit vector memory
memory_recall query="*" limit=50

# Clear all vectors (nuclear option)
rm -rf ~/.openclaw/memory/lancedb/
openclaw gateway restart

# Export Git-Notes
python3 memory.py -p . export --format json > memories.json

# Check memory health
du -sh ~/.openclaw/memory/
wc -l MEMORY.md
ls -la memory/

Why Memory Fails

Understanding the root causes helps you fix them:

| Failure Mode | Cause | Fix |
|---|---|---|
| Forgets everything | memory_search disabled | Enable + add OpenAI key |
| Files not loaded | Agent skips reading memory | Add to AGENTS.md rules |
| Facts not captured | No auto-extraction | Use Mem0 or manual logging |
| Sub-agents isolated | Don't inherit context | Pass context in task prompt |
| Repeats mistakes | Lessons not logged | Write to memory/lessons.md |

Solutions (Ranked by Effort)

1. Quick Win: Enable memory_search

If you have an OpenAI key, enable semantic search:

openclaw configure --section web

This enables vector search over MEMORY.md + memory/*.md files.

2. Recommended: Mem0 Integration

Auto-extract facts from conversations. 80% token reduction.

npm install mem0ai

const { MemoryClient } = require('mem0ai');

const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });

// Auto-extract and store
await client.add([
  { role: "user", content: "I prefer Tailwind over vanilla CSS" }
], { user_id: "ty" });

// Retrieve relevant memories
const memories = await client.search("CSS preferences", { user_id: "ty" });

3. Better File Structure (No Dependencies)

memory/
├── projects/
│   ├── strykr.md
│   └── taska.md
├── people/
│   └── contacts.md
├── decisions/
│   └── 2026-01.md
├── lessons/
│   └── mistakes.md
└── preferences.md

Keep MEMORY.md as a summary (<5KB), link to detailed files.

Immediate Fixes Checklist

| Problem | Fix |
|---|---|
| Forgets preferences | Add ## Preferences section to MEMORY.md |
| Repeats mistakes | Log every mistake to memory/lessons.md |
| Sub-agents lack context | Include key context in spawn task prompt |
| Forgets recent work | Strict daily file discipline |
| Memory search not working | Check OPENAI_API_KEY is set |

Troubleshooting

Agent keeps forgetting mid-conversation: → SESSION-STATE.md not being updated. Check WAL protocol.

Irrelevant memories injected: → Disable autoCapture, increase minImportance threshold.

Memory too large, slow recall: → Run hygiene: clear old vectors, archive daily logs.

Git-Notes not persisting: → Run git notes push to sync with remote.

memory_search returns nothing: → Check the OpenAI API key (echo $OPENAI_API_KEY) and verify memorySearch is enabled in openclaw.json.


Built by @NextXFrontier — Part of the Next Frontier AI toolkit
