Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

memory-persistence

v1.0.1

Multi-backend memory system with optional embedding, private/shared memories, conversation summarization, and maintenance tools. For AI agents to store and r...

0 · 118 · 1 current · 1 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for 529279917/memory-persistence.

Prompt Preview: Install & Setup
Install the skill "memory-persistence" (529279917/memory-persistence) from ClawHub.
Skill page: https://clawhub.ai/529279917/memory-persistence
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install memory-persistence

ClawHub CLI


npx clawhub@latest install memory-persistence
Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The code and SKILL.md implement a memory system with optional embeddings and GitHub/Gitee backends, which matches the skill name and description. However, the registry metadata declares no required environment variables, while the README and SKILL.md explicitly reference GITHUB_TOKEN, GITEE_TOKEN, and shared backend tokens; that omission is an inconsistency. The summarizer also claims to 'auto-detect OpenClaw model', which implies reading agent configuration or contacting an LLM provider, a capability that should have been declared.
Instruction Scope
The SKILL.md instructs installing heavy packages (sentence-transformers, scikit-learn, numpy) and notes that the embedding model is auto-downloaded on first use. It also documents using GitHub/Gitee tokens and an 'auto-detect OpenClaw model' step for summarization. These instructions permit network access and model downloads and may read agent/config state, yet none of this is disclosed in the registry metadata.
Install Mechanism
No formal install spec in the registry (instruction-only), but SKILL.md instructs pip installing large dependencies and embedding models are auto-downloaded at runtime. That is common for embedding tooling but increases runtime network activity and disk usage; no packaged release URL or validated installer is provided.
Credentials
Registry lists no required environment variables, yet config.yaml and SKILL.md reference multiple token env names (GITHUB_TOKEN, GITEE_TOKEN, SHARED_GITHUB_TOKEN, SHARED_GITEE_TOKEN, etc.). Requesting repository tokens is reasonable for GitHub/Gitee backends, but omitting them from the declared requirements is a mismatch that reduces transparency. The number of potential secret environment variables is high for a local-memory convenience tool; only provide tokens when you intentionally use a remote backend.
Persistence & Privilege
The skill does not request 'always: true' and uses the normal agent invocation model. It writes/reads local directories (./memory_data, ./shared_memory, sqlite files) and can push/pull to remote git hosting via provided tokens. That file-system and network persistence is consistent with a memory/storage tool but increases blast radius if remote tokens are supplied.
What to consider before installing
This package appears to implement the advertised memory system, but resolve two transparency issues before use:

1. It expects GitHub/Gitee tokens (plus separate shared-repo tokens) that the registry does not declare. Only provide these secrets if you intend to use a remote backend and understand the permissions involved.
2. Embeddings and summarization will download models and may call external LLM providers or read agent configuration (the README mentions auto-detecting the OpenClaw model).

To reduce risk:

  • Run it in an isolated environment.
  • Prefer the local or sqlite backend until you have reviewed storage/github.py and summarizer.py to confirm what remote operations and config reads are performed.
  • Avoid supplying broad-scoped repo tokens; use least-privilege PATs scoped to a single repo.
  • Review network activity and logging during a trial run.

If you want, I can inspect storage/github.py and summarizer.py specifically for network endpoints, auth usage, and any code that reads system/agent config to give a higher-confidence verdict.
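One way to follow the "isolated environment" advice is a throwaway virtualenv trial before supplying any tokens. This is a minimal sketch, not a vetted procedure; the directory name is arbitrary and the pip step (commented out) needs network access:

```shell
# Create a disposable trial environment (path is illustrative)
TRIAL_DIR="$(mktemp -d)"
python3 -m venv "$TRIAL_DIR/venv"
# Use the isolated interpreter explicitly instead of activating the venv
"$TRIAL_DIR/venv/bin/python" --version
# From here you would install the dependencies into this venv only, e.g.:
#   "$TRIAL_DIR/venv/bin/pip" install sentence-transformers scikit-learn pyyaml numpy
# then run the CLI with the local backend and watch what it writes and fetches.
```

Deleting $TRIAL_DIR afterwards removes the trial install and any model downloads it pulled in.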

Like a lobster shell, security has layers — review code before you run it.

latest vk97atwr5aq0gqexbrb3w64ywkh83nxcc
118 downloads
0 stars
2 versions
Updated 1mo ago
v1.0.1
MIT-0

🧠 Memory System

A flexible memory system for AI agents with optional embedding support and multiple storage backends.

Features

  • Private & Shared Memories - Private by default, shared memories for multi-agent collaboration
  • Embedding Search - Semantic search using sentence-transformers
  • Multiple Backends - Local file / SQLite / GitHub / Gitee
  • LLM Summarization - Auto-extract key info from conversations
  • Memory Maintenance - Review, consolidate, tag suggestions
  • Templates - Quick memory creation with templates

Installation

pip install sentence-transformers scikit-learn pyyaml numpy

Quick Start

Python API

from memory_system import MemoryManager

# Initialize (local storage)
mm = MemoryManager(backend='local')

# Add 
mm.add("User prefers dark theme", tags=["preference"])

# Search
results = mm.search("dark theme preference")

# List
entries = mm.list(tags=["preference"])

CLI

# Add 
python3 memory_cli.py add "User feedback: slow page load" --tags "bug,performance"

# List
python3 memory_cli.py list

# Search (keyword)
python3 memory_cli.py search "performance issue"

# Semantic search (with embedding)
python3 memory_cli.py -e search "dark mode"

Private vs Shared Memory

Type     Storage            Access              Use Case
Private  ./memory_data/     Current agent only  User preferences, personal notes
Shared   ./shared_memory/   All agents          Team decisions, collaboration

Default: All memories are private. Use shared add only when other agents need to know.

# Private memory - user says "remember..."
mm.add("User name is Zhang San")

# Shared memory - user says "tell other agents..."
# (smm is a SharedMemoryManager; see "Shared Memory (Multi-Agent)" below)
smm.add("Team decision: use React", agent_id="agent_a")

Storage Backends

Local (Default)

mm = MemoryManager(backend='local')

SQLite (High Performance)

mm = MemoryManager(backend='sqlite', base_path='./memory.db')
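For reference, here is a minimal sketch of the kind of table a SQLite memory backend needs. The schema below is hypothetical; the skill's real schema lives in storage/sqlite.py and may differ.

```python
import sqlite3

# Hypothetical schema sketch, not the skill's actual storage/sqlite.py layout
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE memories (
        id INTEGER PRIMARY KEY,
        content TEXT NOT NULL,
        tags TEXT,                -- comma-separated for simplicity
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute("INSERT INTO memories (content, tags) VALUES (?, ?)",
             ("User prefers dark theme", "preference"))
# Tag filtering via LIKE; a real backend would normalize tags into a table
rows = conn.execute(
    "SELECT content FROM memories WHERE tags LIKE ?", ("%preference%",)
).fetchall()
print(rows[0][0])
```

A single-file database like this is why the README calls the sqlite backend "high performance": lookups become indexed queries instead of directory scans.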

GitHub

export GITHUB_TOKEN="your_token"
mm = MemoryManager(
    backend='github',
    repo='owner/repo',
    branch='main'
)

Gitee

export GITEE_TOKEN="your_token"
mm = MemoryManager(
    backend='gitee',
    repo='owner/repo',
    branch='master'
)

Embedding & Semantic Search

Embedding is optional and auto-downloads on first use.

# Enable embedding
mm = MemoryManager(backend='local', use_embedding=True)

# Add (auto-generates vector)
mm.add("User works from 9am to 6pm")

# Semantic search - finds similar content
results = mm.search("what time does user work")

CLI with embedding:

python3 memory_cli.py -e search "working hours"
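To make the "finds similar content" behavior concrete, here is a toy sketch of cosine-similarity ranking. The real skill embeds text with sentence-transformers; the 3-dimensional vectors below are hand-made stand-ins:

```python
from math import sqrt

# Toy sketch of cosine-similarity ranking; real embeddings have hundreds of
# dimensions and come from a sentence-transformers model.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

memories = {
    "User works from 9am to 6pm": [0.9, 0.1, 0.2],
    "User prefers dark theme":    [0.1, 0.8, 0.3],
}
query_vec = [0.85, 0.15, 0.25]  # pretend embedding of "working hours"
best = max(memories, key=lambda m: cosine(memories[m], query_vec))
print(best)  # → "User works from 9am to 6pm"
```

The search never compares words, only vector directions, which is why "working hours" can match "9am to 6pm" with no shared vocabulary.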

Shared Memory (Multi-Agent)

from memory_system import SharedMemoryManager

# Initialize
smm = SharedMemoryManager(backend='local', shared_path='./shared_memory')

# Add shared memory (from an agent)
smm.add("Bug #123 fixed", agent_id='agent_b')

# List shared memories
shared = smm.list()

# By agent
by_agent = smm.get_by_agent('agent_b')

CLI:

# Add shared 
python3 memory_cli.py shared add "Team decision: use Vue" --agent "agent_a"

# List
python3 memory_cli.py shared list

# Search
python3 memory_cli.py -e shared search "Vue decision"

Conversation Summarization

Auto-extract key information from conversation history.

from memory_system import MemoryManager, MemorySummarizer, ConversationMemoryProcessor

mm = MemoryManager(use_embedding=True)
summarizer = MemorySummarizer()  # Auto-detects OpenClaw model
processor = ConversationMemoryProcessor(mm, summarizer, auto_save=True)

conversation = """
User: I prefer dark theme
Assistant: Changed to dark theme
User: Page loads slowly
Assistant: Optimized images
"""

memories = processor.process(conversation)

CLI:

python3 memory_cli.py summarize --file conversation.txt --save
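To illustrate what the processor extracts, here is a deliberately crude sketch. The real summarizer.py calls an LLM; this stand-in uses a trivial keyword rule, and the KEYWORDS list is invented for the example:

```python
# Crude stand-in for LLM summarization: keep user lines that hint at
# preferences or problems. The real summarizer.py is far more capable.
conversation = """
User: I prefer dark theme
Assistant: Changed to dark theme
User: Page loads slowly
Assistant: Optimized images
"""

KEYWORDS = ("prefer", "slow")  # hypothetical trigger words
memories = [
    line.split(":", 1)[1].strip()
    for line in conversation.strip().splitlines()
    if line.startswith("User:") and any(k in line for k in KEYWORDS)
]
print(memories)  # → ['I prefer dark theme', 'Page loads slowly']
```

Whatever the extraction method, the output is the same shape: short user-attributed statements ready to be stored via mm.add().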

Memory Maintenance

# Generate report
python3 memory_cli.py maintenance report

# Review old memories
python3 memory_cli.py maintenance review --days 7

# Find similar memories
python3 memory_cli.py maintenance consolidate

# Suggest tags for untagged memories
python3 memory_cli.py maintenance suggest-tags

# Mark as outdated
python3 memory_cli.py maintenance outdated --mark <id> --reason "expired"

Templates

Predefined formats for quick memory creation.

# List templates
python3 memory_cli.py template list

# Show template
python3 memory_cli.py template show task

# Use template
python3 memory_cli.py template use task \
  --field title="Complete report" \
  --field priority="high"

Memory Groups

Organize memories into groups.

# Add to group
python3 memory_cli.py add "work task" --tags "work" --group "work"

# List groups
python3 memory_cli.py group list

# Show group
python3 memory_cli.py group show "work"

Batch Operations

# Batch add tags
python3 memory_cli.py batch-add-tags id1,id2 --tags "important,priority"

# Batch delete (requires confirmation)
python3 memory_cli.py batch-delete id1,id2 --force

API Reference

MemoryManager

Method                                 Description
add(content, tags, metadata, group)    Add memory
get(id)                                Get by ID
delete(id)                             Delete
list(tags, limit, offset)              List with pagination
search(query, tags, top_k, threshold)  Search
batch_delete(ids)                      Batch delete
list_groups()                          List groups
export_json(filepath)                  Export JSON

SharedMemoryManager

Method                         Description
add(content, agent_id, tags)   Add shared memory
list(tags)                     List shared
get_by_agent(agent_id)         By agent
search(query)                  Search shared

Files Structure

memory_system/
├── memory_manager.py   # Core manager
├── shared_memory.py    # Shared memory
├── summarizer.py       # LLM summarization
├── maintenance.py      # Maintenance tools
├── templates.py        # Templates
├── embedding.py        # Embedding handler
├── storage/            # Storage backends
│   ├── local.py
│   ├── sqlite.py
│   ├── github.py
│   └── gitee.py
└── memory_cli.py       # CLI entry (run with python3)

Configuration

config.yaml:

STORAGE_BACKEND: "local"

USE_EMBEDDING: false
EMBEDDING_MODEL: "sentence-transformers/all-MiniLM-L6-v2"

storage:
  local:
    base_path: "./memory_data"
  sqlite:
    base_path: "./memory.db"
  github:
    repo: "owner/repo"
    token_env: "GITHUB_TOKEN"
  gitee:
    repo: "owner/repo"
    token_env: "GITEE_TOKEN"
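Note the token_env indirection above: the config names an environment variable, so the token itself never lives in the file. A minimal sketch of how that resolution works (the dict stands in for the parsed YAML, and the token value is set here only for illustration):

```python
import os

# Stand-in for the parsed "github" section of config.yaml
backend_cfg = {"repo": "owner/repo", "token_env": "GITHUB_TOKEN"}

os.environ["GITHUB_TOKEN"] = "example-token"  # illustration only; never hardcode

# The config stores the *name* of the variable; the secret comes from the env
token = os.environ.get(backend_cfg["token_env"])
print("token resolved:", token is not None)
```

This pattern keeps config.yaml safe to commit, and it is also why the security scan flags the undeclared env requirements: the tokens are invisible until runtime.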

License

MIT
