Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Memory Manager

v3.5.5

A three-tier AI memory management system built for OpenClaw. It manages temporary (L1), long-term (L2), and permanent (L3) memories, and supports vector semantic search, automatic compression, OpenClaw user identification, and cross-device sync.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for pupper0601/pupper0601-memory-manager.

Prompt preview — Install & Setup:
Install the skill "Memory Manager" (pupper0601/pupper0601-memory-manager) from ClawHub.
Skill page: https://clawhub.ai/pupper0601/pupper0601-memory-manager
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python, git
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install pupper0601-memory-manager

ClawHub CLI

Package manager switcher

npx clawhub@latest install pupper0601-memory-manager

Security Scan

Capability signals

Requires OAuth token

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Suspicious
View report →

OpenClaw
Suspicious (medium confidence)
Purpose & Capability
The codebase (Python scripts, embedding backends, sync, install.sh) matches the described memory-manager functionality (L1/L2/L3 memories, embeddings, GitHub sync). The requested binaries (python, git) and optional env vars (OPENAI_API_KEY, SILICONFLOW_API_KEY, ZHIPU_API_KEY, GITHUB_TOKEN) are reasonable for this purpose. However, the registry metadata claims 'required env vars: none' and 'No install spec — instruction-only skill', while the package includes many code files, SKILL.md declares EMBED_BACKEND as required, and an install script is present. This mismatch between metadata and the actual files is a coherence issue.
Instruction Scope
Runtime instructions and code explicitly read multi-user data (users/{uid}/profile.md and entire users/ tree), run git pull/push, and by default the installer will write API keys into shell RC files unless --no-shell-rc is used. The skill's 'read_when' indicates it will read the memory repo at session start. Reading other users' profile.md and automatically persisting credentials are privacy-sensitive actions and broaden the skill's scope beyond just answering queries.
Install Mechanism
An install.sh is included and SKILL.md shows curl | bash installation examples (raw.githubusercontent.com). While GitHub raw URLs are common, piping remote scripts to bash is inherently risky. install.sh installs Python packages (openai, numpy, optional lancedb), modifies shell RC files by default to persist API keys, and creates filesystem layout under ~/.openclaw or ~/.memory-manager — these are expected for this project but represent an elevated installation risk that requires manual review before running.
Credentials
Requested environment variables (EMBED_BACKEND required; optional API keys and GITHUB_TOKEN) are proportional to a memory manager that calls external embedding services and can sync to GitHub. Concerns: (1) SKILL.md and code will persist API keys into shell RC or config files (e.g., ~/.memory-manager/config.json), which stores keys unencrypted; (2) registry metadata stated 'required env vars: none' while SKILL.md requires EMBED_BACKEND (inconsistency). Avoid providing GITHUB_TOKEN unless you trust the repo.
Persistence & Privilege
The catalog flags show always:false (good). SKILL.md metadata includes an 'install' script and 'auto_enable': true (in-SKILL metadata), and 'read_when' indicates the skill will be used at session start — combined with default platform behavior (agent can invoke skills autonomously), this means the skill may be called automatically to read memory on session start. This is not an outright privilege escalation, but the combination of auto-read + file access + ability to modify shell RCs makes it more impactful; you should confirm whether auto-enable/auto-run behavior is desired.
What to consider before installing
Key things to consider before installing or enabling this skill:

- Source verification: The registry lists 'source: unknown', yet the project files reference a GitHub repo. Confirm the repo origin and maintainer trustworthiness (inspect the upstream GitHub repository, commit history, and open issues). Do not install from an untrusted raw URL.
- Review install.sh and SKILL.md: Do not run curl | bash blindly. Download install.sh and review its contents locally. Prefer manual installation (git clone + inspect + pip install -r requirements.txt) and use the --no-shell-rc option to prevent automatic modification of your shell files.
- Protect API keys and tokens: The installer may write API keys into shell RC files or plaintext config files (~/.memory-manager/config.json). Prefer setting EMBED_BACKEND and API keys as environment variables in a controlled way (or use a secrets manager). Avoid providing a GITHUB_TOKEN with broad scopes; if you must, create a token limited to the repository and actions needed.
- Restrict data scope and run in isolation first: The skill reads users/*/profile.md and other users' memory files, and performs git operations. If this device contains other users or sensitive files, run the skill in an isolated account, VM, or container, and set MM_BASE_DIR to a directory you control.
- Disable auto-run / auto-enable if possible: Avoid enabling automatic session-start reads until you have confirmed the behavior. If SKILL.md's 'auto_enable' or 'read_when' behavior is configurable, turn off auto-sync and run operations manually at first.
- Audit runtime behavior: After installation, inspect the created config files (~/.memory-manager, ~/.openclaw/memory), check which environment variables were added to your shell RC, and look for any cron/systemd jobs or background processes the installer creates (none are obvious in the provided files, but verify).
- Run tests and dry-run: Use the included tests and the tool's --dry-run options (or run in a sandbox) to verify embedding/sync behavior without pushing data to remote services.
- If in doubt, decline GITHUB_TOKEN and avoid entering API keys interactively during install; set EMBED_BACKEND to a keyword-only fallback, or use provider keys you can revoke quickly.

The code matches the advertised functionality but contains privacy-sensitive behavior and metadata inconsistencies; proceed only after the manual reviews above, or run it in an isolated environment.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧠 Clawdis
Bins: python, git
Tags: ai, latest, memory, openclaw
75 downloads
0 stars
4 versions
Updated 2w ago
v3.5.5
MIT-0

Memory Manager — Three-Tier Memory + Vector Search + Associative Memory (v3.5)

[!IMPORTANT] The AI must follow the memory-file conventions defined in MEMORY_STYLE_GUIDE.md.
Any operation that writes to the memory system must comply with that guide.

[!TIP] OpenClaw users: see the "OpenClaw Integration Install" section in README.md.

OpenClaw-specific installation

# Method 1: install via the OpenClaw skill manager
# (recommended) OpenClaw handles dependencies and configuration automatically

# Method 2: manual install into the OpenClaw workspace
cd ~/.openclaw/workspace/skills/
git clone https://github.com/Pupper0601/memory-manager.git
cd memory-manager

# Install dependencies
pip install -r requirements.txt

# Configure the OpenClaw memory system
mm onboard --openclaw

# Method 3: one-line install script
curl -fsSL https://raw.githubusercontent.com/Pupper0601/memory-manager/main/install.sh | bash -s -- -p "$HOME/.openclaw/memory"

Multi-user architecture

memory-repo/
├── shared/              # shared memories
│   ├── daily/           # shared temporary (today)
│   ├── weekly/          # shared weekly report (this week)
│   └── permanent/       # shared permanent
└── users/{uid}/         # private memories
    ├── daily/           # L1 temporary
    ├── weekly/          # L2 long-term
    ├── archive/         # archive
    └── permanent/       # L3 permanent

Memory ownership

| Content | Ownership | Path |
| --- | --- | --- |
| Personal daily tasks/progress | Private L1 | users/{uid}/daily/YYYY-MM-DD.md |
| Personal weekly plans/summaries | Private L2 | users/{uid}/weekly/YYYY-WNN.md |
| Personal experience/preferences/decisions | Private L3 | users/{uid}/permanent/*.md |
| Team daily progress | Shared temporary | shared/daily/YYYY-MM-DD.md |
| Team weekly digest | Shared weekly | shared/weekly/YYYY-WNN.md |
| Project/technical/decision records | Shared L3 | shared/permanent/*.md |
| Behavioral habits | Private | users/{uid}/HABITS.md |

Session startup flow

⚠️ Cross-device scenario: users may start a session from different devices (WorkBuddy / WeCom / Feishu / Web). The AI must resolve the uid automatically via the steps below and must not assume the user ID.

1. Resolve the uid automatically (in priority order):
   ① Check the subdirectories under users/ (visible after git pull)
      - If there is exactly 1 user directory → use that uid directly
      - If there are several → read the uid field of each users/{uid}/profile.md
      - Match the most likely uid from conversation context (username/signature/greeting)
   ② Read the .current_user file (fast path on the local device)
   ③ Environment variable MEMORY_USER_ID
   ④ None of the above → ask the user: "Who are you? (enter your user ID)"

2. git pull → sync the latest memories (ensures you have the newest profile.md etc.)

3. ⚠️ Version check (mandatory):
   - Read SKILL_VERSION.md at the memory repo root
   - Extract the required_skill_version field
   - Compare it with the version field at the top of this SKILL.md
   - If the local version < required_skill_version:
     ⛔ Prompt the user immediately:
     "⚠️ The memory-manager skill needs updating!
      Current version: {local_version}, required version: {required_version}
      Run: git -C ~/.workbuddy/skills/memory-manager pull
      or see the releases page: https://github.com/Pupper0601/memory-manager/releases"
   - If the version requirement is satisfied, continue silently

4. Read users/{uid}/profile.md → confirm identity
5. Read today's L1 file (if it exists)
6. Read users/{uid}/HABITS.md
7. Read shared/permanent/ (if relevant to the context)
8. If a search is needed → use memory_search.py
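The version check in step 3 comes down to a numeric comparison of dotted version strings (plain string comparison would get "3.10.0" vs "3.9.9" wrong). A minimal sketch; `check_skill_version` is an illustrative name, not part of the skill:

```python
import re

def check_skill_version(local: str, required: str) -> bool:
    """Return True when the local skill version satisfies the requirement.

    Sketch only; the real SKILL_VERSION.md parsing may differ.
    """
    def parts(v: str) -> tuple[int, ...]:
        # extract numeric components, ignoring any "v" prefix
        return tuple(int(x) for x in re.findall(r"\d+", v))
    return parts(local) >= parts(required)

print(check_skill_version("3.5.5", "3.5.2"))   # True
print(check_skill_version("3.4.9", "3.5.0"))   # False
```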

uid auto-resolution script

import os

def resolve_uid(repo_dir: str) -> str:
    """Automatically resolve the current user's uid across devices."""

    # Method 1: read the local .current_user file (fast path on this device)
    current_user_file = os.path.join(repo_dir, ".current_user")
    if os.path.exists(current_user_file):
        with open(current_user_file) as f:
            uid = f.read().strip()
        if uid:
            return uid

    # Method 2: scan users/ for profile.md files (cross-device path)
    users_dir = os.path.join(repo_dir, "users")
    if os.path.exists(users_dir):
        candidates = []
        for name in os.listdir(users_dir):
            profile = os.path.join(users_dir, name, "profile.md")
            if os.path.exists(profile):
                candidates.append(name)
        if len(candidates) == 1:
            return candidates[0]   # single user: return it directly
        if candidates:
            # multiple users: return the list so the AI can choose
            return f"[multiple users: {', '.join(candidates)}]"

    # Method 3: environment variable
    return os.environ.get("MEMORY_USER_ID", "")

Write rules

⚠️ Strictly follow the MEMORY_STYLE_GUIDE.md conventions!

  • Naming convention: {type_prefix}_{short_description}_{date}.md, e.g. idea_autosave_v2_20260404.md
  • Required fields: type, created, updated, tags, scope, importance
  • Frontmatter: every memory file must start with complete YAML frontmatter
  • Content templates: use the templates defined in the guide (idea/dec/learn/daily)
  • File size: no single file may exceed 10KB
  • Forbidden content: never record passwords, keys, tokens, or other people's private information
  • Shared memories: write to shared/permanent/ + git pull + push
  • Private memories: write to users/{uid}/ + git add + commit + push
  • Compression trigger: L1 >150 lines / L2 >200 lines / L3 >300 lines → python scripts/memory_compress.py --uid {uid} --upgrade
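The compression trigger in the last rule can be checked in a few lines. A sketch only, with illustrative names; the real logic lives in scripts/memory_compress.py:

```python
from pathlib import Path

# compression thresholds from the rules above (max lines per file)
THRESHOLDS = {"L1": 150, "L2": 200, "L3": 300}

def needs_compression(path: Path, level: str) -> bool:
    """Return True when a memory file exceeds its tier's line limit."""
    lines = len(path.read_text(encoding="utf-8").splitlines())
    return lines > THRESHOLDS[level]
```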

Pre-write check

# run a lint check before writing
mm lint --path {new_file_path}

Upgrade markers

| Marker | Action |
| --- | --- |
| [IMPORTANT] | promote to L2 |
| [PERMANENT] / [升级L3] | promote to L3 |
| [HABIT] | extract to HABITS.md |
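Scanning for these markers is a simple pattern match over memory lines. A hedged sketch; `find_upgrades` and the marker-to-destination mapping are illustrative, not the skill's API:

```python
import re

# upgrade markers from the table above; destinations are illustrative
MARKERS = {
    r"\[IMPORTANT\]": "L2",
    r"\[PERMANENT\]|\[升级L3\]": "L3",
    r"\[HABIT\]": "HABITS.md",
}

def find_upgrades(text: str) -> list[tuple[str, str]]:
    """Return (line, destination) pairs for lines carrying an upgrade marker."""
    hits = []
    for line in text.splitlines():
        for pattern, dest in MARKERS.items():
            if re.search(pattern, line):
                hits.append((line.strip(), dest))
    return hits

print(find_upgrades("[IMPORTANT] switch to LanceDB\nplain note"))
# [('[IMPORTANT] switch to LanceDB', 'L2')]
```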

Vector search

# first run: build the vector store
mm embed --rebuild

# semantic search (just type what you are looking for)
mm search "what did I do last week"

# search shared memories
mm search "team progress" --shared

# keyword fallback (no API dependency)
mm search "memory" --keyword-only

# adjust the semantic weight (default 0.6)
# 0.8 = pure semantic similarity, 0.3 = importance first
python scripts/memory_search.py --uid pupper --query "important decisions" --semantic-weight 0.3

# show vector store statistics
mm stats

# rebuild the LanceDB HNSW index (speeds up search ~100x)
python scripts/memory_search.py --uid pupper --rebuild-index

# show cache statistics
python scripts/memory_search.py --uid pupper --cache-stats

Performance tip: install LanceDB for a ~100x search speedup: pip install lancedb
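The --semantic-weight flag suggests a linear blend of semantic similarity and stored importance. A sketch under that assumption (the actual formula in memory_search.py may differ):

```python
def blended_score(semantic: float, importance: float, weight: float = 0.6) -> float:
    """Blend semantic similarity with a memory's importance score.

    Assumed formula: weight = 0.8 leans on pure similarity,
    weight = 0.3 favors importance, matching the flag described above.
    """
    return weight * semantic + (1.0 - weight) * importance

# a highly similar but low-importance memory, ranked similarity-heavy
print(blended_score(0.9, 0.2, weight=0.8))
```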

Quick commands

  • "remember this" → mm log "content"
  • "sync memories" → git pull + push
  • "find memory [term]" → mm search "[term]"
  • "memory report" → mm insight
  • "AI summary" → mm insight --weekly
  • "generate embeddings" → mm embed

AI insights

# combined insights
mm insight

# daily insights
mm insight --daily

# weekly insights
mm insight --weekly

💡 Recommended: use the memory-agent skill — no commands needed, just ask.

Core principles

  1. Identity first: confirm the uid before any operation; never mix shared and private writes
  2. Sync first: pull before writing, push immediately after writing
  3. Conservative on conflicts: never overwrite shared conflicts; flag them for manual resolution
  4. Vector first: prefer vector semantic search, with keyword search as fallback
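Principle 2 (sync first) can be sketched as a thin wrapper over plain git. The function name `synced_write` and the append-only write are illustrative, not the skill's actual implementation; conflict handling is left to manual review per principle 3:

```python
import subprocess

def synced_write(repo: str, rel_path: str, content: str) -> None:
    """Pull before writing, then commit and push immediately after."""
    # sync first: refuse non-fast-forward merges (conservative on conflicts)
    subprocess.run(["git", "-C", repo, "pull", "--ff-only"], check=True)
    # append the new memory content
    with open(f"{repo}/{rel_path}", "a", encoding="utf-8") as f:
        f.write(content)
    # commit and push right away so other devices see it
    subprocess.run(["git", "-C", repo, "add", rel_path], check=True)
    subprocess.run(
        ["git", "-C", repo, "commit", "-m", f"memory: update {rel_path}"],
        check=True,
    )
    subprocess.run(["git", "-C", repo, "push"], check=True)
```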

See reference.md for the full documentation.
