Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Pidan Memory

v1.1.2

A local persistent vector-memory system built on LanceDB and Ollama, providing semantic search and multi-user-isolated long-term memory for AI assistants.

by MoonCoder@2830201534

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for 2830201534/pidan-memory.

Prompt preview (Install & Setup):
Install the skill "Pidan Memory" (2830201534/pidan-memory) from ClawHub.
Skill page: https://clawhub.ai/2830201534/pidan-memory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install 2830201534/pidan-memory

ClawHub CLI


npx clawhub@latest install pidan-memory
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
Name/description align with the code: the Python/TS files implement a LanceDB + Ollama local memory system with automatic hook-based capture, storage, search, deduplication, and per-user isolation. However, the registry metadata claims no required binaries or env vars, while HOOK.md, SKILL.md and the code expect python3 and a running Ollama (localhost:11434). That metadata mismatch is worth flagging.
Instruction Scope
The hook (handler.ts + auto_memory.py) will execute on message events and spawn a Python process that receives message content on stdin and may persist data under ~/.openclaw/workspace/memory. The runtime relies on the OPENCLAW_USER_ID environment variable for access control (SKILL.md and code require it), but the skill metadata did not declare this. The code reads/writes only within the ~/.openclaw workspace and calls localhost Ollama for embeddings; it does not appear to call external network endpoints for embedding or exfiltrate data. Still, automatic capture of every message has privacy implications and the hook receives process.env (handler.ts merges process.env into the child's env), so existing environment variables are available to the spawned process.
Install Mechanism
There is no formal install spec in the registry, but the package includes helper scripts. scripts/install_ollama.sh performs curl -fsSL https://ollama.ai/install.sh | sh on Linux (a remote installer piped to sh). scripts/download_accelerator.sh repeatedly starts and kills 'ollama pull' invocations to accelerate downloading a model. These scripts execute remote code and spawn background services (ollama serve). Running curl|sh installers and repeatedly manipulating background processes increases risk; review these scripts and run them only on a trusted system or in a sandbox.
Credentials
The skill does not declare required environment variables in the registry, but the code and SKILL.md depend on OPENCLAW_USER_ID for authentication and optionally respect MEMORY_MODE and MEMORY_DEDUP_AFTER. handler.ts explicitly passes the full process.env into spawned Python processes, exposing any environment variables present to the child. No external API keys are requested, and network calls appear limited to localhost (Ollama) and local lancedb, which is proportionate — but the undeclared reliance on OPENCLAW_USER_ID and passing of process.env are mismatches and a modest risk.
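Since handler.ts forwards the full process.env, one mitigation is to forward only an allow-list of variables to the child process. A minimal Python sketch of that pattern (the allow-list and helper names are illustrative, not the skill's actual code):

```python
import os
import subprocess

# Only what the memory script needs (per SKILL.md), plus basics; secrets are dropped.
ALLOWED_VARS = {"OPENCLAW_USER_ID", "MEMORY_MODE", "MEMORY_DEDUP_AFTER", "PATH", "HOME"}

def filtered_env(source_env: dict) -> dict:
    """Keep only allow-listed variables from the host environment."""
    return {k: v for k, v in source_env.items() if k in ALLOWED_VARS}

def spawn_memory_script(message: str) -> str:
    """Pass the message on stdin, as the hook does, but with a minimal environment."""
    result = subprocess.run(
        ["python3", os.path.expanduser("~/.openclaw/workspace/memory/auto_memory.py")],
        input=message,
        capture_output=True,
        text=True,
        env=filtered_env(dict(os.environ)),
    )
    return result.stdout
```

With this pattern, an accidental AWS_SECRET_ACCESS_KEY or similar in the host environment never reaches the spawned script.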
Persistence & Privilege
'always' is false and the skill is user-invocable. Installing the Hook (per SKILL.md) grants the skill automatic execution on message events, which is expected for an auto-memory hook. The skill writes to ~/.openclaw/workspace/memory (its own data); it does not modify other skills or global agent configs. No 'always: true' or implicit global privileges were found.
What to consider before installing
This skill appears to implement the described local memory system, but review the following before installing:

  • Metadata mismatch: the registry lists no required binaries or env vars, yet the hook and docs require python3 and a running Ollama (localhost:11434), and the code expects the OPENCLAW_USER_ID environment variable for permission checks. Make sure you understand and set these before enabling the hook.
  • Privacy: enabling the Hook automatically captures message content and writes it to ~/.openclaw/workspace/memory. If you enable it, verify the data directory and retention policy, and confirm whether any sensitive content could be recorded.
  • Environment exposure: handler.ts passes process.env to the spawned Python process, so any environment variables available to the host process are visible to the script. Avoid running it in a context containing secrets you don't want exposed.
  • Installer scripts: the included scripts can install Ollama and pull models. The Linux installer uses curl | sh (a remote install script); run such scripts only from trusted sources, or inspect them first. Model downloads can be large, and the download 'accelerator' script repeatedly kills and restarts downloads, which is unusual but not obviously malicious.
  • Recommended steps: inspect the full installer script from https://ollama.ai/install.sh before running; run the skill in a sandbox or test environment first; back up and inspect ~/.openclaw/workspace/memory before enabling automatic hooks; and confirm that OPENCLAW_USER_ID will be set by your platform as expected.

For a detailed review, inspect the exact files and lines that reference OPENCLAW_USER_ID, the curl | sh install command, and the point where process.env is forwarded.
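The recommended checks can be scripted before enabling the hook. A hedged preflight sketch in Python (the localhost:11434 endpoint and the OPENCLAW_USER_ID requirement come from the findings above; everything else is illustrative):

```python
import os
import shutil
import urllib.request

def preflight(env=None) -> list:
    """Return a list of problems; empty means the hook's prerequisites look satisfied."""
    env = dict(os.environ) if env is None else env
    problems = []
    if shutil.which("python3") is None:
        problems.append("python3 not found on PATH")
    if not env.get("OPENCLAW_USER_ID"):
        problems.append("OPENCLAW_USER_ID is not set (required for permission checks)")
    try:
        # A running Ollama server answers plain GETs on its root endpoint.
        urllib.request.urlopen("http://localhost:11434/", timeout=2)
    except OSError:
        problems.append("Ollama not reachable on localhost:11434")
    return problems
```

Run it once before `openclaw hooks enable` and fix anything it reports.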

Like a lobster shell, security has layers — review code before you run it.

latest: vk979za5bf3t53ef121yt1ws24s82g4bp
376 downloads · 0 stars · 6 versions
Updated 22h ago
v1.1.2
MIT-0

Pidan Memory Skill

A local persistent vector memory system that gives an AI Assistant long-term memory. Supports multi-user and shared modes.

Overview

A local vector memory system built on LanceDB + Ollama, supporting semantic search and multi-user isolation.

Architecture

User input → Ollama (embedding) → LanceDB (store/search)
                ↑
      nomic-embed-text (768-dim vectors)
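The embedding step of this pipeline can be sketched against Ollama's /api/embeddings endpoint (the helper names are illustrative; the skill's own code may differ):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # local Ollama endpoint

def build_embed_request(text: str, model: str = "nomic-embed-text") -> bytes:
    """JSON payload for Ollama's /api/embeddings endpoint."""
    return json.dumps({"model": model, "prompt": text}).encode("utf-8")

def embed(text: str) -> list:
    """Return the embedding vector for `text` (requires a running Ollama server)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_embed_request(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]  # 768-dim for nomic-embed-text
```

The resulting vector is what gets stored in and searched against LanceDB.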

Features

1. Automatic memory (recommended)

Takes effect automatically once the Hook is installed; no manual invocation needed!

After each conversation it automatically evaluates and stores important information, covering 16 categories of scenarios.

Installation:

# 1. Copy files
mkdir -p ~/.openclaw/hooks/pidan-memory
cp HOOK.md handler.ts ~/.openclaw/hooks/pidan-memory/
cp auto_memory.py ~/.openclaw/workspace/memory/

# 2. Enable
openclaw hooks enable pidan-memory
openclaw gateway restart

2. Remember information (remember)

Manually store important information in the vector database.

Parameters:

  • content: memory content (required)
  • summary: summary (optional)
  • importance: importance level 1-5 (default 3)
  • user_id: user ID (default: default)

Example:

{
  "command": "remember",
  "parameters": {
    "content": "The user's favorite food is hot pot",
    "summary": "dietary preference",
    "importance": 4,
    "user_id": "default"
  }
}

3. Search memories (recall)

Semantic vector search.

Parameters:

  • query: search query
  • limit: number of results (default 5)
  • user_id: user ID

4. Get recent memories (recent_memories)

Returns the user's recent memories that they are permitted to access.

5. Mode management

Get the current mode (get_mode)

{
  "command": "get_mode",
  "parameters": {}
}

Set the mode (set_mode)

{
  "command": "set_mode",
  "parameters": {
    "mode": "private"  // or "shared"
  }
}

Mode descriptions:

  • private: multi-user mode (default); each user's memories are isolated
  • shared: shared mode; all users can query each other's shared memories
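The two modes reduce to a per-memory visibility check. A simplified sketch, assuming each record stores an owner and a visible_to list (field names inferred from the share_memory parameters; the skill's actual schema may differ):

```python
def can_read(memory: dict, requester: str, mode: str) -> bool:
    """Decide whether `requester` may read `memory` under the given mode."""
    if mode == "shared":
        return True                      # shared mode: everyone sees everything
    if memory["owner"] == requester:
        return True                      # creators always see their own memories
    return requester in memory.get("visible_to", [])  # explicit per-memory shares
```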

6. Delete a memory (delete_memory)

Deletes a memory (requires a second confirmation; only the creator can delete).

Parameters:

  • memory_id: memory ID (required)
  • confirm: whether deletion is confirmed (default false)

First request (to obtain confirmation):

{
  "command": "delete_memory",
  "parameters": {
    "memory_id": "uuid-xxx",
    "confirm": false
  }
}

Confirm the deletion:

{
  "command": "delete_memory",
  "parameters": {
    "memory_id": "uuid-xxx",
    "confirm": true
  }
}

Permission rules:

  • ✅ The creator can delete their own memories
  • ❌ Non-creators cannot delete
  • ⚠️ Deletion requires a second confirmation
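These rules can be expressed as a small guard function. A sketch, assuming each record stores its creator under an owner field (an illustrative name, not necessarily the skill's schema):

```python
def delete_memory(memory: dict, requester: str, confirm: bool) -> dict:
    """Creator-only deletion with a mandatory two-step confirmation."""
    if memory["owner"] != requester:
        return {"ok": False, "error": "only the creator may delete this memory"}
    if not confirm:
        # First call: ask the caller to repeat the request with confirm=true.
        return {"ok": False, "needs_confirmation": True}
    return {"ok": True, "deleted": memory["id"]}
```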

7. Share a memory (share_memory)

Shares a memory with specified users (only the creator can share).

Parameters:

  • memory_id: memory ID (required)
  • visible_to: list of users who can see it (default []) - empty = private
  • user_id: requester ID (used for permission checks)

Example - share with specific users:

{
  "command": "share_memory",
  "parameters": {
    "memory_id": "uuid-xxx",
    "visible_to": ["user_a", "user_b"],
    "user_id": "default"
  }
}

Example - unshare (make private):

{
  "command": "share_memory",
  "parameters": {
    "memory_id": "uuid-xxx",
    "visible_to": [],
    "user_id": "default"
  }
}

Permission rules:

  • ✅ The creator can share their own memories
  • ❌ Non-creators cannot share
  • ⚠️ An empty visible_to = private mode
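The sharing rules amount to a creator-only update of visible_to, where clearing the list makes the memory private again. A sketch with the same assumed record shape (owner and visible_to are illustrative field names):

```python
def share_memory(memory: dict, requester: str, visible_to: list) -> dict:
    """Creator-only sharing; an empty visible_to list makes the memory private again."""
    if memory["owner"] != requester:
        return {"ok": False, "error": "only the creator may share this memory"}
    memory["visible_to"] = list(visible_to)  # replace, not append
    return {"ok": True, "private": not visible_to}
```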

8. List memories (list_memories)

Lists all memories the user is permitted to access.

9. Manual deduplication (deduplicate)

Manually triggers deduplication (also auto-triggered every 20 records).
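How a count-based trigger plus embedding-similarity duplicate detection might look, as a sketch (the 0.95 cutoff is an assumption; the skill's actual threshold is not documented here):

```python
import math

DEDUP_AFTER = 20          # auto-trigger threshold (configurable via MEMORY_DEDUP_AFTER)
SIMILARITY_CUTOFF = 0.95  # assumed cutoff for "near-duplicate" embeddings

def cosine(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def should_deduplicate(record_count: int, dedup_after: int = DEDUP_AFTER) -> bool:
    """Fires on every dedup_after-th stored record."""
    return record_count > 0 and record_count % dedup_after == 0

def find_duplicates(vectors) -> list:
    """Return index pairs whose embeddings are nearly identical."""
    pairs = []
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            if cosine(vectors[i], vectors[j]) >= SIMILARITY_CUTOFF:
                pairs.append((i, j))
    return pairs
```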

10. Statistics (stats)

Returns memory statistics.

Configuration

Config file: ~/.openclaw/workspace/memory/config.yaml

memory:
  mode: private              # private | shared
  deduplicate_after: 20      # auto-deduplicate every N records

Or via environment variables:

MEMORY_MODE=private
MEMORY_DEDUP_AFTER=20
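A sketch of how these environment overrides can resolve against the defaults (the precedence between config.yaml and the environment is an assumption; the skill's actual resolution order isn't documented here):

```python
DEFAULTS = {"mode": "private", "deduplicate_after": 20}

def load_config(env: dict) -> dict:
    """Resolve configuration from environment variables, falling back to defaults."""
    mode = env.get("MEMORY_MODE", DEFAULTS["mode"])
    if mode not in ("private", "shared"):
        mode = DEFAULTS["mode"]          # reject unknown modes
    try:
        dedup = int(env.get("MEMORY_DEDUP_AFTER", DEFAULTS["deduplicate_after"]))
    except ValueError:
        dedup = DEFAULTS["deduplicate_after"]  # non-numeric value: fall back
    return {"mode": mode, "deduplicate_after": dedup}
```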

Storage location

~/.openclaw/workspace/memory/lance/  # LanceDB data

Tech stack

  • LanceDB: vector storage/search
  • Ollama: local embedding model
  • nomic-embed-text: 768-dim vectors

CLI testing

# Add a memory
echo '{"command": "remember", "parameters": {"content": "test"}}' | python3 run.py

# Search
echo '{"command": "recall", "parameters": {"query": "test"}}' | python3 run.py

# Get the current mode
echo '{"command": "get_mode", "parameters": {}}' | python3 run.py

# Set the mode
echo '{"command": "set_mode", "parameters": {"mode": "shared"}}' | python3 run.py

# Delete a memory (first request)
echo '{"command": "delete_memory", "parameters": {"memory_id": "xxx"}}' | python3 run.py

Security notes

User authentication

All commands obtain the real user ID from the OPENCLAW_USER_ID environment variable, which prevents spoofing:

# Set the user ID
export OPENCLAW_USER_ID=your_user_id
python3 run.py ...

Access control

  • Deleting/sharing memories: only the creator can perform these operations
  • Querying memories: access is determined by the mode (private/shared)
  • The user_id parameter: ignored; identity must come from the environment variable

Hook mode

When triggered automatically via the Hook, the user ID is supplied by the platform (e.g. a DingTalk openid) and injected into the environment variables automatically.
