Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Vector Mind Map Fusion

v1.2.4

L1→L2→L3 vector memory fusion system for building, querying, and managing a semantic memory graph. Triggered when the user needs to extract, process, memorize, or retrieve structured knowledge. Typical scenarios: (1) the user says "remember this", "save to memory", or "this is important" → L1 extraction; (2) the user asks "was there ever...", "is there a record of...", or "in my memory..." → L2+L3 query; (3) the user asks to "organize...

0 stars · 171 downloads · 1 current · 1 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dxiaofeng0811-lgtm/vector-mind-map-fusion.

Prompt Preview: Install & Setup
Install the skill "Vector Mind Map Fusion" (dxiaofeng0811-lgtm/vector-mind-map-fusion) from ClawHub.
Skill page: https://clawhub.ai/dxiaofeng0811-lgtm/vector-mind-map-fusion
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install vector-mind-map-fusion

ClawHub CLI


npx clawhub@latest install vector-mind-map-fusion
Security Scan
Capability signals
Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description describe a 3-layer memory extraction/retrieval system, and the code implements that: a session scanner (L1), a consolidator (L2), and an archive/index (L3). Access to OpenClaw sessions and a local neural memory DB is expected for this purpose, but the project writes to paths outside the repository (a default brain DB in ~/.local/share/neural-memory and an absolute '/workspace/fusion/...' path in the InfinityDB implementation), which is surprising and should be confirmed.
Instruction Scope
SKILL.md instructs the skill to scan user session JSONL files and persist extracted contents into a local DB and files; this will capture whatever is in OpenClaw sessions (including credentials or secrets users may have typed). It also tells users to run external install commands (curl | sh to install Ollama) and to pip install with '--break-system-packages'. The instructions grant broad discretion to scan and ingest all sessions and to run on a schedule (cron-like), which increases privacy risk.
Install Mechanism
There is no formal install spec, but the README/SKILL.md asks the user to curl an external installer (ollama.com/install.sh) and to 'ollama pull' large models; it also recommends 'pip install --break-system-packages httpx'. Those external-install steps are normal for a skill that requires Ollama, but they carry the usual network/script risks (curl | sh) and system-level implications (the pip flag). The code itself is bundled, so the skill downloads nothing at runtime beyond using a local Ollama server.
Credentials
The skill declares no required env vars, but the code reads the SESSIONS_DIR and NEURALMEMORY_DIR environment variables and defaults to user-local paths (~/.openclaw/... and ~/.local/share/neural-memory). That is logical for a memory skill, but the volume and sensitivity of the files it reads and writes are high (user session data, brain.db). The classifier also explicitly recognizes 'password/secret/token/api_key' patterns and will persist classified items into its stores unless you change its behavior, so credential capture and persistence is a real risk if sessions contain secrets.
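Because the classifier recognizes credential-like text but reportedly stores it without redaction, a user-side filter applied before persistence is one practical mitigation. A minimal sketch; the function name and regex patterns below are illustrative assumptions, not part of the skill:

```python
import re

# Hypothetical pre-storage filter. The patterns mirror the credential
# keywords the scan says the classifier recognizes; they are not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|secret|token|api_key)\s*[:=]\s*\S+"),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
]

def redact_secrets(text: str) -> str:
    """Replace likely credentials with a placeholder before persisting."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

A wrapper like this would have to be wired in by the user (for example, around the L1 classifier's output) since the skill itself does not appear to redact.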
Persistence & Privilege
The skill persists data outside the project (brain.db in the user's home directory and InfinityDB files), and some defaults point to absolute paths (e.g., '/workspace/fusion/...') that differ from the config constants. It uses pickle to persist and load HNSW indices (brain.hnsw), which means loading a tampered pickle file could execute code. 'always' is false and the skill is not forced into every run, but its filesystem writes and external install suggestions give it a persistent on-disk presence and lasting access to stored memories.
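The pickle concern can be mitigated by refusing to load an index file whose digest you have not recorded yourself, since the integrity check must happen before `pickle.load()` runs. A minimal sketch; this guard is hypothetical and assumes nothing about the skill's own loading code:

```python
import hashlib
import pickle
from pathlib import Path

def load_index_if_trusted(path: Path, expected_sha256: str):
    """Unpickle an HNSW index file only if its SHA-256 digest matches one
    you recorded yourself. pickle.load() on a tampered file can execute
    arbitrary code, so the check must precede deserialization."""
    data = path.read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"{path} digest mismatch; refusing to unpickle")
    return pickle.loads(data)
```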
What to consider before installing
This skill appears to implement the memory-extraction/recall system it claims, but review and precautions are recommended before installing:

- Data scope: it scans your OpenClaw session files and writes persistent memory files (brain.db and InfinityDB files). If your sessions contain secrets (passwords, API keys, tokens, private notes), those may be extracted and stored. Consider running in a sandbox, or set SESSIONS_DIR to a safe test directory.
- File locations: default storage includes ~/.local/share/neural-memory and an absolute '/workspace/fusion/...' path in code. Confirm and override these paths in the config before running so it doesn't write to unexpected locations.
- Pickle risk: the HNSW index is saved/loaded using pickle. Loading pickled files from untrusted sources can execute arbitrary code. Only run this skill on data you control, and avoid using existing brain.hnsw files from unknown origins.
- External install commands: SKILL.md asks you to run 'curl https://ollama.com/install.sh | sh' and 'pip install --break-system-packages'. Both 'curl | sh' and '--break-system-packages' have system-level effects; inspect the installer script, and avoid the pip flag if you don't want pip to alter system packages.
- Secrets handling: the l1 classifier will detect and classify 'password/secret/token/api_key' text, but there is no obvious automatic redaction before storing. If you need to avoid persisting secrets, add filtering or prevent the skill from scanning live session directories.
- Actionable steps: (1) review the code (esp. src/infinitydb_lite.py, l1_classifier, and the paths in config); (2) run the skill in an isolated environment or container; (3) set SESSIONS_DIR and NEURALMEMORY_DIR to controlled test directories; (4) remove or audit any existing brain.hnsw before loading; (5) avoid blindly executing the suggested curl | sh installer and the pip flag without inspection.
If you want, I can point out exact lines/files that implement the session scanning, brain DB write paths, and pickle load/save calls so you can inspect them more easily.
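One of the recommended precautions is pointing SESSIONS_DIR and NEURALMEMORY_DIR at controlled test directories before running anything. A sketch of how the defaults reportedly resolve and how to override them; the resolver function is hypothetical, the default paths are the ones stated in the scan report, and the exact subdirectory under ~/.openclaw is elided there:

```python
import os
from pathlib import Path

def resolve_data_dirs() -> dict:
    """Show where the skill would read sessions and write memory, based on
    the env vars and user-local defaults reported by the security scan.
    (Hypothetical helper; the subpath under ~/.openclaw is unspecified.)"""
    home = Path.home()
    return {
        "sessions": Path(os.environ.get("SESSIONS_DIR", home / ".openclaw")),
        "memory": Path(os.environ.get(
            "NEURALMEMORY_DIR", home / ".local/share/neural-memory")),
    }

# Point both at a sandbox before invoking main.py:
os.environ["SESSIONS_DIR"] = "/tmp/vmf-sandbox/sessions"
os.environ["NEURALMEMORY_DIR"] = "/tmp/vmf-sandbox/memory"
```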

Like a lobster shell, security has layers — review code before you run it.

latest: vk971ga6m8ay930b3br3d5qb95s85ngyv
171 downloads
0 stars
16 versions
Updated 1d ago
v1.2.4
MIT-0

Vector Mind Map Fusion

Three-layer vector memory fusion system: L1 extraction → L2 consolidation → L3 retrieval

v1.2.0, Plan A: InfinityDB as the single data source + parallel search (HNSW semantic + keyword literal)


Core Architecture

User input
    ↓
L1 (Session Scanner + Classifier)
    ├─ ByteOffsetScanner: scans session JSONL, resumes from the last byte offset
    ├─ Stage1 filter: noise/UUID/cron/metadata
    ├─ Classifier: denoise → quality check → classify → chunk → vectorize
    └─ Output: L2A/ (daily increments)
    ↓
L2 (Daily Consolidator)
    ├─ Load L2A: scan all date files
    ├─ Session grouping + sliding window
    ├─ Four-level dedup: content_hash → cosine → simhash → hnsw
    ├─ Session graph: N-gram Chinese word segmentation
    ├─ Transitive closure: relation completion
    └─ Output: L2/ (daily increments)
    ↓
L3 (Biweekly Consolidator)
    ├─ Load L2: write into InfinityDB (single data source)
    ├─ SCHEMA generation: session ≥ 5 → TF-IDF summary
    ├─ InfinityDB sync: brain.graph (metadata) + brain.vec + brain.hnsw
    └─ Incremental deletion of L2 (written_ids tracking)

Recall (retrieval)
    ├─ Path 1: HNSW vector search (semantic recall)
    ├─ Path 2: keyword search (literal precision)
    ├─ merge_seeds(): hits on both paths get double weight
    ├─ spreading_activation: graph diffusion
    ├─ dynamic_priority: weight by priority
    └─ tier/type filtering → top-k results
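The merge_seeds() double-weighting step in the Recall path above can be sketched as follows. The signature, field names, and scoring details are assumptions for illustration, not the skill's actual implementation:

```python
def merge_seeds(vector_hits: dict, keyword_hits: dict) -> dict:
    """Merge HNSW (semantic) and keyword (literal) search results.
    Inputs map neuron_id -> score in [0, 1]; an id found by both
    paths gets double weight, matching the diagram above."""
    merged = {}
    for nid in set(vector_hits) | set(keyword_hits):
        v = vector_hits.get(nid, 0.0)
        k = keyword_hits.get(nid, 0.0)
        score = max(v, k)
        if nid in vector_hits and nid in keyword_hits:
            score *= 2.0  # both paths hit: double weight
        merged[nid] = score
    return merged
```

The merged scores would then seed the spreading-activation pass over the neuron graph.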

Data Storage (Plan A: single InfinityDB)

memory/layers/infinitydb/
    brain.graph.json   # neuron metadata + adjacency (single source of truth)
    brain.vec.fvec     # binary vector store
    brain.vec.idx      # vector ID index
    brain.hnsw         # HNSW index (pickle)

brain.graph.json structure:
{
  "neurons": {
    "neuron_id": {
      "content": "memory content...",
      "memory_type": "task",
      "priority": 3,
      "tier": "warm",
      "timestamp": "2026-04-26T...",
      "neighbors": { "other_id": 0.5 }
    }
  },
  "config": { "recall": { "max_spread_hops": 3, ... } }
}
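Given the layout above, a persisted brain.graph.json can be audited with a short reader. The key names follow the documented structure; the validation itself (checking that every neighbor id resolves) is our addition, not part of the skill:

```python
import json
from pathlib import Path

def load_graph(path: Path) -> dict:
    """Load brain.graph.json and sanity-check its adjacency: every
    neighbors entry should reference an existing neuron id."""
    graph = json.loads(path.read_text(encoding="utf-8"))
    neurons = graph.get("neurons", {})
    for nid, neuron in neurons.items():
        for other in neuron.get("neighbors", {}):
            if other not in neurons:
                raise ValueError(f"{nid} links to unknown neuron {other}")
    return graph
```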

Environment Setup

1. Install Ollama

# macOS/Linux
curl -fsSL https://ollama.com/install.sh | sh
# Windows: https://ollama.com/download

2. Pull the embedding model

ollama pull bge-m3
ollama list  # verify

3. Start the service

ollama serve
curl http://localhost:11434/api/tags  # verify
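The same verification can be done from Python by parsing the /api/tags response. A small sketch; the response shape follows Ollama's /api/tags format, and the helper name is ours:

```python
import json

def model_available(name: str, tags_json: str) -> bool:
    """Check an Ollama /api/tags response for a pulled model. The endpoint
    returns JSON like {"models": [{"name": "bge-m3:latest", ...}, ...]}."""
    models = json.loads(tags_json).get("models", [])
    return any(m.get("name", "").startswith(name) for m in models)

# Live check (requires `ollama serve` running locally):
# import urllib.request
# with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
#     print(model_available("bge-m3", resp.read().decode()))
```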

4. Install Python dependencies

pip install --break-system-packages httpx
# no other external dependencies (pure Python standard library)

Quick Start

# run everything
python main.py run --all

# run a single layer
python main.py run --layer l1
python main.py run --layer l2
python main.py run --layer l3

# search memories
python main.py search "query text"
python src/recall/recall.py "query text" --top-k 5

# filter by type/tier
python src/recall/recall.py "install" --tier warm --type task

# show status
python main.py stats

Recall API

from src.recall.recall import fusion_recall

# basic retrieval (parallel vector + keyword search)
results = fusion_recall("what did I install yesterday", top_k=10)

# with filters
results = fusion_recall(
    query="project config",
    top_k=10,
    tier="warm",           # filter by tier
    memory_type="task",    # filter by type
    min_score=0.3          # minimum activation score
)

# return format
# [{
#   "id": "neuron_id",
#   "content": "memory content",
#   "memory_type": "task",
#   "priority": 3,
#   "tier": "warm",
#   "activation_score": 0.6854
# }, ...]

Project Structure

vector-mind-map-fusion/
├── SKILL.md               # this file (OpenClaw skill metadata)
├── README.md              # full documentation
├── requirements.txt       # Python dependencies
├── main.py                # project entry point
├── src/
│   ├── l1/                # L1 extraction layer
│   │   ├── l1_cron.py
│   │   ├── scan_sessions_incremental.py
│   │   └── l1_classifier.py
│   ├── l2/                # L2 consolidation layer
│   │   ├── l2_cron.py
│   │   └── l2_daily.py
│   ├── l3/                # L3 retrieval layer
│   │   ├── l3_cron.py
│   │   └── l3_biweekly_consolidate.py
│   └── recall/            # recall utilities
│       ├── recall.py          # SpreadingActivationRecall (parallel search)
│       └── infinitydb_lite.py # InfinityDB single data source
└── memory/
    └── layers/
        ├── l1a/           # L1 raw extractions
        ├── l2a/           # L2 pre-dedup
        ├── l2/            # L2 consolidated
        └── infinitydb/    # L3 permanent memory (single data source)

Trigger Conditions

| User intent | Layer | Notes |
| --- | --- | --- |
| "remember XXX", "save to memory" | L1 | immediate extraction |
| "was there ever...", "in my memory..." | Recall | parallel semantic + keyword query |
| "organize this", "categorize" | L2 | structured consolidation |
| "search memory", "semantic search" | Recall | vector recall |
| Daily schedule | L1+L2 (00:30 CST) | incremental scan |
| Every two days | L3 (03:00 CST) | archive consolidation |

Quality Guarantees

| Guarantee | Implementation |
| --- | --- |
| No broken chunks | 50-char overlap, atomic write, byte offset |
| No data loss | tmp protection, written_ids tracking, graph_written flag |
| No quality regression | denoise → quality → classify order, zero-vector filtering |
| No relation corruption | session graph + transitive closure |
| No index corruption | four-level dedup, content_hash_index isolation |
| Single data source | InfinityDB is the only writer; no dual-write sync issues |
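The "atomic write" item above usually means writing to a temporary file in the same directory and renaming it into place, so readers never observe a half-written file. A minimal sketch of that pattern; it is illustrative, and the skill's own helper may differ:

```python
import json
import os
import tempfile

def atomic_write_json(path: str, data: dict) -> None:
    """Write JSON atomically: dump to a temp file in the target directory,
    fsync, then os.replace() it over the destination (atomic rename on
    POSIX and Windows). On failure the temp file is removed."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, ensure_ascii=False)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise
```

The temp file must live in the same directory as the destination: os.replace() is only atomic within one filesystem.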

Performance Metrics

| Operation | Speed |
| --- | --- |
| L1 Scanner | ~87,000 items/s |
| L1 Classifier | ~32,000 items/s |
| L2 processing | ~14,500 items/s |
| L3 InfinityDB writes | ~100 items/s |
| Vector Search (HNSW) | O(log n) |
| Adjacency BFS | <1 ms/hop |
| Combined Recall | ~59 QPS |

Version History

| Version | Changes |
| --- | --- |
| 1.0.5 | initial release |
| 1.0.7 | rolled back L3 dedup; L3 is now a pure write proxy |
| 1.1.0 | Plan A: InfinityDB single data source + parallel search (HNSW + keyword) |
| 1.1.1 | documentation update: synced SKILL.md with README.md |
