Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Hawk Bridge

OpenClaw Hook Bridge + context-hawk Python Memory Engine. Auto-capture memories on every reply, auto-inject relevant memories before each response. Supports...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 6 · 0 current installs · 0 all-time installs
by Gao.QiLin@relunctance
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
The name and description (Hawk Bridge: auto-capture + auto-recall memories) match the code: TypeScript hooks trigger on agent events and call a Python memory engine. The dependencies and env vars mentioned (OLLAMA_BASE_URL, JINA_API_KEY, OPENAI_API_KEY, MINIMAX_API_KEY) are consistent with supporting multiple embedding/LLM backends. Minor mismatch: the metadata declares the skill 'instruction-only', yet the repository includes many code files and an installer, so it is not purely instruction-only.
Instruction Scope
Runtime hooks automatically capture every outbound message and call a Python extractor to persist memories to LanceDB; before replies they auto-recall and inject memories. The hooks read the user's OpenClaw config (~/.openclaw/openclaw.json) to auto-detect providers and will use any available API keys found there or in environment variables. This means user conversation content will be stored locally and, when configured, sent to external embedding/LLM services; that behavior fits the described purpose but has privacy implications and should be made explicit to users. The capture handler spawns a python -c subprocess that embeds the conversation into the generated code, which is brittle and can fail on unexpected message content.
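The brittle pattern the scan describes (splicing conversation text into a generated `python -c` program) can be avoided by delivering the text over stdin. A minimal sketch of the stdin pattern, not the skill's actual code; the child command here is a stand-in that just echoes its input:

```python
import subprocess

def pipe_to_python(payload: str) -> str:
    # Deliver arbitrary text to a child process over stdin. Unlike
    # interpolating the text into a `python -c` code string, stdin
    # delivery cannot break on quotes, newlines, or braces in the payload.
    proc = subprocess.run(
        ["python3", "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
        input=payload,
        capture_output=True,
        text=True,
        check=True,
    )
    return proc.stdout

# Even quote-heavy content round-trips intact:
print(pipe_to_python('she said "hi" {and} \'more\''))
```

A real capture hook could invoke the extractor module the same way (e.g. `python3 -m hawk_memory.extractor` with the conversation on stdin) rather than generating code per message.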
Install Mechanism
There is no formal registry install spec, but an included install.sh performs system package installation (sudo apt/dnf/pacman/apk/zypper), runs npm/pip installs, executes remote installers via curl | sh (e.g., Ollama), downloads models (ollama pull), and clones an additional repository (context-hawk) into ~/.openclaw/workspace. The script clones over git@github.com (SSH), which will fail for many users and is an odd choice; HTTPS would be more typical. Running remote install scripts and model downloads as root or with sudo is a supply-chain risk.
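Instead of piping a remote installer straight into a shell, you can download it for review first and pin its hash. A hedged sketch of that workflow; the URL and paths in the comment are placeholders, not endpoints from this skill:

```python
import hashlib
import urllib.request

def fetch_for_review(url: str, dest: str) -> str:
    # Download a remote install script to disk so it can be read before
    # execution, and return its SHA-256 so the reviewed bytes can be
    # checked against what is eventually run.
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    with open(dest, "wb") as f:
        f.write(data)
    return hashlib.sha256(data).hexdigest()

# digest = fetch_for_review("https://example.com/install.sh", "/tmp/install.sh")
# Read /tmp/install.sh, then execute it only if the digest still matches.
```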
Credentials
The skill declares no required env vars but reads and honors many optional env vars and existing OpenClaw auth configuration (MINIMAX_API_KEY, OPENAI_API_KEY, JINA_API_KEY, OLLAMA_BASE_URL, etc.). These are proportionate to the declared purpose (embedding/LLM backends). Important: the skill will read ~/.openclaw/openclaw.json to auto-detect provider keys and may use those keys to call external services — this is expected for integration but is a sensitive action the user should be aware of.
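Before enabling the skill you can audit what it could pick up. A hypothetical helper sketch; the config path, the auth.profiles section, and the env var names come from this report, but the helper itself is not part of the skill:

```python
import json
import os
from pathlib import Path

CONFIG = Path("~/.openclaw/openclaw.json").expanduser()
ENV_KEYS = ["MINIMAX_API_KEY", "OPENAI_API_KEY", "JINA_API_KEY", "OLLAMA_BASE_URL"]

def discoverable_credentials() -> list:
    # List env vars and OpenClaw auth profiles that a provider-autodetecting
    # hook could reuse, so you can remove or segregate them beforehand.
    found = [k for k in ENV_KEYS if os.environ.get(k)]
    if CONFIG.exists():
        cfg = json.loads(CONFIG.read_text())
        found += list(cfg.get("auth", {}).get("profiles", {}))
    return found

print(discoverable_credentials())
```

Running this before installation shows exactly which credentials would be in scope for the hooks.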
Persistence & Privilege
always:false (good). The skill installs files under the user's home (~/.openclaw/workspace, ~/.openclaw/hawk) and seeds memories into a local LanceDB. The installer asks for sudo to install system packages and may install Ollama and models — these actions require elevated privileges and increase the blast radius if the install script is malicious. The skill runs autonomously on hook events by design, which combined with reading stored credentials and auto-capture raises the impact of a compromised skill.
What to consider before installing
This skill implements automatic memory capture/injection and needs careful review before installing. Highlights and recommended precautions:

- Data flows: by design it stores conversations in a local LanceDB and, when embedding/LLM providers are configured (OpenAI, Jina, Minimax, Ollama, etc.), it sends text to those external services. If you do not want user messages sent off-host, keep the skill in BM25-only/local-embed mode or do not provide remote API keys.
- Installer risk: the provided install.sh performs system package installs with sudo, runs curl | sh for third-party installers (Ollama/NodeSource), and downloads models. Running it as-is is a supply-chain risk. Prefer manual review or manual installation steps, and avoid piping remote scripts directly into bash.
- Source cloning: install.sh uses git@github.com (SSH) to clone a second repository (context-hawk) into ~/.openclaw/workspace, pulling in more code at install time. Inspect that repository before trusting it, and consider replacing SSH URLs with HTTPS when cloning public repos.
- Credentials: the hook auto-reads ~/.openclaw/openclaw.json and reuses configured provider API keys. If your OpenClaw config holds sensitive keys you do not want reused, remove or segregate them before enabling the skill.
- Least-privilege testing: install and test inside an isolated environment (a VM or disposable container) first. Disable auto-capture (set capture.enabled=false in config) until you confirm behavior, and review the Python context-hawk code (the extractor and any networking) before production use.
- If you accept the install risks: prefer manual installation after reviewing install.sh, and audit any remote scripts it downloads. Seed memories and model downloads can be large, so confirm your disk and bandwidth budgets first.
I cannot prove malicious intent from the files provided, but the combination of remote installers, sudo operations, on-install cloning of another repo, and automatic capture of conversation content makes this package higher-risk than a simple hook. Proceed only after code and installer review or run inside a sandbox.
src/hooks/hawk-capture/handler.ts:106
Shell command execution detected (child_process).
src/embeddings.ts:41
Environment variable access combined with network send.
src/retriever.ts:144
Environment variable access combined with network send.
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
Download zip
latest: vk97epv86pqr0y110wjzqxdbxw9841rrq

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🦅 Clawdis
Any bin: node, python3.12

SKILL.md

hawk-bridge: OpenClaw Memory System Skill

A single skill combining the OpenClaw Hook Bridge with the context-hawk Python Memory Engine, covering automatic memory capture, automatic memory retrieval, four-layer decay, vector search, and Markdown compatibility.


Core capabilities

| Capability | Description |
|---|---|
| autoCapture hook | After every reply, an LLM automatically extracts content from the conversation and stores it in LanceDB |
| autoRecall hook | Before every reply, relevant memories are automatically retrieved and injected into the context |
| Four-layer memory decay | Working → Short → Long → Archive; low-value memories are evicted automatically |
| Hybrid retrieval | Vector + BM25 + RRF fusion + noise filtering + cross-encoder reranking |
| Markdown compatibility | One-command import of existing .md memory files |
| Zero configuration | Automatically reads the existing OpenClaw config (minimax, etc.); no extra API key needed |
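The hybrid-retrieval capability mentions RRF fusion. As background, reciprocal rank fusion merges ranked lists by summing 1/(k + rank) per document; a generic sketch, illustrative only and not the skill's implementation (the memory ids are made up):

```python
def rrf_fuse(rankings, k=60):
    # Reciprocal Rank Fusion: each document scores sum(1 / (k + rank))
    # over every ranked list it appears in; k=60 is the conventional
    # smoothing constant from the original RRF paper.
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["m3", "m1", "m7"]  # hypothetical ids ranked by vector score
bm25_hits = ["m1", "m3", "m9"]    # the same corpus ranked by BM25
print(rrf_fuse([vector_hits, bm25_hits]))  # m1/m3 outrank the singletons
```

Documents found by both retrievers accumulate score from both lists, which is why the fused ranking favors them over single-list hits.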

Architecture

OpenClaw Gateway (TypeScript Hooks)
    │
    ├── agent:bootstrap → hawk-recall hook
    │       → HybridRetriever (vector + BM25)
    │       → query LanceDB
    │       → inject memories into the context 🦅
    │
    └── message:sent → hawk-capture hook
            → Python LLM smart extraction (fact/preference/decision/entity/other)
            → store in LanceDB
            → Governance log

Python Core (hawk_memory/)
    ├── memory.py       — MemoryManager, four-layer decay
    ├── compressor.py   — ContextCompressor context compression
    ├── self_improving.py — self-reflective learning
    ├── extractor.py    — LLM 6-category extraction
    ├── governance.py   — system health metrics
    ├── vector_retriever.py — vector retrieval
    └── markdown_importer.py — .md file import

Installation

Method 1: OpenClaw Skill (recommended)

openclaw skills install https://github.com/relunctance/hawk-bridge

Method 2: manual install

git clone git@github.com:relunctance/hawk-bridge.git /path/to/hawk-bridge
cd /path/to/hawk-bridge
npm install
pip install lancedb openai rank_bm25

Register in openclaw.json

{
  "plugins": {
    "load": {
      "paths": ["/absolute/path/to/hawk-bridge"]
    },
    "allow": ["hawk-bridge"]
  }
}

Automatic configuration (no extra keys)

The embedding and LLM backends default to the providers already configured in OpenClaw (minimax, etc.):

| Setting | Source | Notes |
|---|---|---|
| embedding provider | openclaw.json models.providers | auto-detected |
| LLM provider | openclaw.json models.providers | auto-detected |
| API key | openclaw.json auth.profiles | passed through automatically |

Environment variable overrides (optional):

export MINIMAX_API_KEY="your-key"        # MiniMax API key
export MINIMAX_BASE_URL="https://..."     # custom endpoint
export MINIMAX_MODEL="MiniMax-M2.7"      # pin a specific model
export OLLAMA_BASE_URL="http://localhost:11434"  # local Ollama (free)
export LLM_PROVIDER="groq"               # switch the LLM backend

Configuration (openclaw.json)

{
  "plugins": {
    "entries": {
      "hawk-bridge": {
        "enabled": true,
        "config": {
          "embedding": {
            "provider": "openclaw",
            "apiKey": "",
            "model": "embedding-minimax",
            "baseURL": "",
            "dimensions": 1536
          },
          "recall": {
            "topK": 5,
            "minScore": 0.6,
            "injectEmoji": "🦅"
          },
          "capture": {
            "enabled": true,
            "maxChunks": 3,
            "importanceThreshold": 0.5
          },
          "python": {
            "pythonPath": "python3.12",
            "hawkDir": "~/.openclaw/hawk"
          }
        }
      }
    }
  }
}

Python API

Four-layer memory + decay

from hawk_memory import MemoryManager

mm = MemoryManager()
mm.store("User preference: prefers a concise reply style", category="preference")
results = mm.recall("What is the user's communication style?")
print(results)
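The four tiers named above (Working → Short → Long → Archive) imply a demotion policy. A toy sketch of one possible decay rule, with hypothetical thresholds; the real MemoryManager policy is not documented here:

```python
TIERS = ["working", "short", "long", "archive"]
# Hypothetical per-tier age budgets (seconds) before demotion.
BUDGETS = {"working": 3600, "short": 86400, "long": 30 * 86400}

def decay_tier(tier: str, importance: float, age_seconds: float) -> str:
    # Low-importance memories slide one tier toward 'archive' once they
    # outlive their tier's budget; high-importance memories stay pinned.
    if tier == "archive" or importance >= 0.8:
        return tier
    if age_seconds > BUDGETS.get(tier, float("inf")):
        return TIERS[TIERS.index(tier) + 1]
    return tier

print(decay_tier("working", 0.3, 7200))  # old, low-value -> "short"
```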

Vector retrieval

from hawk_memory.vector_retriever import VectorRetriever

retriever = VectorRetriever(top_k=5)
chunks = retriever.recall("Which services has the boss deployed before?")
print(retriever.format_for_context(chunks))

Markdown import

from hawk_memory.markdown_importer import MarkdownImporter

importer = MarkdownImporter(memory_dir="~/.openclaw/memory")
result = importer.import_all()  # incremental import: already-imported files are tagged and skipped
print(f"Imported {result['files']} files, {result['chunks']} chunks")

Context compression

from hawk_memory.compressor import ContextCompressor

compressor = ContextCompressor(max_tokens=4000)
compressed = compressor.compress(conversation_history)
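What a max_tokens budget means in practice: older turns are dropped until the recent history fits. A naive stand-in using whitespace tokenization; ContextCompressor itself presumably does smarter compression than simple truncation:

```python
def truncate_history(turns, max_tokens=4000, count=lambda s: len(s.split())):
    # Keep the most recent turns whose approximate token total fits the
    # budget; everything older is dropped first.
    kept, used = [], 0
    for turn in reversed(turns):
        cost = count(turn)
        if used + cost > max_tokens:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = ["old question", "old answer with detail", "latest question"]
print(truncate_history(history, max_tokens=6))
```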

Self-reflection

from hawk_memory.self_improving import SelfImproving

si = SelfImproving()
si.learn_from_error("memory extraction returned empty", context={"query": "..."})
stats = si.get_stats()

CLI tools

# Import Markdown memory files
python3.12 -m hawk_memory.markdown_importer --dry-run  # preview only
python3.12 -m hawk_memory.markdown_importer            # actually import

# Memory extraction (LLM 6-category classification)
echo "conversation content..." | python3.12 -m hawk_memory.extractor --provider openclaw

# Count stored memories
python3.12 -c "from hawk_memory import MemoryManager; print(MemoryManager().count())"

# Show governance metrics
python3.12 -c "from hawk_memory.governance import Governance; print(Governance().get_stats(24))"

Relationship to context-hawk

context-hawk is the underlying Python engine of this skill; it has been merged into the python/hawk_memory/ directory.

If you previously installed context-hawk as a standalone skill, you can uninstall it:

openclaw skills uninstall context-hawk

This skill is a superset of context-hawk, providing the full feature set plus automatic hooks.


Directory layout

hawk-bridge/
├── SKILL.md                    ← this document
├── openclaw.plugin.json        ← plugin metadata + config schema
├── package.json
├── src/
│   ├── index.ts              # plugin entry point
│   ├── config.ts             # auto-reads openclaw.json config
│   ├── lancedb.ts            # LanceDB wrapper
│   ├── embeddings.ts         # embeddings (multiple backends)
│   ├── retriever.ts          # hybrid retrieval pipeline
│   └── hooks/
│       ├── recall.ts         # autoRecall hook
│       └── capture.ts        # autoCapture hook
└── python/
    └── hawk_memory/
        ├── __init__.py
        ├── memory.py         # MemoryManager, four-layer decay
        ├── compressor.py     # ContextCompressor
        ├── config.py         # Config
        ├── self_improving.py # self-reflection
        ├── extractor.py      # LLM 6-category extraction
        ├── governance.py     # governance metrics
        ├── vector_retriever.py # vector retrieval
        └── markdown_importer.py # Markdown import

Dependencies

npm: npm install

  • @lancedb/lancedb ≥ 0.26.2
  • openai ≥ 6.21.0
  • rank_bm25 ≥ 1.2.0

Python: pip install

  • lancedb
  • openai
  • rank_bm25

Files

21 total
