Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Context Compress

v1.0.2

Incrementally summarizes long conversations by pruning, preserving key segments, and using AI to maintain context coherence.

0 stars · 157 downloads · 0 current · 0 all-time
by ECsss @olveww-dot

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for olveww-dot/context-compress.

Prompt Preview: Install & Setup
Install the skill "Context Compress" (olveww-dot/context-compress) from ClawHub.
Skill page: https://clawhub.ai/olveww-dot/context-compress
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install context-compress

ClawHub CLI

Package manager switcher

npx clawhub@latest install context-compress
Security Scan
Capability signals
Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
⚠ Purpose & Capability
The skill claims incremental summarization using a third‑party LLM (DeepSeek‑V3 / SiliconFlow). The package and runtime invoke a remote summarizer, which makes sense for the stated purpose, but the registry metadata declares no required environment variables or primary credential while the README and scripts expect SILICONFLOW_API_KEY and the code uses a readSecret helper. This mismatch (no declared secret despite needing an API key) is an incoherence that reduces transparency.
⚠ Instruction Scope
Runtime instructions and scripts will read local conversation/session data and send content to an external API for summarization. The README shows examples where secrets found in context (e.g., SECRET_KEY) might be included in summaries — meaning potentially sensitive conversation contents could be transmitted to SiliconFlow. The SKILL.md also suggests copying the skill into a GitHub sync directory, which risks persisting sensitive summaries to a remote git repo. These behaviors are within the claimed purpose but increase privacy/exfiltration risk and are not called out in registry metadata.
Install Mechanism
There is no formal install spec; an install.sh is provided that copies files into ~/.openclaw/skills (local, low-risk). SKILL.md suggests a one‑liner that downloads a tar.gz from a GitHub repo (a common pattern). The remote download is from GitHub (well-known host) but the repository is third‑party; downloading and executing anything from an untrusted repo has inherent risk. Overall installation is not highly suspicious but lacks provenance/verification.
⚠ Credentials
The skill effectively requires at least one secret (SILICONFLOW_API_KEY) to enable LLM summarization, and the code imports a readSecret helper (which could access environment or secret stores). However the registry metadata lists no required env vars or primary credential. That omission is a transparency problem. Also the skill will send conversation contents (which may include credentials, tokens, config snippets) to a third party — this is proportionate to using an external LLM but should be explicitly declared and limited.
Persistence & Privilege
The skill does not request 'always: true' and its install.sh only writes its own files into ~/.openclaw/skills. It does not appear to change other skills' configs or system-wide settings. Autonomous invocation is allowed (default) which increases blast radius if the skill exfiltrates data, but that is platform default rather than a unique misconfiguration of this skill.
What to consider before installing
This skill will read your local OpenClaw conversation/session data and (if enabled) send the middle portion of conversations to a third‑party LLM service (SiliconFlow / DeepSeek‑V3) for summarization. Before installing:

  1. Assume SILICONFLOW_API_KEY is required even though the registry metadata omits it; do not provide any key you don't trust the recipient with.
  2. Inspect compressor.ts and the readSecret import to confirm where API keys are read from (environment vs. secret manager) and that no other unexpected endpoints are contacted.
  3. Be aware summaries may include secrets found in chat history (the README shows such examples); redact or remove sensitive messages before compression, or disable the remote LLM step.
  4. Prefer installing only after verifying the upstream GitHub repo and repository owner, and avoid the one‑liner curl|tar unless you trust that repo.
  5. If you cannot verify the code path that sends data to SiliconFlow, do not install on systems containing confidential data.

Providing the full, untruncated compressor.ts and any network-calling code paths would increase confidence and could change this assessment to benign if they prove limited and transparent.
src/compressor.ts:698
Environment variable access combined with network send.
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97bavx149aqxsvm3fs8n5z5ax8562gc
157 downloads
0 stars
3 versions
Updated 1w ago
v1.0.2
MIT-0

Context Compress Skill

🛡️ OpenClaw Hybrid Evolution Plan: porting Hermes-agent (100K ⭐) and Claude Code core capabilities to OpenClaw

An incremental summarization tool that prevents chain-of-thought breaks in long conversations.

🚀 One-Line Install

mkdir -p ~/.openclaw/skills && cd ~/.openclaw/skills && curl -fsSL https://github.com/olveww-dot/openclaw-hermes-claude/archive/main.tar.gz | tar xz && cp -r openclaw-hermes-claude-main/skills/context-compress . && rm -rf openclaw-hermes-claude-main && echo "✅ context-compress 安装成功"

Triggers

  • Manual trigger: say "压缩上下文" ("compress context") or "compact"
  • Auto trigger: compression runs automatically once the context exceeds 50% of the model's context window
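The auto-trigger condition above amounts to a token-budget check. A minimal sketch in TypeScript, assuming a 128K-token context window and the common ~4-characters-per-token heuristic (both assumptions, not values from the skill; only the 50% ratio comes from this README):

```typescript
// Hypothetical auto-trigger check. CONTEXT_WINDOW and the 4-chars-per-token
// estimate are assumptions; only the 50% ratio comes from the README.
const CONTEXT_WINDOW = 128_000; // model context window, in tokens (assumed)
const COMPRESS_RATIO = 0.5;     // compress once usage exceeds 50% (from README)

// Rough token estimate: ~4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function shouldCompress(conversation: string[]): boolean {
  const used = conversation.reduce((sum, msg) => sum + estimateTokens(msg), 0);
  return used > CONTEXT_WINDOW * COMPRESS_RATIO;
}
```

A real implementation would use the model's tokenizer rather than a character heuristic, but the trigger logic stays the same.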

Five-Step Algorithm

  1. Prune — trim old tool outputs (no LLM calls; a cheap pre-pass)
  2. Head — protect the opening system prompt and first few turns
  3. Tail — protect the most recent turns under a token budget (~20K tokens)
  4. LLM Summarize — compress the middle portion with DeepSeek-V3
  5. Iterative — later compressions iteratively update the existing summary
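Steps 2 and 3 (Head/Tail protection) amount to partitioning the message list into a protected head, a protected tail under a token budget, and a middle slice destined for summarization. A sketch under stated assumptions (HEAD_COUNT and the token estimator are invented here; only the ~20K tail budget comes from this README):

```typescript
// Sketch of the Head/Tail partition. HEAD_COUNT and the character-based
// token estimate are assumptions; TAIL_BUDGET (~20K tokens) is from the README.
interface Message { role: string; content: string; }

const HEAD_COUNT = 4;        // protect system prompt + first turns (assumed)
const TAIL_BUDGET = 20_000;  // ~20K tokens of recent turns (from README)

const tokensOf = (m: Message) => Math.ceil(m.content.length / 4);

function partition(messages: Message[]) {
  const head = messages.slice(0, HEAD_COUNT);
  // Walk backwards, keeping recent messages until the tail budget is spent.
  const tail: Message[] = [];
  let budget = TAIL_BUDGET;
  for (let i = messages.length - 1; i >= HEAD_COUNT; i--) {
    const cost = tokensOf(messages[i]);
    if (cost > budget) break;
    budget -= cost;
    tail.unshift(messages[i]);
  }
  // Whatever is neither head nor tail is the middle slice sent to the LLM.
  const middle = messages.slice(HEAD_COUNT, messages.length - tail.length);
  return { head, middle, tail };
}
```

The pruning pre-pass (step 1) would run before this partition, so large tool outputs never inflate the middle slice in the first place.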

Summary Format

The following structured fields are preserved:

  • Active Task — the current task (most important)
  • Goal — the overall objective
  • Completed Actions — finished operations (with tool, target, and result)
  • Active State — the current working state
  • Blocked — blocking issues
  • Key Decisions — key decisions made
  • Pending User Asks — unfulfilled user requests
  • Remaining Work — work still to do
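These fields map naturally onto a structured type. The shape below is an illustration of that mapping, not the actual type in compressor.ts:

```typescript
// Illustrative shape for the preserved fields; names mirror the list
// above, but this type is an assumption, not taken from compressor.ts.
interface CompletedAction {
  tool: string;    // which tool was used
  target: string;  // what it acted on
  result: string;  // outcome
}

interface CompressedSummary {
  activeTask: string;          // current task (most important)
  goal: string;                // overall objective
  completedActions: CompletedAction[];
  activeState: string;         // current working state
  blocked: string[];           // blocking issues
  keyDecisions: string[];      // key decisions made
  pendingUserAsks: string[];   // unfulfilled user requests
  remainingWork: string[];     // work still to do
}

// Example summary an iterative pass might carry forward and update.
const example: CompressedSummary = {
  activeTask: "Refactor the compressor pipeline",
  goal: "Keep long conversations coherent",
  completedActions: [
    { tool: "read", target: "src/compressor.ts", result: "reviewed" },
  ],
  activeState: "mid-refactor",
  blocked: [],
  keyDecisions: ["summarize only the middle slice"],
  pendingUserAsks: [],
  remainingWork: ["wire up iterative updates"],
};
```

A typed summary like this is what makes step 5 (Iterative) cheap: each pass can update fields in place instead of re-summarizing from scratch.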

SiliconFlow API Usage

  • 模型: deepseek-ai/DeepSeek-V3
  • API Base: https://api.siliconflow.cn/v1
  • Calls go through a relay provider; the API key is stored in an environment variable
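The remote summarization call implied above can be sketched as follows. The endpoint is OpenAI-compatible, so a plain fetch to /chat/completions suffices; the system prompt text and error handling here are assumptions, not code from compressor.ts:

```typescript
// Hedged sketch of the summarization call. The endpoint, model name, and
// env-var key come from this README; everything else is an assumption.
function buildRequest(apiKey: string, middleText: string) {
  return {
    url: "https://api.siliconflow.cn/v1/chat/completions",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "deepseek-ai/DeepSeek-V3",
      messages: [
        {
          role: "system",
          content:
            "Summarize this conversation segment, preserving tasks, decisions, and state.",
        },
        { role: "user", content: middleText },
      ],
    }),
  };
}

// Reads the key from the environment, matching the README's note that the
// API key lives in an environment variable (SILICONFLOW_API_KEY).
async function summarize(middleText: string): Promise<string> {
  const apiKey = process.env.SILICONFLOW_API_KEY;
  if (!apiKey) throw new Error("SILICONFLOW_API_KEY is not set");
  const req = buildRequest(apiKey, middleText);
  const res = await fetch(req.url, {
    method: "POST",
    headers: req.headers,
    body: req.body,
  });
  if (!res.ok) throw new Error(`SiliconFlow API error: ${res.status}`);
  const data: any = await res.json();
  return data.choices[0].message.content;
}
```

Note that whatever text lands in the middle slice is sent to this third-party endpoint verbatim, which is exactly the exfiltration surface the security scan above flags.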

🧩 Companion Skills

This skill is part of the OpenClaw Hybrid Evolution Plan:

Porting Hermes-agent (100K ⭐) and Claude Code core capabilities to OpenClaw

🔗 GitHub project: olveww-dot/openclaw-hermes-claude

Full skill suite (6 skills):

  • 🛡️ crash-snapshots — crash protection
  • 🧠 auto-distill — T1 automatic memory distillation
  • 🎯 coordinator — commander mode
  • 💡 context-compress — chain-of-thought continuity (this skill)
  • 🔍 lsp-client — LSP code intelligence
  • 🔄 auto-reflection — automatic reflection

Output Files

  • src/compressor.ts — core compression logic (TypeScript)
