Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Context Compactor

Token-based context compaction for local models (MLX, llama.cpp, Ollama) that don't report context limits.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 1.2k · 2 current installs · 2 all-time installs
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description (context compaction for local LLMs) align with the included code and runtime instructions. The plugin inspects OpenClaw config, reads session transcripts, estimates tokens, summarizes old messages, and prepends a summary — all consistent with the stated functionality.
Instruction Scope
SKILL.md and the code limit actions to reading openclaw.json (with an explicit prompt in the CLI), reading session transcripts (when provided by the runtime), writing plugin files into ~/.openclaw/extensions/, and calling the local OpenClaw LLM runtime for summaries. There are no instructions to read unrelated system files, access unrelated env vars, or transmit data to external endpoints.
Install Mechanism
There is no formal install spec in the registry, but SKILL.md instructs using `npx jasper-context-compactor setup`. Running via npx will fetch the package from npm, which is normal but carries the usual supply-chain risk (downloading and executing remote code). The included CLI copies files into ~/.openclaw/extensions — expected for a plugin installer. Recommend verifying the npm package and GitHub repository before running npx.
Credentials
The skill requests no environment variables, no credentials, and no config paths beyond user OpenClaw config paths under the user's home directory. The code only touches ~/.openclaw/, consistent with its purpose.
Persistence & Privilege
The skill does not request always:true or system-wide elevated privileges. The installer writes plugin files under the user's home (~/.openclaw/extensions/context-compactor) and updates openclaw.json — reasonable for a user-installed plugin. It does not modify other skills' configs or system settings beyond the user's OpenClaw config.
Assessment
This plugin is internally coherent and appears to do what it claims: estimate tokens, summarize older messages, and inject a compacted summary for local models. Before installing:

  • Verify the package source: check the npm page and GitHub repo referenced in the README and ensure they match a publisher you trust. npx will fetch and execute remote code, so confirm the package contents and maintainer.
  • Back up your openclaw.json (the CLI already does this, but you can back it up manually before running commands).
  • Review the included files (index.ts, cli.js) locally if possible rather than running npx directly, or install from a tarball you inspected.
  • If you use sensitive local providers, confirm the plugin's modelFilter setting so it only runs where you want it.

For higher assurance, run the CLI from a checked-out copy of this repository instead of via npx, so you control the exact code being executed.

Like a lobster shell, security has layers — review code before you run it.

Current version: v0.3.8
latest: vk975ckc4zj2g8ezfyxn9tfy3v180zxeg

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Context Compactor

Automatic context compaction for OpenClaw when using local models that don't properly report token limits or context overflow errors.

The Problem

Cloud APIs (Anthropic, OpenAI) report context overflow errors, allowing OpenClaw's built-in compaction to trigger. Local models (MLX, llama.cpp, Ollama) often:

  • Silently truncate context
  • Return garbage when context is exceeded
  • Don't report accurate token counts

This leaves you with broken conversations when context gets too long.

The Solution

Context Compactor estimates tokens client-side and proactively summarizes older messages before hitting the model's limit.

How It Works

┌─────────────────────────────────────────────────────────────┐
│  1. Message arrives                                         │
│  2. before_agent_start hook fires                           │
│  3. Plugin estimates total context tokens                   │
│  4. If over maxTokens:                                      │
│     a. Split into "old" and "recent" messages              │
│     b. Summarize old messages (LLM or fallback)            │
│     c. Inject summary as compacted context                 │
│  5. Agent sees: summary + recent + new message             │
└─────────────────────────────────────────────────────────────┘
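The flow above can be sketched in TypeScript. Everything here is illustrative: the message shape, helper names, and constants are assumptions for this example, not the plugin's actual internals.

```typescript
// Illustrative sketch of the compaction flow, not the plugin's real code.
interface Message {
  role: "user" | "assistant" | "system";
  content: string;
}

const CHARS_PER_TOKEN = 4;

// Step 3: estimate total context tokens from character count.
function estimateTokens(messages: Message[]): number {
  const chars = messages.reduce((n, m) => n + m.content.length, 0);
  return Math.ceil(chars / CHARS_PER_TOKEN);
}

// Step 4a: walk backwards, keeping recent messages until the
// keepRecentTokens budget is spent; everything earlier is "old".
function splitMessages(messages: Message[], keepRecentTokens: number) {
  let budget = keepRecentTokens * CHARS_PER_TOKEN;
  let i = messages.length;
  while (i > 0 && messages[i - 1].content.length <= budget) {
    budget -= messages[i - 1].content.length;
    i--;
  }
  return { old: messages.slice(0, i), recent: messages.slice(i) };
}

// Steps 4b-5: if over the limit, summarize the old part and prepend
// the summary so the agent sees summary + recent + new message.
function compact(
  messages: Message[],
  maxTokens: number,
  keepRecentTokens: number,
  summarize: (old: Message[]) => string,
): Message[] {
  if (estimateTokens(messages) <= maxTokens) return messages;
  const { old, recent } = splitMessages(messages, keepRecentTokens);
  return [{ role: "system", content: summarize(old) }, ...recent];
}
```

A real implementation would also need to handle tool calls and message metadata; this sketch only shows the token budgeting.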

Installation

# One command setup (recommended)
npx jasper-context-compactor setup

# Restart gateway
openclaw gateway restart

The setup command automatically:

  • Copies plugin files to ~/.openclaw/extensions/context-compactor/
  • Adds plugin config to openclaw.json with sensible defaults

Configuration

Add to openclaw.json:

{
  "plugins": {
    "entries": {
      "context-compactor": {
        "enabled": true,
        "config": {
          "maxTokens": 8000,
          "keepRecentTokens": 2000,
          "summaryMaxTokens": 1000,
          "charsPerToken": 4
        }
      }
    }
  }
}

Options

Option             Default           Description
enabled            true              Enable/disable the plugin
maxTokens          8000              Max context tokens before compaction
keepRecentTokens   2000              Tokens to preserve from recent messages
summaryMaxTokens   1000              Max tokens for the summary
charsPerToken      4                 Token estimation ratio
summaryModel       (session model)   Model to use for summarization

Tuning for Your Model

MLX (8K context models):

{
  "maxTokens": 6000,
  "keepRecentTokens": 1500,
  "charsPerToken": 4
}

Larger context (32K models):

{
  "maxTokens": 28000,
  "keepRecentTokens": 4000,
  "charsPerToken": 4
}

Small context (4K models):

{
  "maxTokens": 3000,
  "keepRecentTokens": 800,
  "charsPerToken": 4
}
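The presets above all keep maxTokens well below the model's context window, leaving headroom for the reply and for estimation error. A hypothetical helper for deriving your own preset (this is a rule of thumb, not the plugin's logic; the README's 32K preset is more aggressive than this formula):

```typescript
// Hypothetical preset derivation: reserve ~25% of the context window
// as headroom, and keep ~25% of the remaining budget as verbatim
// recent messages. Not part of the plugin; illustrative only.
function suggestConfig(contextWindow: number) {
  const maxTokens = Math.floor(contextWindow * 0.75);
  return {
    maxTokens,
    keepRecentTokens: Math.floor(maxTokens * 0.25),
    charsPerToken: 4,
  };
}

// e.g. an 8192-token MLX model yields values close to the 8K preset.
const cfg = suggestConfig(8192);
```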

Commands

/compact-now

Force clear the summary cache and trigger fresh compaction on next message.

/compact-now

/context-stats

Show current context token usage and whether compaction would trigger.

/context-stats

Output:

📊 Context Stats

Messages: 47 total
- User: 23
- Assistant: 24
- System: 0

Estimated Tokens: ~6,234
Limit: 8,000
Usage: 77.9%

✅ Within limits
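The numbers in this report are plain arithmetic: usage is the estimated token count divided by the configured limit. A minimal sketch (field names are illustrative, not the plugin's actual output format):

```typescript
// Reproduce the /context-stats arithmetic: 6,234 / 8,000 = 77.9%.
function contextStats(estimatedTokens: number, maxTokens: number) {
  const usagePct = (estimatedTokens / maxTokens) * 100;
  return {
    usage: `${usagePct.toFixed(1)}%`,
    withinLimits: estimatedTokens <= maxTokens,
  };
}
```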

How Summarization Works

When compaction triggers:

  1. Split messages into "old" (to summarize) and "recent" (to keep)
  2. Generate summary using the session model (or configured summaryModel)
  3. Cache the summary to avoid regenerating for the same content
  4. Inject context with the summary prepended

If the LLM runtime isn't available (e.g., during startup), a fallback truncation-based summary is used.
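Steps 2-3 can be sketched as a content-hash cache with a truncation fallback. This is an illustrative reconstruction of the idea, not the plugin's actual code:

```typescript
import { createHash } from "node:crypto";

// Cache summaries by a hash of the old content so identical content
// isn't re-summarized; fall back to truncation when no LLM runtime
// is available (e.g. during startup). Names are illustrative.
const summaryCache = new Map<string, string>();

function fallbackSummary(text: string, maxChars: number): string {
  return text.length <= maxChars ? text : text.slice(0, maxChars) + "…";
}

function summarizeOld(
  oldText: string,
  maxChars: number,
  llm?: (prompt: string) => string, // undefined when runtime unavailable
): string {
  const key = createHash("sha256").update(oldText).digest("hex");
  const cached = summaryCache.get(key);
  if (cached !== undefined) return cached;
  const summary = llm
    ? llm(`Summarize this conversation:\n${oldText}`)
    : fallbackSummary(oldText, maxChars);
  summaryCache.set(key, summary);
  return summary;
}
```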

Differences from Built-in Compaction

Feature                   Built-in                   Context Compactor
Trigger                   Model reports overflow     Token estimate threshold
Works with local models   ❌ (needs overflow error)   ✅
Persists to transcript    ✅                          ❌ (session-only)
Summarization             Pi runtime                 Plugin LLM call

Context Compactor is complementary — it catches cases before they hit the model's hard limit.

Troubleshooting

Summary quality is poor:

  • Try a better summaryModel
  • Increase summaryMaxTokens
  • The fallback truncation is used if LLM runtime isn't available

Compaction triggers too often:

  • Increase maxTokens
  • Decrease keepRecentTokens (keeps less verbatim, so each compaction frees more room)

Not compacting when expected:

  • Check /context-stats to see current usage
  • Verify enabled: true in config
  • Check logs for [context-compactor] messages

Characters-per-token ratio seems off:

  • Default of 4 works for English
  • Try 3 for CJK languages
  • Try 5 for highly technical content
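The ratio is a simple divisor: estimated tokens are the character count divided by charsPerToken, so a lower ratio inflates the estimate and triggers compaction sooner. For example (hypothetical helper name, same arithmetic as the plugin describes):

```typescript
// The same 1200-character transcript under different ratios. Lower
// ratios are more conservative: more estimated tokens, earlier
// compaction. Useful for CJK text, where tokens cover fewer chars.
function estimateFromChars(text: string, charsPerToken: number): number {
  return Math.ceil(text.length / charsPerToken);
}

estimateFromChars("x".repeat(1200), 4); // English default
estimateFromChars("x".repeat(1200), 3); // CJK-leaning, higher estimate
estimateFromChars("x".repeat(1200), 5); // dense technical text, lower
```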

Logs

Enable debug logging:

{
  "plugins": {
    "entries": {
      "context-compactor": {
        "config": {
          "logLevel": "debug"
        }
      }
    }
  }
}

Look for:

  • [context-compactor] Current context: ~XXXX tokens
  • [context-compactor] Compacted X messages → summary


Files

6 total
