Context Optimizer

Advanced context management with auto-compaction and dynamic context optimization for DeepSeek's 64k context window. Features intelligent compaction (merging, summarizing, extracting), query-aware relevance scoring, and hierarchical memory system with context archive. Logs optimization events to chat.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
13 · 5k · 30 current installs · 34 all-time installs
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
high confidence
Purpose & Capability
The code, README, and examples match the stated purpose (context pruning, archive, query-aware compaction). However, the skill metadata/registry claims 'no install spec' and 'no required binaries', while package.json and SKILL.md indicate that Node/npm and npm dependencies (tiktoken, @xenova/transformers) are required: a clear mismatch. The skill legitimately needs Node/npm and model dependencies for its semantic features, so the registry/manifest omission is incoherent and worth flagging.
Instruction Scope
Runtime instructions and code perform the expected operations (prune messages, compute tokens, store pruned content in an on-disk archive, run local embeddings). The archive writes message content and metadata to ./context-archive (or a configured path), and the logger prints message snippets to the console/chat. These behaviors can leak user content into disk or logs and should be considered when handling sensitive conversations. The code contains no instructions to read unrelated system files or to exfiltrate data to unexpected remote endpoints, but its logging and storage of message content is broad and may be undesirable in some environments.
Install Mechanism
There is no dangerous custom download URL or obfuscated installer; installation is via npm (package.json lists tiktoken and @xenova/transformers). That implies network downloads from npm, plus model downloads by @xenova at runtime (INSTALL.md warns about an ~80MB model download). The SKILL.md metadata includes an install step that runs 'cd ~/.clawdbot/skills/context-pruner && npm install', which assumes npm and a user path, yet the top-level registry incorrectly states 'no install spec' and 'no required binaries'. This mismatch, together with the runtime model downloads, increases risk and warrants user attention.
Credentials
The skill does not request credentials or environment variables and the code does not reference secrets or cloud credentials. It only needs filesystem write access for the archive and permission to install npm packages (Node/npm). The lack of credential requests is proportionate to the stated functionality.
Persistence & Privilege
The skill does persist data to disk (archive entries and an index) under a configurable path (default './context-archive') and may create directories under a user path if instructions are followed. It is not always-enabled and does not request elevated system privileges, nor does it modify other skills. Persisting message content to disk and logging it to chat/console is a persistence/privilege consideration the user should be aware of.
What to consider before installing
  • Inconsistency: the registry/manifest claims no install spec and no required binaries, but the package includes a package.json and SKILL.md that require Node 18+, npm, and npm dependencies. Treat the skill as a Node package that needs 'npm install'.
  • Network downloads: installing dependencies and running the embedding pipeline downloads npm packages and model files (the transformers embedder may fetch ~80MB of models). If you need an air-gapped or private environment, do not install, or prepare an approved mirror first.
  • Data persistence & logging: the skill writes pruned content and metadata to an on-disk archive (default ./context-archive) and prints message snippets to logs/chat. Sensitive conversation content could be stored or appear in logs; review or override archivePath, disable chat logging (logToChat=false or chatLogLevel='none'), and audit the onLog handler before use.
  • Verify origin: the homepage points to a GitHub repo but the skill source is listed as 'unknown'. Inspect the repository code locally, verify the publisher identity, and test in an isolated sandbox (container or VM) before adding it to production agents.
  • Audit dependencies: run 'npm audit' and review package.json. If you only need the non-semantic strategies, consider removing the heavy dependencies (tiktoken / @xenova/transformers) or using the lightweight path if available.

If you decide to try it: run it in a sandbox, set archivePath to a controlled location, turn off chat logging, and inspect the code (especially any custom onLog handlers) before enabling it in long-lived agents.
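If you do proceed, a hardened configuration keeps message content out of chat and puts the archive in a disposable location. The option names below come from the skill's own Configuration section; the path and the no-op handler are illustrative:

```javascript
// Hardened options for a sandboxed trial: the archive goes to a controlled,
// disposable location and no message snippets reach chat or the console.
// Option names are from the README's Configuration section; the path and
// no-op onLog handler are illustrative choices, not skill defaults.
const safeOptions = {
  enableArchive: true,
  archivePath: '/tmp/context-pruner-sandbox/archive', // controlled, disposable location
  logToChat: false,      // keep message snippets out of chat
  chatLogLevel: 'none',
  onLog: () => {},       // no-op handler so snippets never reach the console
};
```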

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
Download zip
latest: vk97dyp3e6vyrte261ewbkbdmxh809t5w

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🧠 Clawdis

SKILL.md

Context Pruner

Advanced context management optimized for DeepSeek's 64k context window. Provides intelligent pruning, compression, and token optimization to prevent context overflow while preserving important information.

Key Features

  • DeepSeek-optimized: Specifically tuned for 64k context window
  • Adaptive pruning: Multiple strategies based on context usage
  • Semantic deduplication: Removes redundant information
  • Priority-aware: Preserves high-value messages
  • Token-efficient: Minimizes token overhead
  • Real-time monitoring: Continuous context health tracking
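The semantic-deduplication feature can be approximated without the embedding model. A minimal sketch using token-set (Jaccard) similarity as a lightweight stand-in for embeddings, with the README's 0.85 similarity threshold (the function names and message shape are illustrative, not the skill's actual API):

```javascript
// Sketch of semantic deduplication: drop messages whose token-set (Jaccard)
// similarity to an earlier kept message meets the threshold. The real skill
// uses embeddings (@xenova/transformers); Jaccard is a stand-in here.
function jaccard(a, b) {
  const setA = new Set(a.toLowerCase().split(/\s+/));
  const setB = new Set(b.toLowerCase().split(/\s+/));
  const inter = [...setA].filter((t) => setB.has(t)).length;
  const union = new Set([...setA, ...setB]).size;
  return union === 0 ? 0 : inter / union;
}

function dedupe(messages, minSimilarity = 0.85) {
  const kept = [];
  for (const msg of messages) {
    const redundant = kept.some((k) => jaccard(k.content, msg.content) >= minSimilarity);
    if (!redundant) kept.push(msg);
  }
  return kept;
}
```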

Quick Start

Auto-compaction with dynamic context:

import { createContextPruner } from './lib/index.js';

const pruner = createContextPruner({
  contextLimit: 64000, // DeepSeek's limit
  autoCompact: true,    // Enable automatic compaction
  dynamicContext: true, // Enable dynamic relevance-based context
  strategies: ['semantic', 'temporal', 'extractive', 'adaptive'],
  queryAwareCompaction: true, // Compact based on current query relevance
});

await pruner.initialize();

// Process messages with auto-compaction and dynamic context
const processed = await pruner.processMessages(messages, currentQuery);

// Get context health status
const status = pruner.getStatus();
console.log(`Context health: ${status.health}, Relevance scores: ${status.relevanceScores}`);

// Manual compaction when needed
const compacted = await pruner.autoCompact(messages, currentQuery);

Archive Retrieval (Hierarchical Memory):

// When something isn't in current context, search archive
const archiveResult = await pruner.retrieveFromArchive('query about previous conversation', {
  maxContextTokens: 1000,
  minRelevance: 0.4,
});

if (archiveResult.found) {
  // Add relevant snippets to current context
  const archiveContext = archiveResult.snippets.join('\n\n');
  // Use archiveContext in your prompt
  console.log(`Found ${archiveResult.sources.length} relevant sources`);
  console.log(`Retrieved ${archiveResult.totalTokens} tokens from archive`);
}

Auto-Compaction Strategies

  1. Semantic Compaction: Merges similar messages instead of removing them
  2. Temporal Compaction: Summarizes older conversations by time windows
  3. Extractive Compaction: Extracts key information from verbose messages
  4. Adaptive Compaction: Chooses best strategy based on message characteristics
  5. Dynamic Context: Filters messages based on relevance to current query
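How the adaptive pass chooses among the other strategies is not spelled out; a plausible sketch based on the "message characteristics" the list mentions (all thresholds and field names here are hypothetical, not the skill's actual logic):

```javascript
// Hypothetical sketch of adaptive strategy selection: verbose batches get
// extractive compaction, old batches get temporal summarization, and the
// rest get semantic merging. Thresholds are illustrative.
function pickStrategy(batch, now = Date.now()) {
  const avgLen = batch.reduce((sum, m) => sum + m.content.length, 0) / batch.length;
  const ageMs = now - Math.min(...batch.map((m) => m.timestamp));
  if (avgLen > 1500) return 'extractive';        // verbose messages: extract key info
  if (ageMs > 60 * 60 * 1000) return 'temporal'; // older than an hour: summarize by window
  return 'semantic';                             // otherwise merge near-duplicates
}
```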

Dynamic Context Management

  • Query-aware Relevance: Scores messages based on similarity to current query
  • Relevance Decay: Relevance scores decay over time for older conversations
  • Adaptive Filtering: Automatically filters low-relevance messages
  • Priority Integration: Combines message priority with semantic relevance
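The decay and filtering above can be sketched from the documented relevanceDecay (0.95 per step) and minRelevanceScore (0.3) defaults; the similarity score and priority field are assumed inputs, and the skill's exact scoring formula is not documented here:

```javascript
// Sketch of query-aware filtering with time decay: a message's relevance
// is its query similarity multiplied by relevanceDecay^age, and messages
// below minRelevanceScore are dropped unless priority keeps them.
function effectiveRelevance(similarityToQuery, ageSteps, decay = 0.95) {
  return similarityToQuery * Math.pow(decay, ageSteps);
}

function filterByRelevance(messages, minRelevanceScore = 0.3) {
  return messages.filter(
    (m) => m.priority === 'high' ||
      effectiveRelevance(m.similarityToQuery, m.ageSteps) >= minRelevanceScore
  );
}
```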

Hierarchical Memory System

The context archive provides a RAM vs Storage approach:

  • Current Context (RAM): Limited (64k tokens), fast access, auto-compacted
  • Archive (Storage): Larger (100MB), slower but searchable
  • Smart Retrieval: When information isn't in current context, efficiently search archive
  • Selective Loading: Extract only relevant snippets, not entire documents
  • Automatic Storage: Compacted content automatically stored in archive
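Selective loading under a token budget might look like the greedy sketch below, mirroring retrieveFromArchive's maxContextTokens and minRelevance options; the hit shape (text, relevance, tokens) is assumed, and the real skill counts tokens with tiktoken:

```javascript
// Greedy sketch of selective archive loading: take the most relevant
// snippets that still fit the token budget, skipping anything below the
// relevance floor.
function selectSnippets(hits, { maxContextTokens = 1000, minRelevance = 0.4 } = {}) {
  const picked = [];
  let used = 0;
  for (const hit of [...hits].sort((a, b) => b.relevance - a.relevance)) {
    if (hit.relevance < minRelevance) break;       // sorted, so the rest are lower
    if (used + hit.tokens > maxContextTokens) continue;
    picked.push(hit);
    used += hit.tokens;
  }
  return { snippets: picked.map((h) => h.text), totalTokens: used };
}
```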

Configuration

{
  contextLimit: 64000, // DeepSeek's context window
  autoCompact: true, // Enable automatic compaction
  compactThreshold: 0.75, // Start compacting at 75% usage
  aggressiveCompactThreshold: 0.9, // Aggressive compaction at 90%
  
  dynamicContext: true, // Enable dynamic context management
  relevanceDecay: 0.95, // Relevance decays 5% per time step
  minRelevanceScore: 0.3, // Minimum relevance to keep
  queryAwareCompaction: true, // Compact based on current query relevance
  
  strategies: ['semantic', 'temporal', 'extractive', 'adaptive'],
  preserveRecent: 10, // Always keep last N messages
  preserveSystem: true, // Always keep system messages
  minSimilarity: 0.85, // Semantic similarity threshold
  
  // Archive settings
  enableArchive: true, // Enable hierarchical memory system
  archivePath: './context-archive',
  archiveSearchLimit: 10,
  archiveMaxSize: 100 * 1024 * 1024, // 100MB
  archiveIndexing: true,
  
  // Chat logging
  logToChat: true, // Log optimization events to chat
  chatLogLevel: 'brief', // 'brief', 'detailed', or 'none'
  chatLogFormat: '📊 {action}: {details}', // Format for chat messages
  
  // Performance
  batchSize: 5, // Messages to process in batch
  maxCompactionRatio: 0.5, // Maximum 50% compaction in one pass
}
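The two compaction thresholds above interact like this (a sketch of the presumed trigger logic from compactThreshold and aggressiveCompactThreshold; the skill's actual code may differ):

```javascript
// Threshold-driven compaction modes: below 75% usage do nothing, between
// 75% and 90% compact normally, at 90% or above compact aggressively.
function compactionMode(usedTokens, contextLimit = 64000) {
  const usage = usedTokens / contextLimit;
  if (usage >= 0.9) return 'aggressive';
  if (usage >= 0.75) return 'normal';
  return 'none';
}
```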

Chat Logging

The context optimizer can log events directly to chat:

// Example chat log messages:
// 📊 Context optimized: Compacted 15 messages → 8 (47% reduction)
// 📊 Archive search: Found 3 relevant snippets (42% similarity)
// 📊 Dynamic context: Filtered 12 low-relevance messages

// Configure logging:
const pruner = createContextPruner({
  logToChat: true,
  chatLogLevel: 'brief', // Options: 'brief', 'detailed', 'none'
  chatLogFormat: '📊 {action}: {details}',
  
  // Custom log handler (optional)
  onLog: (level, message, data) => {
    if (level === 'info' && data.action === 'compaction') {
      // Send to chat
      console.log(`🧠 Context optimized: ${message}`);
    }
  }
});

Integration with Clawdbot

Add to your Clawdbot config:

skills:
  context-pruner:
    enabled: true
    config:
      contextLimit: 64000
      autoPrune: true

The pruner will automatically monitor context usage and apply appropriate pruning strategies to stay within DeepSeek's 64k limit.

Files

10 total
