Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

qui-context-optimizer

v1.0.0

Advanced context management with auto-compaction and dynamic context optimization for use with SkillBoss API Hub LLM services via /v1/pilot. Features intelli...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for quincygunter/qui-context-optimizer.

Prompt preview: Install & Setup
Install the skill "qui-context-optimizer" (quincygunter/qui-context-optimizer) from ClawHub.
Skill page: https://clawhub.ai/quincygunter/qui-context-optimizer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: SKILLBOSS_API_KEY
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install qui-context-optimizer

ClawHub CLI


npx clawhub@latest install qui-context-optimizer
Security Scan

Capability signals: Requires sensitive credentials

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Pending
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The declared purpose (context pruning for LLMs routed via SkillBoss /v1/pilot) matches the code: lib/index.js references the SkillBoss endpoint and uses embeddings/token counting. Required env var SKILLBOSS_API_KEY is appropriate. However, documentation is inconsistent: SUMMARY.md claims 'No external dependencies required' while package.json and SKILL.md list 'tiktoken' and '@xenova/transformers' as required. That mismatch is misleading and should be corrected.
Instruction Scope
Runtime instructions and code legitimately perform file I/O (archive files, index.json) and download models for embeddings. The archive.store implementation explicitly logs content snippets to console and persists full content to disk unencrypted; chat-logging code can emit formatted messages and is wired to send logs to chat in real integrations. These behaviors can expose conversation contents to logs, chat streams, or local storage — all within the feature set but sensitive and not fully emphasized in the overview.
Install Mechanism
There is no platform-level install spec in the registry, but SKILL.md includes an install step that runs 'npm install' in ~/.clawdbot/skills/context-pruner. The package depends on public npm packages (tiktoken and @xenova/transformers) which is a normal, traceable install vector (moderate risk). No downloads from obscure URLs or shorteners were found. Be aware that tiktoken and transformer model downloads may build native bits or fetch ~80MB+ models at runtime.
Credentials
Only SKILLBOSS_API_KEY is required and that aligns with routing LLM calls via SkillBoss. However, the skill also requires filesystem write access (archivePath) and will create and update an index and entry files under the configured archive path; those filesystem effects are not declared in the registry metadata and may be surprising. No unrelated credentials are requested.
Persistence & Privilege
always:false and user-invocable true (normal). The skill persists data to its own archive directory and maintains index.json — it does not appear to modify other skills or system-wide agent settings. Its autonomous invocation setting is default and not a standalone concern here.
What to consider before installing
This skill appears to implement a context pruner for SkillBoss and legitimately needs SKILLBOSS_API_KEY and embedding/token libraries, but there are a few red flags to weigh before installing:

  • Documentation mismatch: SUMMARY.md claims no external deps, but package.json and the code require tiktoken and @xenova/transformers. Expect to run 'npm install' and for runtime model downloads.
  • Local storage: the skill creates an archive directory and index.json and writes pruned conversation content to disk (unencrypted). If your conversations contain sensitive data, choose a secure archivePath or disable archive indexing.
  • Logging/leakage: archive.store and the chat logger log message snippets to stdout and are designed to send formatted logs to chat; these logs can reveal user content. Review or override the logger (provide a safe onLog handler) before enabling logToChat.
  • Dependency/runtime downloads: embedding model downloads (tens of MB) and native builds (tiktoken) may occur; run the install in a controlled environment and review network activity.

Recommendations before installing:

  1. Inspect package.json and the full lib/index.js to understand exactly what is written to disk and where (archivePath).
  2. Run npm install in a sandbox or CI runner first to observe build steps.
  3. Set SKILLBOSS_API_KEY to a scoped/test key, not a high-privilege credential, until you confirm behavior.
  4. If you need confidentiality, disable archiveIndexing or change archivePath to a secure location, and override logging to avoid sending content to chat.
  5. If the documentation mismatch bothers you, reach out to the skill author or prefer a version whose README and metadata are consistent.
lib/index.js:111
File read combined with network send (possible exfiltration).
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧠 Clawdis
Env: SKILLBOSS_API_KEY
Latest: vk976tbmdy3mj2pzhvhx08xd00n85cw74
63 downloads · 0 stars · 1 version
Updated 5d ago
v1.0.0 · MIT-0

Context Pruner

Advanced context management optimized for use with SkillBoss API Hub LLM services. Provides intelligent pruning, compression, and token optimization to prevent context overflow while preserving important information. Works with any LLM accessed via SkillBoss API Hub's /v1/pilot endpoint.

Key Features

  • SkillBoss API Hub compatible: Works with any LLM routed through /v1/pilot
  • Adaptive pruning: Multiple strategies based on context usage
  • Semantic deduplication: Removes redundant information
  • Priority-aware: Preserves high-value messages
  • Token-efficient: Minimizes token overhead
  • Real-time monitoring: Continuous context health tracking

Quick Start

Auto-compaction with dynamic context:

import { createContextPruner } from './lib/index.js';

const pruner = createContextPruner({
  contextLimit: 64000, // Configurable context limit
  autoCompact: true,    // Enable automatic compaction
  dynamicContext: true, // Enable dynamic relevance-based context
  strategies: ['semantic', 'temporal', 'extractive', 'adaptive'],
  queryAwareCompaction: true, // Compact based on current query relevance
});

await pruner.initialize();

// Process messages with auto-compaction and dynamic context
const processed = await pruner.processMessages(messages, currentQuery);

// Get context health status
const status = pruner.getStatus();
console.log(`Context health: ${status.health}, Relevance scores: ${status.relevanceScores}`);

// Manual compaction when needed
const compacted = await pruner.autoCompact(messages, currentQuery);

Archive Retrieval (Hierarchical Memory):

// When something isn't in current context, search archive
const archiveResult = await pruner.retrieveFromArchive('query about previous conversation', {
  maxContextTokens: 1000,
  minRelevance: 0.4,
});

if (archiveResult.found) {
  // Add relevant snippets to current context
  const archiveContext = archiveResult.snippets.join('\n\n');
  // Use archiveContext in your prompt
  console.log(`Found ${archiveResult.sources.length} relevant sources`);
  console.log(`Retrieved ${archiveResult.totalTokens} tokens from archive`);
}

Auto-Compaction Strategies

  1. Semantic Compaction: Merges similar messages instead of removing them
  2. Temporal Compaction: Summarizes older conversations by time windows
  3. Extractive Compaction: Extracts key information from verbose messages
  4. Adaptive Compaction: Chooses best strategy based on message characteristics
  5. Dynamic Context: Filters messages based on relevance to current query
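How the adaptive strategy picks among the others is not documented here; as a rough sketch (hypothetical helper, not the skill's actual implementation), selection could be driven by simple message statistics such as average length and age span:

```javascript
// Hypothetical sketch of adaptive strategy selection — the heuristics and
// thresholds below are illustrative assumptions, not lib/index.js code.
function chooseStrategy(messages) {
  const avgLength =
    messages.reduce((sum, m) => sum + m.content.length, 0) / messages.length;
  const ageSpanMs =
    messages[messages.length - 1].timestamp - messages[0].timestamp;

  if (avgLength > 1500) return 'extractive'; // verbose messages: pull key info
  if (ageSpanMs > 60 * 60 * 1000) return 'temporal'; // old history: summarize by window
  return 'semantic'; // default: merge near-duplicate messages
}

const strategy = chooseStrategy([
  { content: 'short note', timestamp: 0 },
  { content: 'another short note', timestamp: 5000 },
]);
// → 'semantic' for short, recent messages
```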

Dynamic Context Management

  • Query-aware Relevance: Scores messages based on similarity to current query
  • Relevance Decay: Relevance scores decay over time for older conversations
  • Adaptive Filtering: Automatically filters low-relevance messages
  • Priority Integration: Combines message priority with semantic relevance
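Relevance decay can be sketched numerically. Assuming the configured `relevanceDecay` (0.95) is applied multiplicatively per time step (a plausible reading of the config, not confirmed from the code):

```javascript
// Sketch: effectiveScore = baseSimilarity * decay^stepsOld (assumed formula).
function decayedRelevance(baseSimilarity, stepsOld, decay = 0.95) {
  return baseSimilarity * Math.pow(decay, stepsOld);
}

// A message 10 steps old with 0.8 query similarity:
const score = decayedRelevance(0.8, 10); // ≈ 0.479
const kept = score >= 0.3; // survives the default minRelevanceScore
```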

Hierarchical Memory System

The context archive provides a RAM vs Storage approach:

  • Current Context (RAM): Limited (configurable tokens), fast access, auto-compacted
  • Archive (Storage): Larger (100MB), slower but searchable
  • Smart Retrieval: When information isn't in current context, efficiently search archive
  • Selective Loading: Extract only relevant snippets, not entire documents
  • Automatic Storage: Compacted content automatically stored in archive

Configuration

{
  contextLimit: 64000, // Configurable context window size
  autoCompact: true, // Enable automatic compaction
  compactThreshold: 0.75, // Start compacting at 75% usage
  aggressiveCompactThreshold: 0.9, // Aggressive compaction at 90%

  dynamicContext: true, // Enable dynamic context management
  relevanceDecay: 0.95, // Relevance decays 5% per time step
  minRelevanceScore: 0.3, // Minimum relevance to keep
  queryAwareCompaction: true, // Compact based on current query relevance

  strategies: ['semantic', 'temporal', 'extractive', 'adaptive'],
  preserveRecent: 10, // Always keep last N messages
  preserveSystem: true, // Always keep system messages
  minSimilarity: 0.85, // Semantic similarity threshold

  // Archive settings
  enableArchive: true, // Enable hierarchical memory system
  archivePath: './context-archive',
  archiveSearchLimit: 10,
  archiveMaxSize: 100 * 1024 * 1024, // 100MB
  archiveIndexing: true,

  // Chat logging
  logToChat: true, // Log optimization events to chat
  chatLogLevel: 'brief', // 'brief', 'detailed', or 'none'
  chatLogFormat: '📊 {action}: {details}', // Format for chat messages

  // Performance
  batchSize: 5, // Messages to process in batch
  maxCompactionRatio: 0.5, // Maximum 50% compaction in one pass
}
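With these defaults, the compaction thresholds work out as follows (assuming the thresholds are fractions of `contextLimit`, as the comments suggest):

```javascript
// Token counts at which compaction triggers, derived from the defaults above.
const contextLimit = 64000;
const compactAt = contextLimit * 0.75;   // 48000 tokens: normal compaction starts
const aggressiveAt = contextLimit * 0.9; // 57600 tokens: aggressive compaction
```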

Chat Logging

The context optimizer can log events directly to chat:

// Example chat log messages:
// 📊 Context optimized: Compacted 15 messages → 8 (47% reduction)
// 📊 Archive search: Found 3 relevant snippets (42% similarity)
// 📊 Dynamic context: Filtered 12 low-relevance messages

// Configure logging:
const pruner = createContextPruner({
  logToChat: true,
  chatLogLevel: 'brief', // Options: 'brief', 'detailed', 'none'
  chatLogFormat: '📊 {action}: {details}',

  // Custom log handler (optional)
  onLog: (level, message, data) => {
    if (level === 'info' && data.action === 'compaction') {
      // Send to chat
      console.log(`🧠 Context optimized: ${message}`);
    }
  }
});
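Given the security scan's note that chat logs can expose conversation content, a defensive `onLog` handler might redact everything except the action name before anything reaches chat. The `(level, message, data)` signature comes from the example above; the fields inside `data` are assumptions:

```javascript
// Sketch of a redacting log handler — emits only the action name,
// never message snippets. Field names inside `data` are assumptions.
function redactingOnLog(level, message, data) {
  const action = (data && data.action) || 'event';
  return `📊 ${action}: details redacted`;
}

// Pass it in the config in place of the default logger:
// createContextPruner({ logToChat: true, onLog: redactingOnLog });

const line = redactingOnLog('info', 'Compacted 15 → 8', { action: 'compaction' });
// → '📊 compaction: details redacted'
```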

Integration with Clawdbot

Add to your Clawdbot config:

skills:
  context-pruner:
    enabled: true
    config:
      contextLimit: 64000
      autoPrune: true

The pruner will automatically monitor context usage and apply appropriate pruning strategies to stay within the configured context limit. LLM calls are routed through SkillBoss API Hub (POST https://api.heybossai.com/v1/pilot) using your SKILLBOSS_API_KEY.
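The routing described above can be sketched as a request builder. The endpoint URL and SKILLBOSS_API_KEY come from this README; the request body shape is an assumption (an OpenAI-style chat payload) and may differ in practice:

```javascript
// Sketch of a /v1/pilot request — body shape is assumed, verify against
// the SkillBoss API Hub documentation before relying on it.
function buildPilotRequest(messages, model) {
  return {
    url: 'https://api.heybossai.com/v1/pilot',
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.SKILLBOSS_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

const req = buildPilotRequest([{ role: 'user', content: 'hi' }], 'any-model');
// fetch(req.url, req.options) would then issue the call.
```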
