Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Context Window Optimizer

v1.0.0

Optimize context window usage by summarizing old conversation segments, extracting key facts and decisions to permanent memory, and keeping current context l...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for klemenska/context-window-optimizer.

Prompt Preview: Install & Setup
Install the skill "Context Window Optimizer" (klemenska/context-window-optimizer) from ClawHub.
Skill page: https://clawhub.ai/klemenska/context-window-optimizer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install context-window-optimizer

ClawHub CLI


npx clawhub@latest install context-window-optimizer
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description match the included scripts: analyze_context.py, extract_decisions.py, and summarize_session.py perform analysis, extraction, and summarization of session transcripts. Accessing session transcripts and producing summary/memory files is coherent with the stated purpose. However, the skill does not declare the implicit config path it uses (~/.openclaw/agents/main/sessions/) or the fact that it will write persistent memory files in the user's home directory, which should have been surfaced in the metadata.
Instruction Scope
SKILL.md plus the scripts instruct the agent to read full session transcripts, extract decisions/key facts, and write/archive them to persistent files (e.g., MEMORY.md, ~/self-improving/memory.md, memory/YYYY-MM-DD.md). There is no built-in redaction or secret-filtering: extract_decisions.py explicitly includes tool call arguments and slices content, which can capture commands, tokens, stack traces, or other sensitive data. The instructions encourage moving conversation content into permanent memory — that centralization is a sensitive operation and is not scoped to exclude secrets or PII.
Install Mechanism
No install spec (instruction-only) and included Python scripts run locally. This is low-risk from an installation origin perspective: nothing is downloaded from remote URLs and no package install is automated.
Credentials
The skill requests no environment variables and declares no config paths, yet the scripts directly read ~/.openclaw/agents/main/sessions/*.jsonl and may write to user home paths. That filesystem access is disproportionate to what the metadata advertises (no required config paths). The scripts also parse tool outputs and command arguments (partial command text captured), which increases the chance of harvesting credentials or secrets from session history.
Persistence & Privilege
always:false and the skill does not modify other skills or system settings. However, it is explicitly designed to create and archive persistent memory files (MEMORY.md, memory/YYYY-MM-DD.md, ~/self-improving/memory.md). Persistent storage of extracted content is expected for this use case but raises privacy risk because archived content could include sensitive data and may be accessible to other skills or processes.
What to consider before installing
This skill appears to implement context summarization as advertised, but it reads your OpenClaw session transcripts and writes persistent memory files without redaction. Before installing or enabling it:

  1. Inspect the scripts locally (they are included) and confirm you are comfortable with them reading ~/.openclaw/agents/main/sessions/*.jsonl.
  2. Run it first in a controlled/test account or sandbox with non-sensitive sessions.
  3. Add or request secret/PII redaction (credentials, tokens, long stack traces) before archiving.
  4. Restrict where memory files are written, and ensure the directory has appropriate permissions and encryption if needed.
  5. Consider running commands with --no-llm / dry-run to preview extracts.
  6. Ask the author to add explicit metadata declaring the config paths accessed and an explanation of the retention/cleanup policy.

If you deal with sensitive data, do not enable automatic or autonomous invocation of this skill until these mitigations are in place.
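Mitigation (3), a secret/PII redaction pass applied before anything is archived, could be prototyped with a few regular expressions. Everything below (the pattern list, the `redact` name) is a hypothetical sketch, not part of the skill:

```python
import re

# Illustrative patterns only; a real redaction pass needs a broader set.
SECRET_PATTERNS = [
    # key=value / key: value style credentials
    (re.compile(r"(?i)\b(api[_-]?key|token|secret|password)\b\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    # opaque provider keys of the common "sk-..." shape
    (re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"), "[REDACTED-KEY]"),
    # three dot-separated base64url segments, the typical JWT shape
    (re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+\b"), "[REDACTED-JWT]"),
]

def redact(text: str) -> str:
    """Strip likely credentials from transcript text before it is archived."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

Running such a pass over extracted content before it reaches MEMORY.md would address the reviewer's main privacy concern, at the cost of occasional false positives.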

Like a lobster shell, security has layers — review code before you run it.

latest: vk9730fy294msa6mms813zk7a5n83jhbb
172 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Context Window Optimizer

Manage context strategically to prevent token waste and keep conversations effective.

Core Principle

Context is a shared resource. Keep it lean so there's room for actual work.

When to Optimize

  • Conversation exceeds ~50 messages
  • Context feels heavy before a new task
  • Starting a complex multi-step task
  • After significant decisions or completions
  • Explicit request to optimize/compact

Optimization Workflow

Step 1: Assess Context State

Run the analyzer to get context metrics:

python3 scripts/analyze_context.py --session current

This reports:

  • Message count and approximate token count
  • Age of oldest message
  • Density score (signal vs noise)

Step 2: Identify Optimization Targets

Look for:

  • Old completed tasks with verbose logs
  • Repeated explanations of same concept
  • Off-topic tangents
  • Raw tool outputs that could be summarized
  • Decisions that should move to permanent memory

Step 3: Extract to Memory

Decisions → MEMORY.md or relevant project file:

## Decisions (from 2026-03-25 session)
- Chose PostgreSQL over MongoDB for project X
- Agreed on 3-day sprint cadence
- User prefers detailed explanations, not summaries

Key facts → appropriate domain/project file:

## Project X Facts
- Tech stack: React + Node + Postgres
- Main user pain point: slow onboarding
- Current velocity: 5 story points/sprint

Patterns → ~/self-improving/memory.md:

## User Preferences
- Always explain the "why" before the "what"
- Prefers bullet points over paragraphs

Step 4: Summarize Dense Segments

For long work sessions, create a summary instead of keeping all details:

## Session Summary: 2026-03-25

### Work Completed
- Set up authentication flow
- Fixed memory leak in worker process
- Designed new API schema

### Decisions Made
- Use JWT over sessions (simpler, scales better)
- Defer caching to v2 (not blocking)

### Open Questions
- Final tech stack for notifications (push vs polling)
- Need user feedback on onboarding flow

### Next Steps
- Implement auth endpoints
- Write tests for worker
- Schedule design review
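The summary layout above is regular enough to render from structured data. A minimal formatting helper, assuming section titles and bullet lists as input (the real summarize_session.py may work differently):

```python
def render_summary(date_str: str, sections: dict[str, list[str]]) -> str:
    """Render the session-summary layout shown above from section -> bullets."""
    out = [f"## Session Summary: {date_str}", ""]
    for title, items in sections.items():
        out.append(f"### {title}")
        out.extend(f"- {item}" for item in items)
        out.append("")  # blank line between sections
    return "\n".join(out)
```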

Step 5: Archive, Don't Delete

Never delete context — archive it:

  • Move summaries to memory/YYYY-MM-DD.md
  • Keep pointers in session for recovery
  • Use [[archived:filename.md]] notation

Context Density Rules

  • Completed tasks: summarize outcome, archive details
  • Decisions: extract to MEMORY.md or project file
  • Key facts: extract to relevant domain/project
  • Tool logs: summarize if successful, keep if debugging
  • Repeated concepts: remove duplicates, keep one canonical
  • Off-topic: skip or summarize in notes
  • System prompts: never touch
  • Skills metadata: only load relevant ones

Quick Commands

  • Analyze current context: python3 scripts/analyze_context.py --session current
  • Summarize session: python3 scripts/summarize_session.py --session current --output summary.md
  • Extract decisions: python3 scripts/extract_decisions.py --session current

Files

  • scripts/analyze_context.py — Context metrics and optimization suggestions
  • scripts/summarize_session.py — Create session summary
  • scripts/extract_decisions.py — Pull out decisions and key facts
  • references/patterns.md — Common summarization patterns
