Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Context Window Tracker

v1.4.0

Track and report OpenClaw context window usage with a detailed breakdown of what's consuming tokens. Use when: user asks about context usage, token usage, "h...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for 99rebels/context-window-tracker.

Prompt preview: Install & Setup
Install the skill "Context Window Tracker" (99rebels/context-window-tracker) from ClawHub.
Skill page: https://clawhub.ai/99rebels/context-window-tracker
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install context-window-tracker

ClawHub CLI


npx clawhub@latest install context-window-tracker
Security Scan
VirusTotal: Benign (View report →)
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description match what the code and SKILL.md do: both describe reading OpenClaw session store and transcripts under ~/.openclaw to compute token usage and present compact/detailed reports. No unrelated binaries, env vars, or cloud credentials are requested.
Instruction Scope
SKILL.md and the script instruct the agent to read session JSON (sessions.json) and transcript JSONL files under ~/.openclaw/agents/<agent>/sessions and to produce reports. That file I/O is exactly what a context tracker needs, but it means the skill reads full transcripts (user and assistant content). SKILL.md also contains rules for adding a contextual one-liner and an opt-in auto-check. A pre-scan flagged a 'system-prompt-override' pattern in SKILL.md — likely a prompt-injection signature; review SKILL.md for any lines that attempt to change system-level instructions or give the skill broad discretion.
Install Mechanism
No install spec and no network/downloads. It's an instruction-only skill with a local Python script. This is low risk from an installer perspective, but you should still inspect/run the included script locally before granting execution rights.
Credentials
The skill requests no env vars or credentials. It does, however, read user session and transcript files (~/.openclaw/agents/...), which will contain message content and token usage. That access is proportional to the stated purpose (computing token usage), but it does imply access to potentially sensitive conversation content — expected, but worth awareness.
Persistence & Privilege
always:false (normal). The opt-in Auto-Check feature persists a .msg-counter.json file (described as 'same directory as SKILL.md') to track message counts. This is reasonable for opt-in behavior, but review where that counter will be written and ensure you only enable auto-check explicitly. The skill does not request system-wide changes or other skills' configs.
Scan Findings in Context
[system-prompt-override] unexpected: A prompt-injection pattern was detected in SKILL.md. The skill otherwise behaves as a local reporter and does not need to override system prompts. This may be a false positive (the SKILL.md contains many rules for how the model should add a one-line guidance), but you should manually inspect SKILL.md for any text that attempts to change the agent's system prompt or asks the agent to ignore higher-level instructions before enabling or running the skill.
Assessment
This skill appears to do exactly what it says: read your OpenClaw session and transcript files and produce a compact or detailed token-usage report. Before installing or running it:

  1. Review SKILL.md and scripts/context_report.py yourself (or have a trusted reviewer) to confirm there are no instructions that try to modify system prompts or perform unexpected actions. The pre-scan flagged a possible prompt-injection pattern — check for any lines that tell the model to override system-level guidance.
  2. Recognize that the script reads ~/.openclaw transcripts (which contain conversation content); that access is required for its purpose but involves sensitive data.
  3. Only enable the Auto-Check feature if you want the skill to create and maintain the .msg-counter.json file and run the compact report every N messages; it should remain opt-in (SKILL.md says never to enable it automatically).
  4. Run the Python script locally once to see its output before granting an agent the ability to execute it autonomously.
⚠ SKILL.md:73: Prompt-injection style instruction pattern detected.
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9799998qp5gwg9cff8bcfxbrd85grpy
183 downloads · 0 stars · 9 versions
Updated 2d ago · v1.4.0 · MIT-0

Context Window Tracker

Shows how much context window is left — without opening the terminal.

When to Use

  • "Check my context"
  • "How much context am I using?"
  • "How full is my context window?"
  • "Tokens remaining"
  • "Am I close to the limit?"
  • Any question about context usage

Two Modes

Compact (default)

One line. Glanceable. Use for quick checks.

python3 scripts/context_report.py

Detailed

Full breakdown with per-file system prompt, conversation split, trends, and thinking status. Use when the user asks for specifics.

python3 scripts/context_report.py --detailed

Both modes auto-detect the most recently updated session. Options:

--session <key>    Target a specific session
--agent <name>     Target a specific agent (default: main)
--detailed         Full breakdown instead of compact one-liner
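
For example, a detailed report for a specific agent and the most recent session (the flag values shown are placeholders):

python3 scripts/context_report.py --agent main --detailed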

Output Format

Default (quick check)

When the user asks "check context", "how much context", "context window", or similar casual phrases.

Show the unicode bar, percentage, estimated turns remaining, and average tokens per turn:

🟢 [███░░░░░░░░░░░░░░░░░] 15% | ~736 turns left | 427 tokens/turn

Run the compact script (python3 scripts/context_report.py) and extract the bar/percentage. Get avg tokens/turn and turns remaining from the detailed script or session_status. Strip all * characters before sending to Slack (see Slack rendering fix below).
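
If you drive this from Python rather than the shell, here is a minimal sketch of capturing the compact output and stripping the asterisks for Slack, assuming scripts/context_report.py prints the one-line bar to stdout:

import subprocess

# Run the compact report; assumes scripts/context_report.py is reachable at this relative path.
result = subprocess.run(
    ["python3", "scripts/context_report.py"],
    capture_output=True, text=True, check=True,
)
compact_line = result.stdout.strip()

# Slack treats *text* as italics, so strip the markers before posting.
slack_safe = compact_line.replace("*", "")
print(slack_safe)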

Add a contextual one-liner when context is 75%+ used (see Guidance section). Otherwise, just show the line.

Detailed

When the user explicitly asks "detailed context", "full context check", "context breakdown", or "show me everything":

🟢 [███████████░░░░░░░░░] Context Usage: 113.7K / 202.8K (56%)
────────────────────
Token Breakdown
System Prompt: ~10.2K tokens (5%)
AGENTS.md: ~2.0K tokens
SOUL.md: ~416 tokens
TOOLS.md: ~717 tokens
IDENTITY.md: ~65 tokens
USER.md: ~83 tokens
HEARTBEAT.md: ~48 tokens
BOOTSTRAP.md: ~18 tokens
MEMORY.md: ~2.3K tokens
📦 Framework overhead: ~5.3K (tool schemas, skill list, runtime)
• Conversation: ~103.5K tokens (51%)
• 📊 Total Used: 113.7K (56%)
• Remaining: 89.1K (44%)
────────────────────
Trends
• Avg tokens per turn: ~316 tokens
• ⏳ Estimated turns remaining: ~281
────────────────────
Session Stats
• 📥 Total input: 2.1K | 📤 Total output: 318 | Cache hit rate: 100%
• Thinking: active (35/200 responses)

Run the detailed script and strip all * characters for Slack compatibility.

The bar uses █ (filled) and ░ (empty) across 20 segments (each = 5%). The bar colour shifts: green under 60%, yellow 60-80%, red over 80%.
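
As an illustration only (not the script's actual code), a minimal sketch of rendering a 20-segment bar with those thresholds:

def render_bar(used: int, limit: int, segments: int = 20) -> str:
    """Illustrative sketch of the 20-segment bar; thresholds match the Health Indicator list below."""
    pct = used / limit * 100
    filled = round(pct / 100 * segments)  # each segment = 5%
    bar = "█" * filled + "░" * (segments - filled)
    emoji = "🟢" if pct < 60 else ("🟡" if pct <= 80 else "🔴")
    return f"{emoji} [{bar}] {pct:.0f}%"

print(render_bar(113_700, 202_800))  # 🟢 [███████████░░░░░░░░░] 56%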

Health Indicator

  • 🟢 Under 60% used — plenty of room
  • 🟡 60–80% used — getting tight
  • 🔴 Over 80% used — consider wrapping up

Auto-Check (Opt-In)

The compact report can run automatically every 10 messages. This is disabled by default — the user must explicitly enable it.

To enable, the user must say something like "auto-check my context" or "enable context auto-check". Once enabled:

  1. Maintain a message counter in .msg-counter.json (same directory as SKILL.md)
  2. On every user message, increment the counter
  3. If the count is a multiple of 10, run the compact script and append the output to your reply
  4. If not, reply normally

The counter survives compaction. If the file is missing, create it starting at 0:

{"count": 0}

To disable, the user can say "disable context auto-check" — delete the counter file and stop checking.

Important: Never enable this automatically. Only enable when the user explicitly asks.
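
A minimal sketch of that counter bookkeeping, assuming the counter file sits next to SKILL.md as described above (the helper name and path resolution are illustrative, not part of the skill):

import json
from pathlib import Path

# Assumption: this code lives in the same directory as SKILL.md.
COUNTER = Path(__file__).parent / ".msg-counter.json"

def bump_and_check(every: int = 10) -> bool:
    """Increment the message counter and report whether the compact check is due."""
    count = 0
    if COUNTER.exists():
        count = json.loads(COUNTER.read_text()).get("count", 0)
    count += 1
    COUNTER.write_text(json.dumps({"count": count}))
    return count % every == 0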

Guidance

The script outputs raw data. The LLM adds a contextual one-liner based on the conversation.

When to add guidance:

  • Only when context is 75%+ used
  • Skip for fresh sessions — no need for advice when there's plenty of room
  • Skip if the user just asked for a raw number — give them the number
  • Applies to both compact and detailed modes

Slack rendering fix: The script uses *text* for emphasis, which Slack interprets as italics and can break rendering of the detailed output (long messages with many italics markers fail to display). When the channel is Slack:

  • Strip all * characters from the script output before displaying
  • Alternatively, use the compact mode (one-liner) which doesn't have this issue

How to write it: One line, specific to the current task. For compact mode, append after the one-liner. For detailed mode, append after the final divider.

Examples:

  • "Room to finish testing the skill and push to ClawHub, but not start a new one from scratch."
  • "Tight — let's wrap up the config changes and commit. Anything else should go in /new."
  • "Plenty of room. Keep going."
  • Compact: append as | Tight — wrap up and commit, start fresh for anything new.

Rules:

  • One line max. No paragraphs.
  • Reference the actual task, not generic categories.
  • Don't prescribe what the user should do — describe what fits.
  • If you're not sure what the task is, fall back to a generic note or skip it.

What's Exact vs Estimated

✅ Exact (from provider):
  • Total tokens used (from transcript)
  • Context window limit (from session store)
  • Cache hit rate

⚠ Estimated:
  • Per-file system prompt breakdown (chars ÷ 4)
  • Turns remaining (extrapolated from recent growth rate)
  • Thinking token count (bundled by provider, not separately reported)
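
Both estimates can be reproduced in a couple of lines; this is a sketch of the heuristics described above, not the script's implementation:

def estimate_tokens(text: str) -> int:
    """Rough per-file estimate: about 4 characters per token."""
    return len(text) // 4

def estimate_turns_remaining(limit: int, used: int, avg_per_turn: float) -> int:
    """Extrapolate how many turns fit in the remaining window."""
    return int((limit - used) / avg_per_turn) if avg_per_turn else 0

# Matches the detailed example: ~89.1K remaining at ~316 tokens/turn ≈ 281 turns.
print(estimate_turns_remaining(202_800, 113_700, 316))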

Notes

  • Script reads the transcript (.jsonl) as source of truth — the session store can lag behind by thousands of tokens
  • If the session store doesn't provide a context window limit (some thread sessions), it shows tokens used without a percentage
  • See references/data-sources.md for file paths
  • See references/thinking-tokens.md for how reasoning tokens affect counts
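
As a rough illustration of the "transcript as source of truth" note, a sketch of scanning a session .jsonl for the latest usage record; the field names here are assumptions rather than the actual schema (see references/data-sources.md for the real paths and fields):

import json
from pathlib import Path

def latest_usage(transcript: Path) -> dict:
    """Return the last usage record found in a transcript .jsonl.

    The "usage" key is an illustrative assumption; consult
    references/data-sources.md for the actual field names.
    """
    usage = {}
    with transcript.open() as fh:
        for line in fh:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or corrupt lines
            if isinstance(record, dict) and "usage" in record:
                usage = record["usage"]
    return usage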
