Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Claude Code API Optimizer Skill

Reduce LLM API token consumption by 20-35% through pre-send estimation, memory extraction, and context compression.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 current installs · 0 all-time installs
by Playda (@playdadev) · MIT-0
Security Scan
VirusTotal: Benign (View report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The high-level purpose (pre-send estimation, memory extraction, context compression) is coherent with reducing LLM token usage, and the mechanisms described are reasonable techniques for that goal. However, the skill text expects the agent to persist structured memory files (MEMORY.md and memory/*.md) and to log estimated vs. actual token usage, while the registry metadata declares no required config paths or storage expectations. In addition, several internal reference files (references/token-formula.md, references/memory-extraction-pattern.md) are cited but not present in the package manifest. This discrepancy between the claimed zero-dependency, instruction-only format and the expectation of persistent files and reference docs is noteworthy.
Instruction Scope
SKILL.md instructs the agent to read conversation history, extract non-obvious user/project/reference information, maintain a session cursor, create and update MEMORY.md and memory/topic-name.md files, and log estimated vs. actual token usage. These are file I/O and persistent-state operations, even though the skill metadata declares no config paths. The reference files and prompt templates it cites (references/*.md) are missing from the provided files. The SKILL.md is also truncated near the end ("Never comp…[truncated]"), which leaves behavior unspecified. These gaps could lead an agent to perform unexpected file reads/writes or to ask for unspecified storage locations — both behaviors worth surfacing before install.
Install Mechanism
No install spec and no code files are present, which minimizes supply-chain risk; this instruction-only format is lower risk than downloading and executing remote archives. However, the skill assumes the ability to persist files and to access a secondary model; those are runtime capabilities rather than install-time artifacts and should be confirmed in the deployment environment.
Credentials
The skill requests no environment variables or credentials in metadata, which aligns with its stated purpose. But the runtime instructions expect use of "a lightweight secondary model (Haiku, GPT-4o-mini, Gemini Flash)" and logging of API usage, and the skill does not explain which model endpoints, API keys, or storage backends will be used. If the agent implements this, it may need access to credentials or storage locations not declared here. The missing config/credential declarations are worth clarifying with the author.
Persistence & Privilege
The skill describes maintaining a session cursor, creating/updating MEMORY.md and memory/*.md files, and logging estimates vs actual usage — all persistent operations. Yet the skill metadata does not declare config paths or file access requirements. Persistent state combined with autonomous invocation (default model invocation allowed) increases blast radius if misused. The skill's 'always' flag is false (good), but the mismatch between declared capabilities and the instruction's persistence needs is a risk factor.
What to consider before installing
This skill's instructions implement sensible token-reduction techniques, but there are several gaps you should resolve before installing:

  1. The SKILL.md tells the agent to create and update memory files (MEMORY.md, memory/*.md) and to log token usage, yet the package metadata declares no storage/config paths. Ask the author where files will be stored and whether they respect your workspace boundaries.
  2. The SKILL.md references local reference files (references/*.md) that are not included; confirm whether the skill requires additional files or templates.
  3. The doc is truncated near the end; request the full spec to ensure no hidden steps.
  4. Because the skill persists user/project preferences and reference URLs, verify data-retention and privacy rules (avoid storing sensitive secrets or code excerpts).
  5. Test the skill in a restricted/sandbox environment first, and require explicit configuration options for storage location, logging behavior, and which secondary models or credentials it may use.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
Download zip
latest · vk977509w9x6x03rb74rje1xse183zve0

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Token Optimizer

Reduce your LLM API costs by 20-35% with three proven mechanisms: pre-send token estimation, structured memory extraction, and context compression. Model-agnostic, zero dependencies.


Mechanism 1 — Pre-Send Token Estimation

Estimate token count before sending a request. If the payload exceeds a threshold, compress or truncate it. Never pay for tokens you could have avoided.

Rules

  1. Estimate before every API call. Use these formulas:

    • Plain text: tokens ≈ character_count / 4
    • JSON / structured data: tokens ≈ character_count / 2
    • Code (mixed): tokens ≈ character_count / 3.5
    • Images / PDFs: tokens ≈ 2000 (flat per asset, regardless of size)
  2. Set a token budget per request. Default threshold: 8,000 tokens. Adjust per use case.

  3. If estimated tokens exceed the budget:

    • Summarize or truncate the longest sections first.
    • Strip intermediate reasoning, keep conclusions only.
    • For JSON: remove null/empty fields, shorten keys if feeding to a model that doesn't need human-readable keys.
    • For code: send only the relevant function/class, not the full file.
  4. Log the estimate vs. actual usage (from the API response) to calibrate over time.

Example

Input: 24,000 characters of plain text
Estimated tokens: 24000 / 4 = 6,000 → under budget, send as-is.

Input: 40,000 characters of JSON
Estimated tokens: 40000 / 2 = 20,000 → over budget.
Action: strip null fields, remove redundant nested objects → 14,000 chars → 7,000 tokens → send.
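
A minimal sketch of these heuristics, assuming Python. The character-per-token ratios and the 8,000-token default budget are the values from the rules above; the function names are illustrative, not part of any particular SDK.

```python
# Rough pre-send token estimation using the skill's character-per-token ratios.
CHARS_PER_TOKEN = {"text": 4.0, "json": 2.0, "code": 3.5}
FLAT_ASSET_TOKENS = 2000  # images / PDFs: flat estimate per asset


def estimate_tokens(payload: str, kind: str = "text", assets: int = 0) -> int:
    """Estimate the token count of a payload before sending it."""
    return int(len(payload) / CHARS_PER_TOKEN[kind]) + assets * FLAT_ASSET_TOKENS


def within_budget(payload: str, kind: str = "text", budget: int = 8000) -> bool:
    """True if the estimated payload fits the per-request token budget."""
    return estimate_tokens(payload, kind) <= budget


# 40,000 characters of JSON -> ~20,000 tokens -> over budget, compress first.
sample = "x" * 40_000
print(estimate_tokens(sample, kind="json"), within_budget(sample, kind="json"))
```

Logging each estimate next to the usage reported in the API response (rule 4) is what lets you recalibrate these divisors over time.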

Reference

See references/token-formula.md for the full formula breakdown with worked examples.


Mechanism 2 — Memory Extraction

Instead of re-reading the entire conversation history every turn, extract and persist key information into structured memory files. On subsequent turns, load only the memory index — not the raw history.

Rules

  1. Use a lightweight secondary model (Haiku, GPT-4o-mini, Gemini Flash) as the memory extraction agent. Never burn expensive model tokens on bookkeeping.

  2. Maintain a session cursor. Track which messages have already been processed. On each extraction pass, only read new messages since the last cursor position.

  3. Limit extraction to 5 rounds max per session. Each round processes a batch of new messages. Stop early if no new information is found.

  4. Parallelize I/O within rounds:

    • Round 1: all reads in parallel (gather raw content).
    • Round 2: all writes in parallel (persist extracted memories).
  5. Structure memory as index + detail files:

    • MEMORY.md — index file, max 200 lines. Contains only pointers: - [topic-name](memory/topic-name.md) — one-line description.
    • memory/topic-name.md — full content for each topic with frontmatter (name, description, type).
  6. Memory types (categorize each entry):

    • user — who the user is, their preferences, expertise level.
    • feedback — corrections and confirmed approaches (what to do / not do).
    • project — current goals, deadlines, decisions, constraints.
    • reference — pointers to external resources (URLs, dashboards, issue trackers).
  7. Do not store what can be derived. No code snippets, no git history, no file paths — these are always available from the source. Store only non-obvious context.
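
A minimal sketch of rule 2's session cursor, assuming Python. The skill does not say where the cursor lives; the memory/.cursor path below is purely an assumption for illustration.

```python
# Session-cursor bookkeeping: only messages added since the last extraction
# pass are handed to the secondary model.
from pathlib import Path

CURSOR_FILE = Path("memory/.cursor")  # assumed location, not prescribed by the skill


def new_messages(messages: list[dict]) -> tuple[list[dict], int]:
    """Return messages after the stored cursor, plus the new cursor position."""
    cursor = int(CURSOR_FILE.read_text()) if CURSOR_FILE.exists() else 0
    return messages[cursor:], len(messages)


def save_cursor(position: int) -> None:
    """Persist the cursor after a successful extraction pass."""
    CURSOR_FILE.parent.mkdir(exist_ok=True)
    CURSOR_FILE.write_text(str(position))
```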

Example — Extraction Prompt

You are a memory extraction agent. Read the following new messages (since cursor position {cursor}).

For each piece of non-obvious information, output a JSON object:
{
  "topic": "short-kebab-case-name",
  "type": "user | feedback | project | reference",
  "description": "one-line summary for the index",
  "content": "full memory content, structured with Why and How-to-apply"
}

Rules:
- Max 5 memories per pass.
- Skip anything derivable from code, git, or existing memory.
- Convert relative dates to absolute (today is {date}).
- If a memory already exists for this topic, output an update, not a duplicate.
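
A minimal sketch, assuming Python, of how the JSON objects produced by this prompt could be persisted per rule 5: one detail file per topic plus a regenerated MEMORY.md index. The file layout follows the index/detail structure described above; the helper names and the frontmatter parsing are illustrative assumptions.

```python
# Persist one extracted memory and rebuild the MEMORY.md index of pointers.
import json
from pathlib import Path

MEMORY_DIR = Path("memory")
INDEX_FILE = Path("MEMORY.md")


def persist_memory(entry: dict) -> None:
    """Write memory/<topic>.md with name/description/type frontmatter."""
    MEMORY_DIR.mkdir(exist_ok=True)
    (MEMORY_DIR / f"{entry['topic']}.md").write_text(
        "---\n"
        f"name: {entry['topic']}\n"
        f"description: {entry['description']}\n"
        f"type: {entry['type']}\n"
        "---\n\n"
        f"{entry['content']}\n"
    )


def rebuild_index() -> None:
    """Regenerate MEMORY.md as pointer lines only, capped at 200 lines."""
    lines = ["# MEMORY", ""]
    for path in sorted(MEMORY_DIR.glob("*.md")):
        front = path.read_text().split("---")[1]
        fields = dict(line.split(": ", 1) for line in front.strip().splitlines())
        # Pointer format from rule 5: - [topic-name](memory/topic-name.md) — description
        lines.append(f"- [{fields['name']}](memory/{path.name}) — {fields['description']}")
    INDEX_FILE.write_text("\n".join(lines[:200]) + "\n")


# Example: one memory object as the extraction prompt would emit it.
persist_memory(json.loads(
    '{"topic": "deploy-window", "type": "project", '
    '"description": "Deploys only allowed on weekdays", '
    '"content": "Why: ops policy. How-to-apply: schedule releases Mon-Fri."}'
))
rebuild_index()
```

The extraction pass itself (cursor tracking, batching into rounds, the secondary-model call) happens upstream; this sketch covers only the write side.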

Reference

See references/memory-extraction-pattern.md for the full pattern with prompt templates.


Mechanism 3 — Context Compression

As conversations grow, compress older exchanges into dense summaries. Keep only the last N messages in full fidelity. This prevents context windows from filling with stale reasoning.

Rules

  1. Keep the last 6 messages uncompressed (3 user + 3 assistant). These are "fresh" — they contain active context.

  2. Summarize everything older into a single <compressed-context> block at the top of the conversation. Format:

    <compressed-context>
    ## Decisions Made
    - Chose PostgreSQL over MongoDB for the user table (reason: relational queries).
    - API rate limit set to 100 req/min per user.
    
    ## Current State
    - Auth module: complete, merged to main.
    - Payment integration: in progress, blocked on Stripe webhook config.
    
    ## Key Constraints
    - Must ship by 2026-04-15.
    - No breaking changes to public API v2.
    </compressed-context>
    
  3. What to keep in summaries:

    • Decisions and their rationale.
    • Current state of work (done / in-progress / blocked).
    • Constraints and deadlines.
    • User preferences and corrections.
  4. What to discard:

    • Intermediate reasoning ("I considered X but...").
    • Exploratory questions that were already answered.
    • Tool call details (file reads, grep results, build output).
    • Repeated or superseded information.
  5. Trigger compression when the conversation exceeds 60% of the model's context window. Use Mechanism 1's estimation formula to check.

  6. Never compress system prompts or skill instructions. These must remain intact.
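
A minimal sketch, assuming Python, of the compression trigger described in rules 1, 5, and 6. The six-message fresh window and 60% threshold come from the rules above; the message shape (role/content dicts) and the summarize callable, which stands in for the secondary-model call that produces the summary sections, are assumptions.

```python
# Collapse stale history into one <compressed-context> block once the
# conversation crosses 60% of the context window (rules 1, 5, 6).
from typing import Callable

FRESH_MESSAGES = 6   # rule 1: last 3 user + 3 assistant turns stay verbatim
COMPRESS_AT = 0.60   # rule 5: compress past 60% of the context window


def estimate_tokens(text: str) -> int:
    """Mechanism 1 plain-text heuristic: ~4 characters per token."""
    return len(text) // 4


def maybe_compress(
    messages: list[dict],
    context_window: int,
    summarize: Callable[[list[dict]], str],
) -> list[dict]:
    """Return the message list with older history compressed, if needed."""
    history = [m for m in messages if m["role"] != "system"]
    if estimate_tokens("".join(m["content"] for m in history)) < COMPRESS_AT * context_window:
        return messages  # under the threshold, leave history untouched

    system = [m for m in messages if m["role"] == "system"]  # rule 6: never compressed
    stale, fresh = history[:-FRESH_MESSAGES], history[-FRESH_MESSAGES:]
    block = "<compressed-context>\n" + summarize(stale) + "\n</compressed-context>"
    return system + [{"role": "user", "content": block}] + fresh
```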

Example — Savings Calculation

Before compression:
  42 messages, ~32,000 tokens total.

After compression:
  Compressed block: ~2,000 tokens.
  Last 6 messages: ~4,500 tokens.
  Total: ~6,500 tokens.

  Savings: 32,000 - 6,500 = 25,500 tokens (80% reduction on history).
  Per-request savings (ongoing): ~25,500 tokens × $0.003/1K = $0.077 per request.

Combined Savings Estimate

| Mechanism | Typical Savings | When It Hits |
| --- | --- | --- |
| Pre-send estimation | 10-15% | Every request with large payloads |
| Memory extraction | 5-10% | Multi-session workflows |
| Context compression | 15-25% | Long conversations (>20 messages) |
| Combined | 20-35% | Sustained usage over a session |

These are conservative estimates based on real-world agent workflows. Actual savings depend on conversation length, payload sizes, and how aggressively you compress.


Quick Start

  1. Copy this skill into your agent's skill directory (or paste SKILL.md into your system prompt).
  2. Apply Mechanism 1 immediately — add token estimation before your API calls.
  3. Set up Mechanism 2 if you run multi-turn or multi-session workflows.
  4. Enable Mechanism 3 for any conversation that runs beyond 15-20 messages.

No code to install. No dependencies. Just rules your agent follows.

Files

1 total