Claude Code API Optimizer Skill
Pass. Audited by ClawScan on May 10, 2026.
Overview
This instruction-only token optimizer is coherent and purpose-aligned, but users should note that it persists conversation memories and can route conversation content through a secondary model.
This appears safe to use for token optimization, provided you are comfortable with persistent memory files and with any secondary model or provider used for extraction. Before installing, review the full SKILL.md (the supplied artifact text was truncated), decide where memories should live, and avoid saving secrets or highly sensitive project data.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Over-compression or truncation could make the assistant miss important details, especially in accuracy-critical tasks.
The skill intentionally changes what is sent to model/API calls to save tokens. This is central to the stated purpose, but users should be aware it can remove context.
Estimate before every API call... If estimated tokens exceed the budget: Summarize or truncate the longest sections first.
Use explicit token budgets, review compressed context for important tasks, and disable or relax compression when completeness matters more than cost.
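The budget-then-truncate behavior quoted above can be sketched as follows. This is an illustrative reconstruction, not the skill's actual code: the ~4-characters-per-token heuristic, the function names, and the halving strategy are all assumptions.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token (a common heuristic,
    assumed here; SKILL.md may specify a different formula)."""
    return len(text) // 4

def compress_context(sections: list[str], budget: int) -> list[str]:
    """If the estimated total exceeds the budget, truncate the longest
    sections first until the estimate fits, as the excerpt describes."""
    sections = list(sections)
    while sum(estimate_tokens(s) for s in sections) > budget:
        longest = max(range(len(sections)), key=lambda i: len(sections[i]))
        if len(sections[longest]) < 40:
            break  # remaining sections are too small to trim further
        # Drop the back half of the longest section; this is exactly the
        # kind of lossy step the finding warns about.
        sections[longest] = sections[longest][: len(sections[longest]) // 2]
    return sections
```

Note that the loop is lossy by design: whatever fell in the truncated half is simply gone, which is why the recommendation above suggests relaxing compression when completeness matters.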
Users cannot verify the additional referenced guidance from the provided package.
The manifest contains only SKILL.md, so these referenced documentation files were not available in the supplied artifacts. They are not executable helpers, so this is a completeness note rather than a security concern.
See [references/token-formula.md](references/token-formula.md)... See [references/memory-extraction-pattern.md](references/memory-extraction-pattern.md)
Review the full published package if available, and do not assume absent reference files add extra safety controls.
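One way to act on this note is to check the supplied package yourself for the reference files SKILL.md links to. A minimal sketch, where the `references/` path convention is taken from the quoted links and the function itself is illustrative:

```python
import re
from pathlib import Path

def missing_references(skill_md: str, package_root: Path) -> list[str]:
    """Return markdown-linked local reference files mentioned in SKILL.md
    that are absent from the supplied package directory."""
    refs = re.findall(r"\]\((references/[^)]+)\)", skill_md)
    return [r for r in refs if not (package_root / r).exists()]
```

Running this against an extracted package directory makes the completeness gap concrete: any path it returns is guidance the review could not inspect.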
Personal preferences, project details, deadlines, or links may be saved and influence later work.
The skill stores conversation-derived user, feedback, project, and reference information for later reuse. This is purpose-aligned, but it creates persistent context that can become stale, sensitive, or over-trusted.
extract and persist key information into structured memory files. On subsequent turns, load only the memory index
Review memory files periodically, avoid storing secrets, and add your own retention/deletion rules if using this skill.
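The "memory index" pattern quoted above can be sketched like this. The directory name, file layout, and index schema are assumptions for illustration; SKILL.md may define different ones.

```python
import json
from pathlib import Path

MEMORY_DIR = Path("memories")  # assumed location; the skill may use another

def save_memory(category: str, key: str, content: str) -> None:
    """Persist one memory file and register it in a small index,
    so later turns can load the index instead of every file."""
    MEMORY_DIR.mkdir(exist_ok=True)
    (MEMORY_DIR / f"{category}-{key}.md").write_text(content)
    index_path = MEMORY_DIR / "index.json"
    index = json.loads(index_path.read_text()) if index_path.exists() else {}
    index[f"{category}/{key}"] = {
        "file": f"{category}-{key}.md",
        "summary": content[:80],  # short preview kept in the index
    }
    index_path.write_text(json.dumps(index, indent=2))

def load_index() -> dict:
    """On subsequent turns, load only the lightweight index."""
    index_path = MEMORY_DIR / "index.json"
    return json.loads(index_path.read_text()) if index_path.exists() else {}
```

The persistence risk in the finding is visible here: every `save_memory` call leaves a file on disk that outlives the conversation, which is why periodic review and retention rules are recommended.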
Conversation content used for memory extraction may be processed by a different model provider than the main assistant.
The memory extraction workflow may send new conversation messages to another model/provider. This is disclosed and purpose-aligned, but it crosses a data-processing boundary.
Use a lightweight secondary model (Haiku, GPT-4o-mini, Gemini Flash) as the memory extraction agent.
Confirm which secondary model/provider will be used and avoid running memory extraction over confidential content unless that provider is acceptable.
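The data-processing boundary described above can be made explicit with a filtering step before anything is sent to the extraction model. A minimal sketch: the model names come from the finding's quote, while the routing function, the policy callback, and its marker list are illustrative assumptions, not part of the skill.

```python
from typing import Callable

# Lightweight extraction models named in the finding's excerpt.
SECONDARY_MODELS = ["Haiku", "GPT-4o-mini", "Gemini Flash"]

def route_for_extraction(messages: list[str],
                         allow: Callable[[str], bool]) -> list[str]:
    """Return only the messages the policy permits to cross the boundary
    to the secondary provider; everything else stays local."""
    return [m for m in messages if allow(m)]

def no_secrets(message: str) -> bool:
    """Toy policy: block obviously credential-like content. A real policy
    would need far stronger checks than substring matching."""
    markers = ("api_key", "password", "begin private key")
    low = message.lower()
    return not any(marker in low for marker in markers)
```

This keeps the boundary decision in one reviewable place, which matches the recommendation to confirm the provider and withhold confidential content.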
