Token Optimizer
v1.0.0
Automatically analyze and reduce OpenClaw token waste through context compression, tool-call deduplication insights, model selection guidance, and session hygiene checks. Use when sessions are nearing context limits, costs are climbing, or you want proactive token optimization before expensive tasks.
Security Scan
OpenClaw
Suspicious
high confidence
Purpose & Capability
The skill's name/description (token optimization for OpenClaw) matches the code behavior: it reads OpenClaw session data, analyzes tokens, compresses contexts, and proposes and applies cleanup actions. However, the skill assumes an 'openclaw' CLI and specific ~/.openclaw paths, while the registry metadata lists no required binaries or credentials; that omission could mislead users about runtime prerequisites.
Instruction Scope
SKILL.md instructs running the included CLI, which: reads session transcript files (~/.openclaw/agents/main/sessions/*.jsonl), writes compressed snapshots and a local config under ~/.openclaw/workspace/token-usage/, and can build/apply a cleanup plan. Reading transcripts and tool results is expected for token analysis, but those files contain potentially sensitive content; the '--apply' mode can perform a gateway restart (a disruptive operational effect) per operating-notes.
Install Mechanism
No remote downloads or unpacking are used: the repo includes scripts and a small install.sh that only chmods the CLI, and package.json maps a bin. Install risk is low, but the lack of an install spec declaring 'openclaw' as a required binary is an operational gap.
Credentials
The skill requests no environment variables or external credentials. It only accesses local OpenClaw configuration and session files under the user's home directory, which is proportionate to its purpose. Be aware those files can include sensitive messages and tool arguments.
Persistence & Privilege
always:false is set and no special persistence is configured (good). The skill can be invoked autonomously by the agent (platform default). The notable privilege is that the tool can execute 'openclaw gateway restart' when the user requests '--cleanup --apply', a potentially disruptive system operation; it does not attempt to modify other skills or request permanently elevated credentials.
What to consider before installing
Before installing or running this skill:
- Verify that the 'openclaw' CLI is installed and accessible on PATH. The code calls 'openclaw' via subprocess even though the registry metadata lists no required binary.
- Review the included code (scripts/token_optimize, src/*.py) yourself — the tool reads session transcript files (~/.openclaw/agents/main/sessions/*.jsonl) and will process potentially sensitive message contents and tool arguments.
- Run read-only actions first (e.g., --analyze, --health-check, --compress) to inspect output and confirm behavior. Avoid using --cleanup --apply until you’ve reviewed what 'apply' does (operating-notes indicate it currently issues a gateway restart) and tested in a staging environment.
- Confirm where compressed snapshots and config files will be written (~/.openclaw/workspace/token-usage/), and whether that storage is acceptable for your data sensitivity requirements.
- Because the skill source is 'unknown' and not from a verified publisher, prefer running it on a non-production system first and consider searching for the repository or contacting the author to verify provenance.
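The first two checklist items above can be partially automated. A minimal sketch, assuming only the paths named in this report (adjust if your install differs):

```python
import os
import shutil

# The registry metadata does not declare 'openclaw' as a required binary,
# so verify prerequisites explicitly before installing the skill.
cli_path = shutil.which("openclaw")  # None if not on PATH
sessions_dir = os.path.expanduser("~/.openclaw/agents/main/sessions")

status = {
    "openclaw_on_path": cli_path is not None,
    "sessions_dir_exists": os.path.isdir(sessions_dir),
}
for check, ok in status.items():
    print(f"{check}: {'ok' if ok else 'MISSING'}")
```

If either check reports MISSING, resolve it before running any of the skill's commands.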
Like a lobster shell, security has layers — review code before you run it.
Token Optimizer
Overview
Use this skill to optimize OpenClaw token usage with a local CLI that performs analysis, compression snapshots, health checks, cleanup planning, and preflight token budgeting.
Quick Start
$OPENCLAW_SKILLS_DIR/token-optimizer/scripts/token-optimize --analyze --period 7d
Core Commands
- Enable local optimizer config:
$OPENCLAW_SKILLS_DIR/token-optimizer/scripts/token-optimize --enable
- Optimization analysis:
$OPENCLAW_SKILLS_DIR/token-optimizer/scripts/token-optimize --analyze --period 7d
- Force context compression snapshot:
$OPENCLAW_SKILLS_DIR/token-optimizer/scripts/token-optimize --compress --threshold 0.7 --session agent:main:main
- Session health check:
$OPENCLAW_SKILLS_DIR/token-optimizer/scripts/token-optimize --health-check --active-minutes 120
- Auto-cleanup planning and apply:
$OPENCLAW_SKILLS_DIR/token-optimizer/scripts/token-optimize --cleanup
$OPENCLAW_SKILLS_DIR/token-optimizer/scripts/token-optimize --cleanup --apply
Preflight Optimization
Use preflight planning before expensive task batches:
$OPENCLAW_SKILLS_DIR/token-optimizer/scripts/token-optimize \
--preflight /path/to/actions.json \
--session-limit 180000
actions.json should be a JSON array of planned operations, for example:
[
{"type": "web_search", "query": "..."},
{"type": "web_fetch", "url": "..."},
{"type": "summarize", "target": "youtube"}
]
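One way to generate the file above programmatically (a sketch: only the "type"/"query"/"url"/"target" keys shown in the example are used; the query and URL values are placeholders):

```python
import json
import os
import tempfile

# Planned operations for --preflight, mirroring the documented example.
actions = [
    {"type": "web_search", "query": "openclaw token pricing"},
    {"type": "web_fetch", "url": "https://example.com/long-article"},
    {"type": "summarize", "target": "youtube"},
]

# Write the array to actions.json in a scratch directory.
actions_path = os.path.join(tempfile.mkdtemp(), "actions.json")
with open(actions_path, "w") as f:
    json.dump(actions, f, indent=2)
print(actions_path)
```

Pass the printed path to `--preflight`, together with `--session-limit`, as shown in the command above.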
Output Artifacts
- Compression snapshots:
$OPENCLAW_WORKSPACE/token-usage/compressed/
- Optional JSON output:
--format json --output /path/file.json
- Baseline config (from --enable):
$OPENCLAW_WORKSPACE/token-usage/token-optimizer.config.json
Defaults
Default behavior is configured in:
config/defaults.json
Override with:
$OPENCLAW_SKILLS_DIR/token-optimizer/scripts/token-optimize --config /path/custom.json --analyze
Resources
- scripts/token_optimize.py: main CLI
- src/optimizer.py: core optimization engine
- src/models.py: model selection logic
- src/compression.py: context compression helpers
- src/cleanup.py: session hygiene evaluation
- references/operating-notes.md: implementation details and safe-operating guidance
Validation
python3 $OPENCLAW_SKILLS_DIR/.system/skill-creator/scripts/quick_validate.py \
$OPENCLAW_SKILLS_DIR/token-optimizer
