Tokenizer
Pass. Audited by ClawScan on Apr 16, 2026.
Overview
The skill's code, instructions, and runtime requirements are coherent with its stated purpose (auditing token usage and distilling/optimizing context). It does not request unrelated credentials or installs, but it does read and write local agent files and can autonomously distill history if configured; review those behaviors before enabling it in a sensitive environment.
This package appears to do what it says: count tokens, analyze skill metadata, distill conversation history, and optionally compress large documents. Before installing, review these points:

- The tools read local skill files and conversation history (they look in ~/.openclaw, /app/skills, and other candidate paths). If your system prompt, skills, or chat logs contain sensitive data, be aware that this skill will access them during audits.
- Distillation writes JSON files to an episodic store (by default .openclaw/memory/episodic). Confirm that this location and its retention policy are acceptable for your data.
- compress_prompt.py requires an external dependency (llmlingua) and is guarded as OFFLINE_ONLY in the config; do not run compression against live system prompts or code. The manifest and skill_runner enforce this guardrail, but double-check it in your deployment.
- No credentials are requested, and there are no remote-download install steps in the manifest. Still, verify the source of this skill bundle (owner and homepage are missing) before trusting it with production data.
- If you plan to enable automated distillation (memory_agent.autonomous), test it in a contained environment first so you understand when and where histories are archived or flushed.

If you want higher assurance, ask the publisher for a provenance record, or validate the full, untruncated source files locally before enabling the skill in an agent that handles sensitive content.
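As a concrete starting point for that review, the checks above can be partly automated. The sketch below is illustrative only: the candidate paths (~/.openclaw, /app/skills) and the default episodic store (.openclaw/memory/episodic) come from this report, but the function name and output layout are assumptions, not part of the skill bundle.

```python
import json
from pathlib import Path

# Locations this report says the skill reads during audits; adjust for
# your deployment (the skill also probes "other candidate paths").
CANDIDATE_PATHS = [
    Path.home() / ".openclaw",
    Path("/app/skills"),
]

# Default episodic store the report says distillation writes JSON to.
EPISODIC_STORE = Path(".openclaw/memory/episodic")


def preinstall_review():
    """Report which candidate paths exist on this machine and what the
    episodic store already contains, so you can judge the skill's data
    exposure before enabling it."""
    report = {"readable_paths": [], "episodic_files": []}
    for path in CANDIDATE_PATHS:
        if path.exists():
            report["readable_paths"].append(str(path))
    if EPISODIC_STORE.exists():
        report["episodic_files"] = sorted(
            str(f) for f in EPISODIC_STORE.glob("*.json")
        )
    return report


if __name__ == "__main__":
    print(json.dumps(preinstall_review(), indent=2))
```

Running this before installation tells you whether the skill would find anything to read on your system, and whether an episodic store with archived histories already exists at the default location.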
