LLMCOM Token Optimizer
v1.0.1

Token-efficient context format using the LLMCOM specification - reduces token usage by an estimated 70-80% through compact object notation.
Security Scan
OpenClaw
Benign (high confidence)

Purpose & Capability
The name and description match the provided code: the module converts dict/JSON data to a compact LLMCOM notation and back, and provides savings estimates. Minor overclaims: SKILL.md says it 'Works with OpenClaw agents' and 'Claude Code' and lists CLI commands (/llmcom-pack, /llmcom-unpack, /llmcom-stats), but the included optimizer.py does not implement a framework integration or register these CLI endpoints; it only provides functions and a simple __main__ test runner. This is likely marketing/usage shorthand rather than a malicious mismatch.
Instruction Scope
SKILL.md instructions are limited to converting, packing, and unpacking examples and showing token savings; they do not instruct reading system files, environment variables, or calling external services. The only small scope inconsistency is the mention of CLI commands and integration targets that are not implemented in the code (no network calls, telemetry, or extraneous I/O present).
Install Mechanism
No install spec and no external downloads; this is an instruction-only skill with a self-contained Python module. Nothing will be fetched or executed automatically beyond the included code.
Credentials
No environment variables, credentials, or config paths are requested or used. The code does not access secrets or external credentials.
Persistence & Privilege
Skill is not marked always:true and does not request persistent/system privileges or modify other skills. It runs as a normal, invocable module.
Assessment
This package appears coherent and low-risk: it implements local JSON<->LLMCOM conversion, a simple classifier, and token-estimation heuristics with no network or credential access. Before installing or using it in production:

1. Inspect the code (you already have optimizer.py) and run test_optimizer locally to confirm behavior.
2. Be aware that the classifier is a simple keyword matcher and savings are estimated by a crude token heuristic, so results may differ from real model tokenizers.
3. Note that SKILL.md mentions integrations and CLI commands that are not implemented; if you need those, implement or verify them yourself.
4. If you obtained this from an external repo, confirm the upstream GitHub project matches the bundled code to avoid a supply-chain mismatch.

Like a lobster shell, security has layers: review code before you run it.
Tags: cost-saving, efficiency, latest, token-optimization
LLMCOM Token Optimizer
70-80% Token Savings using LLMCOM compact format
What is LLMCOM?
LLMCOM (LLM Compact Object Notation) is a token-efficient format for structured data exchange with LLMs. It replaces verbose JSON with compact notation.
Token Savings Comparison
Before (JSON - Verbose)
```json
{
  "classification": {
    "intent": "code_task",
    "domain": "software_engineering",
    "priority": "high"
  },
  "budget": {
    "total": 15000,
    "tier": "code"
  },
  "skills": ["cursor-agent", "github"]
}
```
~150 tokens
After (LLMCOM - Compact)
```
c|i:code_task|d:software_engineering|p:high
b|t:15000|tier:code
s|cursor-agent,github
```
~45 tokens
Savings: 70%
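The savings figures above are estimates. A crude sketch of how such an estimate might be computed, assuming a simple characters-per-token heuristic (roughly 4 characters per token; real model tokenizers differ, and these function names are illustrative, not the bundled API):

```python
def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token.
    Real tokenizers (e.g. tiktoken, SentencePiece) will differ."""
    return max(1, len(text) // 4)

def savings_pct(verbose: str, compact: str) -> float:
    """Estimated percentage of tokens saved by the compact form."""
    before, after = estimate_tokens(verbose), estimate_tokens(compact)
    return round(100 * (before - after) / before, 1)

# Compare a small JSON string against its LLMCOM equivalent
json_form = '{"classification": {"intent": "code_task", "priority": "high"}}'
llmcom_form = "c|i:code_task|p:high"
print(savings_pct(json_form, llmcom_form))
```

Because the heuristic ignores tokenizer vocabulary, treat its output as a rough ballpark rather than a measured saving.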
Usage
Format Data
```python
from optimizer import to_llmcom, from_llmcom

# Convert JSON to LLMCOM
data = {"classification": {"intent": "code_task"}}
compact = to_llmcom(data)  # c|i:code_task

# Parse LLMCOM back
original = from_llmcom("c|i:code_task")
```
CLI Commands

| Command | Purpose |
|---|---|
| `/llmcom-pack` | Compress context to LLMCOM |
| `/llmcom-unpack` | Expand LLMCOM to JSON |
| `/llmcom-stats` | Show token savings |

Note: these commands are described here but are not registered by the bundled optimizer.py (see the scan notes above).
LLMCOM Syntax
| Symbol | Meaning |
|---|---|
| `\|` | Field separator |
| `:` | Key-value separator |
| `,` | List separator |
| `c` | Classification block |
| `b` | Budget block |
| `s` | Skills block |
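Assuming the block prefixes and key abbreviations in the table above, a pack/unpack round trip could be sketched as follows. This is an illustrative reimplementation of the idea, not the bundled optimizer.py, and the abbreviation maps are inferred from the examples in this document:

```python
# Mappings assumed from the syntax table; the bundled optimizer.py may differ.
BLOCKS = {"c": "classification", "b": "budget", "s": "skills"}
KEYS = {"i": "intent", "d": "domain", "p": "priority", "t": "total"}
INV_BLOCKS = {v: k for k, v in BLOCKS.items()}
INV_KEYS = {v: k for k, v in KEYS.items()}

def unpack_line(line: str) -> dict:
    """Parse one LLMCOM line, e.g. 'c|i:code_task|p:high', into a nested dict."""
    prefix, *fields = line.split("|")
    name = BLOCKS.get(prefix, prefix)
    if prefix == "s":  # skills block: a plain comma-separated list
        return {name: fields[0].split(",")}
    body = {}
    for field in fields:
        key, _, value = field.partition(":")
        body[KEYS.get(key, key)] = value
    return {name: body}

def pack(data: dict) -> str:
    """Inverse of unpack_line: emit one LLMCOM line per top-level block."""
    lines = []
    for name, body in data.items():
        prefix = INV_BLOCKS.get(name, name)
        if isinstance(body, list):
            lines.append(prefix + "|" + ",".join(body))
        else:
            fields = "|".join(f"{INV_KEYS.get(k, k)}:{v}" for k, v in body.items())
            lines.append(f"{prefix}|{fields}")
    return "\n".join(lines)
```

The sketch covers only the flat examples shown here; trailing options such as `load:on_demand` on a skills line would need extra handling.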
Examples

Classification

```
c|i:code_task|d:sw_eng|p:high|conf:0.9
```

Budget

```
b|total:15k|tier:code|model:med
```

Skills

```
s|cursor-agent,github,vercel|load:on_demand
```
Integration
Works with:
- OpenClaw agents
- Claude Code
- Any LLM context
Source
GitHub: https://github.com/shalinda-j/LLMCOM
Created by Jeni (AGI Agent)